SLONY Replication and PG POOL II

PG West, 2009 by Kevin Kempter
www.consistentstate.com | [email protected]
Except where otherwise noted, this work is licensed under http://creativecommons.org/licenses/by/3.0/

TRANSCRIPT

Page 1: SLONY Replication and PG POOL II


Page 2: SLONY Replication and PG POOL II


Session Topics

● SLONY Replication
● PG POOL II

Page 3: SLONY Replication and PG POOL II


SLONY Replication

● SLONY is a “Master to Multiple Slaves” Replication System

● Topics We'll Cover:

– Installation and General Info

– Creating and activating a replication set

– Methods for Failover & Switchover

● Note: This is ONE way to set up SLONY, not necessarily the only way

Page 4: SLONY Replication and PG POOL II


SLONY Installation

● Download

● Uncompress

● Configure

– ./configure [options]

– ./configure --with-pgconfigdir=<dir> \
              --with-pgbindir=<dir> \
              --with-pgsharedir=<dir>

● Make

● Make Install (as root)
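Taken together, the steps above might look like the following for a typical source build (the tarball name and paths here are illustrative, not from the slides):

```
# Illustrative source build of Slony-I; adjust version and paths to your system.
tar xjf slony1-x.y.z.tar.bz2
cd slony1-x.y.z
./configure --with-pgconfigdir=/usr/local/pgsql/bin
make
sudo make install
```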

Page 5: SLONY Replication and PG POOL II


SLONY The SLONIK Preamble

● Each server to be included in the cluster must be prepared

● Slony will create a schema on each node (i.e. if your replication cluster is named customer then slony will create a schema on each node called _customer.)

● Allow slony to create this cluster schema itself (i.e. keep your hands off)

● The same preamble must be included in each slonik script used to manage the cluster:

cluster name = customer;
node 1 admin conninfo = 'dbname=custdb host=yoda user=slony';
node 2 admin conninfo = 'dbname=custdb host=r2d2 user=slony';
node 3 admin conninfo = 'dbname=custdb host=c3po user=slony';

Page 6: SLONY Replication and PG POOL II


SLONY General Info

● The database must exist on all nodes

● The PL/pgSQL language must be installed on all nodes

● Connection Strings and passwords (use trust or a .pgpass file)
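For the .pgpass route, each node's replication user needs an entry in a private (mode 0600) password file. A minimal sketch, reusing the host and database names from the slonik preamble example (the password and file location are illustrative):

```shell
# Sketch of a .pgpass file for the slony user.
# Format per line: hostname:port:database:user:password
PGPASSFILE="${TMPDIR:-/tmp}/pgpass.demo"
cat > "$PGPASSFILE" <<'EOF'
yoda:5432:custdb:slony:secret
r2d2:5432:custdb:slony:secret
c3po:5432:custdb:slony:secret
EOF
chmod 600 "$PGPASSFILE"   # libpq ignores the file unless it is private
```

In practice the file lives at ~/.pgpass for the user running slon, or wherever PGPASSFILE points.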

Page 7: SLONY Replication and PG POOL II


SLONY Create Variable File

● The use of a variables file can help you manage the setup process

● Example (slony_setup.env):

export CLUSTERNAME=sample_rep

export MASTERDBNAME=slony_test

export MASTERHOST=localhost

export MASTERPORT=5444

export SLAVEDBNAME=slony_test

export SLAVEHOST=localhost

export SLAVEPORT=5445

export REPUSER=postgres

Page 8: SLONY Replication and PG POOL II


SLONY Create / Prepare Slave Node

● Create the database on the slave node

● Replicate users on the slave node

● Create the database structures (DDL) on the slave node

● Install PL/pgSQL on the slave node
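One way to script these four steps, reusing the host names from the preamble example (the commands below are a sketch, not the deck's exact procedure; yoda is the master, r2d2 the slave):

```
# Run from a host that can reach both nodes.
createdb -h r2d2 custdb
pg_dumpall -h yoda --globals-only | psql -h r2d2 -d postgres   # replicate users/roles
pg_dump -s -h yoda custdb | psql -h r2d2 -d custdb             # DDL only, no data
createlang -h r2d2 plpgsql custdb                              # install PL/pgSQL
```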

Page 9: SLONY Replication and PG POOL II


SLONY Initialize the replication cluster

● Initialize the replication cluster

● Create a replication set

● Add tables to the replication set

● Store the slave node

● Store paths
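A slonik script covering these five steps might look like the following (it assumes the 'customer' cluster preamble from the earlier slide; the set and table names are illustrative):

```
init cluster (id = 1, comment = 'Master node');
create set (id = 1, origin = 1, comment = 'customer tables');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.customers',
               comment = 'customers table');
store node (id = 2, comment = 'Slave node', event node = 1);
store path (server = 1, client = 2,
            conninfo = 'dbname=custdb host=yoda user=slony');
store path (server = 2, client = 1,
            conninfo = 'dbname=custdb host=r2d2 user=slony');
```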

Page 10: SLONY Replication and PG POOL II


SLONY Start the slon daemons

● $ slon [options] clustername [connection string]

Options:

-h print usage message and exit

-v print version and exit

-d <debuglevel> verbosity of logging (1..4)

-s <milliseconds> SYNC check interval (default 10000)

-t <milliseconds> SYNC interval timeout (default 60000)

-o <milliseconds> desired subscriber SYNC processing time

-g <num> maximum SYNC group size (default 6)

Page 11: SLONY Replication and PG POOL II


SLONY Start the slon daemons – More Options

More Options:

-c <num> how often to vacuum in cleanup cycles

-p <filename> slon pid file

-f <filename> slon configuration file

-a <directory> directory to store SYNC archive files (SLONY Log Shipping)

-x <command> program to run after writing archive file

-q <num> Terminate when this node reaches # of SYNCs

-r <num> # of syncs for -q option

-l <interval> this node should lag providers by this interval
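In practice you run one slon daemon per node. For the 'customer' cluster from the preamble example, the invocations might look like this (the log file paths are illustrative):

```
# One slon daemon per node, each pointed at its own node's database.
slon -d 2 customer 'dbname=custdb host=yoda user=slony' >> /tmp/slon_node1.log 2>&1 &
slon -d 2 customer 'dbname=custdb host=r2d2 user=slony' >> /tmp/slon_node2.log 2>&1 &
```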

Page 12: SLONY Replication and PG POOL II


SLONY Start replication

● Subscribe set

SUBSCRIBE SET (

ID = 1,

PROVIDER = 1,

RECEIVER = 3,

FORWARD = YES

);

Page 13: SLONY Replication and PG POOL II


SLONY Switchover

lock set (id = 1, origin = 1);

wait for event (origin = 1, confirmed = 2);

move set (id = 1, old origin = 1, new origin = 2);

wait for event (origin = 1, confirmed = 2);

Page 14: SLONY Replication and PG POOL II


SLONY Failover

failover (id = 1, backup node = 2);

drop node (id = 1, event node = 2);

Page 15: SLONY Replication and PG POOL II


SLONY Live Example

Page 16: SLONY Replication and PG POOL II


SLONY Summary

Page 17: SLONY Replication and PG POOL II


PG POOL II

Page 18: SLONY Replication and PG POOL II


PG POOL II Overview

● Connection Pooling
– Connections beyond max_connections are queued instead of rejected

● Replication
● Load Balancing
● Parallel Query

Page 19: SLONY Replication and PG POOL II


PG POOL II Installation Prerequisites

● PostgreSQL header files
● libpq
● make

Page 20: SLONY Replication and PG POOL II


PG POOL II Installation

● Install PostgreSQL on all nodes

● Install pgpool on node 1

– ./configure --prefix=<install dir> \
  --with-pgsql=<path to top level pg install dir>

– e.g.:

./configure --prefix=/usr/local/pgsql/pgpool \

--with-pgsql=/usr/local/pgsql/pg841

– $ make

– $ sudo make install

– $ sudo chown -R postgres:postgres <install dir>

Page 21: SLONY Replication and PG POOL II


PG POOL II Configuration

● <install dir>/etc/pgpool.conf (copied from pgpool.conf.sample)

# connections

listen_addresses = 'localhost'

# Port number for pgpool

port = 9999
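With listen_addresses and port set as above, clients point at pgpool instead of PostgreSQL. For example (the database name follows the variables slide earlier, as an assumption):

```
# Connect through pgpool rather than directly to a backend.
psql -h localhost -p 9999 -U postgres slony_test
```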

Page 22: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# number of pre-forked child processes

num_init_children = 32

# Number of connection pools allowed for a child process

max_pool = 4

# If idle for this many seconds, child exits. 0 means no timeout.

child_life_time = 300

# If idle for this many seconds, connection to PostgreSQL closes.

# 0 means no timeout.

connection_life_time = 0
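The two settings above also bound pgpool's backend usage: up to num_init_children children, each caching up to max_pool backend connections, so the backends' max_connections should allow for their product. With the sample values:

```shell
# Upper bound on pooled backend connections with the values shown above.
num_init_children=32
max_pool=4
echo "$((num_init_children * max_pool))"   # prints 128
```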

Page 23: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# If child_max_connections connections were received, child exits.

# 0 means no exit.

child_max_connections = 0

# If client_idle_limit is n (n > 0), the client is forcibly

# disconnected after n seconds of idle time (even inside an explicit

# transaction!)

# 0 means no disconnect.

client_idle_limit = 0

Page 24: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# Maximum time in seconds to complete client authentication.

# 0 means no timeout.

authentication_timeout = 60

# Logging directory

logdir = '/tmp'

# pid file name

#pid_file_name = '/var/run/pgpool/pgpool.pid'

pid_file_name = '/usr/local/pgsql/pgpool/etc/pgpool.pid'

Page 25: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# Replication mode

replication_mode = false

# Load balancing mode, i.e., all SELECTs are load balanced.

# This is ignored if replication_mode is false.

load_balance_mode = false

# if there's a data mismatch between master and secondary

# start degeneration to stop replication mode

replication_stop_on_mismatch = false

Page 26: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# If true, replicate SELECT statement when load balancing is disabled.

# If false, it is only sent to the master node.

replicate_select = false

# Semicolon separated list of queries to be issued at the end of a session

reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'

# for 8.3 or newer PostgreSQL versions DISCARD ALL can be used as

# follows. However beware that DISCARD ALL holds exclusive lock on

# pg_listener so it will be a serious performance problem if there are

# lots of concurrent sessions.

# reset_query_list = 'ABORT; DISCARD ALL'

Page 27: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# If true print timestamp on each log line.

print_timestamp = true

# If true, operate in master/slave mode.

master_slave_mode = false

# If true, cache connection pool.

connection_cache = true

Page 28: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# Health check timeout. 0 means no timeout.

health_check_timeout = 20

# Health check period. 0 means no health check.

health_check_period = 0

# Health check user

health_check_user = 'nobody'

Page 29: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# Execute command by failover.

# special values: %d = node id

# %h = host name

# %p = port number

# %D = database cluster path

# %m = new master node id

# %M = old master node id

# %% = '%' character

#

failover_command = ''

Page 30: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# Execute command by failback.

# special values: %d = node id

# %h = host name

# %p = port number

# %D = database cluster path

# %m = new master node id

# %M = old master node id

# %% = '%' character

#

failback_command = ''

Page 31: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# If true, automatically locks a table with INSERT statements to keep

# SERIAL data consistency. If the data does not have SERIAL data

# type, no lock will be issued. An /*INSERT LOCK*/ comment has the

# same effect. A /*NO INSERT LOCK*/ comment disables the effect.

insert_lock = true

# If true, ignore leading white spaces of each query while pgpool judges

# whether the query is a SELECT so that it can be load balanced. This

# is useful for certain APIs such as DBI/DBD, which are known to add an

# extra leading white space.

ignore_leading_white_space = true

Page 32: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# If true, print all statements to the log. Like the log_statement
# option to PostgreSQL, this allows for observing queries without
# engaging in full debugging.
# log_statement = false
log_statement = true

# If true, incoming connections will be printed to the log.

log_connections = false

# If true, hostname will be shown in ps status. Also shown in

# connection log if log_connections = true.

# Be warned that this feature will add overhead to look up hostname.

log_hostname = false

Page 33: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# if true, run in parallel query mode

parallel_mode = false

# if true, use query cache

enable_query_cache = false

#set pgpool2 hostname

pgpool2_hostname = '192.168.242.137'

Page 34: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# system DB info

#system_db_hostname = 'localhost'

#system_db_port = 5432

#system_db_dbname = 'pgpool'

#system_db_schema = 'pgpool_catalog'

#system_db_user = 'pgpool'

#system_db_password = ''

Page 35: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# backend_hostname, backend_port, backend_weight

# here are examples

backend_hostname0 = '192.168.242.138'

backend_port0 = 5432

backend_weight0 = 1

backend_data_directory0 = '/usr/local/pgsql/pg841/data'

#backend_hostname1 = 'host2'

#backend_port1 = 5433

#backend_weight1 = 1

#backend_data_directory1 = '/data1'

Page 36: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# - HBA -

# If true, use pool_hba.conf for client authentication. In pgpool-II

# 1.1, the default value is false. The default value will be true in

# 1.2.

enable_pool_hba = false

Page 37: SLONY Replication and PG POOL II


PG POOL II Configuration (cont)

# - online recovery -

# NOTE: these values are used to re-attach (after failure) a node or to
# attach a new node when pgpool replication is used

# online recovery user

recovery_user = 'nobody'

recovery_password = ''

recovery_1st_stage_command = ''

recovery_2nd_stage_command = ''

# maximum time in seconds to wait for the recovering node's postmaster

# start-up. 0 means no wait.

recovery_timeout = 90

Page 38: SLONY Replication and PG POOL II


Start PG POOL

● $ cd <install dir>/bin

● $ ./pgpool -n > log 2>&1 &

Page 39: SLONY Replication and PG POOL II


Controlling PG POOL

Usage:

pgpool [ -c] [ -f CONFIG_FILE ] [ -F PCP_CONFIG_FILE ] [ -a HBA_CONFIG_FILE ]

[ -n ] [ -d ]

pgpool [ -f CONFIG_FILE ] [ -F PCP_CONFIG_FILE ] [ -a HBA_CONFIG_FILE ]

[ -m SHUTDOWN-MODE ] stop

pgpool [ -f CONFIG_FILE ] [ -F PCP_CONFIG_FILE ] [ -a HBA_CONFIG_FILE ] reload

Page 40: SLONY Replication and PG POOL II


Controlling PG POOL (cont)

Common options:

-a HBA_CONFIG_FILE Sets the path to the pool_hba.conf configuration file

(default: /usr/local/pgsql/pgpool/etc/pool_hba.conf)

-f CONFIG_FILE Sets the path to the pgpool.conf configuration file

(default: /usr/local/pgsql/pgpool/etc/pgpool.conf)

-F PCP_CONFIG_FILE Sets the path to the pcp.conf configuration file

(default: /usr/local/pgsql/pgpool/etc/pcp.conf)

-h Prints this help

Page 41: SLONY Replication and PG POOL II


Controlling PG POOL (cont)

Start options:

-c Clears query cache (enable_query_cache must be on)

-n Don't run in daemon mode, does not detach control tty

-d Debug mode

Stop options:

-m SHUTDOWN-MODE Can be "smart", "fast", or "immediate"

Shutdown modes are:

smart quit after all clients have disconnected

fast quit directly, with proper shutdown

immediate quit without complete shutdown; will lead to recovery on restart
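For example, a graceful shutdown that waits for clients to disconnect (the install-dir placeholder follows the earlier slides):

```
# Wait for all clients to disconnect, then stop pgpool.
<install dir>/bin/pgpool -m smart stop
```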

Page 42: SLONY Replication and PG POOL II


End2End Live Example (If Time Allows)

Page 43: SLONY Replication and PG POOL II


End