MariaDB Replication Manager and HAProxy (HAProxy Paris Meetup)


Uploaded by: haproxy-technologies

Posted on 11-Jan-2017


TRANSCRIPT


MariaDB-HaProxy


13.5 billion years ago there was nothing!

For backup and Point-In-Time Recovery, the

BINARY LOGS

were born

Soon after, replication was born in MySQL 3.23. Replicas track binlog file positions and get notified of new events!

IO thread for copying events
SQL thread for applying events
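The two-thread design above rides on the classic file-and-position setup; a minimal MariaDB sketch (host, credentials, and binlog coordinates are hypothetical placeholders):

```sql
-- On the replica: point at the leader using a binlog file + position
-- (host, user, password, file, and position are hypothetical values)
CHANGE MASTER TO
  MASTER_HOST = '10.0.0.231',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mariadb-bin.000042',
  MASTER_LOG_POS = 4;
START SLAVE;

-- The IO thread copies events into the relay log,
-- the SQL thread applies them; inspect both with:
SHOW SLAVE STATUS\G
```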

BINARY LOGS ARE CROSS-ENGINE

AND STATEMENT-BASED

But the story gets more complex

GOTCHA WITH BINARY LOGS

DON’T SCALE FOR WRITES: FSYNC IS LIMITED ON DISK

The workaround was not to sync binlogs, making replication CRASH UNSAFE

MariaDB fixed it in 5.5 with TC + GROUP COMMIT

WRITE PERFORMANCE IS BACK, REPLICAS CRASH SAFE in 10.0. According to Facebook, this was the first implementation of the Google patch for group commit.
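With group commit batching fsyncs, full durability becomes affordable again. A sketch of the usual crash-safe settings (the wait count and delay values are illustrative, not from the talk):

```sql
-- Full-durability settings that group commit makes affordable again
SET GLOBAL sync_binlog = 1;                      -- fsync the binlog per commit group
SET GLOBAL innodb_flush_log_at_trx_commit = 1;   -- fsync the redo log per commit group

-- Optionally batch more transactions into one fsync (MariaDB 10.0+):
SET GLOBAL binlog_commit_wait_count = 20;        -- wait for up to 20 transactions
SET GLOBAL binlog_commit_wait_usec = 1000;       -- or at most 1 ms, whichever first
```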

But the story gets more complex

NOW THE LEADER IS TOO FAST, REPLICAS CAN’T CATCH UP

DON’T SCALE FOR WRITES

Need parallel replication. Workarounds: Tungsten Replicator, Galera group replication, or implementing prefetch of events on slaves.

MariaDB fixed it in 10.0 and improved it in 10.1 with

in-order parallel replication and optimistic commit

WRITE PERF IS BACK ON REPLICAS. According to Booking.com, slave group commit lets replicas go up to 4x faster, depending on the workload.

http://www.slideshare.net/JeanFranoisGagn/mysql-parallel-replication-inventory-usecase-and-limitations
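The MariaDB 10.1 knobs described above can be enabled on a replica roughly like this (a sketch; the thread count is an assumption, size it to your cores):

```sql
-- Enable optimistic in-order parallel replication (MariaDB 10.1+);
-- the SQL thread must be stopped to change these
STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_mode = 'optimistic';   -- speculatively apply in parallel,
                                                 -- roll back and retry on conflict
SET GLOBAL slave_parallel_threads = 8;           -- hypothetical: match CPU cores
START SLAVE SQL_THREAD;
```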

In the last microsecond of evolution!

REPLICATION

BECOMES CRASH SAFE AND REPLICAS CAN CATCH UP

BINARY LOGS

STILL NO GOOD FOR REPLICA ELECTION

But the story gets more complex

Replicas track positions in files, not a unique transaction ID

Need GTID. Workarounds: pseudo-GTID, parsing diffs like MHA or TUNGSTEN, or a BINLOG server

MariaDB 10.0 fixed it without gotchas: Domain ID - Server ID - Transaction seq #

No-downtime implementation
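Switching a replica from file/position to MariaDB GTID needs no dump or downtime; a minimal sketch:

```sql
-- Move an existing replica to GTID tracking;
-- a MariaDB GTID reads Domain-Server-Sequence, e.g. 0-1-42
STOP SLAVE;
CHANGE MASTER TO MASTER_USE_GTID = slave_pos;  -- resume from the last applied GTID
START SLAVE;

-- The replica's position is now a GTID, comparable across the topology:
SELECT @@gtid_slave_pos;
```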

But the story gets more complex

A transaction on the leader is not guaranteed to be on any replica

Fixing it can be seen as a regression? Workarounds: Galera, NDB Cluster, MongoDB

MariaDB 10.1: not fixed, except with Galera, but Semi-Sync improved via the Google patches for parallel Semi-Sync with group commit
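In MariaDB 10.1 semi-sync is still a loadable plugin; a sketch of enabling it (the timeout value is illustrative):

```sql
-- On the leader: wait for at least one replica ACK
-- before acknowledging the commit to the client
INSTALL SONAME 'semisync_master';
SET GLOBAL rpl_semi_sync_master_enabled = ON;
SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- fall back to async after 1 s

-- On each replica: acknowledge received transactions
INSTALL SONAME 'semisync_slave';
SET GLOBAL rpl_semi_sync_slave_enabled = ON;
```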

In the last nanosecond of evolution!

MariaDB was feature-ready for LEADER ELECTION, and so was born

Replication-Manager


Replication-Manager

Switchover Workflow

Failover - False Positive Detection

Failover Monitoring Workflow

MaxScale

HAProxy

Features


Settings

# REPLICATION
#skip_slave_start = 1
plugin_load = "semisync_master.so;semisync_slave.so;sql_errlog.so"
rpl_semi_sync_master = ON
rpl_semi_sync_slave = ON
loose_rpl_semi_sync_master_enabled = ON
loose_rpl_semi_sync_slave_enabled = ON
log_slave_updates = ON
slave_parallel_mode = optimistic
slave_domain_parallel_threads = %%ENV:NODES_CPU_CORES%%
slave_parallel_threads = %%ENV:NODES_CPU_CORES%%
relay_log = ./.system/repl/relay-bin
relay_log_index = ./.system/repl/relay-bin.index
relay_log_space_limit = 1G
log_bin = ./.system/repl/mariadb-bin
log_bin_index = ./.system/repl/mariadb-bin.index
binlog_format = ROW
binlog_checksum = 1
binlog_cache_size = 1M
binlog_stmt_cache_size = 1M
expire_logs_days = 5
sync_binlog = 1
replicate_annotate_row_events = 1
report_host = 'host_%%ENV:SERVER_ID%%'

# TOPOLOGY
hosts = "%%ENV:SVC_CONF_ENV_BACKEND_IPS%%"
user = "root:%%ENV:SVC_CONF_ENV_MYSQL_ROOT_PASSWORD%%"
rpluser = "root:%%ENV:SVC_CONF_ENV_MYSQL_ROOT_PASSWORD%%"
# LOG
logfile = "./dashboard/replication-manager.log"
#verbose = true
# HTTP & ALERTS
http-server = true
http-bind-address = "0.0.0.0"
http-port = "%%ENV:SVC_CONF_ENV_PORT_HTTP%%"
http-root = "./dashboard"
mail-from = "mrm@localhost"
mail-smtp-addr = "localhost:25"
mail-to = "[email protected]"
# FAILOVER
autorejoin = true
readonly = true
wait-kill = 5000
post-failover-script = ""
pre-failover-script = ""
# CHECK
check-type = "tcp"
failcount = 5
failover-limit = 3
failover-time-limit = 10
gtidcheck = true
maxdelay = 30
# HA PROXY WRAPPER MODE
# ---------------------
haproxy = true
haproxy-binary-path = "/usr/sbin/haproxy"
haproxy-write-port = %%ENV:SVC_CONF_ENV_PORT_RW%%
haproxy-read-port = %%ENV:SVC_CONF_ENV_PORT_R_LB%%


Demo



MariaDB Replication Manager

Agent Stack

Collector Stack

Step 1: User account bootstrap

Step 2: Agent bootstrap

wget -O /tmp/opensvc.deb http://repo.opensvc.com/deb/current
sudo dpkg -i /tmp/opensvc.deb

sudo nodemgr set --param node.repopkg --value http://repo.opensvc.com/

sudo nodemgr set --param node.repocomp --value http://repo.opensvc.com/compliance/

sudo nodemgr set --param node.dbopensvc --value https://collector.opensvc.com

Step 2: Agent bootstrap generates a node conf file

[node]
repopkg = http://repo.opensvc.com/
repocomp = http://repo.opensvc.com/compliance/
dbopensvc = https://collector.opensvc.com/feed/default/call/xmlrpc
dbcompliance = https://collector.opensvc.com/init/compliance/call/xmlrpc
host_mode = PRD
uuid = d5bccb78-a2b2-4809-b036-ac7c7bfa7101

[compliance]
auto_update = true
schedule = @1440

[stats]
schedule = @60

[gcedisks]
scheduler = @120

Step 2: Agent bootstrap

sudo nodemgr register --user [email protected]

sudo nodemgr set --param node.uuid --value d5bccb78-a2b2-4809-b036-ac7c7bfa7101

sudo nodemgr pushasset
sudo nodemgr pushdisks
sudo nodemgr pushpkg
sudo nodemgr pushpatch
sudo nodemgr checks
sudo nodemgr sysreport

Step 3: Node configuration

sudo nodemgr compliance fix --attach --moduleset mariadb.node

sudo nodemgr compliance attach --moduleset mariadb.node

sudo nodemgr compliance check --moduleset mariadb.node

sudo nodemgr compliance fix --moduleset mariadb.node

Step 4: Service deployment via agent template

docker_user [] > svar
subnet_cidr [10.0.0.0/24] >
backend_ips [10.0.0.231,10.0.0.232,10.0.0.233] >
replication_manager_img [tanji/replication-manager] >
ip_pod01 [10.0.0.229] >
mysql_root_password [mariadb] >
vip_netmask [10.0.0.1] >
ip_pod02 [10.0.0.230] >
port_r_lb [3308] >
subnet_name [spdnet] >
vip_addr [10.0.0.1] >
nodes [{nodename}] >
base_dir [/Users/{env.docker_user}/{svcname}] >
maxscale_img [tanji/maxscale:keepalived] >
port_http [10001] >
port_rw [3306] >
port_rw_split [3307] >

Step 4: Service deployment via collector

Step 4: Service deployment

sudo svcmgr -s mysvc pull

sudo svcmgr -s mysvc provision


Q&A

Thank You
