SignServer Enterprise
Cloud Edition Cluster
Configuration Guide
Print date: 2018-11-01
© 2018 PRIMEKEY
Table of Contents
Introduction
  Documentation
    Related Guides
AWS Operating Environment
  EC2
  VPC Configuration
Node Clusters
  Three Node Cluster Information
  Two Node Clusters
  High Availability
  Continuous Service Availability
Security Groups
Cluster Replication Configuration
  Replication Configuration on Node 1
  Replication Configuration on Node 2
  WildFly Configuration for Node 2
  Replication Configuration on Node 3
  WildFly Configuration for Node 3
SSL Configuration for Secure Replication
  Configure EJBCA
  Key and Certificate Generation for SSL Replication
  Galera SSL Replication Node Specific Configuration
Restarting and Verifying Cluster
  Restarting the Cluster
  Verifying Cluster Connectivity
  Restarting SignServer and Creating new TLS Certificates
Troubleshooting
Example Configuration
Introduction
This guide assists a SignServer Enterprise Cloud Edition administrator with SignServer Galera cluster configuration.
This configuration assumes that the user has procured three nodes in the AWS Marketplace following the SignServer ECE Launch Guide.
Documentation
SignServer Enterprise Cloud Edition documentation is available at:
https://download.primekey.com/docs/SignServer-Enterprise-Cloud/latest
SignServer Enterprise Edition documentation is available at:
https://download.primekey.com/docs/SignServer-Enterprise/current
Additional information on SignServer Community Edition is available at: www.signserver.org
Related Guides
SignServer ECE Launch Guide
SignServer ECE Backup Guide
AWS Operating Environment
EC2
Begin by starting three instances. In this example we will have the following SignServer Enterprise Cloud Edition nodes:
Node 1 using IP 172.16.0.202 – US East 1 – 172.16.0.0/16 address space
Node 2 using IP 172.16.0.188 – US East 1 – 172.16.0.0/16 address space
Node 3 using IP 172.31.0.115 – US East 2 – 172.31.0.0/16 address space
Two of these nodes are in US-East-1 and the third is in US-East-2. For the purposes of this guide we
are going to be using the instance ID from Node 1 as the password. You can obtain this from the EC2
console in the instance details, or run the following command:
# curl -s http://169.254.169.254/latest/meta-data/instance-id
VPC Configuration
To get the nodes to communicate, it is assumed a VPC Peering Connection is set up and in place. For assistance with configuring a VPC Peering Connection, refer to Amazon's VPC Peering Guide.
Optionally, for testing purposes, all nodes can be set up within the same VPC. This is not ideal and does not provide any availability guarantees if one of the AWS sites has an outage.
A Route Table needs to be created that allows these nodes to communicate over the Peering Connection. For more information on configuring Route Tables between VPCs, refer to Amazon's documentation on Updating Your Route Tables for a VPC Peering Connection.
A security group is also needed in each VPC. That configuration is outlined below since it pertains directly to the Galera communication.
Node Clusters
Three Node Cluster Information
The cluster implementation used for Galera replication uses regular network connectivity over the main
instance interface for all cluster communication. This means that cluster nodes don’t have to be placed
physically close to each other as long as they have good network connectivity.
However, this also means that a node cannot distinguish between a failure of another node and broken network connectivity to that node. To avoid a situation where the cluster nodes operate independently and develop diverging data sets (a split-brain situation), the cluster nodes take a vote and will cease to operate unless they are part of the majority of connected nodes. This ensures that only one data set is allowed to be updated at a time. In the case of a temporary network failure, disconnected nodes can easily synchronize their data to the majority's data set and continue to operate.
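The majority vote described above can be illustrated with a minimal sketch. This is an illustration only, not part of Galera; the node counts are hypothetical and all quorum weights are assumed equal (Galera's default):

```shell
# Illustrative sketch: a partition keeps quorum only if it holds a
# strict majority of the cluster's nodes (all weights assumed 1).
has_quorum() {
  alive=$1
  total=$2
  if [ $(( 2 * alive )) -gt "$total" ]; then
    echo "primary (quorum: ${alive}/${total})"
  else
    echo "non-primary (lost quorum: ${alive}/${total})"
  fi
}

has_quorum 2 3   # two of three nodes survive a partition -> primary
has_quorum 1 3   # an isolated node stops serving writes -> non-primary
```

This is why three nodes are recommended: with two nodes, neither side of a partition holds a strict majority.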
Two Node Clusters
Galera recommends three nodes to avoid a split-brain situation. If only two instances are chosen, which is not recommended, make sure only one of them is written to and acts as the primary while the other is kept for DR purposes. An arbitrator can also be configured to avoid split brain. For more information, refer to the Galera Documentation on Galera Arbitrator.
There is no real high availability in two node clusters. In the event that one of the nodes leaves the cluster ungracefully, it will take the database offline on the remaining node. Two node clusters are more for redundancy than availability and require manual intervention to become functional again in the event of a failure.
For more information, refer to the Galera Documentation on Two-node Clusters.
High Availability
This setup requires three or more nodes. In case of a node failure, the remaining nodes will still be
able to form a cluster through a majority quorum vote and continue to operate.
The first cluster node always has a slightly higher quorum vote than the rest of the nodes. In a setup of
an even (4 or more) number of nodes where the nodes are divided over two sites, the site that has the
first node will continue to operate if the connectivity between the sites fails.
Continuous Service Availability
To ensure that service clients always connect to an operational node in the cluster, an external load-
balancer should be used for automatic fail-over and/or load distribution.
In the case of a custom application being developed for consumption of the services provided by SignServer Enterprise Cloud Edition's external interfaces, this could also be handled by making the custom application connect to any of the nodes that is found to be operational.
If lower availability and manual interaction is acceptable in case of a node failure, this could also be
solved by redirecting a DNS name to the service.
Security Groups
Galera replication uses the following ports for communication:
3306: For MySQL client connections and State Snapshot Transfers that use the mysqldump method.
4567: For Galera Cluster replication traffic; multicast replication uses both UDP transport and TCP on this port.
4568: For Incremental State Transfer (IST).
4444: For all other State Snapshot Transfer (SST) methods.
To create a security group that allows for Galera traffic within the VPCs, follow the steps below.
In this example, the VPC internal address space is 172.16.0.0/16 in US-East-1. The address space in
US-East-2 is 172.31.0.0/16.
Create a Security Group called "All Galera Traffic" with the following rules:
This allows any connection outbound to any address, and any inbound connection on ports 3306, 4567, 4568 and 4444 from any address on the 172.16.0.0/16 and 172.31.0.0/16 subnets. The other VPC will also need the same rule configured. These rules may be tightened as required by the organization.
To apply these Security Groups to the SignServer Enterprise Cloud Edition nodes in each of the VPCs, right-click the node, select Networking, and then Change Security Groups.
Apply the security group to the instance so that it can communicate with the other nodes in the cluster by checking the box next to the line item for the security group needed.
In the node details there is a link to View Inbound Rules. The associated IPs should be something like the following (modified for your IP ranges/subnets):
Cluster Replication Configuration
The Cluster Replication configuration is covered in the following sections:
Replication Configuration on Node 1
Replication Configuration on Node 2
WildFly Configuration for Node 2
Replication Configuration on Node 3
WildFly Configuration for Node 3
Replication Configuration on Node 1
Designate the node to start with. This will be the node whose data is clustered out to the other nodes. When accessing the databases on the nodes for the first time, use their instance ID as the password. Once the data is replicated from Node 1 to the remaining nodes in the cluster, they will all use the same password as Node 1.
The MySQL configuration file is located at /etc/my.cnf.d/server.cnf. This file already has much of the configuration needed to get a cluster working. Change the cluster name as required by editing the wsrep_cluster_name="galera" value to the desired value. In this example, "signserver_cluster" is used. This value must be the same on all nodes in the cluster.
Create a backup of the system by running:
# /opt/PrimeKey/support/system_backup.sh
For more information on backing up SignServer Enterprise Cloud instances, refer to the SignServer ECE Backup Guide.
Run the following commands to ensure that the remote systems and localhost can write to the database. Change <PASSWORD> to the desired password and the "172.16.0.%" to be valid for the VPC subnet used. The "%" character is a wildcard and can be used if desired. For example, if the internal address space is "10.10.1.0/24" then "10.10.1.%" could be used.
NOTE: If this configuration is being done in more than one VPC, change the subnet space or IP address for each subnet with the commands below. Three separate statements, one for each specific node IP address in the cluster, can be created for tighter security if desired.
# mysql -u root --password=<PASSWORD> -e "GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT on *.* to 'repl_user'@'172.16.0.%' identified by '<PASSWORD>';"
NEXT LINE ONLY NEEDED WHEN USING ADDITIONAL VPCS:
# mysql -u root --password=<PASSWORD> -e "GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT on *.* to 'repl_user'@'172.31.0.%' identified by '<PASSWORD>';"
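As noted above, tighter per-node grants can be issued instead of a subnet wildcard. A minimal sketch that prints one GRANT statement per node IP (the IPs are this guide's example addresses; review the output, then pipe it to mysql to apply):

```shell
# Sketch: emit one GRANT per node IP instead of the '%' wildcard.
# To apply, pipe the output to: mysql -u root --password=<PASSWORD>
for ip in 172.16.0.202 172.16.0.188 172.31.0.115; do
  printf "GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'repl_user'@'%s' IDENTIFIED BY '<PASSWORD>';\n" "$ip"
done
```

Keeping the statements as generated text first makes it easy to review exactly which hosts will be granted access before touching the database.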
Edit the /etc/my.cnf.d/server.cnf file and look for the [galera] section, under the comment “#
Galera Cluster Configuration”. Add the following lines to the section, changing the two
“wsrep_cluster_address” IP addresses to the Node 2 and Node 3 IP addresses in the cluster,
the value for “wsrep_node_name” for Node 1 and the “wsrep_node_address” to be the IP
address for Node 1 if not already set:
wsrep_cluster_name=signserver_cluster
wsrep_cluster_address="gcomm://172.16.0.188,172.31.0.115"
wsrep_node_name=SignServerNode1
wsrep_node_address="172.16.0.202"
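A quick way to sanity-check the edited section is to grep the wsrep_ keys out of server.cnf. The sketch below runs against a temporary sample file so it is self-contained; on a real node, point it at /etc/my.cnf.d/server.cnf instead:

```shell
# Sketch: list the wsrep_ settings a node will start with.
# CNF points at a sample file here; on a node use /etc/my.cnf.d/server.cnf.
CNF=$(mktemp)
cat > "$CNF" <<'EOF'
[galera]
wsrep_cluster_name=signserver_cluster
wsrep_cluster_address="gcomm://172.16.0.188,172.31.0.115"
wsrep_node_name=SignServerNode1
wsrep_node_address="172.16.0.202"
EOF
grep '^wsrep_' "$CNF"
rm -f "$CNF"
```

Running this on each node before restarting makes it easy to spot a node that still carries another node's name or address after copy-pasting.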
Replication Configuration on Node 2
SSH into Node 2 and perform a backup:
# /opt/PrimeKey/support/system_backup.sh
Stop mysql on this node:
# service mysql stop
Edit the /etc/my.cnf.d/server.cnf file and look for the [galera] section, under the comment "# Galera Cluster Configuration". Add the following lines to the section, changing the two "wsrep_cluster_address" IP addresses to the Node 1 and Node 3 IP addresses in the cluster, the value for "wsrep_node_name" for Node 2, and the "wsrep_node_address" to be the IP address for Node 2 if not already set. Also change the wsrep_sst_auth to be the password from Node 1:
[mysqld]
wsrep_cluster_name=signserver_cluster
wsrep_cluster_address="gcomm://172.16.0.202,172.31.0.115"
wsrep_node_name=SignServerNode2
wsrep_node_address="172.16.0.188"
wsrep_sst_auth=repl_user:<PASSWORD>
WildFly Configuration for Node 2
Edit the WildFly datasource.properties file and update the password to the password used in the database:
# vim /opt/PrimeKey/wildfly_config/datasource.properties
NOTE: Change DATABASE_PASSWORD=<PASSWORD> to the password of the main node that replicated the data. In this case this is the password from Node 1.
Replication Configuration on Node 3
SSH into Node 3 and perform a backup:
# /opt/PrimeKey/support/system_backup.sh
Stop mysql on this node:
# service mysql stop
Edit the /etc/my.cnf.d/server.cnf file and look for the [galera] section, under the comment “#
Galera Cluster Configuration”. Add the following lines to the section, changing the two
“wsrep_cluster_address” IP addresses to the Node 1 and Node 2 IP addresses in the cluster,
the value for “wsrep_node_name” for Node 3 and the “wsrep_node_address” to be the IP
address for Node 3 if not already set:
[mysqld]
wsrep_cluster_name=signserver_cluster
wsrep_cluster_address="gcomm://172.16.0.202,172.16.0.188"
wsrep_node_name=SignServerNode3
wsrep_node_address="172.31.0.115"
wsrep_sst_auth=repl_user:<PASSWORD>
WildFly Configuration for Node 3
Edit the WildFly datasource.properties file and update the password to the password used in the database:
# vim /opt/PrimeKey/wildfly_config/datasource.properties
NOTE: Change DATABASE_PASSWORD=<PASSWORD> to the password of the main node that replicated the data. In this case this is the password from Node 1.
SSL Configuration for Secure Replication
This step is optional but recommended. If SSL/TLS keys and certificates for encrypted cluster traffic are not desired, skip to the section Restarting and Verifying Cluster.
To perform these steps using OpenSSL, follow the Galera guide available at: http://galeracluster.com/documentation-webpages/ssl.html
To perform the configuration using EJBCA, continue following the steps in this section.
Configure EJBCA
EJBCA needs to be configured with a certificate profile that allows the certificate permissions needed to perform SSL replication functions.
Log into the EJBCA Admin GUI and select Certificate Profiles.
Click Clone for the SslServerProfile. Enter a name for the certificate profile, for example "Galera_SSL_Profile".
Click Create from Template.
It will return back to the list of certificate profiles.
Click Edit on the Galera_SSL_Profile.
Certificates from this profile last 5 years by default. Cluster replication will stop when the certificates expire. If it is desired to have certificates that last longer, edit the "Validity or end date of the certificate" to be the desired length of time, such as "10y 1d".
In the Extended Key Usage section, multi-select (Ctrl + click) both Client Authentication and Server Authentication, and click Save.
Navigate to End Entity Profiles, select the SslServerProfile, and click Edit End Entity Profile (a new one can also be created).
In the Available Certificate Profiles section, multi-select (Ctrl + click) so that Galera_SSL_Profile is added to the list of available profiles, and click Save.
Key and Certificate Generation for SSL Replication
Create a certificate and key for the nodes to communicate with via SSL. Galera needs access to the key and certificate, so soft keys are required for this. EJBCA or OpenSSL can be used for the following steps. To use OpenSSL, refer to the Galera Clustering documentation.
SSH into your EJBCA instance and access the EJBCA CLI. Change the Subject Alternative Names in the command to the IP addresses for each of the three nodes in the cluster. Change the --caname value to be the name of the Issuing CA used, if desired.
# /opt/ejbca/bin/ejbca.sh ra addendentity --username repl_user --password "<PASSWORD>" --dn CN=localhost --altname "ipaddress=172.31.0.115, ipaddress=172.16.0.188, ipaddress=172.16.0.202" --caname "ManagementCA" --type 1 --token PEM --certprofile Galera_SSL_Profile --eeprofile SslServerProfile
The command creates the end entity with the proper fields specified, so the certificates can be used on
all nodes in the cluster.
Generate the certificate, key and CA cert onto the server for Galera to use by executing the
following:
# /opt/ejbca/bin/ejbca.sh ra setclearpwd --username repl_user <PASSWORD>
# /opt/ejbca/bin/ejbca.sh batch --username repl_user
This will output three files into the /opt/ejbca/p12/pem directory:
localhost-CA.pem: CA Certificate
localhost-Key.pem: System Key
localhost.pem: System Certificate
The certs and key are used on every node in this guide. A unique certificate can be created for each node by altering the EJBCA addendentity command above (changing the username, dn and altname values) to be unique for each node.
Copy these files to each of the SignServer nodes to be clustered. This should be done over a secure channel between the nodes, via SSH or whatever method meets the organization's security needs. Copy the files to /home/ec2-user, then move them into the appropriate position in /etc/mysql on each node.
Once the copy is completed, move the files to the proper directory and change the permissions.
# mkdir /etc/mysql
# mv /home/ec2-user/localhost* /etc/mysql
# chown -R mysql.mysql /etc/mysql/
# chmod -R o-rwx /etc/mysql/
Change the permissions of the files on all nodes they are copied to.
Galera SSL Replication Node Specific Configuration
The following step needs to be done on all three SignServer nodes. Edit the /etc/my.cnf.d/server.cnf file to add the wsrep_provider_options value with the proper certificate paths. If the paths in this document have been used, the following can be copied and pasted into the server.cnf file on each node:
# vim /etc/my.cnf.d/server.cnf
wsrep_provider_options="socket.ssl_key=/etc/mysql/localhost-Key.pem;socket.ssl_cert=/etc/mysql/localhost.pem;socket.ssl_ca=/etc/mysql/localhost-CA.pem;gcache.size=6G;gcache.page_size=512M;pc.npvo=true;"
NOTE: The wsrep_provider_options line exists in the server.cnf already and can be uncommented if all the paths and file names have remained the same as in this documentation.
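Before restarting, it can help to verify that every file the wsrep_provider_options line points at is actually in place on the node. A self-contained sketch (it builds a temporary directory standing in for /etc/mysql; on a node, set DIR=/etc/mysql instead):

```shell
# Sketch: confirm the key, cert and CA files referenced by
# wsrep_provider_options exist before restarting MySQL.
# DIR stands in for /etc/mysql in this self-contained version.
DIR=$(mktemp -d)
touch "$DIR/localhost-Key.pem" "$DIR/localhost.pem" "$DIR/localhost-CA.pem"

missing=0
for f in localhost-Key.pem localhost.pem localhost-CA.pem; do
  if [ ! -f "$DIR/$f" ]; then
    echo "missing: $DIR/$f"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "all replication SSL files present"
rm -rf "$DIR"
```

A missing or misnamed file here is a common cause of a node failing to rejoin after the SSL restart.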
Restarting and Verifying Cluster
Restarting the Cluster
If Galera is already configured, a bootstrap of the cluster is needed which will cause a brief outage.
This is best done on a new cluster. It is not possible to run some nodes with SSL and not others.
You will need to bootstrap the cluster by starting the first node differently from the rest. Use the --wsrep-new-cluster option to do that, in the following order:
[root@node3 mysql]# service mysql stop
[root@node2 mysql]# service mysql stop
[root@node1 mysql]# service mysql stop
[root@node1 mysql]# service mysql start --wsrep-new-cluster
[root@node2 mysql]# service mysql start
[root@node3 mysql]# service mysql start
Once the bootstrapping is done, restart Node 1 with a standard service start as done on the other
nodes.
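The ordering above matters: every node stops, the first node starts alone as a new cluster, and the rest join it. A dry-run sketch that just prints the sequence (node1..node3 are this guide's example hostnames; replace echo with the corresponding remote invocation to drive real nodes):

```shell
# Sketch: print the bootstrap sequence in the required order.
for node in node3 node2 node1; do
  echo "[$node] service mysql stop"
done
echo "[node1] service mysql start --wsrep-new-cluster"
for node in node2 node3; do
  echo "[$node] service mysql start"
done
```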
Verifying Cluster Connectivity
Run the following command to ensure that the cluster has all three nodes connected. This command
can be run on any node.
Make sure to change <PASSWORD> to the database cluster password:
# mysql -u root --password=<PASSWORD> -e "show status like 'wsrep_cluster_size';"
This should return a value of 3.
To see the full wsrep status use the following command:
# mysql -u root --password=<PASSWORD> -e "show global status like 'wsrep%';"
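The check above can be scripted. The sketch below parses the tab-separated rows that `mysql -e` prints for the status query; it runs against a captured sample so it is self-contained, and on a node you would substitute the real command output:

```shell
# Sketch: extract wsrep_cluster_size from `mysql -e` output and check
# it matches the expected node count. SAMPLE mimics the tab-separated
# rows from:
#   mysql -u root --password=<PASSWORD> -e "show status like 'wsrep_cluster_size';"
SAMPLE=$(printf 'Variable_name\tValue\nwsrep_cluster_size\t3\n')
size=$(printf '%s\n' "$SAMPLE" | awk '$1 == "wsrep_cluster_size" { print $2 }')
if [ "$size" = "3" ]; then
  echo "cluster healthy: $size nodes connected"
else
  echo "unexpected cluster size: ${size:-none}"
fi
```

Wrapped in a cron job or monitoring check, this gives early warning when a node has dropped out of the cluster.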
Restarting SignServer and Creating new TLS Certificates
To restart SignServer and create new TLS Certificates, follow the steps below:
SSH into each node and run the following command:
# service wildfly restart
Generate new TLS certificates on each node so they have a new certificate from the new centralized SignServer. Using the -p option will generate new certificates with the new public hostname for each node. For more detailed information, refer to the SignServer Enterprise Cloud Edition TLS Certificate Generation Guide.
Run the following command on each node:
# /opt/PrimeKey/support/new_tls_cert.sh -p
Troubleshooting
If the following error is generated:
[ERROR] mysqld: Can't create/write to file '/var/lib/mysql/tmp/ibraTjQe' (Errcode: 2 "No such file or directory")
This is because Galera tried to join the node to the cluster and failed. This can happen because of a permissions problem on Node 1. Galera cleans up all of the directories on the "joiner" node before it transfers state from the "donor" node. If that transfer does not succeed, the node will then fail to start in subsequent attempts since this directory no longer exists. Once the permissions problem is resolved, recreate the directory and change the owner to mysql with the following commands:
# mkdir /var/lib/mysql/tmp
# chown mysql.mysql /var/lib/mysql/tmp/
Example Configuration
The following displays a sample server.cnf configuration:
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html
# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# especially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
# Here are entries for some specific programs
# The following values assume you have at least 32M ram
# This was formerly known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
syslog
[mysqld]
#
# * Basic Settings
#
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
# Configure the tmp directory to reside on the data partition where we have enough space for SSTs.
# (Xtrabackup will use this variable.)
tmpdir = /var/lib/mysql/tmp
character_set_server=utf8
collation_server=utf8_unicode_ci
lc_messages_dir = /usr/share/mysql
lc_messages = en_US
skip-external-locking
wsrep_on=ON
# gcache.size is how much data we will cache and use for IST.
# If more data has been produced since the node was disconnected an SST will be triggered.
# By setting it to 16G a customer can issue about 0.5M certs before connecting a second node without SST.
# (Nothing wrong with SSTs, but during IST it is easier to follow the sync progress and it looks more user friendly.)
# pc.npvo=true: Recent primary component overrides older ones in case of conflicting prims.
# ## Can only be set in URL due to a bug: pc.wait_prim=false: The node waits for primary component forever
# pc.weight=2: Default weight for the node that can be overridden by parameters in the gcomm URL
# Uncomment this if you are setting up Galera with the default paths in the SignServer Cloud Galera Clustering Guide, otherwise add your own:
#wsrep_provider_options="socket.ssl_key=/etc/mysql/localhost-Key.pem;socket.ssl_cert=/etc/mysql/localhost.pem;socket.ssl_ca=/etc/mysql/localhost-CA.pem;gcache.size=6G;gcache.page_size=512M;pc.npvo=true;"
# Galera Cluster Configuration
wsrep_cluster_name=signserver_cluster
#wsrep_cluster_address="gcomm://"
wsrep_cluster_address="gcomm://172.16.0.103"
wsrep_node_name=SignServerNode1
wsrep_node_address="172.16.0.36"
# Galera Synchronization Configuration
wsrep_sst_auth=repl_user:i-04811bfcfa454383e
wsrep_sst_method=mariabackup
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address = 127.0.0.1
bind-address = 0.0.0.0
#
# * Fine Tuning
#
max_connections = 200
connect_timeout = 5
# The number of seconds the server waits for activity on a noninteractive connection before closing it. (Default 28800s)
wait_timeout = 3600
max_allowed_packet = 256M
thread_cache_size = 128
sort_buffer_size = 4M
bulk_insert_buffer_size = 16M
tmp_table_size = 32M
max_heap_table_size = 32M
# Turn off reverse DNS lookup of clients
skip-name-resolve
#
# * MyISAM
#
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched. On error, make copy and try a repair.
myisam_recover = BACKUP
key_buffer_size = 32M
#open-files-limit = 2000
table_open_cache = 400
myisam_sort_buffer_size = 128M
concurrent_insert = 2
read_buffer_size = 2M
read_rnd_buffer_size = 1M
#
# * Query Cache Configuration
#
# Disable the query cache to avoid locking
query_cache_limit=0
query_cache_type=0
query_cache_size=0
# * Logging and Replication
#
# Both locations get rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
general_log_file = /var/log/mysql/mysql.log
general_log = 0
#
#
# Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
#
# we do want to know about network errors and such
log_warnings = 2
#
# Disable slow log
slow_query_log=0
#slow_query_log_file = /var/log/mysql/mariadb-slow.log
#long_query_time = 10
#log_slow_rate_limit = 1000
#log_slow_verbosity = query_plan
#log-queries-not-using-indexes
#log_slow_admin_statements
# Disable bin-logging, since we don't use regular replication
sync_binlog = 0
#log_bin = /var/lib/mysql/mariadb-bin
#log_bin_index = /var/lib/mysql/mariadb-bin.index
#expire_logs_days = 10
#max_binlog_size = 100M
# ROW is required by Galera (intercepts binlogs, but binlogs do not have to be written to disk)
binlog_format=ROW
# * InnoDB
default_storage_engine = InnoDB
innodb_buffer_pool_size=2G
# Use one pool per GiB of 'innodb_buffer_pool_size'
innodb_buffer_pool_instances = 2
# If SHOW GLOBAL STATUS LIKE 'innodb_log_waits'; starts returning a non-zero value,
# transactions are too large to fit in the innodb_log_buffer_size and use disk IO.
innodb_log_buffer_size = 32M
# Recommended to be 25% of innodb_buffer_pool_size. A larger file however means slower recovery.
# Changing this value requires a delete of the old files while shut down.
# (sudo rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1)
innodb_log_file_size=512M
innodb_file_per_table = 1
innodb_open_files = 400
innodb_io_capacity = 400
innodb_flush_method = O_DIRECT
# Always flush it to Galera
innodb_flush_log_at_trx_commit=1
# Parallel slave thread processing requires the following settings:
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
# Galera Provider Configuration
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
# Number of threads on the "slave" side applying incoming data.
# wsrep_slave_threads = 4*number of cores
wsrep_slave_threads=16
# In case of conflict during the full state transfer, overwrite the slave
slave_exec_mode=IDEMPOTENT
[mysqldump]
quick
quote-names
max_allowed_packet = 256M
[mysql]
#no-auto-rehash # faster start of mysql but no tab completion
[isamchk]
key_buffer = 16M
# this is only for embedded server
[embedded]
# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
# This group is only read by MariaDB-10.1 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.1]