EDB™ Postgres Containers and Integration with OpenShift
Version 1.0
October 17, 2017
Copyright © 2017 EnterpriseDB Corporation. All rights reserved.
EDB Postgres Containers and Integration with OpenShift, Version 1.0 by EnterpriseDB® Corporation
Copyright © 2017 EnterpriseDB Corporation. All rights reserved.
EnterpriseDB Corporation, 34 Crosby Drive, Suite 100, Bedford, MA 01730, USA
T +1 781 357 3390 F +1 978 589 5701 E [email protected] www.enterprisedb.com
Table of Contents
1 Introduction
  1.1 Typographical Conventions Used in this Guide
2 Installing an Advanced Server Container Deployment
  2.1 Creating an Advanced Server 9.5 Container
  2.2 Customizing a Container Deployment
    2.2.1 Accessing a Repository
    2.2.2 Defining a Volume
    2.2.3 Deploying a Container Image
  2.3 Removing a Project
    2.3.1 Retaining a Project with No Pods
3 Using the OpenShift Console
  3.1 Scaling an Advanced Server Deployment
  3.2 Connecting with the psql Client
    3.3.2 File Locations
  3.3 Creating a Custom Configuration within a Pod
  3.4 Performing a Rolling Update
4 Managing a Container at the Command Line
  4.1 Deploying a Container with a Docker Command
    4.1.1 Specifying Container Preferences in an Environment File
    4.1.2 Using Docker to Connect to an Advanced Server Container
  4.2 Deploying and Managing an Advanced Server Container at the Atomic Command Line
    4.2.1 Specifying Container Preferences in an Environment File
    4.2.2 Using the Atomic Command Line to Stop or Uninstall a Container
    4.2.3 Supported LABELS - Reference
1 Introduction
EDB™ Postgres Platform for Containers allows you to use a Docker-formatted container
to deploy and manage EDB Postgres Advanced Server (Advanced Server) in a Red Hat
OpenShift environment. OpenShift automation provides an environment in which you
can easily:
- Deploy or disable Advanced Server instances as needed.
- Automatically scale an Advanced Server instance to meet application requirements.
- Easily ensure Failover Manager protection for your data.
- Utilize load balancing to distribute read/write requests across available servers.
- Manage Advanced Server instances with custom configurations in a container environment.
The EDB Postgres Platform for Containers automates the deployment of containers that
include Advanced Server and the following supporting components:
- EDB Failover Manager
- pgPool (connection pooling for Postgres databases)
The EDB Postgres Platform for Containers also automates the deployment of Docker
containers that install the EDB Postgres Backup and Recovery Tool (BART). BART
provides simplified backup and recovery management for Advanced Server.
For detailed information and documentation for each component, please visit the
EnterpriseDB website at:
http://www.enterprisedb.com/products-services-training/products/documentation
1.1 Typographical Conventions Used in this Guide
Certain typographical conventions are used in this manual to clarify the meaning and
usage of various commands, statements, programs, examples, etc. This section provides a
summary of these conventions.
In the following descriptions, a term refers to any word or group of words that may be language keywords, user-supplied values, literals, etc. A term's exact meaning depends upon the context in which it is used.
- Italic font introduces a new term, typically in the sentence that defines it for the first time.
- Fixed-width (mono-spaced) font is used for terms that must be given literally, such as SQL commands, specific table and column names used in the examples, programming language keywords, etc. For example, SELECT * FROM emp;
- Italic fixed-width font is used for terms for which the user must substitute values in actual usage. For example, DELETE FROM table_name;
- A vertical pipe | denotes a choice between the terms on either side of the pipe. A vertical pipe is used to separate two or more alternative terms within square brackets (optional choices) or braces (one mandatory choice).
- Square brackets [ ] denote that one or none of the enclosed terms may be substituted. For example, [ a | b ] means choose one of "a" or "b", or neither of the two.
- Braces {} denote that exactly one of the enclosed alternatives must be specified. For example, { a | b } means exactly one of "a" or "b" must be specified.
- Ellipses ... denote that the preceding term may be repeated. For example, [ a | b ] ... means that you may have the sequence "b a a b a".
2 Installing an Advanced Server Container Deployment
Advanced Server containers are supported on OpenShift Origin version 3.0 or later. For
information about obtaining and installing OpenShift, please visit:
https://www.openshift.com/
2.1 Creating an Advanced Server 9.5 Container
The following quick-start tutorial walks you through the process of deploying an Advanced Server 9.5 container. Please see Section 2.2 for detailed deployment instructions if you wish to:
- Use the deploy.sh script to remove the project.
- Use a local repository when deploying the container.
- Install the EDB Postgres Backup and Recovery Tool (BART).
Step 1 – Obtain Repository Credentials and Registry Access
To deploy an Advanced Server container you must have access credentials to the GitHub
repository at:
https://github.com/EnterpriseDB/container_metadata
You must also be able to access the Docker registry at:
containers.enterprisedb.com
If you require access to one or both, please contact EnterpriseDB at:
http://www.enterprisedb.com/general-inquiry-form
Step 2 – Create an NFS Shared Volume
A system administrator must define a volume on the master node that can be shared via NFS and that allows read and write operations; the default mountpoint is /volumes/edb-95.
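As a sketch of what the NFS share might look like on the master node (the client subnet and export options shown here are assumptions that a system administrator should adapt to the local network):

```
# /etc/exports (sketch) -- share the default mountpoint read-write.
# 192.168.0.0/24 is a placeholder for the subnet of your OpenShift nodes.
/volumes/edb-95  192.168.0.0/24(rw,sync,no_root_squash)
```

After editing /etc/exports, apply the change with exportfs -ra.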
Step 3 – Download the .yaml Files
After obtaining the required credentials, connect to the GitHub repository, navigate to the
9.5 folder and download the following files:
- edb-bart.yaml
- persistent-volume-claim.yaml
- persistent-volume.yaml
- ppas95-persistent.yaml
Step 4 – Deploy the Container Image
Login to the Docker registry:
docker login containers.enterprisedb.com
When prompted, enter the user name and password provided by EnterpriseDB.
Pull the EDB Advanced Server 9.5 container image from the Docker registry and tag it:
docker pull containers.enterprisedb.com/edb/edb-as:9.5
docker tag containers.enterprisedb.com/edb/edb-as:9.5 edb-as:9.5
Step 5 – Configure OpenShift
The following tasks assume that you are logged into OpenShift and have the necessary
privileges.
Create a new OpenShift project with the name ppas-95:
oc new-project ppas-95
Define a persistent volume and a persistent volume claim using the sample .yaml files:
oc create -f persistent-volume.yaml
oc create -f persistent-volume-claim.yaml
Use the ppas95-persistent.yaml file to create a template:
oc create -f ppas95-persistent.yaml
2.2 Customizing a Container Deployment
The deploy.sh script creates a container with advanced management features (including simplified container removal and template creation); to review a complete list of the deploy.sh options, see Section 2.2.3. You can use deploy.sh with a local or Docker-hosted repository to create a customized installation that includes Advanced Server and supporting components.
To customize a container deployment you must:
1. Ensure that your host has access to a Docker-hosted or a local repository.
2. Define a volume on the container host that contains the deployment files.
3. Use the deploy.sh script to specify custom options for your container.
After deploying your container, you will be ready to use the OpenShift console or a
command line client to manage the deployment.
2.2.1 Accessing a Repository
You can use images from the Docker repository or a local repository when creating a
container.
Logging in to a Repository
Before deploying a container, you must use the docker login command to log in to either the EnterpriseDB repository (containers.enterprisedb.com) or the Red Hat repository (registry.connect.redhat.com). Use the following command to log in:
docker login registry_address [-u username] [-p password]
Where:
registry_address is the address of the registry you wish to use.
username is the user name you use to log in to the registry.
password is the password associated with the user account.
Please note: If you do not provide a user name and password, you will be prompted for
credentials when you log in. For connection credentials to the EnterpriseDB repository,
please contact EnterpriseDB at:
https://www.enterprisedb.com/general-inquiry-form
Creating a Local Repository
To create a local repository that contains EDB Postgres images, you must create a local
Docker registry. For information about creating a local Docker registry, please see the
Docker documentation at:
https://docs.docker.com/registry/deploying/#storage-customization
You can use the deploy.sh script to start the registry; when invoking the script, include the -sr switch. By default, the -sr switch will start a registry on localhost:5000:
deploy.sh -sr
Then, use the deploy.sh script to push an image to the repository; use the command:
./deploy.sh -c component -rp repository -dc -it 9.5 -lr local_repo:port -pi
Where:
component specifies the name of the component you wish to push to your local
repository.
repository specifies the repository from which you will obtain images.
local_repo specifies the address of your local repository, and port specifies
the port that will be used by the repository.
For example, the following command will push the edb-as image from the EnterpriseDB repository to a local repository (localhost:5000):
./deploy.sh -c edb-as -rp containers.enterprisedb.com -dc -it 9.5 -lr localhost:5000 -pi
To review more options for the deploy.sh script, please see Section 2.2.3.
2.2.2 Defining a Volume
Before using the OpenShift console, an administrative OpenShift user must define a
persistent volume and a persistent volume claim. The definitions should contain
information that will be passed to the OpenShift PersistentVolume API and the
OpenShift PersistentVolumeClaim API when creating your project.
The installer provides the following sample files that you can refer to when defining the
persistent volume and persistent volume claim:
- persistent-volume-claim.yaml
- persistent-volume.yaml
The installation files provide a script (deploy.sh) that you can use to deploy an
Advanced Server container and create a container template.
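The sample files shipped in the repository are authoritative; purely for orientation, a minimal NFS-backed persistent volume and matching claim might look like the following sketch, in which the object names, capacity, and NFS server address are placeholder assumptions:

```yaml
# persistent-volume.yaml (sketch) -- an NFS-backed volume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: edb-data-pv          # placeholder name
spec:
  capacity:
    storage: 10Gi            # placeholder size
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.10     # placeholder NFS server address
    path: /volumes/edb-95
---
# persistent-volume-claim.yaml (sketch) -- a claim against that volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edb-data-pvc         # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```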
2.2.3 Deploying a Container Image
Before deploying a container, you must have logged in to a Docker registry and identified
an NFS mount point that will be used to store the data files, log files, and any supporting
files for your deployments. After identifying the NFS mount point, you can use the
deploy.sh script to deploy a container.
The command to deploy a container takes the form:
deploy.sh -c component_name -rp registry -dc [-it]
Where:
component_name specifies the name of the component. Specify:
- edb-as to deploy EDB Postgres Advanced Server
- edb-bart to deploy EDB Postgres Backup and Recovery Tool
registry specifies the name of the registry you wish to use; please note that you must
log in to the registry with the docker login command before deploying an image.
Include the -it flag to specify the component version that will be deployed. You can specify:
- a major version number to deploy the most recent update available for a major release; for example, -it 9.5.
- a specific version number to deploy that exact version; for example, -it 9.5.6.11-1.
- latest to deploy the most recent version of the newest major version of the product available in container form; for example, -it latest.
Please note that if you specify latest, you must update your .yaml files to identify the deployed version, and that the most recent major version available may change without notice.
For example, the following command deploys a container that contains the most recent
version of Advanced Server 9.5:
deploy.sh -c edb-as -rp containers.enterprisedb.com -dc -it 9.5
After deploying the container image, you can use the deploy.sh script to create a
template that allows you to use the OpenShift console. Include the -ct option to create a
template:
deploy.sh -c edb-as -ct
After deploying the container and creating the template, you can use your web browser to
connect to the OpenShift environment, and create Advanced Server pods.
The deploy.sh script includes options that allow you to manage your EDB container or
specify details that will be incorporated into your project. Include the -h option when
invoking the deploy.sh script to review a complete list of the supported options:
deploy.sh -h
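The deployment commands in this section follow a common pattern. The following sketch composes such a command from shell variables; it only prints the command rather than invoking deploy.sh, and the default values shown are this guide's examples, not fixed requirements.

```shell
#!/bin/sh
# Compose (but do not run) a deploy.sh invocation from a few variables.
build_deploy_cmd() {
  component="${1:-edb-as}"                      # component to deploy
  registry="${2:-containers.enterprisedb.com}"  # registry prefix
  tag="${3:-9.5}"                               # image tag
  printf 'deploy.sh -c %s -rp %s -dc -it %s\n' \
    "$component" "$registry" "$tag"
}
# prints: deploy.sh -c edb-as -rp containers.enterprisedb.com -dc -it 9.5
build_deploy_cmd edb-as containers.enterprisedb.com 9.5
```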
Command Line Options for deploy.sh
You can include the following options with the call to deploy.sh to specify your
preferences:
-c | --components
    Include the -c or --components option and a comma-separated list of components that you wish to deploy. The default value is edb-as,edb-bart.
-ct | --create-template
    Include the -ct or --create-template option to create a template.
-dc | --deploy-container
    Include the -dc or --deploy-container option to deploy a container.
-dn | --dbname
    Include the -dn or --dbname option and the name of a database that you wish to remove. The default value is edb.
-ft | --force-tagging
    The -ft or --force-tagging option is deprecated, but is supported for backward compatibility with Docker.
-h | --help
    Include the -h or --help option to display help for the deploy.sh script.
-it | --image-tag
    Include the -it or --image-tag option to specify the name of an image. The default is latest.
-lr | --local-registry
    Include the -lr or --local-registry option to identify a local registry for your container; this option will work with the -pi option. The default value is localhost:5000.
-pi | --push-image
    Include the -pi or --push-image option to push an image to a local repository.
-p | --project
    Include the -p or --project option and the project name to specify a name for your project.
-r | --remove
    Include the -r or --remove option and the name of one or more components you wish to remove in a comma-separated list. The value may include ppas or edb-bart.
-rn | --registry-namespace
    Include the -rn or --registry-namespace option to specify a registry namespace. The default is edb.
-rp | --registry-prefix
    Include the -rp or --registry-prefix option to specify the name of a registry. For example, containers.enterprisedb.com.
-sr | --startregistry
    Include the -sr or --startregistry option to start the registry.
-u | --update
    Include the -u or --update option and the name of the component that you wish to update. The value may include ppas or edb-bart.
2.3 Removing a Project
You can use the deploy.sh script to remove a ppas container project. When invoking
the deploy.sh script, include the -r option, and specify the installed component, the
project name, and the database name:
deploy.sh -r component -p project_name -dn db_name
Where:
component specifies the component name.
project_name specifies the name of the project.
db_name specifies the database name.
For example, the following command removes an Advanced Server 9.5 project, with a
database name of acctg:
deploy.sh -r ppas -p ppas95 -dn acctg
Please note that removing the project does not delete the data files.
If you use the OpenShift console or the OpenShift command line (the oc delete
command) to remove a project, you must manually remove the
/volumes/edb/project_name/.db_name-master file before launching another
pod with the same component and version.
2.3.1 Retaining a Project with No Pods
If you scale a cluster down to 0 pods, but retain the project for later use, you must
manually remove the /volumes/edb/project_name/.db_name-master file before
adding a pod to the cluster.
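The manual cleanup described above can be sketched as follows. The example substitutes a scratch directory for /volumes/edb so it can be run safely anywhere; the project and database names are placeholders for your own values.

```shell
#!/bin/sh
# Remove the marker file left behind by a previous master pod.
# VOLROOT stands in for /volumes/edb; PROJECT and DBNAME are placeholders.
VOLROOT="$(mktemp -d)"
PROJECT="ppas95"
DBNAME="acctg"
mkdir -p "$VOLROOT/$PROJECT"
touch "$VOLROOT/$PROJECT/.$DBNAME-master"   # simulate the leftover marker
rm -f "$VOLROOT/$PROJECT/.$DBNAME-master"   # required before adding a pod
```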
3 Using the OpenShift Console
After using the deployment script to create a template, you can use the OpenShift console
to create and manage Advanced Server projects. To create a project, open your web
browser, and navigate to the connection address of your OpenShift console (by default,
https://10.1.2.2:8443/console).
Provide your OpenShift connection credentials and click the Log In button to connect.
When you've successfully authenticated with OpenShift, the console displays the
Projects page (see Figure 3.1).
Figure 3.1 – The OpenShift console Projects page.
Select your project (for example, ppas-95) from the Projects list; the OpenShift console will
navigate to the project management page (see Figure 3.2).
Figure 3.2 – The OpenShift console project management page.
Click the Add to Project button to open the Select Image or Template page (see
Figure 3.3).
Figure 3.3 – The OpenShift console Select Image or Template page.
Select the button that is labeled with the name of the Advanced Server template. The
OpenShift console opens a page that allows you to specify details for your Advanced
Server deployment (see Figure 3.4).
Figure 3.4 – The OpenShift project Parameters page.
Use the fields displayed under the Parameter heading to provide installation details for
the deployment. The details provided will be used during pod initialization; it is
important to note that password changes are not allowed after a pod has been initialized.
- Use the Database Name field to provide the name of the database that will be created when the database cluster is initialized.
- Use the Default database user field to specify the name of a database superuser that will be created when the database is initialized; by default, the database superuser is named enterprisedb.
  - If you accept the default (enterprisedb), the user will be associated with the password provided in the EnterpriseDB Password field.
  - If you specify an alternate name for the database superuser, the user will be associated with the password provided in the Password for default database user field.
- Use the Password for default database user field to specify the password associated with the database superuser named in the Default database user field if you specify a user name other than enterprisedb. Please note that this password should not be changed after the pod is initialized.
- During the installation process, the container creates a database superuser named enterprisedb. Use the EnterpriseDB Password field to provide the password associated with the default database superuser (enterprisedb). Please note that this password should not be changed after the pod is initialized.
- Use the Repl user field to specify the name of the replication user; the default name is repl.
- Use the Repl Password field to specify a password for the replication user; if you do not provide a password, a password will be generated by the server.
- Use the Locale field to specify the locale that will be used by the cluster; by default, the cluster uses the system locale.
- Use the Host Cleanup Schedule field to specify the execution schedule for a cleanup script. The cleanup script will review the data directories and mark for deletion any directory that has not been used in the last 24 hours. If you do not provide a value in this field, the cleanup script will not execute.
- Use the Yum Repository URL field to specify the connection properties for the EnterpriseDB yum repository.
  - Replace <username> with the name of a user with connection privileges to the repository.
  - Replace <password> with the password associated with the user name.
- Use the Email field to provide the email address that will receive any notifications sent by Failover Manager. For detailed information about Failover Manager event notifications, please see the EDB Postgres Failover Manager Guide, available at:
  https://www.enterprisedb.com/resources/product-documentation
- Use the EFM Product Key field to provide the Failover Manager product key. For more information about Failover Manager licenses, visit the product website at:
  https://www.enterprisedb.com/products/edb-postgres-platform/edb-postgres-failover-manager
- Use the Name Server for Email parameter to provide the identity of a name server that will be used for notifications from Failover Manager.
- Use the Persistent Volume field to specify the name of the persistent volume definition file.
- Use the Persistent Volume Claim field to specify the name of the persistent volume claim definition file.
When you've completed the Parameters dialog, click the Create button to deploy an
Advanced Server project.
Figure 3.5 – Continue to the project overview.
When the OpenShift console acknowledges that the application has been created, click the Continue to overview banner (see Figure 3.5).
Figure 3.6 – The template objects have been created successfully.
OpenShift confirms that all of the items described in the template have been created
before deploying Advanced Server pods (see Figure 3.6).
By default, an Advanced Server deployment will consist of four pods, with EDB Failover Manager protection enabled. If you wish to disable failover protection and spin up a pod with a single replica, you can modify the ppas95-persistent.yaml file, setting the value for replicas to 1.
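For illustration only — the exact location of the replicas setting depends on the structure of the template, so treat this as a hypothetical excerpt rather than the file's actual layout:

```yaml
# Hypothetical excerpt from ppas95-persistent.yaml: a single replica,
# which disables Failover Manager protection.
spec:
  replicas: 1
```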
Figure 3.7 – The pods are being deployed.
As OpenShift spins up the pod, the progress indicator displayed on the Overview will
change from light blue to darker blue (see Figure 3.7).
Figure 3.8 – The cluster is ready for use.
When the progress indicator is solid blue and indicates that 4 pods have been created,
Advanced Server is ready for use (see Figure 3.8).
Failover Manager will send email notifications to the address specified when you
configured your project, keeping you informed of the state of your pods (see Figure 3.9).
Figure 3.9 – Email notifications from Failover Manager.
You can use Failover Manager notifications to easily identify the Master node of your
replication scenario. The Subject line identifies each node in the cluster as a Master or
Standby. Locate the Master agent, and then compare the address shown in the From
column of your email to the names in the Pods list (accessed via the Browse menu) in
the OpenShift console to identify the Master node (see Figure 3.10).
Figure 3.10 – A list of pods, displaying the pod names.
3.1 Scaling an Advanced Server Deployment
The default configuration of EDB Postgres Advanced Server for OpenShift uses EDB
Postgres Failover Manager to ensure high-availability for your deployment. If a pod
fails, Failover Manager detects the failure, and replaces the pod with a running node. If
the failed node is the master in your replication scenario, Failover Manager promotes a
standby node to the role of master before adding a replacement standby to the scenario.
For detailed information about Failover Manager, please visit the EnterpriseDB website
at:
http://www.enterprisedb.com/products/edb-failover-manager
To prevent disruptions in Failover Manager monitoring, an Advanced Server deployment
must have at least four pods; by default, each new Advanced Server project will have
four pods.
Figure 3.11 – Use the arrows to the right of the blue circle to scale a deployment.
Please note: by default, the container environment will support up to 9 pods; to support
10 or more pods, you must modify the server configuration. For detailed information, see
Section 3.3, Creating a Custom Configuration within a Pod.
Manually Scaling a Pod
You can use the up arrow (to the right of the blue circle) to add new pods to your
deployment when processing requirements are higher, or use the down arrow to remove
unneeded pods from the deployment when processing requirements are light (see Figure
3.11). Please note that when removing a pod from your deployment, OpenShift may
remove the master node in your replication scenario. If Failover Manager is enabled, and
the master node is removed during scaling, a standby node will be promoted to the role of
master.
If you plan to remove multiple pods from a deployment, you should allow time for each
pod to be completely removed before removing each additional pod to avoid interfering
with Failover Manager protection.
3.2 Connecting with the psql Client
Connections to Advanced Server and supporting components are managed by OpenShift;
for more information about managing and connecting to an OpenShift pod, refer to the
OpenShift documentation at:
https://docs.openshift.com/enterprise/3.0/welcome/index.html
OpenShift documentation is also easily accessed through the Documentation link in the
upper-right corner of the OpenShift console.
You can use the OpenShift console Terminal to open a psql client to connect to
Advanced Server and query the server directly. To connect with psql, select Pods from
the Browse menu.
Figure 3.12 – The list of available pods.
When the Pods dialog opens (see Figure 3.12), click the name of a pod to access detailed
information about the selected pod.
Figure 3.13 – Detailed information about a pod
To open the psql client, select the Terminal tab (see Figure 3.13).
Figure 3.14 – Connecting with the psql client on the Terminal tab
When the Terminal tab opens (as shown in Figure 3.14), use the psql command to
open the client. A psql command takes the following form:
psql -d database_name -U user_name
Where:
database_name specifies the name of the database to which you wish to
connect.
user_name specifies the name of the connecting user.
For detailed information about using the psql client, see the PostgreSQL online
documentation at:
https://www.postgresql.org/docs/9.6/static/app-psql.html
3.3.2 File Locations
By default, Advanced Server files are located in the directories listed in the table below:
EDBAS Component        Path to Installation Directory
Data directory         /edbvolume/9.x/pgdata/HOSTNAME
postgresql.conf file   /edbvolume/9.x/conf/edb/postgresql.conf
pg_hba.conf file       /edbvolume/9.x/conf/edb/pg_hba.conf
Executables            /usr/ppas-9.x/bin
Libraries              /usr/ppas-9.x/lib64
Contrib                /usr/ppas-9.x/share/contrib
3.3 Creating a Custom Configuration within a Pod
Configuration files on an Advanced Server installation determine user authentication
methods and server preferences for that server. You can customize your installation by
modifying those files, and then replacing the instances within your project with new pods
that use the modified configuration files.
Parameter changes made at the command line will not be replicated to standby servers;
you must save modifications to the configuration file of the master node of the replication
scenario. By default, Advanced Server files are located in the directories listed in the
table below:
EDBAS Component        Path to Installation Directory
postgresql.conf file   /edbvolume/9.x/conf/edb/postgresql.conf
pg_hba.conf file       /edbvolume/9.x/conf/edb/pg_hba.conf
To create a custom Advanced Server pod:
1. Use a standard template to create an Advanced Server pod.
2. Use your choice of editor to modify the appropriate configuration files. By default, configuration files are located in /edbvolume/9.x/conf/db_name.
3. Use the OpenShift console to add new pods to the cluster; as each new pod is
added, the new pod will use the customized configuration file.
4. Remove pods that were instantiated with old configuration files (including the
original master node of the replication scenario).
Please note: To preserve the integrity of your Failover Manager scenario, you should not let the total pod count of the deployment drop below four while replacing pods that use the old configuration files with pods that use the new configuration files.
The default setting for the max_wal_senders parameter (in the postgresql.conf file)
is 10; this allows you to deploy up to 9 pods. If you would like to create more than 9
pods, you must adjust the value of max_wal_senders to at least 1 greater than the
number of pods you would like to create. Please note that the value of
max_connections must be greater than the value of max_wal_senders.
For detailed information about customizing the postgresql.conf file, please refer to
the Postgres core documentation, available at:
https://www.postgresql.org/docs/9.6/static/config-setting.html#CONFIG-SETTING-CONFIGURATION-FILE
For detailed information about customizing the pg_hba.conf file, please refer to the
Postgres core documentation, available at:
https://www.postgresql.org/docs/9.6/static/auth-pg-hba-conf.html
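As a hedged illustration of the scaling note above, the following shell sketch edits a throwaway copy of postgresql.conf. It is not part of the product: in a real pod you would point CONF at the shared configuration file (for example /edbvolume/9.x/conf/edb/postgresql.conf) rather than creating a demo file.

```shell
# Sketch only: raise max_wal_senders so a deployment can grow beyond 9 pods.
# A demo postgresql.conf is created here; in a pod, set CONF to the shared
# file (e.g. /edbvolume/9.x/conf/edb/postgresql.conf) instead.
CONF=./postgresql.conf
printf '%s\n' '#max_wal_senders = 10' 'max_connections = 100' > "$CONF"

# For 10 pods, set max_wal_senders to at least 11 (desired pod count + 1);
# max_connections (100) remains greater than max_wal_senders, as required.
sed -i 's/^#\{0,1\}max_wal_senders[[:space:]]*=.*/max_wal_senders = 11/' "$CONF"
grep '^max_wal_senders' "$CONF"   # -> max_wal_senders = 11
```

Note that max_wal_senders takes effect only on server start; in the container scenario described here, new pods pick up the customized file as they are initialized.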
3.4 Performing a Rolling Update
When an updated version of Advanced Server becomes available, you can use a rolling
update technique to upgrade your cluster. EnterpriseDB's Docker repository will always
make available the most recent version of the server. To update the server version used in
your deployment:
1. Use the OpenShift console to add new pods to the cluster; as each new pod is
added, the new pod will use the updated server version.
2. Remove pods that were instantiated using the old server version (including the
original master node of the replication scenario).
Please note: To preserve the integrity of your Failover Manager scenario, you should not
let the total pod count of the deployment drop below four when performing a rolling
update.
4 Managing a Container at the Command Line
You can use the Docker or Atomic command line client to deploy, manage, and use an
Advanced Server container. We recommend including the following docker command
options when using the command line:
-d
The -d option runs the container in detached mode (in the background). This
option is mandatory when invoking an atomic command, but optional for docker
commands.
--privileged
The --privileged option may be required if local security settings do not
permit mounting a local volume with read-write options. As an alternative, we
recommend allowing your security settings to permit the container to have read-
write access to the mounted volume. If applicable, adjust your SELinux settings
to allow access.
--restart=always
This option configures your container to automatically restart.
For more information about docker commands and command options, please see the
documentation at:
https://docs.docker.com/engine/reference/commandline/docker/
4.1 Deploying a Container with a Docker Command
To use the Docker command line to create a container deployment of Advanced Server
that enables Failover Manager, you must invoke the docker run command four times,
creating a master node, and three standby nodes. The first docker run command will
create a master node. Each subsequent docker run command will create a standby
node.
You can specify environment variables at the command line (when invoking the docker
run command), or in a file located in the network mountpoint for the container. The
environment file for:
EDB Postgres Advanced Server is named atomic-env.sh.
EDB Postgres Backup and Recovery Tool is named atomic-bart-env.sh.
For example, a docker run command that deploys an EDB Postgres Advanced Server
container with environment variables specified in the atomic-env.sh file takes the form:
docker run --name node_name -v network_mountpoint
Where:
node_name is the name of the node.
network_mountpoint is the volume in which the container will reside.
You can include other docker run command options when deploying a container;
consult the docker documentation for a complete list of options for the command:
https://docs.docker.com/engine/reference/commandline/run/
You can also specify the options and environment variables at the command line when
deploying a container. The following command uses the docker command line client to
create a container:
docker run --restart option
--name node_name
-v network_mountpoint -e DATABASE_NAME="database_name"
-e PGPORT="as_listener_port"
-e DATABASE_USER="db_user_name"
-e DATABASE_USER_PASSWORD="db_user_password"
-e ENTERPRISEDB_PASSWORD="user_password"
-e REPL_USER="repl_user_name"
-e REPL_PASSWORD="repl_user_password"
-e LOCALEPARAMETER="locale"
-e MASTER_HOST="none|`docker inspect -f {{.NetworkSettings.IPAddress}} master`"
-e RESTORE_FILE="path_to_restore_file"
-e NAMESERVER="nameserver 8.8.8.8"
-e CLEANUP_SCHEDULE="0:0:*:*:*"
-e EFM_EMAIL="[email protected]"
-d edb-as:95
Where:
Include the --restart option to specify your restart preferences for the
container. Please refer to the docker documentation for details about the
supported options.
Include the --name node_name option to specify the name of the replication
cluster node. For example, master, replica1, replica2, and replica3
might identify the nodes of a cluster.
Include the -v network_mountpoint option to specify the volume in which
the container will reside.
Use the DATABASE_NAME="database_name" environment variable to specify
the name of the Advanced Server database.
Use the PGPORT="as_listener_port" environment variable to specify the
listener port of the Advanced Server database (by default, 5444).
Use the DATABASE_USER="db_user_name" environment variable to specify
the name of a database superuser that will be created when the database is
initialized; by default, the database superuser is named enterprisedb.
If you specify the default (enterprisedb), the user will be associated
with the password provided in the ENTERPRISEDB_PASSWORD environment
variable.
If you specify an alternate name for the database superuser, the user will
be associated with the password provided in the DATABASE_USER_PASSWORD
environment variable.
Use the DATABASE_USER_PASSWORD="db_user_password" environment
variable to specify the password associated with the database superuser if you
specify a db_user_name other than enterprisedb. Please note that this
password should not be changed after the pod is initialized.
Use the ENTERPRISEDB_PASSWORD="user_password" environment
variable to specify the password associated with the default database superuser
(enterprisedb). During the installation process, the container creates a
database superuser named enterprisedb. Please note that this password should
not be changed after the pod is initialized.
Use the REPL_USER="repl_user_name" environment variable to specify the
name of the Postgres streaming replication user.
Use the REPL_PASSWORD="repl_user_password" environment variable to
specify the password associated with the replication user.
Use the LOCALEPARAMETER="locale" environment variable to specify the
locale that will be used by the container.
Use the MASTER_HOST environment variable to indicate if the node is a master or
a standby.
Specify none if the node is the master node within the replication cluster.
Specify MASTER_HOST=`docker inspect -f
{{.NetworkSettings.IPAddress}} master` if the node is a
standby node within the replication cluster. The clause uses a call to
docker inspect to retrieve the address of the master node of the cluster.
Please note: by default, the container environment will support up to 9
pods; to support 10 or more pods, you must modify the server
configuration. For detailed information, see Section 3.3, Creating a
Custom Configuration within a Pod.
Use the RESTORE_FILE="path_to_restore_file" environment variable to
specify the complete path to the restore file for the cluster.
Use the NAMESERVER="nameserver 8.8.8.8" environment variable to
specify the identity of a name server that will be used for notifications from
Failover Manager.
Use the CLEANUP_SCHEDULE="0:0:*:*:*" environment variable to provide
an execution schedule for a cleanup script. The cleanup script will review the
data directories, and mark any directory for deletion that has not been used in the
last 24 hours. Specify the value in a cron format; if you do not provide a value,
the cleanup script will not execute.
Use the EFM_EMAIL="[email protected]" environment variable to specify
the email address that will receive any notifications sent by Failover Manager.
For detailed information about Failover Manager event notifications, please see
the EDB Postgres Failover Manager Guide, available at:
https://www.enterprisedb.com/resources/product-documentation
4.1.1 Specifying Container Preferences in an Environment File
The EDB container deployment script uses environment variables to define the properties
of your Advanced Server and BART container. You can specify values for the
environment variables on the command line, or in an environment variable file. Sample
files are created in the network mountpoint by the installer:
atomic-env.sh specifies the properties for an Advanced Server container.
atomic-bart-env.sh specifies the properties of an edb-bart container.
You must modify the environment file before using the docker or atomic command line
client to deploy a container.
When you are about to deploy the master node of your containerized cluster, please note
that the MASTER_HOST property should be set to none. When you deploy a container,
the atomic-env.sh file is automatically updated to include the
NetworkSettings.IPAddress values of the master node when the deployment
completes. Each subsequent node that you create will be a standby; if a failover occurs,
the MASTER_HOST property will be updated with the IP address of the new master node.
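As a concrete illustration, an atomic-env.sh file might look like the following. This is a hypothetical sketch: the variable names come from Section 4.1, but every value shown is a placeholder, not a recommended setting.

```shell
# Hypothetical atomic-env.sh (all values are placeholders).
DATABASE_NAME="edb"
PGPORT="5444"
DATABASE_USER="enterprisedb"
ENTERPRISEDB_PASSWORD="change_me"
REPL_USER="repl_user"
REPL_PASSWORD="change_me_too"
LOCALEPARAMETER="en_US.UTF-8"
MASTER_HOST="none"            # "none" when deploying the master node; the
                              # deployment updates this value automatically
CLEANUP_SCHEDULE="0:0:*:*:*"
EFM_EMAIL="[email protected]"
```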
4.1.2 Using Docker to Connect to an Advanced Server Container
After creating a container, you can use the docker exec command to open a shell on
the master node of the cluster:
docker exec -it node bash
Where node is the name of the node to which you are connecting (master, replica1,
replica2, or replica3).
The shell will start in the /usr/ppas-95/bin directory. After connecting, you can use
the psql command line client to access the database:
./psql -U database_user_name -d database_name
For example, to connect to a database named edb, as the user enterprisedb, use the
command:
./psql -U enterprisedb -d edb
4.2 Deploying and Managing an Advanced Server Container at the Atomic Command Line
You can use the Atomic command line client to deploy or manage an Advanced Server
Docker container. The Atomic command line is restricted to three arguments:
OPT1 contains optional docker switches to be passed.
OPT2 is the name of the cluster node that you are creating.
OPT3 is the NFS mount directory.
Use an environment file to specify additional container properties.
You can use either the atomic install or atomic run command to create an
EnterpriseDB Advanced Server container. For example:
atomic install --opt1='--restart always' --opt2=master --opt3=/volumes/edb-ppas ppas95
atomic run --opt1='--restart always' --opt2=master --opt3=/volumes/edb-ppas ppas95
4.2.1 Specifying Container Preferences in an Environment File
The environment file conventions for atomic deployments are the same as those described
in Section 4.1.1, Specifying Container Preferences in an Environment File: the installer
creates sample atomic-env.sh and atomic-bart-env.sh files in the network mountpoint,
and you must modify the appropriate file before using the docker or atomic command line
client to deploy a container. Before deploying the master node, set the MASTER_HOST
property to none; the file is updated automatically with the master's IP address when the
deployment completes, and again if a failover occurs.
4.2.2 Using the Atomic Command Line to Stop or Uninstall a Container
You can use the atomic command line to stop or uninstall your container. To stop a
container, invoke the following command:
atomic stop --opt1=node_name container_name
Where:
node_name is the name of the node (within the replication cluster) that you wish
to stop.
container_name is the name of the container that you wish to stop.
Use the following command to stop a container, and then unload the image from your
local repository:
atomic uninstall --opt1=node_name container_name
Where:
node_name is the name of the node (within the replication cluster) that you wish
to stop.
container_name is the name of the container that you wish to stop.
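The two invocations above can be previewed before they are run. The following sketch only assembles the command strings (the node name master and container name ppas95 are the hypothetical examples used throughout this chapter); it does not require the atomic client to be installed.

```shell
# Sketch: compose the atomic commands for a node so they can be reviewed
# before execution. "master" and "ppas95" are placeholder names.
node="master"
container="ppas95"
stop_cmd="atomic stop --opt1=${node} ${container}"
uninstall_cmd="atomic uninstall --opt1=${node} ${container}"
echo "$stop_cmd"        # -> atomic stop --opt1=master ppas95
echo "$uninstall_cmd"   # -> atomic uninstall --opt1=master ppas95
```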
4.2.3 Supported LABELS - Reference
Docker makes some values available for use by atomic commands through the use of
LABEL instructions. The Advanced Server container supports the labels that are described
below. Within each label:
OPT1 can be any valid option for a Docker command used within the label. For a
complete list of the Docker options for a specific command, please use the
docker --help command:
docker command --help
Where command is the docker command invoked by the label.
OPT2 is the name of the cluster node that you are managing.
OPT3 is the network mountpoint of the volume (or local directory) that hosts the
configuration files and/or the data directory.
You must be a superuser to invoke an atomic command.
RUN
The RUN label sends an atomic run ppas command to the container host, running the
container. The syntax is:
LABEL RUN=docker run ${OPT1} --name ${OPT2} -v ${OPT3}:/edbvolume:z -d ppas:latest
INSTALL
The INSTALL label invokes the atomic install ppas command, installing and
running the container. The syntax is:
LABEL INSTALL=docker run ${OPT1} --name ${OPT2} -v ${OPT3}:/edbvolume:z -d ppas:latest
STOP
The STOP label invokes the atomic stop ppas command, stopping the container. The
syntax is:
LABEL STOP=`docker stop ${OPT1}`
UNINSTALL
The UNINSTALL label invokes the atomic uninstall ppas command, uninstalling the
container. Please note that the container image will persist on the host. The syntax is:
LABEL UNINSTALL=`docker stop ${OPT1}`
Please note: The following label values are subject to change; for the content that is
applicable to your container, please see the atomic info file:
atomic info container_type
Where container_type is ppas or edb-bart.
Name
The name of the container image. For example, for Advanced Server:
Name=ppas
or, for BART:
Name=edb-bart
Version
The version number of Advanced Server. For example, for Advanced Server:
Version=9.5.5.10-1
or, for BART:
Version=1.1.1-1
Release
The release number of the container. For example:
Release=1
Vendor
The name of the container vendor. For example:
Vendor=EnterpriseDB
Architecture
The architecture of the operating system. For example:
Architecture=x86_64
Build-date
The build date and time of the Advanced Server container. For example:
Build-date=build-date: 2016-11-30-134948
Description
The description of the EDB Postgres Advanced Server container. For example:
Description= EDB Postgres Advanced Server 9.5. This container
includes EDB Failover Manager for High Availability and pgPool
for automatic load balancing of read requests across cluster
members.
Summary
Summary information describing the Advanced Server container. For example:
Summary= The EDB Postgres Advanced Server 9.5 container will
install all required packages via yum and offers the ability to
scale the number of nodes in the cluster while automatically
updating the load balancer to route requests to the appropriate
master or replica node. It also handles high availability by
automatically promoting one of the replicas to become a master if
the original master node fails.
Copyright
Copyright information for the EDB Postgres Advanced Server container. For example:
Copyright=2017