
A guide to setup your own Kubernetes Cluster... With GestiClean Up!

Documentation Release 1.0.0

David Berardozzi

May 15, 2017


Contents

1 Introduction

2 Goal

3 Credits

4 Author

5 Openstack
    5.1 Project
    5.2 SSH keys
    5.3 Download secrets
    5.4 Horizon dashboard
    5.5 Prerequisites
    5.6 API's clients

6 Juju
    6.1 Install Juju
    6.2 Cloud configuration
    6.3 Bootstrapping
    6.4 High availability
    6.5 Backup and restore
    6.6 Neutron networks (Optional)
    6.7 Dashboard
    6.8 Model

7 Kubernetes
    7.1 Deploying
    7.2 High availability
    7.3 kubectl
    7.4 Dashboards
    7.5 Namespaces
    7.6 We are working on Openstack and we love logs (Optional for Openstack part, see next chapter)
    7.7 Ceph persistent storage pool
    7.8 kubectl exec and port-forward
    7.9 Pulling private images from Dockerhub
    7.10 Logs
    7.11 Troubleshooting
    7.12 Deeeeeestrrroooy tests
    7.13 YES SECURITY MATTERS!
        7.13.1 Your API and dashboard are accessible... from ANYONE!!
        7.13.2 Ok you have done great, but your password is WEAK!!!
        7.13.3 Paranoid security issues

8 Self maintained high availability Postgresql Cluster
    8.1 Secure store
        8.1.1 Option A: Going with Consul
        8.1.2 Option B: Going with etcd
    8.2 Stolon
        8.2.1 Stolon preparation
        8.2.2 Stolon deployment
        8.2.3 Tests and operation
    8.3 Connection pooling

9 File sharing system
    9.1 Allow privileged containers and security precautions and allow nfs on nodes
    9.2 Deploy NFS server
    9.3 Creating and claiming NFS persistent volumes

10 Postgresql connection pooling
    10.1 Prepare
    10.2 Deploy
    10.3 Test
    10.4 Pooling modes

11 Reverse proxy
    11.1 Manage certificates
    11.2 Activate default ingress controller
    11.3 Config maps
    11.4 Ingress
    11.5 Deploy
    11.6 Tests
    11.7 External load balancing

12 GestiClean Up' at last! (or your own magnificent incredible personal app)
    12.1 Docker image
    12.2 Database prerequisites
    12.3 Config map
    12.4 Deployment preparation
    12.5 Deploy
    12.6 Test


CHAPTER 1

Introduction

Dear reader,

Here is the very first version of "a guide to setup a Kubernetes cluster on OVH". It was written to deploy a 1.5.x Kubernetes cluster. We are now in the process of testing the new 1.6.x version, so some parts may already be outdated for this latest series. This doc was initially centered on our product GestiClean Up'; I tried to remove the parts that are too specific. Anyway, I think it could be adapted for other apps.

Remember, I do not have the pretension to write a user manual nor the ultimate serious guide for Kubernetes. Instead, you may take this doc as a "personal diary" containing what I learnt testing Kubernetes and Docker infrastructure for our own usage. I am not a native English speaker, sorry for the numerous spelling mistakes you will find. There must be a lot of things that you will do or explain much better than me. That is why you are very welcome to contribute freely in this bitbucket repo if you want to correct some of my false statements ;)

Our stack is composed of:

• javascript client

• nginx reverse proxy and load balancer

• nginx, uwsgi (emperor with the pg module enabled), twisted for asynchronous communication (IoT) and Python 2.7 (version 2.7 because Twisted is not fully available in the 3.x series)

• Postgresql

This is a multi-tenant app (one code base for all customers) controlled by a higher-level controller named SaasAdmin for now.

Maybe apart from Twisted, this stack is pretty similar to a classic Flask, Web2Py, Django, Pyramid... setup, so I made this document public in case it could be useful to you (yes, you who spent so much time googling around to find the light, or at least the beginning of a candle).

This work is the result of extensive research, countless tries and fails, wrong paths, U-turns, joy and sadness. I hope it will be useful to those who are taking the path of deploying their brand new cloud-ready web app. Some of my statements may (should) be balanced with specialists' advice, but this is just "our path to success" in building the production infrastructure for our company, Inforum. Feel free to comment and enhance. It is sprinkled with my modest, subjective and very restrictive point of view on several tools and methods, which I balanced as much as I could with alternatives that I encourage you to explore. You are very welcome to complete my thoughts with your own experiences or to correct my false statements if there are any.

For information, we also considered Cloud Foundry (through Pivotal services), Marathon (through OVH PaaS services), Docker Swarm and the excellent Jelastic (proprietary). Regarding Jelastic, you should definitely give it a try with one of their numerous providers if you are looking for a simple (almost magical) way to deploy your code with minimal adaptations. I won't extend this (well long enough) document with the factors that made us choose Kubernetes. This may be detailed in another post if required later.

The choice has been made to provide a self-packaged solution, ready for the latest web technologies, in order to power our software GestiClean Up'. GestiClean Up' is a point of sale software specialized for the dry cleaning market (yes, there is also a small room for this). But it can accept modules (community or corporate) to adapt it to any market. It is a SaaS app, accessible through any modern web browser. We built our own framework specialized in managing process steps. Sorry, this is not open source and still in extensive development. But we plan to provide all the tools to let a community build their own modules. Our software includes a custom proxy for connected objects called the UCS (Unified Communication System) and the so-called EasyPlug. The EasyPlug can transform any old-fashioned RS232 device (or other interfaces) into a connected object (yes, one can find a lot of specific peripherals in a dry cleaning shop, like conveyors and marking printers, that are not at all prepared for the internet of things).

You will notice that we are using the excellent OVH Public Cloud here. This is one of the consequences of being a very satisfied customer for years and having the great opportunity to be part of their Digital Launchpad Program. This work was initially shared with the OVH Digital Launchpad Program members as a "gift" to this community.


CHAPTER 2

Goal

Give all the resources needed to build a high availability GestiClean Up' (or whatever app) cluster, fully portable and agnostic to cloud providers, provided that Kubernetes is installable (see our website for details about GestiClean Up').

The global "philosophy" was to use modern technologies, and to find all the possible tools that could ease their deployment and, most important to me, their maintenance and scalability. This is the answer to the question "why use Juju instead of being satisfied with Openstack's API?", for example. Oh, and I forgot that it should minimize the cost of ownership.

Note: Is this a Kubernetes user doc? Yes, you will discover a lot of concepts regarding Kubernetes, but this will be through creating a real cluster.


CHAPTER 3

Credits

• Twisted

• Python

• ReadTheDocs theme

• ReadTheDocs

• Sphinx

• GestiCleanup

I may have missed some references; in that case I will be glad to add them here.


CHAPTER 4

Author

David Berardozzi, CEO and CTO of Inforum.

Inforum is the company behind GestiClean Up' and GestiClean, software specialized for the dry cleaning market.


CHAPTER 5

Openstack

This step is optional if you are using Google Compute Engine or Amazon Web Services, but it is generic for any Openstack cloud provider (not explained in this document). It could anyway be useful if you are creating an Openstack cloud over GCE or AWS (not explained here either, but Juju would definitely be a good choice).

Openstack is a tool for creating clouds. It can be shipped "as is" by your cloud provider, or installed by yourself on private servers or public instances if you want to benefit from its advantages. OVH's public cloud uses Openstack. We chose OVH public cloud (besides the fact that we are an old and satisfied customer) for its great integration with our other services (mails, domains, IPs...), its good support and its excellent quality/price ratio.

Having used Xenserver (now open source) to manage bare metal servers, I made the choice to forget about physical server management and stay focused on services. Openstack through OVH was my answer for this. If I had to manage a physical pool of servers, I would certainly give a try to MAAS from Canonical for its tight integration with Juju. But that is another story (and I want to sleep again one day :). To be honest, I have no real opinion regarding Heroku, AWS and consorts, apart from my very subjective view that I could understand the OVH offer faster (much faster than AWS for my simple mind!).

Documented for OVH cloud provider only. This part will guide you through:

• Project

• SSH keys

• Download secrets

• Horizon dashboard

• Prerequisites

• API’s clients

Project

We will build the cluster in a dedicated project, in order to isolate configuration profiles and security rules.


Go to the OVH manager (or the equivalent for other providers; this is possible even through the API). Select the cloud section and add a new project specific to our GestiClean Up' cluster deployment. Let's say gupcluster for the rest of this document.

SSH keys

When the project is created, it is easier to set an SSH key for the administrator computer. Each new instance provisioned can then be accessed via SSH without an authentication prompt. With OVH, at the time of writing this document, you can add as many keys as you want but only one can be used (it needs to be verified whether keys provided before instance provisioning can also be used and updated automatically when changed).

Generate your SSH keys if not already done (not explained here) and print your public key to your terminal:

cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.......

Go to the "SSH keys" tab of the newly created project, click on "add a key", name your key and copy/paste your id_rsa.pub there.
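If you did not have a key pair for the step above, a minimal sketch of generating one (the comment string is just an example, and the default ~/.ssh/id_rsa path is assumed):

# Generate a 4096-bit RSA key pair; accept the default path and set a passphrase if you wish
ssh-keygen -t rsa -b 4096 -C "gupcluster-admin"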

Download secrets

In order to use Openstack's API clients, we need to get the secrets and identification configuration from our cloud provider.

Go to the "Openstack" section of the project. Click on "Add a user" if none is present. Provide the mandatory information. Keep the password provided for later usage. Click on the settings button for this user and select "Download an Openstack configuration file". You will get an openrc.sh file. Move it to a secure place and open it:

mkdir ~/gupcluster
mv ~/Downloads/openrc.sh ~/gupcluster/
nano ~/gupcluster/openrc.sh

In order to connect without an authentication prompt, we will provide our password in the configuration file. Add it with the line export OS_PASSWORD=xxxxxxxxxxxxxxxxxxxxxxxxxxxxx to your openrc.sh file like in this example, and comment out the authentication prompting part:

#!/bin/bash

# To use an Openstack cloud you need to authenticate against keystone, which
# returns a **Token** and **Service Catalog**. The catalog contains the
# endpoint for all services the user/tenant has access to - including nova,
# glance, keystone, swift.
#
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. We
# will use the 1.1 *compute api*
export OS_AUTH_URL=https://auth.cloud.ovh.net/v2.0/

# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export OS_TENANT_NAME="xxxxxxxxxxxxxxx"

# In addition to the owning entity (tenant), openstack stores the entity
# performing the action as the **user**.
export OS_USERNAME="xxxxxxxxxxxxxx"

# With Keystone you pass the keystone password.
#echo "Please enter your OpenStack Password: "
#read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=xxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="XXX"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi

Now source this file to your terminal:

source ~/gupcluster/openrc.sh

Horizon dashboard

OVH (and certainly other providers) allows you to access the Openstack dashboard, Horizon. It is very useful to check advanced features like security groups or volume ids.

In order to connect to your dashboard, click on the settings button on your user in the Openstack section of the project. Select "Open Openstack Horizon". You will be redirected to the dashboard. Use your credentials and you're ready to go.

Prerequisites

A Kubernetes deployment through Juju needs some Openstack resources that we must check before beginning. Let's use the Horizon dashboard to do this (use the hard way through the API if you want).

To sum up, we have to check that the quotas allowed by our provider let us provision enough instances, VCPUs, RAM and security groups. For example, a 3-node Kubernetes cluster provisioned via Juju will consume at least 10 instances, 10 VCPUs, 13 security groups and about 25 GB of RAM. Do not worry too much about volumes and space, as the defaults should be sufficient for our needs.
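If you prefer the command line to Horizon for this check, the nova client installed in the API's clients section below can print the project's current absolute limits (a minimal sketch, assuming openrc.sh is sourced as usual):

# Show the absolute limits (instances, cores, RAM, security groups...) for the current project
source ~/gupcluster/openrc.sh
nova limits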

Each new Kubernetes node will consume 2 new security groups. For now, with OVH, the maximum quota you can get for one project is 20 security groups. This may become a limitation for a growing cluster.

As an example, we obtained the following quotas with OVH (the maximum as far as I know), which are more than satisfying to begin with:

• 50 instances

• 64 VCPUs

• 496 GB of RAM

• 50 floating IPs

• 20 security groups

• 100 volumes

• 19.5 TB


And after querying the support team, I was gracefully endowed with 50 security groups. No real limits to our first modest deployment ahead!

API’s clients

Let's assume that you are working on an Ubuntu computer (for other operating systems, you will certainly find equivalents or distributed packages). In order to talk to our Openstack cloud, we need several clients:

• nova, which we will use to list Openstack instances and work on networks (we could also use glance)

• cinder for volumes provisioning

• swift for storage container provisioning

To install:

sudo apt-get install python-novaclient python-cinderclient python-swiftclient

Check that you can connect to the Openstack cloud:

# Don't forget to source your openrc.sh file before using the nova client
source ~/gupcluster/openrc.sh
nova list

Should return something like (empty list for the moment):

+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

The same goes for cinder and swift (swift won't return anything if no container has been created yet):

cinder list
swift list

We are ready to operate our Openstack cloud!


CHAPTER 6

Juju

This part is generic to most cloud providers. Points 2 and 3 are optional for clouds already known by Juju, like AWS, Heroku or GCE. Juju eases the process of deploying a Kubernetes cluster (and for those who tried "by hand", this may be a euphemism).

Juju is an orchestrator that allows you to remotely operate cloud providers' solutions. It is brewed by Canonical and well integrated with the most known cloud providers. I settled on Juju because it is cloud agnostic and because Canonical made the great effort to provide what they call a bundle suitable for a production Kubernetes cluster. I gave a quick try to Ansible and Puppet. Actually, Ansible and Puppet are not real equivalents to Juju: they could be used to do what Juju does, and Juju could deploy Ansible/Puppet services. Juju is focused on managing services and assigning them to machines/VMs/LXDs. If you want more, see this post where Puppet's CEO and Canonical's head of cloud strategy explain what both tools do.

I did not dig deeper to find a real equivalent to Juju, as it fits the need well enough.

Contents

• Juju

– Install Juju

– Cloud configuration

– Bootstrapping

– High availability

– Backup and restore

– Neutron networks (Optional)

– Dashboard

– Model


Install Juju

Just use the distribution package:

sudo apt-get install juju

And try your new toy to make sure you installed version 2 at least:

juju version
2.1.2-yakkety-amd64

Cloud configuration

By default, Juju only knows the "leading cloud providers". To list them:

juju list-clouds
Cloud        Regions  Default        Type        Description
aws               12  us-east-1      ec2         Amazon Web Services
aws-china          1  cn-north-1     ec2         Amazon China
aws-gov            1  us-gov-west-1  ec2         Amazon (USA Government)
azure             18  centralus      azure       Microsoft Azure
azure-china        2  chinaeast      azure       Microsoft Azure China
cloudsigma         5  hnl            cloudsigma  CloudSigma Cloud
google             4  us-east1       gce         Google Cloud Platform
joyent             6  eu-ams-1       joyent      Joyent Cloud
rackspace          6  dfw            rackspace   Rackspace Cloud
localhost          1  localhost      lxd         LXD Container Hypervisor

Yes! Your computer localhost is a "very well known" cloud provider as an LXD Container Hypervisor. This means we can use this option to deploy development environments as if we were on a production server.
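As an illustration, a throwaway development controller can be bootstrapped on this local LXD "cloud" with something like the following (the controller name lxd-dev is arbitrary):

# Bootstrap a Juju controller inside local LXD containers for development
juju bootstrap localhost lxd-dev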

In order to be able to "bootstrap" our cloud provider with Juju, we need to make Juju learn our configuration. Let's create a simple yaml file with:

nano ~/gupcluster/ovh.yml

And fill it with the following (adapt it to your needs if you are not using the same regions or if you use another Openstack cloud provider):

clouds:
  ovh-public-cloud:
    type: openstack
    auth-types: [access-key, userpass]
    regions:
      GRA1:
        endpoint: https://auth.cloud.ovh.net/v2.0/
      BHS1:
        endpoint: https://auth.cloud.ovh.net/v2.0/
      SBG1:
        endpoint: https://auth.cloud.ovh.net/v2.0/

We are just telling Juju how to name this new cloud, that it uses Openstack (it could be a lot of other tools, even a custom one), how to authenticate, and which URL to use to authenticate for the regions we use.

Add this new cloud definition to the known clouds. The name of the cloud MUST be the same as the name included in the yaml file; we will load our Openstack credentials in the process:


source ~/gupcluster/openrc.sh
juju add-cloud ovh-public-cloud ~/gupcluster/ovh.yml
juju autoload-credentials

Now a new juju list-clouds should print out your now famous and well-known ovh-public-cloud cloud.

Bootstrapping

Our cloud is famous now but still obscure in many aspects for Juju. We need to let it know about the images that we can use in order to provision new instances.

We will store those image definitions in a simplestreams folder:

mkdir -p ~/gupcluster/simplestreams/images

We need to gather the image IDs that we would like to use. We can ask our nova client to do this for us or find them in the Horizon dashboard. nova is perfect for that (you need to source your openrc.sh file):

nova image-list
+--------------------------------------+-------------------------------+--------+--------+
| ID                                   | Name                          | Status | Server |
+--------------------------------------+-------------------------------+--------+--------+
| 6340347a-7e36-43e0-996c-07fce105e70c | Centos 6                      | ACTIVE |        |
| 8a3f48c8-c3fb-439f-be00-3bf9f26f6fc5 | Centos 7                      | ACTIVE |        |
| 884794e0-a47b-489e-a911-e71d3d546453 | CoreOS Stable                 | ACTIVE |        |
| 4d1a7923-8720-4569-8b5d-11ec5c9ab255 | Debian 7                      | ACTIVE |        |
| d0e79eb7-5dbe-4ff2-84f9-d5ce26ef074e | Debian 8                      | ACTIVE |        |
| 7250cc02-ccc1-4a46-8361-a3d6d9113177 | Fedora 19                     | ACTIVE |        |
| 57b9722a-e6e8-4a55-8146-3e36a477eb78 | Fedora 20                     | ACTIVE |        |
| 004c38a1-67e8-46d8-8225-4bb789a80ae3 | Fedora 24                     | ACTIVE |        |
| 5a0dd931-1f0a-4382-add7-cda081bb12e1 | Fedora 25                     | ACTIVE |        |
| a5006914-ef04-41f3-8b90-6b99d0260a99 | FreeBSD 10.3 UFS              | ACTIVE |        |
| 5a13b9a6-02f6-4f9f-bbb5-b131852888e8 | FreeBSD 10.3 ZFS              | ACTIVE |        |
| 03be11dd-a44f-434c-8cb5-5328788baba4 | FreeBSD 11.0 UFS              | ACTIVE |        |
| d74969a8-b3c5-459d-a8f7-0dbd21f76f61 | FreeBSD 11.0 ZFS              | ACTIVE |        |
| 3031ed24-8337-4b09-94b5-e51c54bec6c8 | Ubuntu 12.04                  | ACTIVE |        |
| b0e68d0f-e963-44fb-b347-64d0214c3fa1 | Ubuntu 14.04                  | ACTIVE |        |
| 5b03f136-4fbc-472d-931a-1be08d9c506c | Ubuntu 15.10                  | ACTIVE |        |
| d79802bf-0b36-47a4-acb6-76a293b0c037 | Ubuntu 16.04                  | ACTIVE |        |
| a4fd9ec9-1692-4ea0-bd0b-7ec8072048a8 | Ubuntu 16.10                  | ACTIVE |        |
| ff09fa06-b8d2-4b2b-a2af-fbfde55b7464 | Windows 2016 Standard Desktop | ACTIVE |        |
| eebec8ed-1444-4808-b1d8-f85a86d26fb7 | Windows 2016 Standard Server  | ACTIVE |        |
| f19c6405-7e90-49ac-9f22-9c5293d8170f | Windows-Server-2012-r2        | ACTIVE |        |
| 0c39fd02-c44a-41c2-bd71-d09451e16d21 | rescue-ovh                    | ACTIVE |        |
+--------------------------------------+-------------------------------+--------+--------+

In my case, I only want the Yakkety (16.10) image and Xenial (16.04), as a lot of charms are provided for Xenial (LTS) by default.

Caution: Be careful to actually list the image ids yourself when preparing your Juju install, as ids may change (yes, it happened to me).

Then generate juju images with (example provided for yakkety, repeat for each image you need and each region):

juju metadata generate-image -d ~/gupcluster/simplestreams -i a4fd9ec9-1692-4ea0-bd0b-7ec8072048a8 -s yakkety -r GRA1 -u https://auth.cloud.ovh.net/v2.0/

Not sure if this is necessary, but you can also generate tools for each image (repeat for each image and region):

juju metadata generate-tools -d ~/gupcluster/simplestreams

Those metadata can be put inside a swift container so that they remain available for later use by any model. This can be useful if a new instance type is created by OVH after the first bootstrap or if you forgot some images. You can easily update this container. First create it (you still have to source the openrc.sh file first):

openstack container create simplestreams

Then fill it with your images:

cd ~/gupcluster/simplestreams
swift upload simplestreams *

If you have later updates, just redo the same operation (after a new juju metadata generate-image command).

Make this container public:

swift post simplestreams --read-acl .r:*

Gather the container access link with (it will be the first line of the output):

swift stat simplestreams -v

And keep it for later use.

Finally, bootstrap using your local image definitions:

juju bootstrap ovh-public-cloud ovh-openstack-gra1 --config image-metadata-url=https://storage.gra1.cloud.ovh.net/v1/AUTH_bc197efbd58b4e8bb0bcc8d8354767c8/simplestreams/images/ --bootstrap-series yakkety --show-log --debug


This command creates the so-called juju controller by provisioning a dedicated server using the yakkety image. The --show-log and --debug options give high verbosity during the process, which will take several minutes. Be patient. You should note that you MUST specify a network parameter if you have several networks, since OVH allows you to easily create what they call vracks, which are simply neutron networks for Openstack. If there is only one network, it will be picked by default. Of course, do not attempt to use a local network to bootstrap, as Juju will not be able to ssh into the controller instance.

To list the available networks, just enter:

nova net-list
+--------------------------------------+-----------+------+
| ID                                   | Label     | CIDR |
+--------------------------------------+-----------+------+
| 810aa4db-3fc7-4612-8e7a-xxxxxxxxxxxx | Cloud GUP | None |
| 8d3e91fd-c533-418f-8678-xxxxxxxxxxxx | Ext-Net   | None |
+--------------------------------------+-----------+------+
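If you do have several networks, a hedged sketch of what passing the chosen network at bootstrap time could look like, reusing the --config model settings shown later in this chapter and picking the public Ext-Net id from the listing above (since, as noted, a local network will not work for bootstrapping); this is an illustration, not tested in this exact form:

juju bootstrap ovh-public-cloud ovh-openstack-gra1 \
  --config network=8d3e91fd-c533-418f-8678-xxxxxxxxxxxx \
  --config image-metadata-url=https://storage.gra1.cloud.ovh.net/v1/AUTH_bc197efbd58b4e8bb0bcc8d8354767c8/simplestreams/images/ \
  --bootstrap-series yakkety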

High availability

In theory enabling ha should be as simple as this:

Warning: DO NOT DO THAT UNTIL YOU HAVE READ THIS WHOLE CHAPTER

juju enable-ha

This should automatically provision 2 more machines, which it tries to do but failed in my case, falling into this reported bug and breaking the provisioning feature for new machines. In my case I had to rebuild EVERYTHING, oh sadness! I finally found that this was due to the fact that I had added a new Neutron network after the initial bootstrap, causing the controller to be confused about which network to use. So do this at the beginning, and if you plan to add new networks, try to anticipate or to pass them as a parameter when provisioning new instances.

Backup and restore

A full detailed backup and restore procedure is explained here. To make it simple, all you have to do is:

juju create-backup
downloading to juju-backup-20170204-105651.tar.gz
juju list-backups
20170204-105651.7f2bca6b-2505-4cf5-886c-6320ca67dc76
juju download-backup 20170204-105651.7f2bca6b-2505-4cf5-886c-6320ca67dc76

As we are in HA, the "standard" scenario would be that you have lost all your controllers; therefore you will have to bootstrap again using a backup file, and then re-enable HA:

juju restore-backup -b --file=backup.tar.gz
juju enable-ha


Neutron networks (Optional)

Openstack allows several networks to be attached to a project. In my case, this can be done using the "vrack" provided by OVH, with some great advantages, like allowing a private OVH server to be in the same local network "as if" it were in the same bay as our public cloud project. In my case, I decided to withdraw from setting up multiple networks, as it made me systematically fail to keep an up-and-running cluster without having to set lots of manual parameters. You can find hereafter some of my thoughts on this subject.

The fact is that I did not find any way to automate this during the provisioning of Juju controllers and future charm deployments using juju deploy. I asked a question on Ask Ubuntu regarding this. Juju spaces management looks promising but is available only on MAAS. I found this excellent post if you want to go much further on networking with Juju.

So let's do it by hand. First, list your networks:

nova net-list
+--------------------------------------+-----------+------+
| ID                                   | Label     | CIDR |
+--------------------------------------+-----------+------+
| 810aa4db-3fc7-4612-8e7a-xxxxxxxxxxxx | Cloud GUP | None |
| 8d3e91fd-c533-418f-8678-xxxxxxxxxxxx | Ext-Net   | None |
+--------------------------------------+-----------+------+

My local network is the "Cloud GUP" one (I know the name is not well chosen). It was configured automatically by OVH with DHCP enabled.

Then list your instances:

nova list
+--------------------------------------+--------------------------+--------+------------+-------------+-----------------------+
| ID                                   | Name                     | Status | Task State | Power State | Networks              |
+--------------------------------------+--------------------------+--------+------------+-------------+-----------------------+
| 9994d93a-fe8c-4225-96dc-xxxxxxxxxxxx | juju-67dc76-controller-0 | ACTIVE | -          | Running     | Ext-Net=xxx.xx.xx.xxx |
| 0721b23b-2085-4418-afef-xxxxxxxxxxxx | juju-67dc76-controller-1 | ACTIVE | -          | Running     | Ext-Net=xxx.xx.xx.xxx |
| 87ae62b9-9988-4df6-a698-xxxxxxxxxxxx | juju-67dc76-controller-2 | ACTIVE | -          | Running     | Ext-Net=xxx.xx.xx.xxx |
+--------------------------------------+--------------------------+--------+------------+-------------+-----------------------+

In this case I have only my provisioned controllers (3 because ha is enabled).

Now connect each instance to the local network (example provided for the first controller):

nova interface-attach --net-id 810aa4db-3fc7-4612-8e7a-xxxxxxxxx 9994d93a-fe8c-4225-96dc-xxxxxxxxxxx

A new nova list will print the new DHCP-assigned IP for the instance:

nova list
+--------------------------------------+--------------------------+--------+------------+-------------+-----------------------------------------------+
| ID                                   | Name                     | Status | Task State | Power State | Networks                                      |
+--------------------------------------+--------------------------+--------+------------+-------------+-----------------------------------------------+
| 9994d93a-fe8c-4225-96dc-xxxxxxxxxxxx | juju-67dc76-controller-0 | ACTIVE | -          | Running     | Ext-Net=xxx.xx.xx.xxx; Cloud GUP=192.168.2.89 |
| 0721b23b-2085-4418-afef-xxxxxxxxxxxx | juju-67dc76-controller-1 | ACTIVE | -          | Running     | Ext-Net=xxx.xx.xx.xxx                         |
| 87ae62b9-9988-4df6-a698-xxxxxxxxxxxx | juju-67dc76-controller-2 | ACTIVE | -          | Running     | Ext-Net=xxx.xx.xx.xxx                         |
+--------------------------------------+--------------------------+--------+------------+-------------+-----------------------------------------------+

Now we need to set up the interface in the instance. First, ssh in with juju ssh -m controller 0 and gather the interface name with ip link list. Now edit the interfaces config file:

sudo nano /etc/network/interfaces.d/50-cloud-init.cfg

And add at the end:

auto ens6
iface ens6 inet dhcp

Now bring it up:

ifup ens6

After doing this to all instances you can ping between them using the local network.
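For instance, from one of the other controllers (the address being the DHCP-assigned one shown above):

# Check local-network connectivity towards the first controller
ping -c 3 192.168.2.89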

This part is a dark point for me as it requires a lot of manual tweaks. But it could be worth the cost in an OVH environment for security matters, I suppose. Anyway, I decided not to use local networks and vracks.

Dashboard

When ready, give the Juju dashboard a try:

juju gui

This opens the dashboard in your browser; the default username is admin. To get your password, type:

juju show-controller --show-password

Model

Create a model to prepare the kubernetes deployment:

juju add-model kubernetes

This will add a new model. A default one already exists and can also be used. When creating a model, juju switches automatically onto it so that every future command uses this one. You can operate other models using the -m option, or:

juju switch default


This switches into the default model, for instance.
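As a quick illustration of the -m option mentioned above, you can also act on another model without switching to it:

# Show the status of the default model while staying on the current one
juju status -m default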

Now you can specify advanced global parameters for your model, like, for instance, the default series or network to use. To show the current parameter values:

juju model-config
Attribute                      From     Value
agent-metadata-url             default  ""
agent-stream                   default  released
agent-version                  model    2.0.2.1
apt-ftp-proxy                  default  ""
apt-http-proxy                 default  ""
apt-https-proxy                default  ""
apt-mirror                     model    ""
automatically-retry-hooks      default  true
default-series                 model    xenial
development                    default  false
disable-network-management     default  false
enable-os-refresh-update       default  true
enable-os-upgrade              default  true
firewall-mode                  default  instance
ftp-proxy                      default  ""
http-proxy                     default  ""
https-proxy                    default  ""
ignore-machine-addresses       default  false
image-metadata-url             model    https://storage.gra1.cloud.ovh.net/v1/AUTH_xxxxxxxxxxxxxxxxxxxxx/simplestreams
image-stream                   default  released
logforward-enabled             default  false
logging-config                 model    <root>=DEBUG;unit=DEBUG
network                        model    810aa4db-3fc7-4612-8e7a-xxxxxxxxxx
no-proxy                       default  ""
provisioner-harvest-mode       default  destroyed
proxy-ssh                      default  false
resource-tags                  model    {}
ssl-hostname-verification      default  true
storage-default-block-source   model    cinder
test-mode                      default  false
transmit-vendor-metrics        default  true
use-default-secgroup           default  false
use-floating-ip                default  false

You can see that I already specified a network to use because I have two. You MUST do the same or you will not be able to deploy any machine. To do this:

juju model-config network=810aa4db-3fc7-4612-8e7a-xxxxxxxxxx
juju model-config default-series=yakkety image-metadata-url=https://storage.gra1.cloud.ovh.net/v1/AUTH_bc197efbd58b4e8bb0bcc8d8354767c8/simplestreams/images/

Unfortunately, until now, I did not find any way to pass two networks so that Juju could use them to provision instances.

I have also already set up image-metadata-url the same way, but this is optional unless you want to add new images after bootstrapping.


CHAPTER 7

Kubernetes

Kubernetes is "an open source system for automating deployment, scaling, and management of containerized applications". It is powered by Google... and it is awesome! My problem was: containers are incredible, but how do you use them in a high availability production environment with scalability options, without becoming an admin jedi master... with a reduced admin team (2) of course (this guy is silly). Kubernetes can help! To be honest, if you are not prepared to look for non-existent docs and to face some "not really ready yet" functions, you should try something like Jelastic, which is a more turnkey option. Otherwise, fasten your seat belt.

• Deploying

• High availability

• kubectl

• Dashboards

• Namespaces

• We are working on Openstack and we love logs (Optional for Openstack part, see next chapter)

• Ceph persistent storage pool

• kubectl exec and port-forward

• Pulling private images from Dockerhub

• Logs

• Troubleshooting

• Deeeeeestrrroooy tests

• YES SECURITY MATTERS!

– Your API and dashboard are accessible... from ANYONE!!

– Ok you have done great, but your password is WEAK!!!

– Paranoid security issues


Deploying

Here we will use Juju's magical abilities with the production grade bundle proudly named The Canonical Distribution Of Kubernetes. I will start with the easy way. Before going on, install the even more magical tool conjure-up (here with snap, but it also works with apt):

sudo snap install conjure-up --classic

Reading the doc this should be as simple as (oh! joy!):

conjure-up canonical-kubernetes

You can monitor the deployment with (more detailed than the conjure-up tool):

watch -c -d juju status --color

Which returns something like this at the end:

Every 2,0s: juju status --color                                            Wed Feb 1 20:35:49 2017

Model  Controller  Cloud/Region  Version
bob    kubernetes  ovh/GRA1      2.0.2.1

App                    Version  Status  Scale  Charm                  Store       Rev  OS      Notes
easyrsa                3.0.1    active      1  easyrsa                jujucharms    6  ubuntu
etcd                   2.2.5    active      3  etcd                   jujucharms   23  ubuntu
flannel                0.7.0    active      5  flannel                jujucharms   10  ubuntu
kubeapi-load-balancer  1.10.1   active      1  kubeapi-load-balancer  jujucharms    6  ubuntu  exposed
kubernetes-master      1.5.2    active      2  kubernetes-master      jujucharms   11  ubuntu
kubernetes-worker      1.5.2    active      3  kubernetes-worker      jujucharms   13  ubuntu

Unit                      Workload  Agent  Machine  Public address   Ports           Message
easyrsa/0*                active    idle   0        137.74.30.10                     Certificate Authority connected.
etcd/0*                   active    idle   4        137.74.30.94     2379/tcp        Healthy with 3 known peers.
etcd/1                    active    idle   2        137.74.30.13     2379/tcp        Healthy with 3 known peers.
etcd/2                    active    idle   1        137.74.30.12     2379/tcp        Healthy with 3 known peers.
kubeapi-load-balancer/0*  active    idle   3        137.74.30.15     443/tcp         Loadbalancer ready.
kubernetes-master/0*      active    idle   5        137.74.30.16     6443/tcp        Kubernetes master running.
  flannel/0*              active    idle            137.74.30.16                     Flannel subnet 10.1.63.1/24
kubernetes-worker/2*      active    idle   6        137.74.30.17     80/tcp,443/tcp  Kubernetes worker running.
  flannel/1               active    idle            137.74.30.17                     Flannel subnet 10.1.14.1/24
kubernetes-worker/3       active    idle   9        149.202.181.217  80/tcp,443/tcp  Kubernetes worker running.
  flannel/4               active    idle            149.202.181.217                  Flannel subnet 10.1.64.1/24
kubernetes-worker/6       active    idle   20       137.74.31.160    80/tcp,443/tcp  Kubernetes worker running.
  flannel/13              active    idle            137.74.31.160                    Flannel subnet 10.1.74.1/24

Machine  State    DNS              Inst id                               Series   AZ
0        started  137.74.30.10     d39f3e78-4fb5-45c9-9aa6-6b89bc483346  yakkety  nova
1        started  137.74.30.12     51ec2edb-e486-48c2-b3d0-e141ed841dd9  yakkety  nova
2        started  137.74.30.13     28441006-57f2-41d7-8ee3-27ed1548a3d8  yakkety  nova
3        started  137.74.30.15     a9e9e9b0-a8a4-47af-b64a-eb6046a917d7  yakkety  nova
4        started  137.74.30.94     e7006886-f108-4179-802d-805735bf91be  yakkety  nova
5        started  137.74.30.16     26c6e24e-3a81-4ae9-9b51-1efb68d9b718  yakkety  nova
6        started  137.74.30.17     4c137ee3-7a65-4fc1-b712-abbde3fc7730  yakkety  nova
9        started  149.202.181.217  7a6804ea-1951-4164-8a8a-e2143f90ebea  yakkety  nova
20       started  137.74.31.160    3067f156-f6ba-49db-9d4c-4d760fe7bbd9  yakkety  nova

Note: If conjure-up failed, you should retry. It happened to me and it finally worked. Also, do not forget that this is the simple way of doing things; trying to adjust resources (ram, cores...) with conjure-up led in my case to a failed state, due to Juju not knowing enough about my OVH public cloud to choose the right instance types.

The longer and harder way explained below is the one that allows a more personalized deployment. The real, very long story is that I had to use the hard way because there was a caveat with a missing Flannel file (network component) in version 20 of the bundle. That allowed me to learn how to overcome this problem by instructively using the debugging hooks system, which I will not detail here as we are seeking to endure as little pain as possible.

Normally, the conjure-up solution should work out of the box from version 21 (it worked for me).

If you are facing problems that you do not want to solve by putting your hands in the dirty motor parts, OR you want to personalize your deployment, just modify the bundle file a little (here I changed to the yakkety series instead of xenial). To do that, just create the following file:

nano ~/gupcluster/bundle.yaml

Note: You can change instance types afterwards to adapt the project to your workload. So this should be considered optional.

And insert the modified version of the bundle cs:bundle/canonical-kubernetes-21 below (adapt the constraints to your needs; I successfully managed to use vps-ssd-1 only machines during my initial tests without problems). Adapt constraints and series to your needs and quotas. As you cannot set Juju to use a specific Openstack flavor, adapt the constraints so that they match existing ones without ambiguity (for a lab setup, vps-ssd-0 is ok for everything; just comment out all the constraints):

series: yakkety
services:
  easyrsa:
    annotations:
      gui-x: '450'
      gui-y: '550'
    charm: cs:~containers/easyrsa-6
    num_units: 1
  etcd:
    annotations:
      gui-x: '800'
      gui-y: '550'
    charm: cs:~containers/etcd-23
    num_units: 3
    constraints:
    - instance-type=vps-ssd-3
  flannel:
    annotations:
      gui-x: '450'
      gui-y: '750'
    charm: cs:~containers/flannel-10
  kubeapi-load-balancer:
    annotations:
      gui-x: '450'
      gui-y: '250'
    charm: cs:~containers/kubeapi-load-balancer-6
    expose: true
    num_units: 1
  kubernetes-master:
    annotations:
      gui-x: '800'
      gui-y: '850'
    charm: cs:~containers/kubernetes-master-11
    num_units: 1
    constraints:
    - instance-type=vps-ssd-3
  kubernetes-worker:
    annotations:
      gui-x: '100'
      gui-y: '850'
    charm: cs:~containers/kubernetes-worker-13
    expose: true
    num_units: 3
    constraints:
    - instance-type=eg-7-ssd-flex
relations:
- - kubernetes-master:kube-api-endpoint
  - kubeapi-load-balancer:apiserver
- - kubernetes-master:loadbalancer
  - kubeapi-load-balancer:loadbalancer
- - kubernetes-master:cluster-dns
  - kubernetes-worker:kube-dns
- - kubernetes-master:certificates
  - easyrsa:client
- - etcd:certificates
  - easyrsa:client
- - kubernetes-master:etcd
  - etcd:db
- - kubernetes-worker:certificates
  - easyrsa:client
- - kubernetes-worker:kube-api-endpoint
  - kubeapi-load-balancer:website
- - kubeapi-load-balancer:certificates
  - easyrsa:client
- - flannel:etcd
  - etcd:db
- - flannel:cni
  - kubernetes-master:cni
- - flannel:cni
  - kubernetes-worker:cni

Now import this bundle in the kubernetes model we created before: use the import button to load this bundle, click on the "commit" button and confirm the deployment on the next screen. Monitor the cluster coming up with:

watch -c -d juju status --color
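Alternatively, if you prefer to skip the GUI import, a local bundle file can usually be deployed straight from the CLI (a hedged sketch using the bundle.yaml created above):

juju deploy ~/gupcluster/bundle.yaml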

The truth is that for a production setup you will need to use constraints in order to have a fully ready cluster, and again there is a caveat with the bundle: the constraints instance-type key is not recognized, causing Juju to deploy only the cheapest machines (vps-ssd-0 for OVH). For now, I do not know if this is due to a malformation in my file or anything else. But do not panic: the bundle is rigorously equal to this simple bash script that worked like a charm (haha):

#!/usr/bin/env bash
juju deploy cs:~containers/kubernetes-worker-13 --num-units=3 --series=yakkety --force --constraints instance-type=eg-7-ssd-flex
juju deploy cs:~containers/kubernetes-master-11 --series=yakkety --force --constraints instance-type=vps-ssd-3
juju deploy cs:~containers/kubeapi-load-balancer-6 --series=yakkety --force
juju deploy cs:~containers/flannel-10 --series=yakkety --force
juju deploy cs:~containers/etcd-23 --num-units=3 --series=yakkety --force --constraints instance-type=vps-ssd-3
juju deploy cs:~containers/easyrsa-6 --series=yakkety --force
juju add-relation kubernetes-master:kube-api-endpoint kubeapi-load-balancer:apiserver
juju add-relation kubernetes-master:loadbalancer kubeapi-load-balancer:loadbalancer
juju add-relation kubernetes-master:cluster-dns kubernetes-worker:kube-dns
juju add-relation kubernetes-master:certificates easyrsa:client
juju add-relation etcd:certificates easyrsa:client
juju add-relation kubernetes-master:etcd etcd:db
juju add-relation kubernetes-worker:certificates easyrsa:client
juju add-relation kubernetes-worker:kube-api-endpoint kubeapi-load-balancer:website
juju add-relation kubeapi-load-balancer:certificates easyrsa:client
juju add-relation flannel:etcd etcd:db
juju add-relation flannel:cni kubernetes-master:cni
juju add-relation flannel:cni kubernetes-worker:cni
juju expose kubeapi-load-balancer

Note: Recheck the latest versions of the charms before deploying.

Just do:

chmod +x deploy_kubernetes.sh
./deploy_kubernetes.sh

After several minutes all workloads should turn into a pleasant green active status. We have made it!

You can use Juju to ssh into any instance with (for instance):

juju ssh kubernetes-worker/0

We just have to attach the instances to the local network so that they can speak together this way (see the "Neutron networks" section earlier in the Juju chapter). I did not use this ability for now, as I considered the traffic secured enough by default.

High availability

By default we are using only one kubernetes-master. You can simply add another one; the api load balancer will redirect the traffic. At the time of writing this, the load balancer function is said to be experimental in the bundle's readme. It seemed to work well anyway during my tests:

juju add-unit kubernetes-master

You can do the same for kubernetes-workers and etcd (but there are already 3 replicas of both) and additionally set constraints for newly created units using:

juju set-constraints kubernetes-worker mem=32G cores=8

kubectl

kubectl is the joystick that provides control over our newly created kubernetes cluster. To install it, we need to download the configuration file and kubectl itself from the master. We can optionally move it to /usr/local/bin in order to include it in our terminal path:

mkdir -p ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
juju scp kubernetes-master/0:kubectl ./kubectl  # Can take a while
mv ./kubectl /usr/local/bin

We can now operate the Kubernetes cluster:

kubectl cluster-info

Dashboards

To start the kubernetes dashboard use:

kubectl proxy

Or go to the IP of your load balancer, for instance: https://xx.xx.xx.xx/ui

There are other dashboards that you can list with:

kubectl cluster-info

Like Grafana for example.

There is another convenient way to monitor your cluster using the excellent android app Cabin.

Namespaces

Kubernetes comes with the great ability to create namespaces, which are virtual clusters. I take advantage of this to create dev, staging and production virtual clusters.
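A minimal sketch of creating those namespaces (the names are simply the ones mentioned above):

kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace production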


By default, Kubernetes uses the default namespace; to run a command in another one, use for instance:

kubectl --namespace=dev get pods

To avoid typing this parameter each time, use:

export CONTEXT=$(kubectl config view | awk '/current-context/ {print $2}')
kubectl config set-context $CONTEXT --namespace=dev
kubectl config view | grep namespace:

We are working on Openstack and we love logs (Optional for Openstack part, see next chapter)

And we should tell Kubernetes about it, because it will then be able to automatically provision new volumes through cinder. And that is great. We also have to address an odd issue that prevents the dashboard and kubectl from using logs. To do this, we have to ssh into the masters and workers in order to modify their options (at the time of writing this, to my knowledge, there was no decent doc explaining anything about that). Repeat for each node and master:

juju ssh kubernetes-master/0
## For masters:
sudo nano /etc/default/kube-apiserver
## For workers:
sudo nano /etc/default/kubelet

Add the options:

• --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP to allow the cluster to use internal IPs for pod addressing. Should be added only on masters. This allows logs to be printed.

• --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud_config to let the Kubernetes cluster know what kind of cloud we are using and pass its credentials, so that it can provision cinder volumes in our case. Should be added to workers and masters.

Here is how the /etc/default/kube-apiserver file should look for masters:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--tls-cert-file=/srv/kubernetes/server.crt --basic-auth-file=/srv/kubernetes/basic_auth.csv --etcd-keyfile=/etc/ssl/etcd/client-key.pem --tls-private-key-file=/srv/kubernetes/server.key --token-auth-file=/srv/kubernetes/known_tokens.csv --v=4 --etcd-certfile=/etc/ssl/etcd/client-cert.pem --service-account-key-file=/etc/kubernetes/serviceaccount.key --service-cluster-ip-range=10.152.183.0/24 --etcd-servers=https://137.74.30.12:2379 --client-ca-file=/srv/kubernetes/ca.crt --min-request-timeout=300 --etcd-cafile=/etc/ssl/etcd/client-ca.pem --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud_config"


And the /etc/default/kubelet file for workers:

# kubernetes kubelet (node) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname. If you override this
# reachability problems become your own issue.
# KUBELET_HOSTNAME="--hostname-override=kubernetes-worker-3"

# Add your own!
KUBELET_ARGS="--kubeconfig=/srv/kubernetes/config --require-kubeconfig --network-plugin=cni --cluster-domain=cluster.local --cluster-dns=10.152.183.10 --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud_config"

The second parameter points to a config file that we have to create in /etc/kubernetes/:

sudo nano /etc/kubernetes/cloud_config

That should look like:

[Global]
auth-url=https://auth.cloud.ovh.net/v2.0/
username=xxxxxxxxx
password=xxxxxxxxxxxxxxxxxxxxxxxxxxx
region=XXXX
tenant-id=xxxxxxxxxxxxxxxxxxxxxxxxx

Another file that requires the second modification is /etc/default/kube-controller-manager on masters; it should look like:

###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/kubernetes/serviceaccount.key --v=2 --min-resync-period=3m --root-ca-file=/srv/kubernetes/ca.crt --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud_config"
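These files are only read at service startup, so after editing them restart the corresponding daemons; a sketch using the service names implied by the /etc/default/* files above (adjust if yours differ):

## On masters:
sudo service kube-apiserver restart
sudo service kube-controller-manager restart
## On workers:
sudo service kubelet restart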

Important: Regarding the cinder auto-provisioning feature. At first it may look like a simple, cool and good idea, but you should be aware of what we are doing: when we create a so-called "persistent volume" (pv) and a "persistent volume claim" (pvc), a cinder volume will be provisioned, automatically attached to the worker running the pod that claimed the pv, and finally mounted. The big problem is that a cinder volume cannot be attached to several hosts (named compute instances in Openstack), and sadly it seems this is not going to change soon. So when the node dies, the cinder volume does not get attached to another node, possibly killing your storage layer and maybe your stack (I know this, having practiced this failure path...). Therefore, use cinder provisioning at your own risk. Later I will go for a Ceph cluster outside of Kubernetes to decouple the use of cinder from direct mounting onto nodes, OR use daemon sets to make sure that we run at least one pod per node of a replica set.

Ceph persistent storage pool

This approach is much better for a production cluster, as it decouples cinder volumes from direct mounting onto the nodes. And it is quite simple thanks to tight integration by the bundle developers! The only con is that it costs more. Let's add units:

juju deploy cs:ceph-mon -n 3 --constraints instance-type=vps-ssd-2
juju deploy cs:ceph-osd -n 3 --constraints instance-type=vps-ssd-2
juju add-relation ceph-mon ceph-osd
juju add-storage ceph-osd/0 osd-devices=cinder,100G,1
juju add-storage ceph-osd/1 osd-devices=cinder,100G,1
juju add-storage ceph-osd/2 osd-devices=cinder,100G,1
juju add-relation kubernetes-master ceph-mon

Three of each is mandatory and, if I believe the Ceph hardware recommendations, you should normally use at least two cores per 1 TB. I set constraints for a single-core model (but not the cheapest one), and everything ran smoothly in my lab environment.

Note: In real life I strongly recommend following the ceph hardware requirements with a 2-core machine per osd. During my tests I fell into a caveat where the Juju agent failed and was unrecoverable, even though the ceph cluster was not affected. After some research it seems to be related to the workload.

To see which provisioning tool you should use in the juju add-storage directive, just use juju storage-pool. In my case, cinder.

Now you can even add a persistent volume to your Kubernetes cluster from Juju, with:

juju run-action kubernetes-master/0 create-rbd-pv name=test size=50

That is all. As you can see, this is a bit more expensive but far easier than direct cinder provisioning. I have used this method for my production cluster.

This can be even better with auto-provisioning features. As with cinder, we can set up the cluster so that it "knows" how to provision ceph volumes when they are claimed. To do this we need to set a secret for the namespace we want to use; it is used by the provisioning tool to talk to the ceph cluster. First go to the dashboard in the default namespace and click on the ceph-secret that was added automatically when Juju hooked up the ceph and kubernetes-master instances. Click on the "eye" icon next to the key field and copy its value. Then paste it into the following line and choose which namespace will get this secret (dev here):

kubectl --namespace=dev create secret generic ceph-secret --type="kubernetes.io/rbd" --from-literal=key='Key base 64 encoded'

Then we have to create the storage-class corresponding to our ceph cluster in the same namespace with this yaml file:

nano ~/gupcluster/rbd-storage-class.yaml

And insert:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: XXX.XXX.XXX.XXX:6789,XXX.XXX.XXX.XXX:6789,XXX.XXX.XXX.XXX:6789 # Your ceph's monitors
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: "default"
  pool: rbd
  userId: admin
  userSecretName: ceph-secret

Here we indicate where to find the admin credentials (in the default namespace) and where the user's are (the same secret, but copied into the current namespace). Then we create the storage class:

kubectl --namespace=dev create -f ~/gupcluster/rbd-storage-class.yaml
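As a quick check (storage classes are cluster-scoped, so the namespace does not matter here), you can verify it was registered:

kubectl get storageclass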

We are ready! Now, when we use this section in a stateful set (for instance):

volumeClaimTemplates:
- metadata:
    name: data
    annotations:
      volume.beta.kubernetes.io/storage-class: rbd
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi

A Ceph persistent volume will be provisioned automatically.

kubectl exec and port-forward

Neither function works, due to a bug in early Kubernetes 1.5 releases.

This is now working with the latest Juju bundle versions. I will anyway present what you could do to overcome a non-working port-forward command:

There are 3 workarounds:

• SSH into the master and use kubectl exec from there. This will work for exec but is not a real solution for port-forward

• port-forward is useful when I want to act as if I were inside my cluster, even from my remote client. If it does not work I can simply create a "permanent pod" and use it as an access point from inside the cluster. There you won't need kubectl; you can install the tools you need, like a postgresql client, and try to connect to your database.

• Modify the masters' hosts configuration file to manually map each node name to its address (I did not test this option and I do not like it)

Anyway, the second option is not such a bad idea in my opinion, as it can be used to test some services as if we were one of the service consumers. And it gives me the opportunity to show you how to create a pod. Create a config file:


nano ~/gupcluster/configPod.yaml

And insert this yaml definition:

apiVersion: v1
kind: Pod
metadata:
  name: permanentconfigpod
spec:
  containers:
  - name: configpod
    image: ubuntu:latest
    # Just spin & wait forever
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]

The trick is to launch a task that lasts forever; this way we make sure the pod is never terminated and recreated for being idle.

Then create the pod:

kubectl --namespace=dev create -f ~/gupcluster/configPod.yaml
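Once it is running, you can open a shell inside it at any time (the same command is used later in this guide):

kubectl --namespace=dev exec -ti permanentconfigpod -- bash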

Pulling private images from Dockerhub

If, like me, you need to use a private docker image registry, you will also need to tell Kubernetes about it. I will explain how to do this with DockerHub, but things are much the same with other main image registry providers like GCE and AWS. In my case (not yet open source software), I will certainly need more than the one private repo thankfully provided by DockerHub, so it may be a good idea to build my own image registry inside my Kubernetes cluster. But that is another story that I may develop later.

For now, let's create a secret in the correct namespace:

kubectl --namespace=dev create secret docker-registry regsecret --docker-username=username --docker-password=<your-pword> --docker-email=email

Then in your pod description file add this section to the spec section:

imagePullSecrets:
- name: regsecret
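For context, here is a minimal, hypothetical pod description showing where that section fits (the image name is a placeholder for one of your private DockerHub repos):

apiVersion: v1
kind: Pod
metadata:
  name: private-image-test
spec:
  containers:
  - name: app
    image: yourdockerhubuser/private-image:latest
  imagePullSecrets:
  - name: regsecret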

That’s it!

Logs

Use this command to monitor your cluster:

watch -c -d kubectl get pods,deployments,nodes,ingress,services,endpoints,pv,pvc,jobs,secret

After logging into a node or master, we can follow the logs with:

tail -f /var/log/syslog
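From your remote client you can also follow a specific pod's logs with kubectl; for instance, using the permanent pod created earlier:

kubectl --namespace=dev logs -f permanentconfigpod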

You can use the Grafana dashboard (for resource usage), or install an Elastic log consolidation layer (not covered here).


Troubleshooting

In case one of the nodes stays in the "waiting for kubelet" state, it is most likely that kubelet could not start because docker itself is stuck. You can check by ssh-ing into the node and trying:

sudo service kubelet start
A dependency job for kubelet.service failed. See 'journalctl -xe' for details.

If you get this error, try:

sudo service docker start
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

journalctl -xe will probably return:

-- Unit docker.service has begun starting up.
Feb 02 10:58:35 juju-4163e6-bob-6 docker[2483]: time="2017-02-02T10:58:35.984087030Z" level=info msg="libcontainerd: new containerd process, pid: 2506"
Feb 02 10:58:37 juju-4163e6-bob-6 audit[2541]: AVC apparmor="STATUS" operation="profile_replace" profile="unconfined" name="docker-default" pid=2541 comm="apparmor_parser"
Feb 02 10:58:37 juju-4163e6-bob-6 kernel: audit: type=1400 audit(1486033117.054:21): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="docker-default" pid=2541 comm=
Feb 02 10:58:37 juju-4163e6-bob-6 docker[2483]: time="2017-02-02T10:58:37.059798397Z" level=error msg="[graphdriver] prior storage driver \"aufs\" failed: driver not supported"
Feb 02 10:58:37 juju-4163e6-bob-6 docker[2483]: time="2017-02-02T10:58:37.060209639Z" level=fatal msg="Error starting daemon: error initializing graphdriver: driver not supported"
Feb 02 10:58:37 juju-4163e6-bob-6 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Feb 02 10:58:37 juju-4163e6-bob-6 sudo[2451]: pam_unix(sudo:session): session closed for user root
Feb 02 10:58:37 juju-4163e6-bob-6 systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit docker.service has failed.
--
-- The result is failed.

In this case just execute:

sudo rm -rf /var/lib/docker/

And restart:

sudo service docker start
sudo service kubelet start


Deeeeeestrrroooy tests

I have done a lot of tests, even some "not so intended" ones..., and I will share here my pure crash experiences (without the screams that may have occurred):

WARNING! Many of the things I did to solve or analyse the root causes of my particular issues in a lab environment are not satisfactory at all and should only be taken for what they are: side notes, not an absolute troubleshooting method

• Tried to reboot a single master.... API and dashboard went down of course, but everything came back up normally after startup

• Tried to add a new master and then change the flavor of the first one.... No problem during the update, but we lost the dashboards (all of them, including Grafana). The only way to make the default one come up again was to disable/enable it with:

juju config kubernetes-master enable-dashboard-addons=false
juju config kubernetes-master enable-dashboard-addons=true

• Rebooting a node.... It ended up with a kubelet that could not start due to a docker failure. I overcame this with:

sudo rm -rf /var/lib/docker/

And restart:

sudo service docker start
sudo service kubelet start

See the full explanations in this bug report, in particular thaJeztah's and linas' comments. Everything was OK after that.

• Rebooting a node that has cinder pvs attached.... There were 2 of the three consul cluster nodes on it. Consul came down, with no auto remount nor automatic reassignment of the lost pods, even with their stateful set. So the only way to cope, if planning to use simple cinder volumes, is to make sure that there is at least one pod per node of a replica set, using daemon sets

• Enabling juju HA after cluster creation.... Resulted in permanent loss of provisioning facilities through Juju. I found no way to resolve this issue, but did not retry after the release of the Juju 2.1 series

• Detaching mounted, in-use cinder volumes through Openstack.... Temporary failure of the corresponding pod, then auto remount and OK again

• Destroy completely an in use worker through Juju without any pod on it.... no problem

• Destroy completely a claimed cinder volume through Openstack.... no problem, the pv gets re-provisioned automatically

• Destroy completely non claimed cinder volumes.... no problem, they get destroyed in Kubernetes

• Destroy through Openstack an instance in pending state for Juju.... Juju got confused and was not able to remove the corresponding machine from its juju status. The unit also stayed there, but juju remove-unit worked. juju remove-machine did not work, even with the --force option

• Adding a new private neutron network after the first juju bootstrap.... Caused juju and cinder provisioning to fail unless I specified a default network in the model config (juju model-config network=xxxxxx-xxxxx-xxxxx-xxxx)

• Using failing jobs in a namespace.... This one was not desired at all! I created a stateful set to set up a Consul cluster, and created a job used to join the 3 servers. But the Consul service creation failed, and thus the job. Going to the dashboard under the namespace concerned and looking at pods or other screens ended with an error. Actually the whole API server failed, and restarting the masters one after the other did nothing. I was starting to despair when I got the idea to delete completely the namespace where the job lived. And everything returned to a normal state (PHEW!). Having gained more experience later, this type of problem may just be related to the dashboard, so disabling/enabling it again should do the trick.

• Destroy an NFS pv mounted on a pod.... OUCH! This made the whole worker node go down! I think this is a consequence of the fact that NFS servers and clients must run in privileged mode, thus increasing the risks for the host

• Hard reboot a worker node with pods running.... Pods die. And are born again (rc or stateful sets) at restart without problem. The only thing is that I used a replication controller with one replica; when the node died, it was not migrated to another node. But the reboot was certainly not long enough for the master to reschedule it.

• Hard reboot of a second worker.... Stolon died! To my understanding this is because the first node reboot made a consul pod get rescheduled with a different IP, and it was marked as failing. The new one was not joined to the consul cluster, and thus, from raft's point of view, I had 2 of 4 consul nodes failing. When restarting the second node, which was furthermore the master, the remaining one was unable to elect a new one and the cluster went stale. I learned this way (too late!) that after each node death it is more than important to follow these instructions!!! Also note that things are going better thanks to the contributors of the consul github repo credited later and their latest commits

• Do not know what really happened, but while I was playing with NFS my 3 ceph-osd nodes went into failed status (juju agent status). Curiously, this had no impact on Kubernetes, NFS sharing or the filesystem content. But honestly, I was not able to do anything nor to find anything related by googling hard around. I finally ended up adding 3 new units one by one, detaching the cinder disks and reattaching them to the new units, and everything went back to green active status. After searching around I think this was due to a too-small server (one core, 4 GB RAM). The problem did not show up using a two-core, 7 GB RAM machine

• Upgrade the cluster while pods are running... no real problem

• Upgrade Juju agents... Big problems, the (bad) way I did it: upgrading the controller first but not waiting for it to finish before moving on to the other models. This did not break Kubernetes, but Juju was not usable anymore

To sum up, it was quite resistant to my GREAT immaturity! The only way I managed to really break the toy was due to my very limited basic knowledge of some components (like consul or the underlying neutron networks). And it did not affect the Kubernetes cluster's integrity.

6 main lessons (and growing):

1. Make sure underlying HA Juju infrastructure works before going on with Kubernetes

2. Unless you know exactly what to do, do not use multiple networks with Openstack

3. Create namespaces for your tests inside Kubernetes; I had the occasion to demonstrate that it could save your day when something goes wrong!

4. Never ever delete an NFS pv that is already bound!

5. If a worker dies and comes up again, even if all seems peaceful and quiet, do not think this has no consequences (hey, do not laugh at me, there may be other candid users here). Check services like consul that need a minimum number of voting nodes. In a three-node environment you are most of the time one step before the fall

6. You are stuck trying to understand what happened to your failing Juju unit.... hum, my advice: look for a solution for a "reasonable time" and just fire up new units if you fail to repair the broken things. I know this is bad and you definitely should search for root causes, but with limited time and a limited team... this could save your day


YES SECURITY MATTERS!

When I first came to this part I said to myself: "yep, this is important, but I'll do it later"... But later arrived, and here are my non-specialist but still not-so-obvious simple pieces of advice (I hope):

• Your API and dashboard are accessible... from ANYONE!!

• Ok you have done great, but your password is WEAK!!!

• Paranoid security issues

Note: Yes, this is the very classical advice for this kind of subject: "Understand what I have done, but do not forget that you are alone on this side of the project...". So here it is: I do not intend to be exhaustive, I only explain here how to secure the ultra basic things regarding access to the API.

Your API and dashboard are accessible... from ANYONE!!

Yes, that is hard to believe, but by default the Juju install leaves your API cluster AND your dashboards exposed to the Internet without ANY kind of limitation!! The Juju bundle doc says that you can access your API or dashboards using kubectl proxy. That is true, but you can also access them through the kubeapi load balancer. To illustrate the issue, just try something like this:

curl https://yourKubebalancerIP/api/v1/

This will print all objects accessible. Of course we do not want that.

This is due to the fact that the Kubeapi loadbalancer is exposed by Juju by default. So the first thing to do is:

juju unexpose kubeapi-load-balancer

Annnnnnndddd.... that does not unexpose the unit to the Internet! At least not in version 20 of the bundle. I do not know if this is due to my setup or my actions, and I did not want to rebuild everything to assert one option or the other. So I went to the horizon dashboard, found the kubeapi-load-balancer instance, checked the security groups attached, and could in fact see that exposing or unexposing changes nothing here. I always had:

Ingress IPv4 TCP 443 (HTTPS) 0.0.0.0/0 (CIDR)

And even an HTTP entry! Therefore I replaced the line with another one like this:

Ingress IPv4 TCP 443 (HTTPS) juju-2xxxxxxxxxxxxxxxxxxxxxxc75a

This authorizes only the other servers to access the instance. I also removed the HTTP entry, even though it was not used by the nginx load balancer (I checked in /etc/nginx/site-enabled/apilib). From what I understand, the problem is that the load balancer authenticates itself with the certificate and key corresponding to the admin's service token. Therefore, if you can access it on port 443 without authentication, it will authenticate for you and forward your request. To make sure your API is no longer reachable this way, just browse the load balancer IP https://yourLBIP. Nothing should happen now.

Now we have to find another way to access the API. If you followed my instructions, your local kubectl install points to the kubeapi-load-balancer (even though it was downloaded from one of the masters). Therefore, if you unexpose the load balancer, you are blind. We need to point to one of the masters by changing the .kube/config file. Check the line server: https://loadbalancerIP:443 and change it to point to the IP and API port of one of your masters: server: https://masterIP:6443.
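For clarity, the relevant line of ~/.kube/config before and after the change would look something like this (the IPs are placeholders):

# Before
    server: https://loadbalancerIP:443
# After
    server: https://masterIP:6443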

Then expose your masters with juju expose kubernetes-master; now we can see again. Check this with a simple kubectl get pods, for instance.

Now accessing the API via https://yourKubebalancerIP/api/v1/ in your web browser will prompt a login window. Much better.

Ok you have done great, but your password is WEAK!!!

Credentials are admin, admin by default.... perfectible.

Juju installs 2 pre-configured authentication methods: simple and service token. The simple one consists in looking up credentials in a file (clear text). To change it, ssh to each master and modify /srv/kubernetes/basic_auth.csv (password,user,uid), then restart kube-apiserver with sudo service kube-apiserver restart.
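A hypothetical line of /srv/kubernetes/basic_auth.csv following the (password,user,uid) format described above would be:

MyMuchStrongerPassword,admin,admin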

That is enough to access the API. In case the dashboard is shown but you get a message saying it needs credentials, just disable then re-enable the dashboard (this is a problem with the secrets generated at dashboard creation, and this is the easiest way to make things right again):

juju config kubernetes-master enable-dashboard-addons=false
juju config kubernetes-master enable-dashboard-addons=true

The second auth method is the service token (this is the one used by default by kubectl). If you want to use it with your web browser, you will need to install a certificate locally (only tested with Chrome on Ubuntu 16.10). First connect to one of the masters and print or copy client.crt and client.key:

sudo cat /srv/kubernetes/client.crt
sudo cat /srv/kubernetes/client.key

Once copied locally, generate a certificate compatible with Chrome:

openssl pkcs12 -export -clcerts -inkey client.key -in client.crt -out kubecfg.p12 -name "kubecfg"

Then install this certificate in chrome.

The first time you attempt to connect to the dashboard, it will prompt for the certificate to use. Pick it and you are done. This method is better than the first one, which stores passwords in plain text, of course.

Paranoid security issues

On the worker side, things are more paranoid. Even if you expose the workers with Juju (done by default), no ports are accessible. Therefore (we will see that later), using the NodePort functionality to expose services will lead to nothing. I did not find any simple way to open a port manually with Juju (I certainly did not dig enough), and doing it manually with Openstack will not work, because the security groups are reinitialized periodically. The only simple way I found was to use the default ingress controller provided along with the bundle:

juju config kubernetes-worker ingress=true

This will open ports 80 and 443, and we will be able to use ingress descriptors to expose our services. That is OK for me and has the great advantage of being really simple compared to a self-deployed ingress controller (yes, I tried before understanding that ports 443 and 80 were not opened). If all nodes are not exposed at the same time, do not worry (you can check the opened ports with juju status). It can take time; in my case the last controller pod restarted several times before being reachable.
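As an illustration, a minimal ingress descriptor for this Kubernetes release could look like the sketch below (host and service names are hypothetical):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: tenant1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-web-service
          servicePort: 80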


CHAPTER 8

Self maintained high availability Postgresql Cluster

This is another big challenge when talking about web apps. Postgresql is a well known, powerful open source database engine. But strangely, when it comes to web deployment and high availability, googling does not seem very useful for my objective: I want now, not for Christmas, a multi-node postgresql cluster using native streaming replication, with automatic failover and master promotion, connection pooling (remember we are building a multi-tenant web app with, I hope, hundreds, no! thousands of connections), load balancing of read instructions to standby servers, failover that is transparent to pg clients when it occurs, and, icing on the cake, au-to-ma-tic re-provisioning.... annnnnddd you know what? Santa Claus is not coming to town!

But I found several tracks to follow, in particular these 2 projects (there are others, like Patroni and the Crunchy Data containers suite, which looks gorgeous but is simply not documented, that I won't detail here):

• Stolon

• paunin’s postgresql pgpool cluster

Neither of them fulfilled all my wishes, but both provide production-grade Kubernetes examples. Flash pros and cons:

• Stolon is more complex to set up (but far less so than doing this by hand, of course); it uses streaming replication and Kubernetes statefulSets to re-provision failed pods, and is capable of automatic failover, but... no connection pooling and load balancing built in. We are stuck with our standard limit of 100 simultaneous connections, which we can of course increase provided we put adequate, expensive hardware in place.

• Paunin's solution is a more "natural" postgresql-style solution. It natively does connection pooling, streaming replication, load balancing and automatic failover but, to my knowledge, no auto promotion and no automatic re-provisioning. For those who have already set up a pgpoolII cluster, this deployment looks MIRACULOUS!

In the end I made up my mind for Stolon and accepted, with lots of regrets and again sadness, to drop the load-balancing option. Regarding transparent failover for pg clients, I did not find anything that could help anywhere. But maybe I should have dug harder...

And do not think I gave up on connection pooling. I just planned to give this part to a pgbouncer layer, or to pgpool, which I know better for having fought with it for a while. This interminable introduction is coming at last to an end; let's configure a stolon cluster.


• Secure store

– Option A: Going with Consul

– Option B: Going with etcd

• Stolon

– Stolon preparation

– Stolon deployment

– Tests and operation

• Connection pooling

Secure store

Stolon can use etcd or consul as a secure store. etcd? Perfect, let's use the one provisioned by Juju along with the Kubernetes cluster..... annnnnnd no, you cannot. It is closed to workers for obvious security reasons. So we have two options: provide the store inside or outside the Kubernetes cluster. With Juju, both consul and etcd stores can be deployed easily, but this consumes instances, Openstack security groups, and additional money. We had better build it inside Kubernetes.

Again looking for turnkey options, I found this excellent etcd-operator helm chart. But sadly, it did not work on my brand new Kubernetes 1.5 at the first attempt, due to a regression in this version: it is only for the 1.4 series. I found other github projects, but none well enough documented nor using auto re-provisioning.

I then decided to go for consul and (joy!) there is a great implementation ready for Kubernetes, kindly provided by kelseyhightower with his consul-on-kubernetes repo! I found no equivalent.

Note: Please take into account that I finally decided to go for etcd-operator, as it is a more complete project that includes an API to ease recoveries, upgrades... I left the Consul part anyway, as it represents a good alternative to etcd in my opinion, having tested it.

Option A: Going with Consul

We must first install go, cfssl and cfssljson. Download go here and untar it (this can be really messy if you miss the path part):

sudo tar -C /usr/local -xzf go1.7.5.linux-amd64.tar.gz
curl -O https://storage.googleapis.com/golang/go1.6.linux-amd64.tar.gz
tar -xvf go1.6.linux-amd64.tar.gz
sudo mv go /usr/local
echo "export PATH=$PATH:/usr/local/go/bin" >> /etc/profile
source /etc/profile
export GOPATH=$HOME/go
go get -u github.com/cloudflare/cfssl/cmd/cfssl
go get -u github.com/cloudflare/cfssl/cmd/...

Then clone consul repo:

cd ~/gupcluster/
git clone https://github.com/kelseyhightower/consul-on-kubernetes.git
cd ~/gupcluster/consul-on-kubernetes


Generate the certificate authority and private keys, create a gossip encryption key, and store them in a secret (this is condensed, I know):

cfssl gencert -initca ca/ca-csr.json | cfssljson -bare ca
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca/ca-config.json \
  -profile=default \
  ca/consul-csr.json | cfssljson -bare consul
GOSSIP_ENCRYPTION_KEY=$(consul keygen)
kubectl --namespace=dev create secret generic consul \
  --from-literal="gossip-encryption-key=${GOSSIP_ENCRYPTION_KEY}" \
  --from-file=ca.pem \
  --from-file=consul.pem \
  --from-file=consul-key.pem

Finally create all resources:

kubectl --namespace=dev create configmap consul --from-file=configs/server.json
kubectl --namespace=dev create -f services/consul.yaml
kubectl --namespace=dev create -f statefulsets/consul.yaml
## Wait for the pods to startup before creating the job
kubectl --namespace=dev create -f jobs/consul-join.yaml

We could use the kubectl port-forward consul-0 8400:8400 command here to talk to consul, or we could connect to the configPod. First ssh to the master and wrap the pod's bash:

juju ssh kubernetes-master/0
kubectl --namespace=dev exec -ti permanentconfigpod -- bash

Now we could use consul members, provided that you have installed a consul client (which I did not do). Consul also comes with a dashboard that should be accessible on localhost:8500 with kubectl port-forward 8500:8500, but you know the story...

To clean up your installation use:

bash cleanup

Consul is ready! You can also notice that 6 persistent volumes have been auto-provisioned from Openstack cinder's API and mounted on the right workers!

Option B: Going with etcd

etcd-operator has the GREAT advantage over consul-on-kubernetes of automatically re-provisioning and recovering failed etcd members.

Install etcd-operator with this descriptor:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: etcd-operator
    spec:
      containers:
      - name: etcd-operator
        image: quay.io/coreos/etcd-operator:v0.2.4
        env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name

Deploy:

kubectl --namespace=dev create -f deployment.yaml

Now create the etcd cluster with this simple descriptor:

apiVersion: "etcd.coreos.com/v1beta1"kind: "Cluster"metadata:

name: "gup-etcd-cluster"spec:

size: 3version: "3.1.4"
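Save this descriptor to a file (say etcd-cluster.yaml, a hypothetical name) and create it in the same namespace as the operator:

kubectl --namespace=dev create -f etcd-cluster.yaml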

Maybe you will not believe me but this is finished!! I had to take some time to realize it the first time.

Stolon

This part will go through the steps to create the stolon cluster.

• Stolon preparation

• Stolon deployment

• Tests and operation

Stolon preparation

From there, ssh to the master and wrap the pod’s bash:

juju ssh kubernetes-master/0
kubectl --namespace=dev exec -ti permanentconfigpod -- bash

That is because we need access to consul's API through consul:8500, which cannot be proxied to our remote client. We are operating as root from now on:

apt-get update && apt-get upgrade && apt-get install curl iputils-ping git postgresql-client gcc
curl -O https://storage.googleapis.com/golang/go1.6.linux-amd64.tar.gz
tar -xvf go1.6.linux-amd64.tar.gz
mv go /usr/local
echo "export PATH=$PATH:/usr/local/go/bin" >> /etc/profile


source /etc/profile
export GOPATH=$HOME/go
cd /opt
git clone https://github.com/sorintlab/stolon.git
cd stolon
./build
cd /opt
mv stolon /usr/local
echo "export PATH=$PATH:/usr/local/stolon/bin" >> /etc/profile
source /etc/profile

Each example file provided along with the stolon examples in the /stolon/examples/kubernetes/statefulset/ repo must be adapted to change the consul section (stolon-keeper.yaml, stolon-proxy-service.yaml, stolon-proxy.yaml, stolon-sentinel.yaml). Example for stolon-proxy.yaml (TODO: should be replaced by a deployment in order to ease future rolling updates):

apiVersion: v1
kind: ReplicationController
metadata:
  name: stolon-proxy
spec:
  replicas: 2
  selector:
    name: stolon-proxy
  template:
    metadata:
      labels:
        name: stolon-proxy
        stolon-cluster: "kube-stolon"
        stolon-proxy: "true"
    spec:
      containers:
      - name: stolon-proxy
        image: sorintlab/stolon:master-pg9.6
        command:
        - "/bin/bash"
        - "-ec"
        - |
          exec gosu stolon stolon-proxy
        env:
        - name: STPROXY_CLUSTER_NAME
          # TODO(sgotti) Get cluster name from "stoloncluster" label using a downward volume api instead of duplicating the name here
          value: "kube-stolon"
        - name: STPROXY_STORE_BACKEND
          value: "consul" # Or etcd
        - name: STPROXY_STORE_ENDPOINTS
          value: "consul:8500" # Or etcd:2379
        - name: STPROXY_LISTEN_ADDRESS
          value: "0.0.0.0"
        ## Uncomment this to enable debug logs
        #- name: STPROXY_DEBUG
        #  value: "true"
        ports:
        - containerPort: 5432
        readinessProbe:
          tcpSocket:
            port: 5432
          initialDelaySeconds: 10
          timeoutSeconds: 5

Stolon deployment

Let's initialize the cluster (still in our configPod). For Consul:

stolonctl --cluster-name=kube-stolon --store-backend=consul --store-endpoints=consul:8500 init

For etcd:

stolonctl --cluster-name=kube-stolon --store-backend=etcd --store-endpoints=etcd:2379 init

Now return to our remote client and create the stolon resources:

kubectl --namespace=dev create -f stolon-sentinel.yaml
kubectl --namespace=dev create -f secret.yaml
kubectl --namespace=dev create -f stolon-keeper.yaml # change to a statefulset and add terminationGracePeriodSeconds: 10 in spec:
kubectl --namespace=dev create -f stolon-proxy.yaml
kubectl --namespace=dev create -f stolon-proxy-service.yaml

Tests and operation

We can easily scale the postgresql cluster with:

kubectl --namespace=dev scale --replicas=3 rc stolon-sentinel
kubectl --namespace=dev scale --replicas=3 rc stolon-proxy

To scale the keepers, just set the replicas parameter in stolon-keeper.yaml to the desired number and then replace the old stateful set with:

kubectl --namespace=dev replace -f stolon-keeper.yaml

In order to monitor stolon you can keep an eye on it using the following (and of course still the juju status and kubectl get commands that I personally keep open with watch in other terminals when working on the cluster):

stolonctl --cluster-name=kube-stolon --store-backend=consul --store-endpoints=consul:8500 status
=== Active sentinels ===

ID        LEADER
1892b404  false
3c5ede5d  false
655dad12  true

=== Active proxies ===

ID
05094f77
0637496a
d0e756bc

=== Keepers ===

UID      PG LISTENADDRESS  HEALTHY  PGWANTEDGENERATION  PGCURRENTGENERATION
keeper0  10.1.64.29:5432   true     5                   5
keeper1  10.1.74.16:5432   true     1                   1
keeper2  10.1.74.19:5432   true     1                   1
keeper3  10.1.14.10:5432   true     1                   1

=== Cluster Info ===

Master: keeper0

===== Keepers tree =====

keeper0 (master)
├─keeper1
├─keeper2
└─keeper3

To test the pgsql connection, wrap the configPod again and try a simple psql command:

juju ssh kubernetes-master/0
kubectl --namespace=dev exec -ti permanentconfigpod -- bash
psql --host stolon-proxy-service --port 5432 postgres -U stolon -W # (password1 by default)

If you see the psql prompt, you won!

I will not detail the tests to check that the cluster can fail over properly.

Connection pooling

Another cruel choice to make, and it depends on the previous one. Stolon cannot pool or load-balance requests. We require hundreds or thousands of simultaneous connections in theory, yet we are stuck at around a hundred for a standard-purpose server (I can easily increase this limit, but it would just postpone the point where things blow up).

From here I saw two choices (others may exist, of course):

• Pgbouncer: A specialized connection pooling tool for Postgresql (only pooling)

• PgpoolII: A multipurpose tool for pooling, load balancing, fail over, request replication....

Pgbouncer is ideal for our need and could later be put in front of a PgpoolII server if Pgpool functions became required or strongly desired.

The problem is that it uses a special authentication method with 2 files: pgbouncer.ini, which stores all known databases and the hosts attached (in our case only the Stolon proxy), and a simple user.txt, which stores all authorized users. At first this looked a bit regressive to me, as I have already implemented the whole authentication layer in Postgresql and I do not need one more. But it could be a great advantage to flatten rights on the Postgresql side with one SaaS administrator user (save/restore and other update operations become a lot easier this way). Anyway, I can also go for a transparent mode (if I understood the auth_type=any parameter correctly).
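To make the two files more concrete, here is a minimal sketch of what they could contain (hostname, database name, user and paths are assumptions based on the Stolon setup above, not a tested configuration):

# pgbouncer.ini
[databases]
gesticlean = host=stolon-proxy-service port=5432 dbname=postgres

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/user.txt
pool_mode = session
max_client_conn = 1000
default_pool_size = 50

# user.txt (the users file)
"stolon" "password1"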

Going on with Pgbouncer means handling two local files that need to be updated and shared across at least 2 replicas to achieve high availability. So we have to resolve this problem first; it will also come up later with our GestiClean Up' replicas, which need to store web asset content specific to each tenant. Therefore I will concentrate our effort on this layer first in the next chapter and then dedicate a chapter to connection pooling.


CHAPTER 9

File sharing system

GestiClean Up' replicated pods (and Pgbouncer) will require storing tenant-specific web assets in a shared, secured place. And of course its content must be the same at any time across every pod (for instance, if a user uploads a new logo, any pod executing the program should reuse this same logo).

There are MANY solutions out there to achieve that, from a simple NFS server to a Ceph cluster deployment or Mongodb. From my understanding I have 2 kinds of solutions:

• Integrate a system inside Kubernetes that is in charge of sharing an attached pv or a database on a pv

• Simply consuming pvs that are provisioned from a high availability storage system like Ceph, as proposed in the Juju Canonical Kubernetes bundle

Anyway, storage MUST survive the death of pods, services, replica sets or even the Kubernetes cluster, just like customers' databases of course.

As we will need to store regular files for Pgbouncer, we could imagine unifying this need with the tenants' web assets in a standard file system, like NFS-style solutions. Furthermore, GestiClean Up' is designed for the moment to look into directories based on the tenant's name. Banco, let's go for a file sharing system then.

If we go for direct pv consumption through claims, then we have to consider that they are not all equal with regard to multiple write consumers. According to the Kubernetes pv doc, we can narrow down the possible choices to NFS, GlusterFS and Cephfs.

We should also think about decoupling cinder volumes from direct node mounting in case nodes die. To do this we have to set up an external NFS/Ceph/Gluster/whatever cluster outside of Kubernetes, so that Kubernetes consumes volumes from that cluster and they remain available even if a node dies.

We have a Ceph cluster, and Ceph is capable of presenting volumes that accept multiple host mounting... We have a candidate. But again, unfortunately, things are more complicated... Because we are using our Ceph file system to provide Kubernetes with rbd volumes, it cannot also be used directly for cephfs. So let's go to the next solution, NFS.

For this first setup we will deploy a replication controller for an NFS machine (simpler than Glusterfs) that will provision and consume rbd volumes (which we could change later with minimal effort). One could say: "Hey, you have only one NFS pod? What about HA?". I would say that this compromise seems fair enough for the moment, as we can use a replication controller or a deployment in order to make sure that there is at least one pod running. A real HA implementation of NFS is not worth the cost if we can deal with the few seconds needed to recreate a new pod (always provided that you are not using directly attached cinder volumes).


If the NFS pod gets overloaded, we may just add a new one with a new cinder volume and assign it to a region, or use an algorithm that splits the sub domains assigned to tenants into several groups, for example (this may not apply to you, but GestiClean Up' assigns one sub domain to each tenant).

• Allow privileged containers and security precautions and allow nfs on nodes

• Deploy NFS server

• Creating and claiming NFS persistent volumes

Allow privileged containers and security precautions and allow nfs on nodes

After loooooooooooooooong searching, I finally found out that both NFS servers AND clients must run in a privileged security context. And you should know that great powers imply great responsibilities. Running a container privileged is, to simplify, like giving access to the host, and thus creating a security hole. This Kubernetes security best practice article strongly encourages us to activate the DenyEscalatingExec plugin on masters in order to deny exec and attach commands to pods that run with escalated privileges allowing host access. So first, we need to allow the cluster to run privileged containers.

By default, juju-deployed clusters do not support running privileged containers. As we need them, we have to edit /etc/default/kube-apiserver on the master nodes, and /etc/default/kubelet on your worker nodes.

For all masters:

juju show-status kubernetes-master --format json | \
  jq --raw-output '.applications."kubernetes-master".units | keys[]' | \
  xargs -I UNIT juju ssh UNIT "sudo sed -i 's/KUBE_API_ARGS=\"/KUBE_API_ARGS=\"--allow-privileged\ /' /etc/default/kube-apiserver && sudo systemctl restart kube-apiserver.service"

For all workers:

juju show-status kubernetes-worker --format json | \
  jq --raw-output '.applications."kubernetes-worker".units | keys[]' | \
  xargs -I UNIT juju ssh UNIT "sudo sed -i 's/KUBELET_ARGS=\"/KUBELET_ARGS=\"--allow-privileged\ /' /etc/default/kubelet && sudo systemctl restart kubelet.service"

Then we need to activate DenyEscalatingExec on the masters in /etc/default/kube-apiserver:

Note: We should do this only after having completed our tests, because we will no longer be able to use kubectl --namespace=dev exec -ti test2 -- bash, for example.

sudo nano /etc/default/kube-apiserver

And make the # default admission control policies line look like:

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,DenyEscalatingExec"

Then restart kube-apiserver with sudo service kube-apiserver restart.


Because we are using containers, which are not virtual machines in the sense that they use the host's kernel, we also have to activate nfs at the kernel level on each Kubernetes worker using NFS, and install nfs-common if not already done:

sudo apt-get install nfs-common
sudo modprobe nfs
sudo modprobe nfsd

Note: You should definitely check your logs or try to mount one of your nfs shares in the permanent config pod, just to make sure the nfs daemon is up and running. This may avoid future headaches trying to figure out why pods consuming nfs shares do not start.

Another GREAT caveat: I killed my NFS setup entirely in order to make sure this documentation is applicable, following once again my own recommendations, and I fell onto a new issue where the node CPU became stuck, the NFS server container file system locked up and the kubelet service went down...... Ooooooh crµ%µ£/§p, oh sadness! Recreating the whole NFS deployment several times did nothing; I had to restart the node, clean the docker directory and restart kubelet to get the whole thing working again. I think the problem happened because I deleted everything in the same order I created it, destroying the pv before the nfs server consuming it, and the nfs pvs before the nfs clients. I do not have enough knowledge of privileged mode, but I guess it could have made things worse, by making the host itself go down. Be extremely prudent with those privileged containers in production environments!

It could be a good idea to use the --cap-add directive instead of making the whole container privileged. We can use the security context (not tested):

securityContext:
  capabilities:
    add:
    - SYS_ADMIN

Deploy NFS server

This implementation is inspired by this kubernetes nfs example; you could also check this repo. First create a provisioning claim for our pv (it will create an rbd volume automatically) with:

mkdir ~/gupcluster/nfs
nano ~/gupcluster/nfs/data-provisioning/nfs-rbd-pv.yaml

And insert:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data
  annotations:
    volume.beta.kubernetes.io/storage-class: rbd
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi # Change this for production environment

This will create a provisioning request for an rbd volume.


Warning: NEVER EVER DELETE THE PERSISTENT VOLUME CLAIM OR YOU WILL LOSE THE PERSISTENT VOLUME. In fact, when using provisioning capabilities, the persistent volume reclaim policy is set to "Delete" by default!

Create an image for your NFS server from a Dockerfile like this:

FROM ubuntu:xenial

RUN apt-get update && apt-get install -y --no-install-recommends \
    netbase \
    nfs-kernel-server \
    && rm -rf /var/lib/apt/lists/*

RUN mkdir -p /exports

VOLUME /exports

EXPOSE 111/udp 2049/tcp

ADD run.sh /usr/local/bin/run.sh
ENTRYPOINT ["run.sh"]

This Docker image is pretty simple and has the advantage that you can pass the directories to export directly as arguments.
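The Dockerfile above references a run.sh entrypoint that is not reproduced in this guide; a minimal sketch of what such a script could do (an assumption, not the actual script shipped with the image) is:

#!/bin/bash
set -e
# Export each directory passed as a container argument under /exports
for dir in "$@"; do
  mkdir -p "/exports/${dir}"
  echo "/exports/${dir} *(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
done
# Start the NFS services and keep the container in the foreground
service rpcbind start
service nfs-kernel-server start
tail -f /dev/null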

Now create a new replication controller for the nfs server pods:

nano ~/gupcluster/nfs/nfs-server-rc.yaml

And insert:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: erezhorev/dockerized_nfs_server
        args: [ "saas-config", "web-dynamic-content" ]
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfsdata

Here we use /exports as the NFS data root and export web-dynamic-content for sharing assets across pods. With the docker image used here I can also pass the saas-config directory, which will be used for Pgbouncer, for instance.

Create service file:

nano ~/gupcluster/nfs/nfs-server-svc.yaml

And insert:

kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  selector:
    role: nfs-server

We are ready to deploy:

kubectl create -f ~/gupcluster/nfs/nfs-rbd-pv.yaml
kubectl create -f ~/gupcluster/nfs/nfs-server-rc.yaml
kubectl create -f ~/gupcluster/nfs/nfs-server-svc.yaml

Creating and claiming NFS persistent volumes

And the truth is that I never got it to work normally, even after exhuming kubernetes issue reports and forums, googling in every place I could find, and looooots of tries. I finally gave up (and did not try again; do so anyway if you want). I can get an NFS pv properly mounted on a pod using a pv, a pvc and the nfs server service hostname ONLY if the pod resides on the same node as the server. The problem is that the service hostname never gets resolved from another node, no matter whether I install nfs-common, activate modprobe.... So I will explain what worked for one node, and then go to the manual, poor, non-agnostic solution.

An NFS pv should use this nfs-pv.yaml template:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfswdc
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs-server # Provided that the pod that will consume it is on the same node as the server....
    path: "/web-dynamic-content"

The path must be absolute. Also, you can see that I have put the hostname of the nfs-server service. Normal, would you say... but this did not work straight away because, in my first attempts, the nfs mount was unable to resolve it (see here for the resolution). Anyway, I changed the docker image, installed nfs-common on each node and did a lot of other tweaks, more or less knowing in which direction I should go, and it finally worked... magically (aaaahhh). But only if the pod consuming it is on the same node as the server (rohhhhhh).

This template can be used for application deployments that will consume the pvc in order to share customers' dynamic content. Deployment or pod definitions should use this nfs-pvc.yaml template:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfswdcclaim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

And finally a pod definition template testpod1.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: test1
spec:
  containers:
  - name: test1
    image: ubuntu:latest
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "apt-get update && apt-get upgrade -y && apt-get install nfs-common -y && while true; do sleep 30; done;" ]
    volumeMounts:
    - mountPath: "/var/web-dynamic-content"
      name: data
    securityContext:
      privileged: true
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfswdcclaim

For test purposes, we can create a second pod and then do some simple operation from one pod, like creating a directory, and check that it shows up on the second one. nfs-common MUST be installed on the nfs-client side. I have done it by executing bash commands in the test pod, but it should be included directly in the dockerfile.
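Assuming a second pod test2 created from the same template as test1, the round trip can be checked like this:

kubectl exec test1 -- mkdir -p /var/web-dynamic-content/shared-check
kubectl exec test2 -- ls /var/web-dynamic-content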

Now this will not work properly in all cases, so just pass the mount instruction manually and the dream will come true at last:


apiVersion: v1
kind: Pod
metadata:
  name: test21
spec:
  containers:
  - name: test2
    image: ubuntu:latest
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "apt-get update && apt-get upgrade -y && apt-get install -y nfs-common netbase dnsutils && mkdir /var/saas-config && mount -t nfs4 nfs-server:/saas-config /var/saas-config && while true; do sleep 30; done;" ]
#    volumeMounts:
#    - mountPath: "/var/saas-config"
#      name: data
    securityContext:
      privileged: true
#  volumes:
#  - name: data
#    persistentVolumeClaim:
#      claimName: nfsscclaim

Of course, passing these parameters as environment variables may be better.

I would never have imagined it would be so tricky to set up a simple NFS service!!


CHAPTER 10

Postgresql connection pooling

We are ready to use Pgbouncer for connection pooling now that we have a shared file system available. To do this, I will use a dockerfile based on this repo from brainsam, which allows passing every parameter Pgbouncer needs as environment variables.

• Prepare

• Deploy

• Test

• Pooling modes

Prepare

I did not find any docker image that fits all my needs, so I created one like this:

FROM ubuntu:xenial

ENV DEBIAN_FRONTEND noninteractive

RUN set -x \
    && apt-get -qq update \
    && apt-get install -yq --no-install-recommends pgbouncer nfs-common \
       python python-psycopg2 \
    && apt-get purge -y --auto-remove \
    && rm -rf /var/lib/apt/lists/*

ADD mkauth.py /
ADD entrypoint.sh ./

EXPOSE 5432
ENTRYPOINT ["./entrypoint.sh"]

The entrypoint script allows me to declare every pgbouncer parameter in pgbouncer.ini and to create/populate it if needed. It likewise allows me to mount an NFS share for the config file. Here is the script used:

#!/bin/sh

PG_LOG=/var/log/pgbouncer
PG_CONFIG_DIR=/etc/pgbouncer
USER_DIR=/etc/pgusers
AUTH_FILE=userlist.txt
PG_USER=pgbouncer
NFS_SHARE=/saas-config
NFS_SERVER=nfs-server
NFS_MOUNT=/etc/pgusers
MKAUTH_SCRIPT=mkauth.py

echo "Check NFS mount location"
if [ ! -d ${NFS_MOUNT} ]; then
    echo "create nfs mount location ${NFS_MOUNT}"
    mkdir -p ${NFS_MOUNT}
fi

echo "Mount NFS share"
mount -t nfs4 ${NFS_SERVER}:${NFS_SHARE} ${NFS_MOUNT}

echo "Check user list existence"
if [ ! -f ${USER_DIR}/${AUTH_FILE} ]; then
    echo "create pgbouncer user list in ${USER_DIR}"
    mkdir -p ${USER_DIR}
    echo "Sync user list with postgresql registered users"
    python /${MKAUTH_SCRIPT} ${USER_DIR}/${AUTH_FILE} "host=stolon-proxy-service dbname=postgres user=${DB_USER} password=${DB_PASSWORD}"
fi

echo "Check pgbouncer ini file existence"
if [ ! -f ${PG_CONFIG_DIR}/pgbouncer.ini ]; then
    echo "create pgbouncer config in ${PG_CONFIG_DIR}"
    mkdir -p ${PG_CONFIG_DIR}
    echo "Create ini file"
    printf "\
#pgbouncer.ini
# Description
# Config file is in “ini” format. Section names are between “[” and “]”.
# Lines starting with “;” or “#” are taken as comments and ignored.
# The characters “;” and “#” are not recognized when they appear later in the line.
[databases]
* = host=${DB_HOST:?"Setup pgbouncer config error! \
You must set DB_HOST env"} \
port=${DB_PORT:-5432} user=${DB_USER:-postgres} \
${DB_PASSWORD:+password=${DB_PASSWORD}}

[pgbouncer]
# Generic settings
${LOGFILE:+logfile = ${LOGFILE}\n}\
${PIDFILE:+pidfile = ${PIDFILE}\n}\
listen_addr = ${LISTEN_ADDR:-0.0.0.0}
${LISTEN_PORT:+listen_port = ${LISTEN_PORT}\n}\
${UNIX_SOCKET_DIR:+unix_socket_dir = ${UNIX_SOCKET_DIR}\n}\
${UNIX_SOCKET_MODE:+unix_socket_mode = ${UNIX_SOCKET_MODE}\n}\
${UNIX_SOCKET_GROUP:+unix_socket_group = ${UNIX_SOCKET_GROUP}\n}\
${USER:+user = ${USER}\n}\
${AUTH_FILE:+auth_file = ${USER_DIR}/${AUTH_FILE}\n}\
${AUTH_HBA_FILE:+auth_hba_file = ${AUTH_HBA_FILE}\n}\
auth_type = ${AUTH_TYPE:-any}
${AUTH_QUERY:+auth_query = ${AUTH_QUERY}\n}\
${POOL_MODE:+pool_mode = ${POOL_MODE}\n}\
${MAX_CLIENT_CONN:+max_client_conn = ${MAX_CLIENT_CONN}\n}\
${DEFAULT_POOL_SIZE:+default_pool_size = ${DEFAULT_POOL_SIZE}\n}\
${MIN_POOL_SIZE:+min_pool_size = ${MIN_POOL_SIZE}\n}\
${RESERVE_POOL_SIZE:+reserve_pool_size = ${RESERVE_POOL_SIZE}\n}\
${RESERVE_POOL_TIMEOUT:+reserve_pool_timeout = ${RESERVE_POOL_TIMEOUT}\n}\
${MAX_DB_CONNECTIONS:+max_db_connections = ${MAX_DB_CONNECTIONS}\n}\
${MAX_USER_CONNECTIONS:+max_user_connections = ${MAX_USER_CONNECTIONS}\n}\
${SERVER_ROUND_ROBIN:+server_round_robin = ${SERVER_ROUND_ROBIN}\n}\
ignore_startup_parameters = ${IGNORE_STARTUP_PARAMETERS:-extra_float_digits}
${DISABLE_PQEXEC:+disable_pqexec = ${DISABLE_PQEXEC}\n}\
${APPLICATION_NAME_ADD_HOST:+application_name_add_host = ${APPLICATION_NAME_ADD_HOST}\n}\
${CONFFILE:+conffile = ${CONFFILE}\n}\
${JOB_NAME:+job_name = ${JOB_NAME}\n}\

# Log settings
${SYSLOG:+syslog = ${SYSLOG}\n}\
${SYSLOG_IDENT:+syslog_ident = ${SYSLOG_IDENT}\n}\
${SYSLOG_FACILITY:+syslog_facility = ${SYSLOG_FACILITY}\n}\
${LOG_CONNECTIONS:+log_connections = ${LOG_CONNECTIONS}\n}\
${LOG_DISCONNECTIONS:+log_disconnections = ${LOG_DISCONNECTIONS}\n}\
${LOG_POOLER_ERRORS:+log_pooler_errors = ${LOG_POOLER_ERRORS}\n}\
${STATS_PERIOD:+stats_period = ${STATS_PERIOD}\n}\
${VERBOSE:+verbose = ${VERBOSE}\n}\
admin_users = ${ADMIN_USERS:-postgres}
stats_users = ${STATS_USERS},stats,postgres,root
# Connection sanity checks, timeouts
${SERVER_RESET_QUERY:+server_reset_query = ${SERVER_RESET_QUERY}\n}\
${SERVER_RESET_QUERY_ALWAYS:+server_reset_query_always = ${SERVER_RESET_QUERY_ALWAYS}\n}\
${SERVER_CHECK_DELAY:+server_check_delay = ${SERVER_CHECK_DELAY}\n}\
${SERVER_CHECK_QUERY:+server_check_query = ${SERVER_CHECK_QUERY}\n}\
${SERVER_LIFETIME:+server_lifetime = ${SERVER_LIFETIME}\n}\
${SERVER_IDLE_TIMEOUT:+server_idle_timeout = ${SERVER_IDLE_TIMEOUT}\n}\
${SERVER_CONNECT_TIMEOUT:+server_connect_timeout = ${SERVER_CONNECT_TIMEOUT}\n}\
${SERVER_LOGIN_RETRY:+server_login_retry = ${SERVER_LOGIN_RETRY}\n}\
${CLIENT_LOGIN_TIMEOUT:+client_login_timeout = ${CLIENT_LOGIN_TIMEOUT}\n}\
${AUTODB_IDLE_TIMEOUT:+autodb_idle_timeout = ${AUTODB_IDLE_TIMEOUT}\n}\
${DNS_MAX_TTL:+dns_max_ttl = ${DNS_MAX_TTL}\n}\
${DNS_NXDOMAIN_TTL:+dns_nxdomain_ttl = ${DNS_NXDOMAIN_TTL}\n}\

# TLS settings
${CLIENT_TLS_SSLMODE:+client_tls_sslmode = ${CLIENT_TLS_SSLMODE}\n}\
${CLIENT_TLS_KEY_FILE:+client_tls_key_file = ${CLIENT_TLS_KEY_FILE}\n}\
${CLIENT_TLS_CERT_FILE:+client_tls_cert_file = ${CLIENT_TLS_CERT_FILE}\n}\
${CLIENT_TLS_CA_FILE:+client_tls_ca_file = ${CLIENT_TLS_CA_FILE}\n}\
${CLIENT_TLS_PROTOCOLS:+client_tls_protocols = ${CLIENT_TLS_PROTOCOLS}\n}\
${CLIENT_TLS_CIPHERS:+client_tls_ciphers = ${CLIENT_TLS_CIPHERS}\n}\
${CLIENT_TLS_ECDHCURVE:+client_tls_ecdhcurve = ${CLIENT_TLS_ECDHCURVE}\n}\
${CLIENT_TLS_DHEPARAMS:+client_tls_dheparams = ${CLIENT_TLS_DHEPARAMS}\n}\
${SERVER_TLS_SSLMODE:+server_tls_sslmode = ${SERVER_TLS_SSLMODE}\n}\
${SERVER_TLS_CA_FILE:+server_tls_ca_file = ${SERVER_TLS_CA_FILE}\n}\
${SERVER_TLS_KEY_FILE:+server_tls_key_file = ${SERVER_TLS_KEY_FILE}\n}\
${SERVER_TLS_CERT_FILE:+server_tls_cert_file = ${SERVER_TLS_CERT_FILE}\n}\
${SERVER_TLS_PROTOCOLS:+server_tls_protocols = ${SERVER_TLS_PROTOCOLS}\n}\
${SERVER_TLS_CIPHERS:+server_tls_ciphers = ${SERVER_TLS_CIPHERS}\n}\

# Dangerous timeouts
${QUERY_TIMEOUT:+query_timeout = ${QUERY_TIMEOUT}\n}\
${QUERY_WAIT_TIMEOUT:+query_wait_timeout = ${QUERY_WAIT_TIMEOUT}\n}\
${CLIENT_IDLE_TIMEOUT:+client_idle_timeout = ${CLIENT_IDLE_TIMEOUT}\n}\
${IDLE_TRANSACTION_TIMEOUT:+idle_transaction_timeout = ${IDLE_TRANSACTION_TIMEOUT}\n}\
${PKT_BUF:+pkt_buf = ${PKT_BUF}\n}\
${MAX_PACKET_SIZE:+max_packet_size = ${MAX_PACKET_SIZE}\n}\
${LISTEN_BACKLOG:+listen_backlog = ${LISTEN_BACKLOG}\n}\
${SBUF_LOOPCNT:+sbuf_loopcnt = ${SBUF_LOOPCNT}\n}\
${SUSPEND_TIMEOUT:+suspend_timeout = ${SUSPEND_TIMEOUT}\n}\
${TCP_DEFER_ACCEPT:+tcp_defer_accept = ${TCP_DEFER_ACCEPT}\n}\
${TCP_KEEPALIVE:+tcp_keepalive = ${TCP_KEEPALIVE}\n}\
${TCP_KEEPCNT:+tcp_keepcnt = ${TCP_KEEPCNT}\n}\
${TCP_KEEPIDLE:+tcp_keepidle = ${TCP_KEEPIDLE}\n}\
${TCP_KEEPINTVL:+tcp_keepintvl = ${TCP_KEEPINTVL}\n}\
################## end file ##################" > ${PG_CONFIG_DIR}/pgbouncer.ini
fi

echo "add user" ${PG_USER}
adduser ${PG_USER}

echo "Create log directory"
mkdir -p ${PG_LOG}
chmod -R 755 ${PG_LOG}
chown -R ${PG_USER}:${PG_USER} ${PG_LOG}
chown ${PG_USER}:${PG_USER} ${USER_DIR}/${AUTH_FILE}

echo "Starting pgbouncer..."
cat ${PG_CONFIG_DIR}/pgbouncer.ini
exec pgbouncer -u ${PG_USER} ${PG_CONFIG_DIR}/pgbouncer.ini

Important: Listen carefully, it will save you time: with the latest version of pgbouncer, you have to set pgbouncer as the owner of the auth file (for us userlist.txt) or it will irremediably throw a psql ERROR: No such user: user. I can set it as an env var, but this will not be used.

I am using here the mkauth python file proposed by the pgbouncer team to sync the user list file with the real postgresql users. This way I can continue to use postgres-based role creation and call the python script when needed to sync the user list used by pgbouncer. Here is the python script:

#! /usr/bin/env python

import sys, os, tempfile, psycopg2

if len(sys.argv) != 3:
    print 'usage: mkauth path_to_user_file "host=stolon-proxy-service dbname=postgres user=stolon password=password1"'
    sys.exit(1)

# read old file
fn = sys.argv[1]
try:
    old = open(fn, 'r').read()
except IOError:
    old = ''

# create new file data
db = psycopg2.connect(sys.argv[2])
curs = db.cursor()
curs.execute("select usename, passwd from pg_shadow order by 1")
lines = []
for user, psw in curs.fetchall():
    user = user.replace('"', '""')
    if not psw: psw = ''
    psw = psw.replace('"', '""')
    lines.append('"%s" "%s" ""\n' % (user, psw))
db.commit()
cur = "".join(lines)

# if changed, replace data securely
if old != cur:
    fd, tmpfn = tempfile.mkstemp(dir = os.path.split(fn)[0])
    f = os.fdopen(fd, 'w')
    f.write(cur)
    f.close()
    os.rename(tmpfn, fn)

The NFS share mount allows me to use a Kubernetes deployment to achieve high availability by using the same config files for every pod created. Here is the deployment yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pgbouncer-dep
spec:
  replicas: 2
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: pgbouncer
    spec:
      volumes:
      - name: stolon
        secret:
          secretName: stolon
      - name: config
        configMap:
          name: pgbouncer-config # File imported must be named ucs.ini or it has to be forced using items: -key path values
      containers:
      - name: pgbouncer
        image: inforum/gup-pgbouncer
        env:
        - name: NFS_SHARE
          value: "/saas-config"
        - name: NFS_SERVER
          value: "nfs-server"
        - name: NFS_MOUNT
          value: "/etc/pgusers"
        - name: DB_PASSWORD
          value: "/etc/secrets/stolon/password"
        volumeMounts:
        - mountPath: /etc/pgbouncer
          name: config
        - mountPath: /etc/secrets/stolon
          name: stolon
        securityContext:
          privileged: true
      imagePullSecrets:
      - name: regsecret

Here, I can use the env: descriptor to pass all the environment variables needed. We now have to expose the pgbouncer service using this file:

apiVersion: v1
kind: Service
metadata:
  name: pgbouncer-service
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    app: "pgbouncer"

And we preferentially use a config map to store the basic parameters and ease future updates:

kind: ConfigMap
apiVersion: v1
metadata:
  name: pgbouncer-config
data:
  pgbouncer.ini: "# Description\n# Config file is in “ini” format. Section names are between “[” and “]”.\n# Lines starting with “;” or “#” are taken as comments and ignored.\n# The characters “;” and “#” are not recognized when they appear later in the line.\n[databases]\n#TODO: Pool user must be different from admin user for security reasons or we should use env var populated by a secret\n* = host=stolon-proxy-service port=5432 user=stolon password=password1\n\n[pgbouncer]\n# Generic settings\nlisten_addr = 0.0.0.0\nlisten_port = 5432\nauth_file = /etc/pgusers/userlist.txt\nauth_type = md5\nignore_startup_parameters = extra_float_digits\n# Log settings\nadmin_users = stolon\n################## end file ##################"
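For readability, here is the same pgbouncer.ini once the \n-packed string above is expanded:

# Description
# Config file is in “ini” format. Section names are between “[” and “]”.
# Lines starting with “;” or “#” are taken as comments and ignored.
# The characters “;” and “#” are not recognized when they appear later in the line.
[databases]
#TODO: Pool user must be different from admin user for security reasons or we should use env var populated by a secret
* = host=stolon-proxy-service port=5432 user=stolon password=password1

[pgbouncer]
# Generic settings
listen_addr = 0.0.0.0
listen_port = 5432
auth_file = /etc/pgusers/userlist.txt
auth_type = md5
ignore_startup_parameters = extra_float_digits
# Log settings
admin_users = stolon
################## end file ##################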

Deploy

To deploy use these commands (you will need the NFS shares to be ready):

kubectl --namespace=dev create -f ~/gupcluster/pgbouncer/pgbouncer-config.yaml
kubectl --namespace=dev create -f ~/gupcluster/pgbouncer/pgbouncer-dep.yaml
kubectl --namespace=dev create -f ~/gupcluster/pgbouncer/pgbouncer-svc.yaml


That’s it!

Test

To test that you can access the stolon cluster through your brand new connection pooling tool, SSH to your permanent config pod, make sure postgresql-client is installed and try this:

psql -h pgbouncer-service -p 5432 postgres -U stolon

It should prompt for your password and print the psql prompt. You are in!

Note: Of course you may also use kubectl --namespace=dev port-forward pgbouncer-podxxxx 5433:5432. This will forward pgbouncer port 5432 to localhost:5433 (in case you already have a postgresql server on localhost; otherwise just keep 5432).

Pgbouncer comes with a lot of useful stats that you can query, provided that you declare a user as an admin in the ADMIN_USERS env var. In order to access them:

psql -h pgbouncer-service -p 5432 pgbouncer -U stolon

And at the psql prompt, try one of the admin commands, like:

SHOW CONFIG;
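Other standard PgBouncer admin console queries worth trying from the same prompt:

SHOW STATS;
SHOW POOLS;
SHOW CLIENTS;
SHOW SERVERS;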

You can create a new user in the psql console with:

root@pgbouncer-dep-3486319260-741qp:/# psql -h pgbouncer-service -U stolon gcupsaas
Password for user stolon:
Type "help" for help.

gcupsaas=# CREATE USER testconnect WITH PASSWORD 'testconnect' LOGIN;
CREATE ROLE
gcupsaas=# \q

But do not forget afterwards to update your auth file (in one of the pgbouncer pods) with something like:

python mkauth.py /etc/pgusers/userlist.txt "host=stolon-proxy-service dbname=postgres user=stolon password=password1"

Then try to connect to your database using your new user:

psql -h pgbouncer-service -U testconnect gcupsaas
Password for user testconnect:
Type "help" for help.

gcupsaas=#

Pooling modes

The next step would be to test the different pooling modes, in particular transaction pooling in my case. In fact, I have a multi-tenant app that will use, for each tenant, at least one connection per started application. With the starter offer we provide, our customers gain access to 3 apps, and it can be more if we decide to use more Python workers and/or processes. Therefore, in my case, session-based pooling could be less effective than statement-based pooling, because with session pooling I would get one effective connection between pgbouncer and stolon per user (even if n are needed), and they would all remain open as our apps try to stay connected, so we would end up consuming a lot of connections (remember we may have more than a thousand customers). With the statement-based mode, and the use of a single user between pgbouncer and stolon (as done in our example above by adding this line to pgbouncer.ini: * = host=stolon-proxy-service port=5432 user=stolon password=password1), we can make sure that unused client connections do not consume a postgresql connection.
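If you want to experiment, the entrypoint shown earlier already honours a POOL_MODE variable, so switching modes is just one more entry in the env: block of pgbouncer-dep.yaml (the value below is only an example to adapt to your own tests):

        - name: POOL_MODE
          value: "transaction"   # or "session" / "statement"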


CHAPTER 11

Reverse proxy

This part can in theory be achieved using Kubernetes ingresses and ingress controllers. Ingress controllers are basically pods, managed by a replication controller, that are used for load balancing and exposing services to the Internet. We will use an Nginx-based controller template as I know this component pretty well (compared to other solutions like Haproxy).

The goal here for me is to simplify service exposure as much as possible, using one ingress to rule them all and one layer to deal with TLS negotiation between the external and internal networks. I would also like to manage compression and the other specific header treatments done by a reverse proxy.

For our solution GestiClean Up', we make use of a wildcard certificate *.gesticleanup.com. This allows us to use the base domain gesticleanup.com for our website built with the excellent Odoo, and sub domains for our customers. Each sub domain is used to route traffic to the right uwsgi socket or static folder. One or more reserved sub domains will be used to access the UCS proxy ucsproxy.gesticleanup.com and, why not, other database management or saas admin web services (not yet implemented).

De facto one wildcard certificate is enough. But.... sadness comes again: the ingress controller is pretty young and the documentation is just too poor. But here comes the sun again, I have beaten the beast for you and will not bore you (too much) with my lamentations on this part!

• Manage certificates

• Activate default ingress controller

• Config maps

• Ingress

• Deploy

• Tests

• External load balancing


Manage certificates

In order to use TLS properly we have to create a secret with correct certificate declaration:

apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: tls-secret
type: Opaque

You can encode your cert and key using websites like this one or use base64 in bash.
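For example with GNU coreutils base64 (the -w0 flag keeps the output on a single line, ready to paste into the secret):

base64 -w0 yourdomain.crt
base64 -w0 yourdomain.key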

Activate default ingress controller

As said before in the Kubernetes security chapter, we are going to use the default ingress controller kindly provided by our Juju bundle. To do this just enter:

juju config kubernetes-worker ingress=true
juju expose kubernetes-worker

And wait until pods are provisioned.

This will create a default backend and a replication controller that will create our controller's pods. We could have done this by hand, and benefited from a more up to date ingress controller (0.9.0-beta.2 instead of 0.8.3), but I learned the hard way that the installed version is good enough for what I have to perform, and that Juju will take care of opening the needed ports with Openstack and security groups (something I was unable to do persistently by myself, shame on me).
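To check that the controller and its default backend landed correctly, something like this should do (pod and service names may differ slightly depending on the bundle version):

kubectl get pods --all-namespaces | grep -i ingress
kubectl get svc --all-namespaces | grep -i backend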

Config maps

The default controller comes with a preconfigured default backend. Ok, nice, but if you require more advanced features it is possible to add some options to the replication controller spec, even by editing it through the dashboard or with kubectl edit rc nginx-ingress-controller. This will certainly be added later according to this post.

What interests me is that I am using web sockets with our ucs proxy. Nginx deals with them out of the box, but I need to make sure connections stay open longer than the default values. To do this, we first need to create a configmap like this one:

apiVersion: v1
data:
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"
#  proxy-buffer-size: "128k"
  gzip-types: "*"
  ssl-redirect: "true"
#  upstream-fail-timeout: "30"
kind: ConfigMap
metadata:
  name: nginx-ingress-controller-conf
  namespace: default


You can see that I also enabled compression on every compressible file type and forced ssl redirection for HTTP queries.

Note: If you are looking for a real "fat and hairy" manual, you should not try the kubernetes doc and go here instead.

Now we need to tell our controller's pods that they should use this configMap by adding this value to the replication controller descriptor (in the container/args section):

--nginx-configmap=$(POD_NAMESPACE)/nginx-ingress-controller-conf

Then delete the pods one by one so that the replication controller recreates them with the configMap taken into account.
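Concretely, that boils down to something like this (pod names are placeholders):

kubectl get pods | grep nginx-ingress-controller
kubectl delete pod nginx-ingress-controller-xxxxx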

Note: If you have a recent version of the ingress controller, you may use instead:

--configmap=$(POD_NAMESPACE)/nginx-ingress-controller-conf

Ingress

We are ready to build our Ingress descriptor. As said, I need at least to route traffic for the ucs proxy and for the tenants of our application GestiClean Up', and to deal with SSL/TLS negotiation. I will not tell you the long story of how I finally found a way to do this, but this issue and zarqman's post at the end were the key. Order is important, and the way you provide the wildcard domain is determinant:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - host: testucs.yourdomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: ucs-service
          servicePort: 80
  - host: "*.yourdomain.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: gup-server-service
          servicePort: 80
  tls:
  - hosts:
    - testucs.yourdomain.com
    - "*.yourdomain.com"
    secretName: tls-secret

As you can see, it is possible to provide a wildcard domain using strings, but also a named sub domain, provided that you name it in both the tls and rules sections. I may not have dug hard enough anyway.


Deploy

From your bash:

kubectl create -f nginx-ingress-controller-conf.yaml # do not forget to add the corresponding args value in the rc
kubectl --namespace=dev create -f tls-secret.yaml
kubectl --namespace=dev create -f ingress.yaml

Tests

As a guide, here is what I did to test this setup:

• Point ucsproxy.gesticleanup.com to one of my node IP

• Point test.gesticleanup.com to the same IP

• Browse to http://testucs.yourdomain.com and make sure it redirects automatically to https

• Make sure the certificate is recognized and connection secured

• Make sure to get to the service required (ucs-service in my case)

• Go to https://test.yourdomain.com and make sure the wildcard location is used (gup-service in my case)
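Those checks can also be scripted roughly like this (domains are placeholders; use -k only if your test client does not trust the CA):

curl -I http://testucs.yourdomain.com        # should answer with a redirect to https
curl -I https://testucs.yourdomain.com       # should reach ucs-service with a valid certificate
curl -kI https://test.yourdomain.com         # should fall through to the wildcard rule (gup-server-service)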

External load balancing

At this point the ingress is up and running. But it exposes your services on one IP per node. To make sure we use all of them (and therefore be fault tolerant), we need an additional external load balancer. In my case, running with my favorite cloud provider OVH, I decided to go for their load balancer offer, which is excellent in this situation (HAProxy based, secure and highly available). I could have built an Nginx load balancer myself, but I am tired now and the cost/quality trade-off is in favour of OVH in my opinion. I will not detail here how to set this part up.


CHAPTER 12

GestiClean Up’ at last! (or your own magnificent incredible personal app)

Yes! All this to be able to deploy OUR OWN marvellous app.... at last! The real big deal in my case was to containerize GestiClean Up'. That was not too complicated as it already (almost: not for the logging part) worked as a micro service.

Note: Of course this part may be of only relative interest for your own app. But I leave it here as I think it could be helpful for similar stacks (and also because you may have advice to help me make things better!)

• Docker image

• Database prerequisites

• Config map

• Deployment preparation

• Deploy

• Test

Docker image

Our docker image uses three layers:

• nginx

• uwsgi and its famous emperor

• Our GestiClean Up’ framework


Note: Nginx could advantageously be put in a separate container, as it constitutes a micro service that would be easier to maintain on its own.

The dockerfile is in my case:

FROM ubuntu:xenial

ENV DEBIAN_FRONTEND noninteractive

RUN set -x \
    && apt-get -qq update \
    && apt-get install -yq --no-install-recommends \
       nginx \
       curl \
       libpq-dev \
       nfs-common \
       uwsgi uwsgi-plugin-python uwsgi-plugin-emperor-pg \
       postgresql-client \
       libffi-dev \
       libgmp-dev build-essential python-dev \
    && curl --silent --show-error --retry 5 https://bootstrap.pypa.io/get-pip.py | python2.7 \
    && apt-get purge -y --auto-remove \
    && rm -rf /var/lib/apt/lists/*

ADD apps/ /opt/gup/apps

COPY deploy/requirements.txt /tmp/

RUN pip install --requirement /tmp/requirements.txt

ADD deploy/entrypoint.sh ./
RUN chmod +x /entrypoint.sh

RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

EXPOSE 80 443

ENTRYPOINT ["./entrypoint.sh"]

Important: I do not really know why, but this time I had to set the execution permission myself in the Dockerfile for the scripts I am using. It took me some time to realize that.

Our entrypoint.sh script allows us to set ini files according to defaults or environment variables.

Note: This could look silly or stupid to say, but do not forget to launch Nginx at the end! Otherwise your exposed ports will remain closed, without much more information on what is happening. I spent a couple of nights dismantling docker and the ingress controller for nothing before understanding that my service nginx reload was not starting Nginx; a service nginx start replaced it advantageously.


Database prerequisites

For our Docker image to fire up correctly we need to create a special user for the Uwsgi Emperor and give it the rights to read/write the so-called vassals table.

We can do this from any pod capable of reaching the pgbouncer service. You will probably need to install a Postgresql client first:

apt-get install postgresql-client

Our gcupsaas database should already be up and running. As a reminder, we have done this with:

createdb -h pgbouncer-service -U stolon gcupsaas gcupsaas

Then connect to the stolon cluster:

psql -h pgbouncer-service -U stolon gcupsaas

Now create our Emperor user:

CREATE USER emperor WITH PASSWORD 'emperor' LOGIN;

The table vassals must be created:

CREATE TABLE vassals ( name TEXT NOT NULL, config TEXT, ts TIMESTAMP NOT NULL, uid TEXT, gid TEXT, socket TEXT );

And the Emperor shall be endowed with the great privileges to enlist, kill or promote his devoted vassals:

GRANT SELECT, UPDATE, INSERT, DELETE ON vassals TO emperor;

You can disconnect from the psql console (Ctrl-D), as this is all the power this user should get in order to reduce the attack surface.

Now, we need to update the Pgbouncer user list to allow the Emperor to access the stolon cluster. Our pgbouncer image has a special Python script for that. First we need to connect to one of the Pgbouncer pods and execute this command:

python mkauth.py /etc/pgusers/userlist.txt "host=stolon-proxy-service dbname=postgres user=stolon password=password1"
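If you prefer a one-shot command instead of opening a shell in the pod (the pod name is a placeholder; the script was added at / in the image):

kubectl --namespace=dev get pods -l app=pgbouncer
kubectl --namespace=dev exec pgbouncer-dep-xxxxx -- python /mkauth.py /etc/pgusers/userlist.txt "host=stolon-proxy-service dbname=postgres user=stolon password=password1"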

You can check with cat /etc/pgusers/userlist.txt that the emperor is listed with an encoded password. We may try to connect to the stolon cluster to feel his power:

root@pgbouncer-dep-3486319260-741qp:/# psql -h pgbouncer-service -U emperor gcupsaas
Password for user emperor:
psql (9.5.6, server 9.6.1)
WARNING: psql major version 9.5, server major version 9.6.
         Some psql features might not work.
Type "help" for help.

gcupsaas=# \l
                                  List of databases
   Name    | Owner  | Encoding |  Collate   |   Ctype    | Access privileges
-----------+--------+----------+------------+------------+-------------------
 gcupsaas  | stolon | UTF8     | en_US.utf8 | en_US.utf8 | =Tc/stolon       +
           |        |          |            |            | stolon=CTc/stolon+
           |        |          |            |            | emperor=c/stolon
 postgres  | stolon | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | stolon | UTF8     | en_US.utf8 | en_US.utf8 | =c/stolon        +
           |        |          |            |            | stolon=CTc/stolon
 template1 | stolon | UTF8     | en_US.utf8 | en_US.utf8 | =c/stolon        +
           |        |          |            |            | stolon=CTc/stolon
(4 rows)

gcupsaas=# \dt
        List of relations
 Schema |  Name   | Type  | Owner
--------+---------+-------+--------
 public | entity  | table | stolon
 public | vassals | table | stolon
(2 rows)

gcupsaas=#

Config map

Like pgbouncer, we will use a config map to store the GestiClean Up’ main configuration file.

Here it is:

[PATH]
NFSMode=True
NFSRootDirectory=/var/web-dynamic-content/
appsPathDirectory=/opt/gcup-current-stable/apps

We have not done this yet, but I found that formatting the config map directly in yaml is more convenient (mostly because it allows me to use kubectl replace to update config maps). So here is my yaml config map:

kind: ConfigMap
apiVersion: v1
metadata:
  name: gup-server-config
data:
  config.ini: "[PATH]\nNFSMode=True\nNFSRootDirectory=/var/web-dynamic-content/\nappsPathDirectory=/opt/gcup-current-stable/apps"

We create it this way, as usual:

kubectl --namespace=dev create -f gup-server-config.yaml
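And later, when the parameters change, the same file can be pushed again with kubectl replace, which is exactly why the yaml form is convenient:

kubectl --namespace=dev replace -f gup-server-config.yaml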

Deployment preparation

We will use here a deployment descriptor to set up the GestiClean Up' server layer, as it is excellent when we need to upgrade the docker image, for example, with zero downtime (a new replica set will be created behind the service and will start new pods one by one, then kill the old ones when ready... perfect).

Here is our deployment file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gup-server-dep
spec:
  replicas: 3
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: gupserver
    spec:
      volumes:
      - name: config
        configMap:
          name: gup-server-config
      containers:
      - name: gup-server
        image: yourImage:latest
        env:
        - name: NFS_SHARE
          value: "/wdc"
        - name: NFS_SERVER
          value: "nfs-server"
        - name: NFS_MOUNT
          value: "/var/wdc"
        volumeMounts:
        - mountPath: /etc/gup
          name: config
        securityContext:
          privileged: true
      imagePullSecrets:
      - name: regsecret

And this service file:

apiVersion: v1
kind: Service
metadata:
  name: gup-server-service
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: "gupserver"

Deploy

Nothing more than creating “as usual” the deployment and the dedicated service:

kubectl --namespace=dev create -f gup-server-dep.yaml
kubectl --namespace=dev create -f gup-server-svc.yaml

Of course I do not speak here of the Nginx, Uwsgi and GestiClean Up' configuration files that are created and tuned during the docker image pulling process.
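Since this is a Deployment, a later image upgrade is just a rollout, with the zero-downtime behaviour described above (the tag is a placeholder):

kubectl --namespace=dev set image deployment/gup-server-dep gup-server=yourImage:newtag
kubectl --namespace=dev rollout status deployment/gup-server-dep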


Test

In my case, I just have to make my testing subdomain point to one of my nodes. The ingress controller and its rules proxy the requests to my GestiClean Up' Nginx. Now we get the login window. Huha!
