Containerized SUSE OpenStack Cloud Technology Preview


Publication Date: 06/18/2019

SUSE LLC
10 Canal Park Drive, Suite 200
Cambridge MA 02141
USA

https://www.suse.com/documentation

Contents

1 Welcome to Containerized SUSE OpenStack Cloud Technology Preview

2 Containerized SUSE OpenStack Cloud Tech Preview
2.1 Deployment Guide
2.2 Installation Overview
2.3 System Requirements
    Infrastructure • Minimum Node Specification • Cluster Size • Network Requirements
2.4 Set Up Deployer
    Create Containerized SUSE OpenStack Cloud Workspace • Installing the Containerized SUSE OpenStack Cloud Software • SSH Key Preparation • Passwordless sudo • Configure Ansible
2.5 SUSE Enterprise Storage Integration
2.6 Configure Cloud
    Configure the Inventory • Configure for SES Integration • SES Salt Runner Usage (Optional) • Configure for Kubernetes • Configure the Neutron External Interface and Tunnel Device • Configure the VIP for OpenStack Service Public Endpoints • Set Up Retry Files Save Path • Configure the VIP for Airship UCP Service Endpoints • Configure Cloud Scale Profile • Advanced Configuration
2.7 Deploy Airship and OpenStack
    Track Deployment Progress • Run Developer Mode
2.8 Verify Deployment
    Verify OpenStack Operation • OpenStack Tempest Testing
2.9 Next Steps
2.10 Uninstall

3 Administration and Operations Guide
3.1 Using run.sh
    Deployment Actions • Cleanup Actions • Testing
3.2 Scaling In/Scaling Out
    Adding or Removing Compute Nodes • Control Plane Horizontal Scaling
3.3 Updates
    Updating OpenStack Version • Updating OpenStack Service Configuration • Updating Individual Images and Helm Charts
3.4 Reboot Compute Host
3.5 Troubleshooting
    Viewing Shipyard Logs • Viewing Logs From Kubernetes Pods • Recover Controller Host Node • Recover Compute Host Node
3.6 Recovering from Node Failure
    Pod Status of NodeLost or Unknown • Frequent Pod Evictions
3.7 Kubernetes Operations
3.8 Tips and Tricks
    Display All Images Used by a Component • Remove Dangling Docker Images • Setting the Default Context

4 CSOC User Guide
4.1 Advanced Users
4.2 Ansible Tips
4.3 Build and Consume Your Own Images
    Build Non-OpenStack Images • Build LOCI Images • Consume Built Images
4.4 Disable CaaSP Transactional Updates
4.5 Clean Up Kubernetes
4.6 Customizing Helm Testing Behavior
4.7 Discover Helm Chart Overrides
4.8 Minimal Network Example
4.9 Multiple Network Example
4.10 Manage OpenStack Services
4.11 Deleting Containerized SUSE OpenStack Cloud from OpenStack
4.12 Use Custom Patches
4.13 Use Your Own Certificates
4.14 Use a Personal Fork

5 Developer Documentation
5.1 Contributor Guidelines
    Submodules • Before Submitting Code • Bug Reporting Process • Review Process • Upstream Communication Channels
5.2 Code Rules
    General Guidelines for Submitting Code • Documentation with Code • Code Comments • Ansible Style Guide
5.3 Testing
    Bash Linting • Ansible Linting • Helm Chart Values Linting
5.4 Periodic Work
5.5 Airship Developer Guide
    Testing Upstream Patches • Build Your Own Images • Point to Your Own Images in Airship

6 Administration and Operations Guide
6.1 Project History
6.2 Project Goals
6.3 Design Considerations
    Workspace
6.4 Why...
6.5 Image Building Process
    Upstream Process • socok8s Process
6.6 OpenStack-Helm Chart Overrides
    Helm Chart Values Overriding Principle • OpenStack-Helm Scripts • Customizing OSH Charts for SUSE When Deploying in OSH-only Mode • How Deployers Can Extend a Custom SUSE OSH Chart in OSH-only Mode
6.7 Summary: Deploy on OpenStack Diagrams
    Simplified Network Diagram • OSH Deploy on OpenStack
6.8 Environment Variables
    In socok8s • Ansible Environment Variables • OpenStack-Helm Environment Variables

7 Glossary

8 SCOC Internal
8.1 SUSE ECP Internal (Experimental)
8.2 Deploy in SUSE ECP Overview
8.3 Prepare Localhost
    Base Software • Cloning This Repository • Configure Ansible • Defining a Workspace • Set the Deployment Mechanism
8.4 Prepare the Target Hosts
    In Separate Steps • In a Single Step
8.5 Configure the Deployment
    Configure the Inventory • Make the SES Pools Known by Ansible • Configure the VIP for OpenStack Service Public Endpoints • Configure the VIP for Airship UCP Service Endpoints • Provide a kubeconfig File • Advanced Configuration
8.6 Set Up OpenStack
    In Separate Steps • In a Single Step


1 Welcome to Containerized SUSE OpenStack Cloud Technology Preview

The socok8s project automates Containerized SUSE OpenStack Cloud provisioning and lifecycle management on SUSE Container as a Service Platform (CaaSP) and SUSE Enterprise Storage (SES), using Airship, OpenStack Helm, shell scripts, and Ansible playbooks.

As a technology preview, this is not meant for production use, and a typical support subscription is not being offered. Some of the functionality currently in our generally available SUSE OpenStack Cloud product is not available in this tech preview. This technology preview gives users and partners the ability to test and see the new technologies that may be offered in SUSE's future releases.

This release deploys the following set of core OpenStack services:

Cinder

Glance

Heat

Horizon

Keystone

Neutron

Nova

This technology preview will not be moved into a production support mode of operation. The underlying technology is being considered for a future release of the SUSE OpenStack Cloud product. There is a possibility of refreshing the capabilities of the technology preview later.

This technology preview is provided under the terms of the Apache-2.0 license.

Please provide your feedback using the beta mailing lists that are documented on the technology preview download page, or contact your SUSE representative. We are interested in your experience and use cases. Your feedback is appreciated.


2 Containerized SUSE OpenStack Cloud Tech Preview

2.1 Deployment Guide

This deployment guide provides instructions for installing Containerized SUSE OpenStack Cloud on top of SUSE CaaS Platform and SUSE Enterprise Storage.

2.2 Installation Overview

This guide refers to the following types of hosts:

A Deployer with dual roles. It is the starting point for invoking the socok8s deployment scripts and Ansible playbooks, and it is the access point to your Kubernetes cluster. A deployer can be a continuous integration (CI) node, a laptop, or a dedicated VM.

A series of CaaS Platform nodes: administration node, master, workers.

A series of SES nodes.

The following diagram shows the general workflow of a Containerized SUSE OpenStack Cloud deployment on an installed SUSE CaaS Platform cluster and SUSE Enterprise Storage.

2.3 System Requirements

Before you begin the installation, your system must meet the following requirements.

2.3.1 Infrastructure

The Deployer must run openSUSE Leap 15 or SUSE Linux Enterprise 15. See Section 2.4, “Set Up Deployer” for required deployment tools and packages.

Note: To install openSUSE Leap 15, follow the instructions at the openSUSE Leap software website (https://software.opensuse.org/distributions/leap).

The CaaS Platform cluster must run the latest CaaS Platform version 3.

Note: The CaaS Platform Installation Quick Start guide is available in the SUSE documentation for CaaSP Quick Start (https://www.suse.com/documentation/suse-caasp-3/singlehtml/book_caasp_installquick/book_caasp_installquick.html).

You must register the CaaS Platform product to get access to the update repository. We strongly recommend enabling the auto-update repository during CaaS Platform installation. If auto update is not enabled, you must run transactional update following the instructions in the CaaSP documentation (https://www.suse.com/documentation/suse-caasp-3/book_caasp_admin/data/sec_admin_software_transactional-updates.html).

The SES cluster must run SES version 5.5.

Note: The SES deployment guide is available in the SUSE documentation for SES (https://www.suse.com/documentation/suse-enterprise-storage-5/singlehtml/book_storage_deployment/book_storage_deployment.html).

2.3.2 Minimum Node Specification

2.3.2.1 Deployer node

(v)CPU: 4

Memory: 4GB

Storage: 40GB


2.3.2.2 CaaS Platform worker node

(v)CPU: 6

Memory: 16GB

Storage: 80GB

If the worker node is used as a Compute node, sizing should be determined by the target workloads on the compute node.

2.3.2.3 SES node

(v)CPU: 6

Memory: 16GB

Storage: 80GB

2.3.3 Cluster size

A minimal CaaS Platform cluster requires one administration node, one master node, and two worker nodes.

Containerized SUSE OpenStack Cloud enrolls CaaS Platform worker nodes for two different purposes: the control plane, where the Airship and OpenStack services run, and compute nodes, where customer workloads are hosted.

For a minimal cloud, you should plan one worker node for the control plane, and one or more worker nodes as OpenStack compute nodes.

For a high availability (HA) cloud, we recommend three worker nodes designated for the Airship and OpenStack control plane, and additional worker nodes allocated for compute. For detailed information about scale profiles, see Section 2.6.9, “Configure Cloud Scale Profile”.

2.3.4 Network Requirements

CaaS Platform networking

Create the necessary CaaS Platform networks before deploying Containerized SUSE OpenStack Cloud. Separating traffic by function is recommended but not required.

Storage Network

A separate storage network can be created to isolate storage traffic. This separate network should be present on the CaaS Platform nodes and listed in the mon_host: section of ses_config.yml.

VIP for Airship and OpenStack

A virtual IP address will be assigned to Pods, allowing ingress to Airship and OpenStack services. The ingress IP assignments for these services must be on a subnet present on the CaaS Platform nodes and must use an IP that is not currently in use. VIPs are configured in env/extravars.

DNS

Installing Containerized SUSE OpenStack Cloud updates /etc/hosts on all CaaS Platform nodes and on the Deployer. If expanding testing beyond these devices, it is recommended to use DNS for sharing this data. It is possible to configure the Deployer with dnsmasq to supply DNS functionality, but this is beyond the scope of this preview.

Distributed Virtual Routing (DVR) is not supported in this Technology Preview. Only flat networks are supported in Containerized SUSE OpenStack Cloud.

Note: Network configuration examples can be found in Section 4.1, “Advanced Users”.

2.4 Set Up Deployer


2.4.1 Create Containerized SUSE OpenStack Cloud Workspace

All the deployment artifacts are stored in a Workspace. By default, the workspace is a directory located in the user's home directory on the Deployer. Set up your workspace with the following steps:

1. Create a directory in your home directory that ends in -workspace .

2. Export SOCOK8S_ENVNAME=DIRECTORY_NAME_PREFIX to set your workspace.

3. To change your workspace parent directory, export SOCOK8S_WORKSPACE_BASEDIR with the base directory where your workspace is located.

mkdir ~/socok8s-workspace
export SOCOK8S_ENVNAME=socok8s
export SOCOK8S_WORKSPACE_BASEDIR=~

2.4.2 Installing the Containerized SUSE OpenStack Cloud software

We recommend two ways of installing the Containerized SUSE OpenStack Cloud software.

1. (Recommended) Install with an ISO image including required dependencies:

Download openSUSE-Addon-socok8s-x86_64-Media.iso from https://download.opensuse.org/repositories/Cloud:/socok8s/images/iso/ and then run:

sudo zypper addrepo --refresh PATH_TO_ISO_IMAGE socok8s-iso
sudo zypper install socok8s

The package installs to /usr/share/socok8s.

2. The following software must be manually installed on your Deployer using zypper install or pip install:

ansible >= 2.7.8

git-core

jq

python3-virtualenv

python3-jmespath

python3-netaddr


python3-openstacksdk

python3-openstackclient

python3-heatclient

which

After the required packages are installed, clone the socok8s GitHub repository (https://github.com/SUSE-Cloud/socok8s). This repository uses submodules, which contain additional code needed for the playbooks to work. With this method, required dependencies must be installed manually; it is intended for developers.

git clone --recursive https://github.com/SUSE-Cloud/socok8s.git

Fetch or update the tree of the submodules by running:

git submodule update --init --recursive

2.4.3 SSH Key Preparation

Create an SSH key on the Deployer node, and add the public key to each CaaS Platform worker node.
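A minimal sketch of this step, assuming a root login on the workers; the worker host name is illustrative and the copy should be repeated for every worker node:

ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa    # skip if a key pair already exists
ssh-copy-id root@caasp-worker-001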

2.4.4 Passwordless sudo

If installing as a non-root user, you must give your user passwordless sudo on the Deployer.

sudo visudo

Add the following.

USERNAME ALL=(ALL) NOPASSWD: ALL

Add the line above after #includedir /etc/sudoers.d. Replace USERNAME with your username.

2.4.5 Configure Ansible

2.4.5.1 Use ARA (recommended)

Ansible Run Analysis (ARA) makes Ansible runs easier to visualize, understand, and troubleshoot. To use ARA:

1. Install ARA and its required dependencies: pip install ara[server] .

2. Set the ARA environment variable before running run.sh : export USE_ARA='True'

To set up ARA permanently on the Deployer, create an Ansible configuration file loading the ARA plugins:

python3 -m ara.setup.ansible | tee ~/.ansible.cfg

For more details on the ARA web interface, see ARA Read The Docs (https://ara.readthedocs.io/en/stable/webserver.html).

2.4.5.2 Ansible Logging

Enable Ansible logging with the following steps:

1. Create an Ansible configuration file in the $HOME directory, for example, .ansible.cfg. This configuration file can be used for other Ansible configurations.

2. Add your log_path to .ansible.cfg. Use a log path and log filename that fit your needs, for example:

[defaults]
log_path=$HOME/.ansible/ansible.log

2.4.5.3 Enable Pipelining (recommended)

You can improve SSH connections by enabling pipelining:

cat << EOF >> ~/.ansible.cfg
[ssh_connection]
pipelining = True
EOF

2.5 SUSE Enterprise Storage Integration

For SES deployments using version 5.5 and higher, a Salt runner can create all the users and pools that the OpenStack services require. It also generates a yaml configuration that is needed to integrate with Containerized SUSE OpenStack Cloud. The integration runner creates separate users for Cinder, Cinder backup, and Glance. Both the Cinder and Nova services will have the same user, as Cinder needs access to create objects that Nova uses.

Log in as root to run the SES 5.5 Salt runner on the salt admin host.

# salt-run --out=yaml openstack.integrate prefix=mycloud

The prefix parameter allows pools to be created with the specified prefix. In this way, multiple cloud deployments can use different users and pools on the same SES deployment.

Sample yaml output:

ceph_conf:
  cluster_network: 10.84.56.0/21
  fsid: d5d7c7cb-5858-3218-a36f-d028df7b0673
  mon_host: 10.84.56.8, 10.84.56.9, 10.84.56.7
  mon_initial_members: ses-osd1, ses-osd2, ses-osd3
  public_network: 10.84.56.0/21
cinder:
  key: AQCdfIRaxefEMxAAW4zp2My/5HjoST2Y8mJg8A==
  rbd_store_pool: mycloud-cinder
  rbd_store_user: cinder
cinder-backup:
  key: AQBb8hdbrY2bNRAAqJC2ZzR5Q4yrionh7V5PkQ==
  rbd_store_pool: mycloud-backups
  rbd_store_user: cinder-backup
glance:
  key: AQD9eYRachg1NxAAiT6Hw/xYDA1vwSWLItLpgA==
  rbd_store_pool: mycloud-glance
  rbd_store_user: glance
nova:
  rbd_store_pool: mycloud-nova
radosgw_urls:
  - http://10.84.56.7:80/swift/v1
  - http://10.84.56.8:80/swift/v1

After you have run the openstack.integrate runner, copy the yaml output into the ses_config.yml file in the root of the workspace on the Deployer node.
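As a sketch, the runner output can be captured to a file on the Salt admin host and copied into the workspace; the host name and the default workspace path are illustrative:

salt-run --out=yaml openstack.integrate prefix=mycloud > ses_config.yml
scp ses_config.yml root@deployer:~/socok8s-workspace/ses_config.yml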

2.6 Configure Cloud

This Workspace, structured like an ansible-runner directory, contains the following deployment artifacts:

socok8s-workspace
|-- inventory
|   |-- hosts.yml
|-- env
|   |-- extravars
|-- ses_configuration.yml
|-- kubeconfig

2.6.1 Configure the Inventory

You can create an inventory based on the hosts.yml file in the examples directory (examples/workdir/inventory/hosts.yml).

---
caasp-admin:
  vars:
    ansible_user: root

caasp-masters:
  vars:
    ansible_user: root

caasp-workers:
  vars:
    ansible_user: root

soc-deployer:
  vars:
    ansible_user: root

ses_nodes:
  vars:
    ansible_user: root

airship-openstack-compute-workers:
  vars:
    ansible_user: root

airship-openstack-control-workers:
  vars:
    ansible_user: root

airship-openstack-l3-agent-workers:
  vars:
    ansible_user: root

airship-ucp-workers:
  vars:
    ansible_user: root

airship-kube-system-workers:
  vars:
    ansible_user: root

For each group, add a hosts: key listing each of the hosts you are using. For example:

airship-openstack-control-workers:
  hosts:
    caasp-worker-001:
      ansible_host: 10.86.1.144
  vars:
    ansible_user: root

The group airship-ucp-workers specifies the list of CaaS Platform worker nodes to which the Airship Under Cloud Platform (UCP) services will be deployed. The UCP services in socok8s include Armada, Shipyard, Deckhand, Pegleg, Keystone, Barbican, and core infrastructure services such as MariaDB, RabbitMQ, and PostgreSQL.

The group airship-openstack-control-workers specifies the list of CaaS Platform worker nodes that make up the OpenStack control plane. The OpenStack control plane includes Keystone, Glance, Cinder, Nova, Neutron, Horizon, Heat, MariaDB, and RabbitMQ.


The group airship-openstack-l3-agent-workers specifies the list of CaaS Platform worker nodes where the OpenStack Neutron L3 agent runs. These nodes have a public CIDR so that tenant floating IPs can route properly.

The group airship-openstack-compute-workers defines the CaaS Platform worker nodes used as OpenStack Compute Nodes. Nova Compute, Libvirt, and Open vSwitch (OVS) are deployed to these nodes.

For most users, the UCP and OpenStack control planes can share the same worker nodes. The OpenStack Compute Nodes should be dedicated worker nodes unless a light workload is expected.

See also Ansible Inventory Hosts and Groups (https://docs.ansible.com/ansible/2.7/user_guide/intro_inventory.html#hosts-and-groups).

Note: Do not add localhost as a host in your inventory. It is a host with special meaning to Ansible. If you want to create an inventory node for your local machine, add your machine's hostname inside your inventory, and specify this host variable: ansible_connection: local

If the Deployer is running as a non-root user, replace the ansible_user: value for the soc-deployer entry with your logged-in user.

2.6.2 Configure for SES Integration

The SES integration configuration file (ses_config.yml) is created as part of the deployment. The required Ceph admin keyring and user keyring are added in the env/extravars file in your workspace. In the initial deployment steps, all necessary Ceph pools are created and the configuration is made available in your workspace. Make sure that the env/extravars file is already present in your workspace before the deployment step is executed.
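A trivial sketch of making sure that file exists before deploying, assuming the default workspace location shown in the layout above:

mkdir -p ~/socok8s-workspace/env
touch ~/socok8s-workspace/env/extravars    # populated in the following sections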

If there is a need to differentiate storage resources (pools, users) associated with the deployment, a variable can be set in extravars to add a prefix to those resources. Otherwise the default, with no specific prefix, is used.

airship_ses_pools_prefix: "mycloud-"

If you prefer to use the SES Salt runner script, review the next subsection. Otherwise, skip it.

2.6.3 SES Salt Runner Usage (Optional)

If ses_config.yml was created as output from Section 2.5, “SUSE Enterprise Storage Integration”, it can be copied to the workspace. The Ceph admin keyring and user keyring, in base64, must be present in the env/extravars file in your workspace.

The Ceph admin keyring can be obtained by running the following on the Ceph host.

echo $( sudo ceph auth get-key client.admin ) | base64

For example:

ceph_admin_keyring_b64key: QVFDMXZ6dGNBQUFBQUJBQVJKakhuYkY4VFpublRPL1RXUEROdHc9PQo=
ceph_user_keyring_b64key: QVFDMXZ6dGNBQUFBQUJBQVJKakhuYkY4VFpublRPL1RXUEROdHc9PQo=

2.6.4 Configure for Kubernetes

Containerized SUSE OpenStack Cloud relies on kubectl and Helm commands to configure your OpenStack deployment. You need to provide a kubeconfig file on the deployer node, in your workspace. You can fetch this file from the Velum UI on your SUSE CaaS Platform cluster.
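A sketch of placing the downloaded kubeconfig into the workspace and checking that it works; the download path is illustrative:

cp ~/Downloads/kubeconfig ~/socok8s-workspace/kubeconfig
kubectl --kubeconfig ~/socok8s-workspace/kubeconfig get nodes    # should list the CaaS Platform nodes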

If DNS is not used for the SUSE CaaSP cluster and is not available on the deployer, add entries for DNS resolution in the /etc/hosts file on the deployer. In the following example, the caasp_master_node_ip is 192.168.7.231 and the caasp_master_host_name is pcloud003master. The /etc/hosts file should be edited to include:

192.168.7.231 api.infra.caasp.local
192.168.7.231 pcloud003master

2.6.5 Configure the Neutron External Interface and Tunnel Device

Add neutron_tunnel_device: with its appropriate value for your environment in your env/extravars. It specifies the overlay network for VM traffic. The tunnel device should be available on all OpenStack controllers and compute hosts.

Add neutron_external_interface: with its appropriate value for your environment in your env/extravars. It specifies the bond of which the overlay is a member.

For example:

neutron_external_interface: bond0
neutron_tunnel_device: bond0.24

2.6.6 Configure the VIP for OpenStack service Public Endpoints

Add socok8s_ext_vip: with its appropriate value for your environment in your env/extravars. This should be an available IP on the external network (in a development environment, it can be the same as the CaaSP cluster network).

For example:

socok8s_ext_vip: "10.10.10.10"

2.6.7 Set Up Retry Files Save Path

Before beginning deployment, a path can be specified where Ansible retry files can be saved in order to avoid potential errors. The path should point to a user-writable directory. Set the path in either of the following ways:

export ANSIBLE_RETRY_FILES_SAVE_PATH=PATH_TO_DIRECTORY before deploying with run.sh commands.

Set the value of retry_files_save_path in your Ansible configuration file.

There is an option to disable creating these retry files by setting retry_files_enabled = False in your Ansible configuration file.
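For reference, the corresponding lines in the Ansible configuration file might look like this; the directory is an arbitrary, user-writable example:

[defaults]
retry_files_save_path = ~/.ansible/retry-files
# or, to skip creating retry files entirely:
# retry_files_enabled = False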

2.6.8 Configure the VIP for Airship UCP Service Endpoints

Add socok8s_dcm_vip: with its appropriate value for your environment in your env/extravars. This should be an available IP on the Data Center Management (DCM) network (in a development environment, it can be the same as the CaaSP cluster network).

For example:

socok8s_dcm_vip: "192.168.51.35"

2.6.9 Configure Cloud Scale Profile

The Pod scale profile in socok8s allows you to specify the desired number of Pods that each Airship and OpenStack service should run.

There are two built-in scale profiles: minimal and ha. minimal will deploy exactly one Pod for each service, making it suitable for demo or trial use on a resource-limited system. ha (High Availability) ensures at least two instances of Pods for all services, and three or more Pods for services that require quorum and are more heavily used.

To specify the scale profile to use, add scale_profile: in the env/extravars .

For example:

scale_profile: ha

The definitions of the Pod scale profile can be found in this repository: playbooks/roles/airship-deploy-ucp/files/profiles .

You can customize the built-in profiles or create your own profile following the file name convention.

2.6.10 Advanced Configuration

socok8s deployment variables respect the general Ansible precedence rules. Therefore all the variables can be adapted.

You can override most user-facing variables with host vars and group vars.
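As a minimal sketch of that mechanism, the inline vars: entries from the inventory above could equally be supplied through a group_vars file; the file location below is an assumed Ansible convention under the inventory directory:

# ${WORKSPACE}/inventory/group_vars/caasp-workers.yml
ansible_user: root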

Containerized SUSE OpenStack Cloud is flexible, and allows you to override any upstream Helm chart value with the appropriate overrides.

Note: Please read Section 4.1, “Advanced Users” for inspiration on overrides.

2.7 Deploy Airship and OpenStack


To deploy Containerized SUSE OpenStack Cloud using Airship, run:

./run.sh deploy

This script takes several minutes to finish.

2.7.1 Track Deployment Progress

2.7.1.1 Using kubectl

To check the deployment progress of the Airship UCP services:

kubectl get po -n ucp

To check the deployment progress of the OpenStack services:

kubectl get po -n openstack

2.7.1.2 OpenStack Services

Containerized SUSE OpenStack Cloud deploys OpenStack Cinder, Glance, Heat, Horizon, Keystone, Neutron, and Nova.

The Containerized SUSE OpenStack Cloud deployment will automatically add the following host entries to the /etc/hosts file on the deployer:

10.10.10.10 identity.openstack.svc.cluster.local
10.10.10.10 image.openstack.svc.cluster.local
10.10.10.10 volume.openstack.svc.cluster.local
10.10.10.10 compute.openstack.svc.cluster.local
10.10.10.10 network.openstack.svc.cluster.local
10.10.10.10 dashboard.openstack.svc.cluster.local
10.10.10.10 nova-novncproxy.openstack.svc.cluster.local
10.10.10.10 orchestration.openstack.svc.cluster.local

You can access OpenStack service public endpoints using the host names listed in the /etc/hosts file. For example, access OpenStack Horizon (dashboard) at http://dashboard.openstack.svc.cluster.local.

You can access Horizon and other OpenStack service APIs from a different system by adding the entries above to DNS or /etc/hosts on that system.

Distributed Virtual Routing (DVR) is not supported in this Technology Preview.


2.7.1.3 Using Kubernetes Dashboard

Deploy the Kubernetes Dashboard UI with the instructions for Kubernetes Dashboard (https://github.com/kubernetes/dashboard).

2.7.1.4 Using Shipyard CLI

Airship Shipyard CLI allows you to retrieve the progress and status of deployment actions.

To use the CLI, you must set up two environment variables:

export OS_CLOUD=airship
export OS_PASSWORD=PPPPPEEEEEEdLbdlbdlb_Ry

The OS_PASSWORD is the Shipyard service password in the UCP Keystone. It can be found in the secrets/ucp_shipyard_keystone_password file in your workspace on the deployer node.
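As a convenience, the same value can be exported directly from that file; a sketch assuming the default workspace location used earlier:

export OS_PASSWORD=$(cat ~/socok8s-workspace/secrets/ucp_shipyard_keystone_password)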

To check the workflow status of the deployment action, run:

/opt/airship-shipyard/tools/shipyard.sh describe action/01D821AZ27H6NCSPV01RXQPDST

The last argument is the action key in Shipyard. Its value is stored in the soc-keys.yaml file in your workspace, for example:

Site:
  name: soc
  action_key: action/01D963GH0B621TBQHZAH8MW9JE

Sample output of the Shipyard describe command:

Name: update_software
Action: action/01D963GH0B621TBQHZAH8MW9JE
Lifecycle: Complete
Parameters: {}
Datetime: 2019-04-23 22:01:57.003504+00:00
Dag Status: success
Context Marker: b2157815-e993-4333-b881-4937084441dd
User: shipyard

Steps                                                          Index  State    Footnotes
step/01D963GH0B621TBQHZAH8MW9JE/action_xcom                    1      success
step/01D963GH0B621TBQHZAH8MW9JE/dag_concurrency_check          2      success
step/01D963GH0B621TBQHZAH8MW9JE/deployment_configuration       3      success
step/01D963GH0B621TBQHZAH8MW9JE/validate_site_design           4      success
step/01D963GH0B621TBQHZAH8MW9JE/armada_build                   5      success
step/01D963GH0B621TBQHZAH8MW9JE/decide_airflow_upgrade         6      success
step/01D963GH0B621TBQHZAH8MW9JE/armada_get_status              7      success
step/01D963GH0B621TBQHZAH8MW9JE/armada_post_apply              8      success
step/01D963GH0B621TBQHZAH8MW9JE/upgrade_airflow                9      skipped
step/01D963GH0B621TBQHZAH8MW9JE/skip_upgrade_airflow           10     success
step/01D963GH0B621TBQHZAH8MW9JE/deckhand_validate_site_design  11     success
step/01D963GH0B621TBQHZAH8MW9JE/armada_validate_site_design    12     success
step/01D963GH0B621TBQHZAH8MW9JE/armada_get_releases            13     success
step/01D963GH0B621TBQHZAH8MW9JE/create_action_tag              14     success

Commands   User       Datetime
invoke     shipyard   2019-04-23 22:01:57.752593+00:00

Validations: None

Action Notes:
> action metadata:01D963GH0B621TBQHZAH8MW9JE(2019-04-23 22:01:57.736165+00:00): Configdoc revision 1

2.7.1.5 Logs

To check Airship logs, run the Shipyard logs CLI command, for example:

/opt/airship-shipyard/tools/shipyard.sh logs step/01D963GH0B621TBQHZAH8MW9JE/armada_build

To check logs from a running container, use the kubectl logs command. For example, to retrieve the test output from the Keystone Rally test, run:

kubectl logs airship-keystone-test -n openstack

2.7.2 Run Developer Mode

If you want to patch upstream Helm charts or build your own container images, set the following environment variables before deployment:

export SOCOK8S_DEVELOPER_MODE='True'
export AIRSHIP_BUILD_LOCAL_IMAGES='true'
./run.sh deploy

Alternatively, you can add the following two lines to the env/extravars file:

SOCOK8S_DEVELOPER_MODE: true
AIRSHIP_BUILD_LOCAL_IMAGES: true

2.8 Verify Deployment

2.8.1 Verify OpenStack Operation

The cloud deployment includes Rally testing for the core Airship UCP and OpenStack services by default.

At this point, your Deployer node should have an OpenStack configuration file, and the OpenStackClient (OSC) command line interface should be installed.

Test access to the OpenStack service via the VIP and determine that the OpenStack services are functioning as expected by running the following commands:

export OS_CLOUD='openstack'
openstack endpoint list
openstack server list

2.8.2 OpenStack Tempest Testing

After the deployment of Containerized SUSE OpenStack Cloud has completed, it is possible to run OpenStack Tempest tests against the core services in the deployment using the run.sh script. Before running Tempest tests, it will be necessary to manually configure OpenStack network resources and provide a few configuration parameters in the ${WORKDIR}/env/extravars file.

2.8.2.1 Setting Up An External Network And Subnet in OpenStack

To set up an external network and subnet in OpenStack, the following commands can be run from a shell on the Deployer node.

export OS_CLOUD=openstack
openstack network create --provider-network-type flat --provider-physical-network external \
  --external public
openstack subnet create --network public --subnet-range 192.168.100.0/24 --allocation-pool \
  start=192.168.100.10,end=192.168.100.200 --gateway 192.168.100.1 --no-dhcp public-subnet

Note: The external public network is expected to be able to reach the internet. The above values will vary based on your network environment.

After the public network and subnet have been created in OpenStack, their names will need to be made known to Tempest by adding the following keys in the ${WORKDIR}/env/extravars file:

openstack_external_network_name: "public"
openstack_external_subnet_name: "public-subnet"

Tempest will create a private network (10.0.0.0/8) to use as the default network, and it needs to know the CIDR block from which to allocate project IPv4 subnets. This value should be specified with the following key in the extravars file:

openstack_project_network_cidr: "10.0.4.0/24"

2.8.2.2 Configuring Tempest Test Parameters

By default, the implementation of Tempest in Containerized SUSE OpenStack Cloud will run smoke tests for all deployed services, including compute, identity, image, network, and volume, using 4 workers.

To modify the number of workers, add the following key with a value of your choosing to the extravars file:

tempest_workers: 6

To disable tests for specific OpenStack components, any or all of the following keys can be added to the extravars file:

tempest_enable_cinder_service: false
tempest_enable_glance_service: false
tempest_enable_nova_service: false
tempest_enable_neutron_service: false

To run all Tempest tests instead of just smoke tests, add the following key to the extravars file:

tempest_test_type: "all"


2.8.2.3 Using a Blacklist

To exclude specific tests from the collection of tests being run against the deployment, they can be added to the blacklist file located at socok8s/playbooks/roles/airship-deploy-tempest/files/tempest_blacklist.

When adding tests to the blacklist, each test should be listed on a new line and should be formatted like the following example:

- (?:tempest\.api\.identity\.v3\.test_domains\.DefaultDomainTestJSON\.test_default_domain_exists)

By default, the blacklist file provided with Containerized SUSE OpenStack Cloud will be used when running Tempest tests. If desired, use of a blacklist can be disabled by adding the following key to ${WORKDIR}/env/extravars:

use_blacklist: false

2.8.2.4 Running Tempest Tests

After all of the OpenStack network resources have been created and all configuration parameters have been provided in ${WORKDIR}/env/extravars, Tempest testing can be started by running the following command from the root of the socok8s directory:

./run.sh test

Once the Tempest pods have been deployed, testing will begin immediately. You can check the progress of the test pod at any time by running:

kubectl get pods -n openstack | grep tempest-run

Example output:

airship-tempest-run-tests-hq6jg 1/1 Running 0 33m

A status of Running indicates that testing is still in progress. Once testing is complete, the status of the airship-tempest-run-tests pod will change to Complete, indicating that all enabled tests have been executed.

2.8.2.5 Tempest Test Results

All test results can be viewed by retrieving the logs from the airship-tempest-run-tests Pod by running the following command:

kubectl logs -n openstack airship-tempest-run-tests-hq6jg

Note: The logs can be viewed at any time, even while a current test batch is still running.

After testing is complete, the logs will conclude with a summary of all passed, skipped, and failed tests similar to the following:

Sample output for smoke tests execution (default value for tempest_test_type)

======
Totals
======
Ran: 120 tests in 1043.0000 sec.
 - Passed: 88
 - Skipped: 28
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 4
Sum of execute time for each test: 1684.2065 sec.

==============
Worker Balance
==============
 - Worker 0 (25 tests) => 0:06:17.321190
 - Worker 1 (39 tests) => 0:15:52.956097
 - Worker 2 (27 tests) => 0:17:23.015459
 - Worker 3 (29 tests) => 0:05:19.495695

2.9 Next Steps

After you have verified that your OpenStack cloud is working, here are additional things you can do:

Review Chapter 3, Administration and Operations Guide to learn more about using Airship and OpenStack-Helm.

Review Chapter 5, Developer Documentation to learn more about contributing to Containerized SUSE OpenStack Cloud.

Review Chapter 6, Administration and Operations Guide for more about the background and design of Containerized SUSE OpenStack Cloud.

2.10 Uninstall

To remove Containerized SUSE OpenStack Cloud from your tech preview environment:

From the socok8s directory:

./run.sh remove_deployment

This command will display a warning and prompt you to confirm before continuing.

3 Administration and Operations Guide

This section has information on the administration and operation of Containerized SUSE OpenStack Cloud.

3.1 Using run.sh

The primary means for running deployment, update, and cleanup actions in Containerized SUSE OpenStack Cloud is run.sh, a bash script that acts as a convenient wrapper around Ansible playbook execution. All of the commands below should be run from the root of the socok8s directory.

3.1.1 Deployment Actions

The run.sh deploy command:

Performs all necessary setup actions

Deploys all Airship UCP components and OpenStack services

Configures the inventory, extravars file, and appropriate environment variables as described in Section 2.1, “Deployment Guide”

./run.sh deploy

It may be desirable to redeploy only OpenStack services while leaving all Airship components in the UCP untouched. In these use cases, run:

./run.sh update_openstack

3.1.2 Cleanup Actions

In addition to deployment, run.sh can be used to perform environment cleanup actions.

To clean up the deployment and remove Containerized SUSE OpenStack Cloud entirely, run the following command in the root of the socok8s directory:

./run.sh remove_deployment


This will delete all Helm releases, all Kubernetes resources in the ucp and openstack namespaces, and all persistent volumes that were provisioned for use in the deployment. After this operation is complete, only the original Kubernetes services deployed by the SUSE CaaS Platform will remain.
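A quick sketch for verifying the cleanup afterwards; both namespaces should be empty and no deployment-provisioned volumes should remain:

kubectl get pods -n ucp
kubectl get pods -n openstack
kubectl get pv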

3.1.3 Testing

The run.sh script also has an option to deploy and run OpenStack Tempest tests. To begin testing, review Section 2.8, “Verify Deployment” and then run the following command:

./run.sh test

Note: Please read Section 2.1, “Deployment Guide” for more information about configuring and running OpenStack Tempest tests in Containerized SUSE OpenStack Cloud.

3.2 Scaling In/Scaling Out

3.2.1 Adding or removing compute nodes

To add a compute node, the node must be running SUSE CaaS Platform v3.0 and must have been accepted into the cluster and bootstrapped using the Velum dashboard. After the node is bootstrapped, add its host details to the airship-openstack-compute-workers group in your inventory in ${WORKSPACE}/inventory/hosts.yaml. Run the following command from the root of the socok8s directory:

./run.sh add_openstack_compute

Note: Multiple new compute nodes can be added to the inventory at the same time.

It can take a few minutes for the new host to initialize and show up in the OpenStack hypervisor list.
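For reference, a newly added compute node entry in the inventory could look like the following; the host name and address are illustrative, mirroring the example in Section 2.6.1:

airship-openstack-compute-workers:
  hosts:
    caasp-worker-002:
      ansible_host: 10.86.1.145
  vars:
    ansible_user: root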


To remove a compute node, run the following command from the root of the socok8s directory:

./run.sh remove_openstack_compute NODE_HOSTNAME

Note: NODE_HOSTNAME must be the same as the host name in the Ansible inventory.

Compute nodes must be removed individually. When the node has been successfully removed, the host details must be manually removed from the airship-openstack-compute-workers group in the inventory.

3.2.2 Control plane horizontal scaling

Containerized SUSE OpenStack Cloud provides two built-in scale profiles:

minimal, the default profile, deploys a single Pod for each service

ha deploys a minimum of two Pods for each service. Three or more Pods are suggested for services that will be heavily utilized or require a quorum

Change scale profiles by adding a scale_profile key to ${WORKSPACE}/env/extravars and specifying a profile value:

scale_profile: ha

The built-in profiles are defined in playbooks/roles/airship-deploy-ucp/files/profiles and can be modified to suit custom use cases. Additional profiles can be created and added to this directory following the file naming convention in that directory.

We recommend using at least three controller nodes for a highly available control plane for both Airship and OpenStack services. To add new controller nodes, the nodes must:

be running SUSE CaaS Platform v3.0

have been accepted into the cluster

be bootstrapped using the Velum dashboard.


After the nodes are bootstrapped, add the host entries to the airship-ucp-workers, airship-openstack-control-workers, airship-openstack-l3-agent-workers, and airship-kube-system-workers groups in your Ansible inventory in ${WORKSPACE}/inventory/hosts.yaml.

To apply the changes, run the following command from the root of the socok8s directory:

./run.sh deploy

3.3 Updates

Containerized SUSE OpenStack Cloud is delivered as an RPM package. Generally it can be updated by updating the RPM package to the latest version and redeploying with the necessary steps in Section 2.1, “Deployment Guide”. This is the typical update path and will incorporate all recent changes. It will also automatically update component chart and image versions.
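A minimal sketch of that update path, assuming the socok8s package was installed from the add-on repository described in Section 2.4.2:

sudo zypper refresh
sudo zypper update socok8s
./run.sh deploy    # redeploy from the updated package content in /usr/share/socok8s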

It is also possible to update services and components directly using the procedures below.

3.3.1 Updating OpenStack Version

To make a global change to the OpenStack version used by all component images, create a key in ${WORKSPACE}/env/extravars called suse_openstack_image_version and set it to the desired value. For example, to use the stein version, add the following line to the extravars file:

suse_openstack_image_version: "stein"

It is also possible to update an individual image or subset of images to a different version rather than making a global change. To do this, it is necessary to manually edit the versions.yaml file located in socok8s/site/soc/software/config/. Locate the images to be changed in the images section of the file and modify the line to include the desired version. For example, to use the stein version for the heat_api image, change the following line in versions.yaml from

heat_api: "{{ suse_osh_registry_location }}/openstackhelm/heat:{{ suse_openstack_image_version }}"

to

heat_api: "{{ suse_osh_registry_location }}/openstackhelm/heat:stein"


3.3.2 Updating OpenStack Service Configuration

Certain use cases may require the addition or modification of OpenStack service configuration parameters. To update the configuration for a particular service, parameters can be added or modified in the conf section of the chart for that service. For example, to change the logging level of the Keystone service to debug, locate the conf section of the Keystone chart located at socok8s/site/soc/software/charts/osh/openstack-keystone/keystone.yaml and add the following lines, beginning with the logging key:

conf:
  logging:
    logger_root:
      level: DEBUG
    logger_keystone:
      level: DEBUG

Note: Information about the supported configuration parameters for each service can generally be found in the OpenStack Configuration Guides (https://docs.openstack.org/rocky/configuration/index.html) for each release. Determining the correct keys and values to include in the chart for each service may require examining the values.yaml file for OpenStack-Helm. In the Keystone logging example above, the names and proper locations for the logging keys were determined by reviewing the logging section in /opt/openstack/openstack-helm/keystone/values.yaml, then copying those keys to socok8s/site/soc/software/charts/osh/openstack-keystone/keystone.yaml and providing the desired values.

When the desired parameters have been added to each chart requiring changes, the configuration updates can be applied by changing to the root of the socok8s directory and running:

./run.sh update_openstack

3.3.3 Updating Individual Images and Helm Charts

The versions.yaml file can also be used for more advanced update configurations such as using a specific image or Helm chart source version.


Note: Changing the image registry location from its default value or using a custom or non-default image will void any product support by SUSE.

To specify the use of an updated or customized image, locate the appropriate image name in socok8s/site/soc/software/config/versions.yaml and modify the line to include the desired image location and tag. For example, to use a new heat_api image, modify its entry with the new image location:

heat_api: "registry_location/image_directory/image_name:tag"

Similarly, the versions.yaml file can be used to retrieve a specific version of any Helm chart being deployed. To do so, it is necessary to provide a repository location, type, and a reference. The reference can be a branch, commit ID, or a reference in the repository and will default to master if not specified. As an example, to use a specific version of the Helm chart for Heat, add the following information to the osh section under charts:

heat:
  location: https://git.openstack.org/openstack/openstack-helm
  reference: ${REFERENCE}
  subpath: heat
  type: git

Note: When specifying a particular version of a Helm chart, it may be necessary to first create the appropriate subsection under charts. Airship components such as Deckhand and Shipyard belong under ucp, OpenStack services belong under osh, and infrastructure components belong under osh_infra.

3.4 Reboot Compute Host

Before rebooting a compute host, shut down all Nova VMs on that host (see the sketch at the end of this section). After the compute host reboots, it is possible that the pods will come up out of order. If this happens, you might see indications of the Nova VMs not getting an IP address. To address this problem, run the following commands:

kubectl get pods -o wide | grep ovs-agent | grep COMPUTE_NAME


kubectl delete pod -n openstack OVS-AGENT_POD_NAME

This should restart the Neutron OVS agent pod and reconfigure the VXLAN tunnel network.
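For the first step above (shutting down the Nova VMs before the reboot), a minimal OpenStack CLI sketch; COMPUTE_NAME and SERVER_ID are placeholders:

export OS_CLOUD='openstack'
openstack server list --all-projects --host COMPUTE_NAME    # find the VMs on the host
openstack server stop SERVER_ID                             # repeat for each VM listed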

3.5 Troubleshooting

3.5.1 Viewing Shipyard Logs

The deployment of OpenStack components in Containerized SUSE OpenStack Cloud is directed by Shipyard, the Airship platform's directed acyclic graph (DAG) controller, so Shipyard is one of the best places to begin troubleshooting deployment problems. The Shipyard CLI client authenticates with Keystone, so the following environment variables must be set before running any commands:

export OS_USERNAME=shipyard
export OS_PASSWORD=$(kubectl get secret -n ucp shipyard-keystone-user \
  -o json | jq -r '.data.OS_PASSWORD' | base64 -d)

Note: The Shipyard user's password can be obtained from the contents of ${WORKSPACE}/secrets/ucp_shipyard_keystone_password.

The following commands are run from the /opt/airship/shipyard/tools directory. If no Shipyard image is found when the first command is executed, it is downloaded automatically.

To view the status of all Shipyard actions, run:

./shipyard.sh get actions

Example output:

Name             Action                              Lifecycle  Execution Time       Step Succ/Fail/Oth  Footnotes
update_software  action/01D9ZSVG70XS9ZMF4Z6QFF32A6   Complete   2019-05-03T21:33:27  13/0/1              (1)
update_software  action/01DAB3ETP69MGN7XHVVRHNPVCR   Failed     2019-05-08T06:52:58  7/0/7               (2)


To view the status of the individual steps of a particular action, copy its action ID and run the following command:

./shipyard.sh describe action/01DAB3ETP69MGN7XHVVRHNPVCR

Example output:

Name: update_software
Action: action/01DAB3ETP69MGN7XHVVRHNPVCR
Lifecycle: Failed
Parameters: {}
Datetime: 2019-05-08 06:52:55.366919+00:00
Dag Status: failed
Context Marker: 18993f2c-1cfa-4d42-9320-3fbd70e75c21
User: shipyard

Steps                                                          Index  State            Footnotes
step/01DAB3ETP69MGN7XHVVRHNPVCR/action_xcom                    1      success
step/01DAB3ETP69MGN7XHVVRHNPVCR/dag_concurrency_check          2      success
step/01DAB3ETP69MGN7XHVVRHNPVCR/deployment_configuration       3      success
step/01DAB3ETP69MGN7XHVVRHNPVCR/validate_site_design           4      success
step/01DAB3ETP69MGN7XHVVRHNPVCR/armada_build                   5      failed
step/01DAB3ETP69MGN7XHVVRHNPVCR/decide_airflow_upgrade         6      None
step/01DAB3ETP69MGN7XHVVRHNPVCR/armada_get_status              7      success
step/01DAB3ETP69MGN7XHVVRHNPVCR/armada_post_apply              8      upstream_failed
step/01DAB3ETP69MGN7XHVVRHNPVCR/skip_upgrade_airflow           9      upstream_failed
step/01DAB3ETP69MGN7XHVVRHNPVCR/upgrade_airflow                10     None
step/01DAB3ETP69MGN7XHVVRHNPVCR/deckhand_validate_site_design  11     success
step/01DAB3ETP69MGN7XHVVRHNPVCR/armada_validate_site_design    12     upstream_failed
step/01DAB3ETP69MGN7XHVVRHNPVCR/armada_get_releases            13     failed
step/01DAB3ETP69MGN7XHVVRHNPVCR/create_action_tag              14     None

To view the logs from a particular step, such as armada_build, which has failed in the above example, run:

./shipyard.sh logs step/01DAB3ETP69MGN7XHVVRHNPVCR/armada_build

3.5.2 Viewing Logs From Kubernetes Pods

To view the logs from any Pod in the Running or Completed state, run:


kubectl logs -n ${NAMESPACE} ${POD_NAME}

To view logs from a specific container within a Pod in the Running or Completed state, run:

kubectl logs -n ${NAMESPACE} ${POD_NAME} -c ${CONTAINER_NAME}

If logs cannot be retrieved due to the Pod entering the Error or CrashLoopBackoff state, it may be necessary to use the -p option to retrieve logs from the previous instance:

kubectl logs -n ${NAMESPACE} ${POD_NAME} -p

3.5.3 Recover Controller Host Node

If the deployment fails with an error that the controller host is not reachable (it has entered maintenance mode):

Enter maintenance mode on the controller host and run the following commands:

mounted_snapshot=$(mount | grep snapshot | gawk 'match($6, /ro.*@\/.snapshots\/(.*)\/snapshot/ , arr1 ) { print arr1[1] }')

btrfs property set -ts /.snapshots/$mounted_snapshot/snapshot ro false

mount -o remount,rw /

mkdir /var/lib/neutron

btrfs property set -ts /.snapshots/$mounted_snapshot/snapshot ro true

reboot

3.5.4 Recover Compute Host Node

If the deployment fails with an error that the compute host is not reachable (it has entered maintenance mode):

Enter maintenance mode on the compute host and run the following commands:

mounted_snapshot=$(mount | grep snapshot | gawk 'match($6, /ro.*@\/.snapshots\/(.*)\/snapshot/ , arr1 ) { print arr1[1] }')

btrfs property set -ts /.snapshots/$mounted_snapshot/snapshot ro false


mount -o remount,rw /

mkdir /var/lib/libvirt
mkdir /var/lib/nova
mkdir /var/lib/openstack-helm
mkdir /var/lib/neutron

btrfs property set -ts /.snapshots/$mounted_snapshot/snapshot ro true

reboot

3.6 Recovering from Node Failure

Kubernetes clusters are generally able to recover from node failures by performing a number of self-healing actions, but it may be necessary to manually intervene occasionally. Recovery actions vary depending on the type of failure. Some common scenarios and their solutions are outlined below.

3.6.1 Pod Status of NodeLost or Unknown

If a large number of Pods show a status of NodeLost or Unknown, first determine which nodes may be causing the problem by running:

kubectl get nodes

If any of the nodes show a status of NotReady but they still respond to ping and can be accessed via SSH, it may be that either the kubelet or docker service has stopped running. This can be confirmed by checking the Conditions section for the message Kubelet has stopped posting node status after running:

kubectl describe node ${NODE_NAME}

Log into the affected nodes and check the status of these services by running:

systemctl status kubelet
systemctl status docker

If either service has stopped, start it by running:

systemctl start ${SERVICE_NAME}


Note
The kubelet service requires Docker to be running. If both services are stopped, Docker should be restarted first.

These services should start automatically each time a node boots up and should be running at all times. If either service has stopped, examine the system logs to determine the root cause of the failure. This can be done by using the journalctl command:

journalctl -u kubelet

3.6.2 Frequent Pod Evictions

If Pods are frequently being evicted from a particular node, it may be a sign that the node is unhealthy and requires maintenance. Check that node's conditions and events by running:

kubectl describe node NODE_NAME

If the cause of the Pod evictions is determined to be resource exhaustion, such as NodeHasDiskPressure or NodeHasMemoryPressure, it may be necessary to remove the node from the cluster temporarily to perform maintenance. To gracefully remove all Pods from the affected node and mark it as not schedulable, run:

kubectl drain NODE_NAME

After maintenance work is complete, the node can be brought back into the cluster by running:

kubectl uncordon NODE_NAME

which will allow normal Pod scheduling operations to resume. If the node was decommissioned permanently while offline and a new node was brought into the CaaSP cluster as a replacement, it is not necessary to run the uncordon command. A new schedulable resource will be created automatically.

3.7 Kubernetes Operations

Kubernetes has documentation for troubleshooting typical problems with applications and clusters (https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
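As a starting point, the following generic kubectl commands (standard Kubernetes tooling, not specific to this project) are often useful when investigating an unhealthy deployment; the namespace and Pod name placeholders follow the convention used above:

# List Pods in every namespace that are not Running or Completed.
kubectl get pods --all-namespaces | grep -vE 'Running|Completed'

# Show recent cluster events, sorted by time.
kubectl get events --all-namespaces --sort-by='.lastTimestamp'

# Inspect a specific Pod in detail (events, volumes, container states).
kubectl describe pod -n ${NAMESPACE} ${POD_NAME}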


3.8 Tips and Tricks

3.8.1 Display all images used by a component

Using Neutron as an example:

kubectl get pods -n openstack -l application=neutron -o \
  jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' \
  | sort | uniq -c

3.8.2 Remove Dangling Docker images

This is useful after building local images:

docker rmi $(docker images -f "dangling=true" -q)

3.8.3 Setting the Default Context

To avoid having to pass -n openstack repeatedly:

kubectl config set-context $(kubectl config current-context) --namespace=openstack


4 CSOC User Guide

4.1 Advanced Users

This chapter covers usage scenarios for advanced users of Containerized SUSE OpenStack Cloud.

4.2 Ansible Tips

There are several variables for extra debugging from the Ansible playbooks, as shown in the Ansible documentation (https://docs.ansible.com/ansible/latest/reference_appendices/config.html).

Some examples:

export ANSIBLE_VERBOSITY=3
export ANSIBLE_STDOUT_CALLBACK=debug

Both of these environment variables will enable verbosity and debug output for all the playbooks being run.

You can enable the debugger for failed tasks as shown in the Ansible playbook debugger user guide (https://docs.ansible.com/ansible/latest/user_guide/playbooks_debugger.html).

export ANSIBLE_ENABLE_TASK_DEBUGGER=True

Setting this variable launches the debugger when a task fails so you can examine the task and its variables, and retry the task. Check the Ansible documentation link for all available options.

The debug task allows you to print to stdout while playbooks are executed, without necessarily halting the playbook. Detailed information is available in the Ansible debug module documentation (https://docs.ansible.com/ansible/latest/modules/debug_module.html).
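For example, a minimal debug task written in the YAML dictionary format used by this project; the variable printed here is one of the documented socok8s extravars and is only used for illustration:

- name: Show the resolved deployment goal
  debug:
    msg: "Deployment goal is {{ socok8s_deployment_goal }}"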

4.3 Build and Consume Your Own Images

4.3.1 Build non-OpenStack images

If you want to build your own image (for example, libvirt), set the following in your ${WORKDIR}/env/extravars :

---


myregistry: "myuser-osh.openstack.local:5000/"
developer_mode: "True"
# Builds the libvirt image from OSH-images repository.
docker_images:
  - context: libvirt
    repository: "{{ myregistry }}openstackhelm/libvirt"
    # dockerfile: # Insert here the alternative Dockerfile's name.
    # build_args: # Insert here your extra build arguments to pass to docker.
    tags:
      - latest-opensuse_15

4.3.2 Build LOCI images

The LOCI command to build the OpenStack images is stored by default in loci_build_command (see the suse-build-images role default variables (https://github.com/SUSE-Cloud/socok8s/blob/master/playbooks/roles/suse-build-images/defaults/main.yml)).

For example, set loci_build_command to /openstack/loci/build-ocata.sh to build LOCI with the Ocata release.

Note
By default, the list of projects to build in LOCI is empty, and the LOCI builds are skipped. Define loci_build_projects as a list, each item being an upstream project to build in the image build process.
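As a sketch, both keys can be set in ${WORKDIR}/env/extravars as follows; the project names listed under loci_build_projects are only examples:

---
# Build the Ocata variant of the LOCI images.
loci_build_command: /openstack/loci/build-ocata.sh
# Projects to build; this list is empty by default, which skips the LOCI builds.
loci_build_projects:
  - keystone
  - glance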

4.3.3 Consume Built Images

After your images are built, you can point to them in the deployment.

4.3.3.1 For OSH (developer mode)

Set the following variable (for example for libvirt image override) in your env/extravars :

---
# Points to that image in the libvirt chart.
suse_osh_deploy_libvirt_yaml_overrides:
  images:
    tags:
      libvirt: "{{ myregistry }}openstackhelm/libvirt:latest-opensuse_15"


4.4 Disable CaaSP Transactional Updates

Although disabling CaaSP transactional updates is discouraged, there may be situations when it is inconvenient to have automatic CaaSP updates. For more information, visit the CaaSP documentation for transactional updates (https://www.suse.com/documentation/suse-caasp-3/book_caasp_admin/data/sec_admin_software_transactional-updates.html).

Run the following to prevent a cluster from being updated:

systemctl --now disable transactional-update.timer

If you want updates to run once a week instead of daily, override the timer by running the following:

mkdir /etc/systemd/system/transactional-update.timer.d
cat << EOF > /etc/systemd/system/transactional-update.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=weekly
EOF
systemctl daemon-reload

Or use the traditional systemctl commands:

systemctl edit transactional-update.timer
systemctl restart transactional-update.timer
systemctl status transactional-update.timer

Check the next run:

systemctl list-timers

4.5 Clean Up Kubernetes

To remove all traces of a Kubernetes deployment in your Containerized SUSE OpenStack Cloud environment, run:

export DELETE_ANYWAY='YES'
./run.sh clean_k8s

Warning
You will lose all your Kubernetes data.


4.6 Customizing Helm Testing Behavior

By default, all tests that have been defined for each Helm chart will be run as part of that chart's deployment. However, in some scenarios you may want to prevent these tests from running. To disable Helm tests, define the run_tests key in ${WORKDIR}/env/extravars and set it to false :

run_tests: false

Note
This will disable all Helm tests in all charts during a full site deployment. Tests for individual Helm charts can still be run by using the Helm CLI and the service name, such as:

helm test airship-glance

If needed, the default timeout value of 2700s for test completion can be increased by adding the test_timeout key in ${WORKDIR}/env/extravars and providing a value, in seconds:

test_timeout: 3000

4.7 Discover Helm Charts Overrides

The upstream Helm charts have values that can be used to alter the deployment of a chart to match your needs.

Generate a list of the available overrides:

1. Build all the Helm charts (make all in each of the /opt/openstack/openstack-helm folders) in your environment.

2. Run the following on your deployer node:

for fname in /opt/openstack/openstack-helm{,-infra}/*.tgz; do
  chartname=$(basename $fname | rev | cut -f "2-" -d "-" | rev);
  foldername=$(dirname $fname);
  pushd $foldername;
  echo -e "\nNow analysing: $chartname\n\n" >> /opt/charts-details;
  helm inspect values $chartname >> /opt/charts-details;
  popd;
done


less /opt/charts-details

4.8 Minimal Network Example

This is the minimal network configuration for CCP.

The following configuration files reflect the diagram above.

${WORKDIR}/env/extravars :

socok8s_deployment_goal: airship
socok8s_ext_vip: 172.30.0.245
socok8s_dcm_vip: 172.30.0.246
# either "minimal" or "ha"
scale_profile: minimal
redeploy_osh_only: false

${WORKDIR}/inventory/hosts.yml :

---
caasp-admin:
  hosts:
    caasp-admin:
      ansible_host: 172.30.0.11
  vars:
    ansible_user: root
caasp-masters:
  hosts:
    caasp-master:
      ansible_host: 172.30.0.12
  vars:
    ansible_user: root
caasp-workers:
  hosts:
    caasp-worker1:
      ansible_host: 172.30.0.13
    caasp-worker2:
      ansible_host: 172.30.0.14


  vars:
    ansible_user: root

soc-deployer:
  hosts:
    deployer:
      ansible_host: 172.30.0.10
  vars:
    ansible_user: root
ses_nodes:
  hosts:
    ses:
      ansible_host: 172.30.0.15
  vars:
    ansible_user: root

# added for airship

airship-openstack-control-workers:
  hosts:
    caasp-worker1:
      ansible_host: 172.30.0.13
      primary: yes
    caasp-worker2:
      ansible_host: 172.30.0.14
  vars:
    ansible_user: root

airship-ucp-workers:
  hosts:
    caasp-worker1:
      ansible_host: 172.30.0.13
      primary: yes
    caasp-worker2:
      ansible_host: 172.30.0.14
  vars:
    ansible_user: root

airship-kube-system-workers:
  hosts:
    caasp-worker1:
      ansible_host: 172.30.0.13
      primary: yes
    caasp-worker2:
      ansible_host: 172.30.0.14
  vars:
    ansible_user: root


airship-openstack-compute-workers:
  hosts:
    caasp-worker2:
      ansible_host: 172.30.0.14
      primary: yes
  vars:
    ansible_user: root

${WORKDIR}/ses_config.yml :

---
# Example ses_config.yml file
ceph_conf:
  cluster_network: 172.30.0.0/24
  fsid: d40fea38-fcf6-3dd5-8479-dd36e8f53ac5
  mon_host: 172.30.0.15
  mon_initial_members: ses
  public_network: 172.30.0.0/24
cinder:
  key: AQDkeIZcAAAAABAAdTOl4xyDS0/v9B8m1drZmQ==
  rbd_store_pool: volumes
  rbd_store_user: cinder
cinder-backup:
  key: AQDkeIZcAAAAABAAdTOl4xyDS0/v9B8m1drZmQ==
  rbd_store_pool: cinder_backup
  rbd_store_user: cinder-backup
glance:
  key: AQDkeIZcAAAAABAAdTOl4xyDS0/v9B8m1drZmQ==
  rbd_store_pool: images
  rbd_store_user: glance
libvirt:
  key: AQDkeIZcAAAAABAAdTOl4xyDS0/v9B8m1drZmQ==
  rbd_store_pool: vms
  rbd_store_user: cinder
nova:
  rbd_store_pool: nova
radosgw_urls: []

4.9 Multiple Network Example

This is a multiple network example configuration for CCP. CaaS Platform and SES network configuration have their own respective documentation.


The following configuration files reflect the diagram above.

${WORKDIR}/env/extravars :

socok8s_deployment_goal: airship
socok8s_ext_vip: 172.30.1.245
socok8s_dcm_vip: 172.30.0.246
# either "minimal" or "ha"
scale_profile: minimal
redeploy_osh_only: false

${WORKDIR}/inventory/hosts.yml :

---
caasp-admin:
  hosts:
    caasp-admin:
      ansible_host: 172.30.0.11
  vars:
    ansible_user: root
caasp-masters:
  hosts:
    caasp-master:
      ansible_host: 172.30.0.12
  vars:
    ansible_user: root
caasp-workers:
  hosts:
    caasp-worker1:
      ansible_host: 172.30.0.13
    caasp-worker2:
      ansible_host: 172.30.0.14
  vars:
    ansible_user: root


soc-deployer:
  hosts:
    deployer:
      ansible_host: 172.30.0.10
  vars:
    ansible_user: root
ses_nodes:
  hosts:
    ses:
      ansible_host: 172.30.0.15
  vars:
    ansible_user: root

# added for airship

airship-openstack-control-workers:
  hosts:
    caasp-worker1:
      ansible_host: 172.30.0.13
      primary: yes
    caasp-worker2:
      ansible_host: 172.30.0.14
  vars:
    ansible_user: root

airship-ucp-workers:
  hosts:
    caasp-worker1:
      ansible_host: 172.30.0.13
      primary: yes
    caasp-worker2:
      ansible_host: 172.30.0.14
  vars:
    ansible_user: root

airship-kube-system-workers:
  hosts:
    caasp-worker1:
      ansible_host: 172.30.0.13
      primary: yes
    caasp-worker2:
      ansible_host: 172.30.0.14
  vars:
    ansible_user: root

airship-openstack-compute-workers:


  hosts:
    caasp-worker2:
      ansible_host: 172.30.0.14
      primary: yes
  vars:
    ansible_user: root

${WORKDIR}/ses_config.yml :

---
# Example ses_config.yml file
ceph_conf:
  cluster_network: 172.30.2.0/24
  fsid: d40fea38-fcf6-3dd5-8479-dd36e8f53ac5
  mon_host: 172.30.2.15
  mon_initial_members: ses
  public_network: 172.30.2.0/24
cinder:
  key: AQDkeIZcAAAAABAAdTOl4xyDS0/v9B8m1drZmQ==
  rbd_store_pool: volumes
  rbd_store_user: cinder
cinder-backup:
  key: AQDkeIZcAAAAABAAdTOl4xyDS0/v9B8m1drZmQ==
  rbd_store_pool: cinder_backup
  rbd_store_user: cinder-backup
glance:
  key: AQDkeIZcAAAAABAAdTOl4xyDS0/v9B8m1drZmQ==
  rbd_store_pool: images
  rbd_store_user: glance
libvirt:
  key: AQDkeIZcAAAAABAAdTOl4xyDS0/v9B8m1drZmQ==
  rbd_store_pool: vms
  rbd_store_user: cinder
nova:
  rbd_store_pool: nova
radosgw_urls: []

4.10 Manage OpenStack Services

Containerized SUSE OpenStack Cloud currently deploys OpenStack Cinder, Glance, Heat, Horizon, Keystone, Neutron, and Nova.

You can change which services to deploy by modifying the chart group list in the site manifest file site/soc/software/manifests/full-site.yaml .


chart_groups:
  - openstack-ingress-controller-soc
  - openstack-mariadb-soc
  - openstack-memcached-soc
  - openstack-keystone-soc
  - openstack-ceph-config-soc
  - openstack-glance-soc
  - openstack-cinder-soc
  - openstack-compute-kit-soc
  - openstack-heat-soc
  - openstack-horizon-soc

For more details, refer to the Airship site authoring guide (https://airship-treasuremap.readthedocs.io/en/latest/authoring_and_deployment.html).

4.11 Deleting Containerized SUSE OpenStack Cloud from OpenStack

If you have built Containerized SUSE OpenStack Cloud on top of OpenStack, you can delete your whole environment by running:

./run.sh teardown

This will delete the CaaSP, SES, and deployer nodes from your cloud. It will not delete your WORKDIR .

If you want to delete your WORKDIR too, run:

export DELETE_ANYWAY='YES'
./run.sh teardown

Warning
You will lose all of your Containerized SUSE OpenStack Cloud data, your overrides, your certificates, and your inventory.


4.12 Use Custom Patches

To apply upstream patches in your environment, set your patch numbers under the dev_patcher_user_patches key in ${WORKDIR}/env/extravars :

dev_patcher_user_patches:
  # test patch for keystone
  - 12345
  # test patch for cinder
  - 12345

These patches will only be carried in your environment. If you want to change the product (for developer mode or not), please submit a pull request to the socok8s GitHub repository (https://github.com/SUSE-Cloud/socok8s).

Note
This list of patches provided via extravars will be appended to the default patches list available in the dev-patcher role vars.

4.13 Use Your Own Certificates

If you want to run developer mode and bring your own registry's SSL certificates, define the following in ${WORKDIR}/env/extravars :

socok8s_registry_certkey:
socok8s_registry_cert:

The variables should point to files present on your localhost. If they are not defined, self-signed certificates will be generated on your localhost and transferred to all the nodes.
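For example, a minimal sketch; the paths are placeholders for your own certificate and key files on your localhost:

---
# Local paths to the registry certificate and key; the file names are illustrative.
socok8s_registry_cert: /home/user/certs/registry.crt
socok8s_registry_certkey: /home/user/certs/registry.key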

4.14 Use a Personal Fork

Containerized SUSE OpenStack Cloud allows you to use your own fork instead of relying on OpenStack-Helm repositories.

To override the Helm chart sources or fork any other code, override the content of the file vars/manifest.yml inside your extravars by defining your own upstream_repos variable.


5 Developer Documentation

In this section, you will find documentation relevant to developing Containerized SUSE OpenStack Cloud.

5.1 Contributor Guidelines

5.1.1 Submodules

This repository uses submodules. The following guidelines apply only to the socok8s project and repository. If your contribution affects other projects, please check those practices before contributing to them.

5.1.2 Before submitting code

This is a fast-moving project. Please contact us before starting to work on it.

If you are willing to submit code, please remember the following rules:

All code should fit with our Section 5.2.1, “General Guidelines for Submitting Code”.

All code is required to go through our Section 5.1.4, “Review process”.

Documentation should be provided with the code directly. See also Section 5.2.2, “Documentation with Code”.

5.1.3 Bug reporting process

File bugs as GitHub issues.

When submitting a bug or working on a bug, please observe the following criteria:

The description clearly states or describes the original problem or root cause of the problem.

The description clearly states the expected outcome of the user action.


Include historical information about how the problem was identified.

Include any relevant logs or user configuration information, either directly or through a pastebin.

If the issue is a bug that needs fixing in a branch other than master, please note the associated branch within the issue.

The provided information should be totally self-contained. External access to web services/sites should not be needed.

Include steps to reproduce the problem if possible.

5.1.4 Review process

Any new code will be reviewed before it is merged into our repositories.

Two approving reviews are required before merging a pull request.

Any patch can be refused by the community if it does not match Section 5.2.1, “General Guidelines for Submitting Code”.

5.1.5 Upstream communication channels

Most of this project is a thin wrapper around the Airship, OpenStack-Helm, and OpenStack LOCI upstream projects.

A developer should monitor the OpenStack-discuss mailing list (http://lists.openstack.org/cgi-bin/mailman/listinfo) and the Airship-discuss mailing list (http://lists.airshipit.org/cgi-bin/mailman/listinfo).

Please contact us on freenode IRC, in the #openstack-helm or #airshipit channels.


5.2 Code rules

5.2.1 General Guidelines for Submitting Code

Write good commit messages. We follow the OpenStack Git Commit Good Practice (https://wiki.openstack.org/wiki/GitCommitMessages) guide. If you have any questions regarding how to write good commit messages, please review the upstream OpenStack documentation.

All patch sets should adhere to Section 5.2.4, “Ansible Style Guide” and follow the Ansible best practices (http://docs.ansible.com/playbooks_best_practices.html).

Refactoring work should never include additional rider features. Features that may pertain to something that was refactored should be raised as an issue and submitted in prior or subsequent patches.

All patches, including code, documentation, and release notes, should be built and tested locally first.

5.2.2 Documentation with Code

Documentation is a critical part of ensuring that the deployers of this project are appropriately informed about:

How to use project tooling effectively to deploy OpenStack.

How to implement the right configuration to meet the needs of their specific use case.

Changes in the project over time which may affect an existing deployment.

To meet these needs, developers must submit Section 5.2.3, “Code Comments” and documentation with any code submissions.

All forms of documentation should comply with the guidelines provided in the OpenStack Documentation Contributor Guide (https://docs.openstack.org/contributor-guide/), with particular attention to the following sections:

Writing style

RST formatting conventions


5.2.3 Code Comments

Code comments for variables should be used to explain the purpose of the variable.

Code comments for Bash/Python3 scripts should give guidance on the purpose of the code. This is important to provide context for reviewers before the patch has merged, and for later modifications to remind the contributors what the purpose was and why it was done that way.

5.2.4 Ansible Style Guide

When creating tasks and other roles for use in Ansible, create them using the YAML dictionary format.

Example YAML dictionary format:

- name: The name of the tasks
  module_name:
    thing1: "some-stuff"
    thing2: "some-other-stuff"
  tags:
    - some-tag
    - some-other-tag

Example of what NOT to do:

- name: The name of the tasks
  module_name: thing1="some-stuff" thing2="some-other-stuff"
  tags: some-tag

- name: The name of the tasks
  module_name: >
    thing1="some-stuff"
    thing2="some-other-stuff"
  tags: some-tag

Usage of the ">" and "|" operators should be limited to Ansible conditionals and command modules such as the Ansible shell or command .

5.3 Testing

Code is tested using Travis and SUSE CI.


5.3.1 Bash Linting

Bash coding conventions are tested using shellcheck.

5.3.2 Ansible Linting

Ansible conventions are tested using ansible-lint, with the following exception:

Warning 204 is allowed, which means lines longer than 120 characters are permitted.

5.3.3 Helm chart values linting

No test is implemented yet. Patches are welcome.

5.4 Periodic work

This repository actively freezes the upstream code into vars/manifest.yml . It is necessary to regularly refresh the versions inside this file.

Similarly, we are using submodules, which also need regular version updates.

Updates to the manifest and the submodules are manual operations. There is no code available to bump the versions yet.

5.5 Airship Developer Guide

5.5.1 Testing upstream patches

Carrying your own patches has been described as a user story on the page Section 4.12, “Use Custom Patches”.

5.5.2 Build your own images

Carrying your own images has been described as a user story on the page Section 4.3, “Build and Consume Your Own Images”.


5.5.3 Point to your own images in Airship

This has been described in a user story on the page Section 4.3.3, “Consume Built Images”.


6 Administration and Operations Guide

This chapter contains extra reference information and more details about the socok8s GitHub repository (https://github.com/SUSE-Cloud/socok8s).

For information on how to deploy Containerized SUSE OpenStack Cloud, refer to Section 2.1, “Deployment Guide”.

For information about how to manage and operate Containerized SUSE OpenStack Cloud, refer to Chapter 3, Administration and Operations Guide.

For information on how to contribute to Containerized SUSE OpenStack Cloud, refer to Chapter 5, Developer Documentation.

6.1 Project history

This project started as a way to build and test the OpenStack-Helm charts for SUSE on SUSE products: the Container as a Service Platform (CaaSP) and SUSE Enterprise Storage (SES).

It started as a series of shell scripts and Ansible playbooks, choosing the simplest and fastest way to bring up a test infrastructure for the upstream charts. It was easier to start with shell scripts than writing a CLI in insert favorite language here , mostly because the shell scripts grew organically out of their usage and CI needs.

The mechanism of deployment was flexible from the beginning to allow developers to test their changes independently. It allows them to override specific parts of the deployment, as other users or customers would want to do.

6.2 Project goals

Simplicity

Stability

Use the latest stable products from SUSE

Carry the minimum amount of code to support upstream work on SUSE products

Be packageable and installable offline

Leverage upstream first


6.3 Design considerations

6.3.1 Workspace

In order to avoid polluting the developer/CI machine (called localhost), all the data relevant for a deployment (like any eventual override) is stored in a user-space folder, with unprivileged access.

This also supports the use case of running behind a corporate firewall. The localhost can connect to a bastion host with the deployment actions happening behind the firewall.

6.4 Why...

... Ansible? Using Ansible is more robust than having socok8s running completely on shell scripts. Its ecosystem provides a nice interface to track deployment progress with ARA and to run in a CI/CD system like Zuul or Tower/AWX.

... OpenStack on top of Kubernetes on top of OpenStack by default in run.sh ? SUSE has a cloud for our Engineers, and that cloud is used for CI. From that point, creating a node for testing is as simple as doing an API call, and creating a stack of nodes is as simple as reusing an existing Heat stack.

The run.sh script was mainly used for developers and CI. This is why the run.sh script points to openstack as the default DEPLOYMENT_MECHANISM .

... OpenStack on top of Kubernetes? Robust structure

... Splitting run.sh into so many steps? The current interface of run.sh is flexible enough to work for many different cases. It is semantically close to the actions that deploy OpenStack. run.sh itself is just an interface. Behind the scenes, it runs a DEPLOYMENT_MECHANISM -dependent script, starting the appropriate Ansible playbooks for the step called.

... A shell script for this interface? It was easier to start with a shell script rather than writing a CLI in insert favorite language here , mostly because the shell script grew organically out of actual use and CI needs.

... Installing from sources? Neither the socok8s repo nor the OpenStack-Helm project's repositories have been packaged for Leap/SLE 15 yet.


6.5 Image building process

6.5.1 Upstream process

The OpenStack-Helm project tries to be neutral about the images by providing the ability for deployers to override any image used in the charts.

However, the OpenStack-Helm project has a repository, openstack-helm-images (https://github.com/openstack/openstack-helm-images), containing a reference implementation for the images. That repository holds the images used for the OpenStack-Helm project charts. All its images are built with Docker.

The openstack-helm-images repository provides Dockerfiles directly for all the non-OpenStack images.

For the OpenStack images, openstack-helm-images contains shell scripts, situated in openstack/loci/ . The build.sh script is a thin wrapper around LOCI. LOCI is the official OpenStack project to build Lightweight Open Container Initiative (OCI) compliant images of OpenStack projects. It uses docker build to construct images from OpenStack sources. Their requirements are expressed in bindep files (bindep.txt for rpm/apt packages, pydep.txt for Python3 packages). The build.sh script runs LOCI for the master branch. Other branches can be built using build-BRANCHNAME.sh where BRANCHNAME is the name of the OpenStack release (for example, rocky). See also Section 4.3.2, “Build LOCI images”.

In the future, openstack-helm-images could add images for OpenStack that would be based on packages by simply providing the appropriate Dockerfiles. There is no announced plan to offer such a resource.

Additionally, some images are not built in openstack-helm-images; they are consumed/fetched directly from upstream projects' official Dockerfiles (such as xRally).

6.5.2 socok8s process

socok8s leverages the existing OSH-images code.

When running the build_images step, the localhost asks the deployer to build images based on the code that was checked in on the deployer node using the vars/manifest.yml file.

For the non-LOCI images, the suse-build-images role invoked in the build_images step runs a docker build command.


For the LOCI images, the suse-build-images role runs the command available in openstack-helm-images calling the LOCI build.

6.6 OpenStack-Helm chart overrides

6.6.1 Helm chart values overriding principle

A Helm chart installation (see the Helm chart documentation about customization (https://helm.sh/docs/using_helm/#customizing-the-chart-before-installing)) accepts an argument named --values or -f .

This argument expects the filename of a YAML file to be present on the Helm client machine. It can be specified multiple times, and the rightmost file will take precedence.

In the following example, the values in the socok8s-glance.yaml overrides would win over the existing values in /tmp/glance.yaml :

helm upgrade --install glance ./glance --namespace=openstack \
  --values=/tmp/glance.yaml --values=/tmp/socok8s-glance.yaml

6.6.2 OpenStack-Helm scripts

The OpenStack-Helm project provides shell scripts to deploy the Helm charts, with overrides per context (for example, multinode).

The shell scripts calling the Helm installation include an environment variable to allow users to pass extra arguments.

See this example from the openstack-helm repository (https://github.com/openstack/openstack-helm/blob/c869b4ef4a0e95272155c5d5dd893c72976753cd/tools/deployment/multinode/100-glance.sh#L49).


6.6.3 Customizing OSH charts for SUSE when deploying in OSH-only mode

socok8s uses the previously explained environment variable to pass an extra values file, a SUSE-specific YAML file. All the SUSE-specific files are present in playbooks/roles/deploy-osh/templates/ (for example socok8s-glance.yml ), if they are not part of upstream yet.

6.6.4 How deployers can extend a custom SUSE OSH chart in OSH-only mode

Deployers can pass their own YAML overrides in user space by using extravars to extend Helm chart behavior beyond the SUSE customizations.

These overrides are in playbooks/roles/deploy-osh/defaults/main.yml .
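For example, a deployer could add a sketch like the following to ${WORKDIR}/env/extravars. It reuses the libvirt override variable documented in Section 4.3.3.1, “For OSH (developer mode)”; the nested keys are illustrative and should be checked against the chart's available values (see Section 4.7, “Discover Helm Charts Overrides”):

---
# Illustrative user-space override of the libvirt chart; verify the keys with
# "helm inspect values" before relying on them.
suse_osh_deploy_libvirt_yaml_overrides:
  pod:
    resources:
      enabled: true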

6.7 Summary Deploy on OpenStack diagrams

6.7.1 Simplified network diagram


6.7.2 OSH deploy on OpenStack

6.7.2.1 Set Up Hosts

This is the sequence of steps that generates, in OpenStack, the environment for deploying OSH later.

6.7.2.2 Set Up OpenStack

This is the sequence of steps in your OpenStack-Helm deployment. The solid lines represent Ansible plays and their connections.

The dotted lines represent extra connections happening on the Ansible targets.


6.8 Environment variables

6.8.1 In socok8s

run.sh behavior can be modified with environment variables.

DEPLOYMENT_MECHANISM contains the target destination of the deploy tooling. It is currently set to openstack by default; baremetal and kvm options will be added later.

SOCOK8S_DEVELOPER_MODE determines if you want to enter developer mode or not. This adds a step for patching upstream code, builds images, and then continues the deployment.


SOCOK8S_USE_VIRTUALENV determines if the script should set up and use a virtualenv for Python3 and Ansible requirements. Without this, it is expected that Ansible and the requirements are installed via system packages. When SOCOK8S_DEVELOPER_MODE is set to True, this defaults to True; otherwise it defaults to False.

USE_ARA determines if you want to store records in ARA. Set its value to 'True' to use ARA.
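For example, a developer run on top of OpenStack that exercises these variables could look like the following sketch (the values shown are the documented ones; adjust them to your environment):

export DEPLOYMENT_MECHANISM='openstack'
export SOCOK8S_DEVELOPER_MODE='True'
export SOCOK8S_USE_VIRTUALENV='True'
export USE_ARA='True'
./run.sh setup_airship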

6.8.2 Ansible environment variables

You can use Ansible environment variables to alter Ansible behavior, for example by being more verbose.

6.8.3 OpenStack-Helm environment variables

OpenStack-Helm deployment scripts accept environment variables to alter their behavior. Read each of the scripts to learn more about their override mechanisms.


7 Glossary

ARA

Ansible Run Analysis. The ARA tool provides an interface to the typically large amount of Ansible data.

CaaS Platform

SUSE Container as a Service Platform

CaaSP

SUSE Container as a Service Platform

Deployer

openSUSE Leap 15 host used to deploy CCP

LOCI

The official OpenStack project to build Lightweight Open Container Initiative (OCI) compliant images of OpenStack projects.

SES

SUSE Enterprise Storage

Workspace

A directory (by default) that is structured like an ansible-runner directory. The default workspace directory can be changed by setting the environment variable SOCOK8S_ENVNAME . The workspace directory name is always ${SOCOK8S_ENVNAME}-workspace .


8 CSOC Internal

8.1 SUSE ECP Internal (Experimental)

This guide provides instructions for performing a Containerized SUSE OpenStack Cloud deployment in the SUSE Engineering Cloud environment. It bootstraps a minimal SUSE CaaSP cluster, an SES-AIO, and a deployer VM.

This feature is experimental and is intended for SUSE developers.

8.2 Deploy in SUSE ECP Overview

SUSE Engineering Cloud Platform is an OpenStack environment. An experimental tool is provided for you to bootstrap the Deployer VM, CaaS Platform nodes, and SES on the OpenStack infrastructure before deploying Airship and Containerized OpenStack.

In this scenario, we introduce a new type of host called localhost , which runs shell scripts and Ansible playbooks. This can be your CI node or your development laptop. It can be the same as the Deployer, but that is not a requirement.

The following diagram shows the general workflow of a Containerized SUSE OpenStack Cloud deployment on an OpenStack environment.


8.3 Prepare Localhost


8.3.1 Base software

Install the following software on your localhost :

jq

ipcalc

git

python3-virtualenv

Optionally, localhost can be preinstalled with the following software:

ansible>=2.7.0

python3-openstackclient

python3-requests

python3-jmespath

python3-openstacksdk

python3-netaddr

Containerized SUSE OpenStack Cloud only supports the Python3 variant of packages. Generally, the python command invokes Python version 2. This version will not work with Containerized SUSE OpenStack Cloud.

If the optional software packages are not installed, they will be installed in a venv in /.ansiblevenv .

8.3.2 Cloning this repository

To get started, clone this repository. This repository uses submodules, so you must get all the code to make sure the playbooks work.

git clone --recursive https://github.com/SUSE-Cloud/socok8s.git

Alternatively, one can fetch/update the tree of the submodules by running:

git submodule update --init --recursive


8.3.3 Configure Ansible

8.3.3.1 Use ARA (recommended)

To use ARA, set the following environment variable before running run.sh .

export USE_ARA='True'

To set up ARA more permanently for your user on localhost, create an Ansible configuration file loading ARA plugins:

python3 -m ara.setup.ansible | tee ~/.ansible.cfg

For more details on ARA's web interface, please Read The ARA Docs (https://ara.readthedocs.io/en/stable/webserver.html).

8.3.3.2 Enable pipelining (recommended)

You can improve SSH connections by enabling pipelining:

cat << EOF >> ~/.ansible.cfg
[ssh_connection]
pipelining = True
EOF

8.3.4 Defining a workspace

socok8s can create a Workspace, install things (for example, Ansible in a virtualenv), or create resources (for example, OpenStack Heat stacks if the deployment mechanism is openstack). For all of these operations, an environment variable called SOCOK8S_ENVNAME must be set. This variable must be unique if multiple environments are installed in parallel.

export SOCOK8S_ENVNAME='soc-west'


8.3.5 Set the Deployment Mechanism

The SUSE Containerized OpenStack tooling can work with two different mechanisms:

Bring your own environment

Deploy everything on top of OpenStack (experimental).

This behavior can be changed by setting the environment variable DEPLOYMENT_MECHANISM . Its default value is kvm . To deploy CaaSP, SES, and Containerized OpenStack on top of an OpenStack environment (for CI, for example), run:

export DEPLOYMENT_MECHANISM='openstack'

8.3.5.1 Configure OpenStack Deployment Mechanism (experimental)

Your environment must have an OpenStack client configuration file. For that, create the ~/.config/openstack/clouds.yaml file.

The following is an example if you are running on an engcloud :

clouds:
  engcloud:
    region_name: CustomRegion
    auth:
      auth_url: https://keystone_url/v3
      username: john # your username here
      password: my-super-secret-password # your password here or add it into secure.yaml
      project_name: cloud
      project_domain_name: default
      user_domain_name: ldap_users # this is just an example, adapt to your needs
    identity_api_version: 3
ansible:
  use_hostnames: True
  expand_hostvars: False
  fail_on_errors: True

Now pre-create your environment. The convention here is to use your username as part of the name of objects you create.

Create a keypair on your cloud (referred to below as engcloud) using either the Horizon web interface or the OpenStackClient (OSC) openstack keypair create command for accessing the instances created. Remember the name of this keypair (which appears as soc-west-key in the example below).
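For example, the keypair can be created with the OpenStackClient as follows; the public key path is a placeholder for your own key:

openstack --os-cloud engcloud keypair create --public-key ~/.ssh/id_rsa.pub soc-west-key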


Set these variables for all the following scripts in a deployment:

export SOCOK8S_ENVNAME='soc-west'
# 'engcloud' is the name in the `clouds.yaml`
export OS_CLOUD=engcloud
# Set to the name of the keypair you created
export KEYNAME=soc-west-key
# replace with the actual external network name in your OpenStack environment
export EXTERNAL_NETWORK=floating

Proceed to the next section of the documentation, Section 8.4, “Prepare the Target Hosts”.

8.4 Prepare the Target Hosts

Important
Skip this step if you are bringing your own SES, CaaSP, and deployer environment (recommended).

Apply these commands if you are running on OpenStack and want to construct your environment from scratch.


Warning
You must export the right environment variables for run.sh to work with the openstack deployment mechanism. Verify that they are set appropriately. See Section 8.3.5.1, “Configure OpenStack Deployment Mechanism (experimental)”.

8.4.1 In Separate Steps

Create your SES node. The SES All-In-One (AIO) node has the following requirements:

(v)CPU: 6

Memory: 16GB

Storage: 80GB

When SES is deployed as AIO, two additional 60GB storage disks must be added to the node for OSD.

Configure SES-AIO:

./run.sh deploy_ses

Create the CaaSP cluster nodes in the cloud:

./run.sh deploy_caasp

Create the deployer node:

./run.sh deploy_ccp_deployer

Enroll all the CaaSP nodes into their roles (master, admin, and workers):

./run.sh enroll_caasp_workers

8.4.2 In a Single Step

Alternatively, you can do all of the above in one step:

./run.sh setup_hosts

Do not run both the separate-step and single-step methods.


8.5 Configure the Deployment

All the files for the deployment are in a Workspace , whose default location is on localhost. The default name can be changed via the environment variable SOCOK8S_ENVNAME .

This workspace is structured like an ansible-runner directory. It contains:

an inventory folder

an env folder

This folder must also contain extra files necessary for the deployment, such as the ses_config.yml and the kubeconfig files.

8.5.1 Configure the inventory

If you are bringing your own cluster, create an inventory based on our example located in the examples folder.

---
caasp-admin:
  vars:
    ansible_user: root

caasp-masters:
  vars:
    ansible_user: root
caasp-workers:
  vars:
    ansible_user: root
soc-deployer:
  vars:
    ansible_user: root
ses_nodes:
  vars:
    ansible_user: root
airship-openstack-compute-workers:
  vars:
    ansible_user: root
airship-openstack-control-workers:
  vars:
    ansible_user: root
airship-openstack-l3-agent-workers:
  vars:
    ansible_user: root
airship-ucp-workers:
  vars:
    ansible_user: root
airship-kube-system-workers:
  vars:
    ansible_user: root

This inventory only contains the group names.

For each group, a hosts: key must be added, with each of the hosts you need as its value. For example:

caasp-admin:
  hosts:
    my_user-57997-admin-x6sugiws4g34:
      ansible_host: 10.86.1.144


See also Ansible Inventory Hosts and Groups (https://docs.ansible.com/ansible/2.7/user_guide/intro_inventory.html#hosts-and-groups).

Note
Do not add localhost as a host in your inventory. It is a host reserved by Ansible. If you want to create an inventory node for your local machine, add your machine's hostname inside your inventory, and specify this host variable: ansible_connection: local

8.5.2 Make the SES pools Known by Ansible

Ansible relies on two things to know the SES pools created for the Airship/OpenStack deployment:

a ses_config.yml file present in the workspace

the ceph admin keyring, in base64, present in the file env/extravars of your workspace.

You can find an example ses_config.yml in examples/workdir.
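One possible way to obtain the keyring in base64 is to encode it on the SES node and then place the resulting string in env/extravars under the key shown in the examples/workdir sample. The path below is the standard Ceph location and may differ in your environment:

# Run on the SES node; prints the admin keyring as a single base64 string.
base64 -w0 /etc/ceph/ceph.client.admin.keyring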

8.5.3 Configure the VIP for OpenStack service public endpoints

Add socok8s_ext_vip : with its appropriate value for your environment in your env/extravars . This should be an available IP on the external network (in a development environment, it can be the same as a CaaSP cluster network).

For example:

socok8s_ext_vip: "10.10.10.10"

8.5.4 Configure the VIP for Airship UCP service endpoints

Add socok8s_dcm_vip : with its appropriate value for your environment in your env/extravars . This should be an available IP on the data center management (DCM) network (in a development environment, it can be the same as a CaaSP cluster network).

For example:

socok8s_dcm_vip: "192.168.51.35"


8.5.5 Provide a kubeconfig File

socok8s relies on kubectl and Helm commands to configure your OpenStack deployment. You must provide a kubeconfig file on your localhost in your workspace. You can fetch this file from the Velum UI on your CaaSP cluster.
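For example, assuming the kubeconfig was downloaded from Velum to ~/Downloads and the workspace lives in your home directory (both paths are placeholders; the workspace name follows the ${SOCOK8S_ENVNAME}-workspace convention):

# Copy the kubeconfig obtained from Velum into the workspace.
cp ~/Downloads/kubeconfig ~/${SOCOK8S_ENVNAME}-workspace/kubeconfig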

8.5.6 Advanced configuration

socok8s deployment variables respect Ansible's general variable precedence. All the variables can be adapted.

You can override most user facing variables with host vars and group vars.

socok8s is flexible and allows you to override any upstream Helm chart value with appropriate overrides.

Note
Please read Section 4.1, “Advanced Users” for inspiration on overrides.

8.6 Set Up OpenStack


You can either run the following steps separately or run them all in a single step.

8.6.1 In separate steps

8.6.1.1 Configuring CaaSP

Run the following to configure the CaaSP nodes for OpenStack:

./run.sh setup_caasp_workers_for_openstack

This will update your CaaSP workers to:

Point to your deployer host in /etc/hosts

Copy your registry certificates (if developer mode is enabled)

Create some directories on your workers with read/write mode for OpenStack software

8.6.1.2 Run developer plays

If you are a developer and want to apply upstream patches (but not carry your own fork), you might want to run:

export SOCOK8S_DEVELOPER_MODE='True'
./run.sh patch_upstream

Build your own images by running:

export SOCOK8S_DEVELOPER_MODE='True'
./run.sh build_images

8.6.1.3 Deploy OpenStack

Tip

If you are a Helm chart developer, you can run an OpenStack-Helm deployment on top of CaaSP without Airship:

./run.sh deploy_osh


To deploy OpenStack using Airship, run:

./run.sh deploy

8.6.2 In a Single Step

All of the above steps can be run with a single command. (Do not run both the separate-step and single-step methods.)

8.6.2.1 For Airship deployment

Run the following to deploy Airship:

./run.sh setup_airship

If you want to patch upstream Helm charts or build your own images, run the following:

export SOCOK8S_DEVELOPER_MODE='True'
./run.sh setup_airship

Note
The process might take several minutes to finish. If you want to know what is happening, check out the Operations Guide page on Section 2.7.1, “Track Deployment Progress”.

8.6.2.2 For OpenStack-Helm only (developers)

Run the following to deploy OpenStack-Helm only:

./run.sh setup_openstack

If you want to patch upstream Helm charts and/or build your own images, run the following:

export SOCOK8S_DEVELOPER_MODE='True'
./run.sh setup_openstack


8.6.2.3 Verify the installation

The Section 2.8, “Verify Deployment” page has information for testing your Containerized SUSE OpenStack Cloud installation.

8.6.2.4 Uninstalling Containerized SUSE OpenStack Cloud

See the Section 2.10, “Uninstall” page for instructions.
