the docker ecosystem
TRANSCRIPT
1CONFIDENTIAL
The Docker Ecosystem
DZMITRY SKAREDAU, SOLUTION ARCHITECT
NOVEMBER 5, 2015
AGENDA
• Introduction to Docker
• Docker’s Key Use Cases
• Docker Toolbox
• Docker Machine
• Docker Compose
• Docker Swarm
• Multi-Host Docker Networking
WHAT IS DOCKER?
WHAT IS DOCKER?
Open source engine that leverages LXC and AUFS to package
an application and its dependencies in a virtual container
that can run on any Linux server.
WHAT!?
We are using Windows! (most of us)
WHAT IS DOCKER?
LXC
Wikipedia
https://en.wikipedia.org/wiki/LXC
“
„
Linux Containers (LXC) provide a means to isolate individual services or applications, as well as
complete Linux operating systems, from other services running on the same host. To accomplish
this, each container gets its own directory structure, network devices, IP addresses and process
table. The processes running in other containers or the host system are not visible from inside a
container. Additionally, Linux Containers allow for fine-grained control of resources like RAM, CPU
or disk I/O.
LXC combines the kernel's cgroups and support for isolated namespaces to provide an isolated
environment for applications.
CGROUPS
“ „cgroups (abbreviated from control groups) is a Linux kernel feature that limits, accounts for, and
isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes.
Wikipedia
https://en.wikipedia.org/wiki/Cgroups
NAMESPACE ISOLATION
“ „namespace isolation, where groups of processes are separated such that they cannot "see"
resources in other groups. For example, a PID namespace provides a separate enumeration
of process identifiers within each namespace. Also available are mount, UTS, network and SysV IPC
namespaces.
Wikipedia
https://en.wikipedia.org/wiki/Cgroups
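The PID-namespace idea above can be modeled in a few lines. This is a conceptual Python sketch of per-namespace PID enumeration, not kernel code; all names here are made up for illustration:

```python
import itertools

# Conceptual sketch of PID namespaces (not kernel code): each namespace
# keeps its own enumeration of process IDs, so the same process can be
# pid 1 inside a container while having a different pid on the host.

class PidNamespace:
    def __init__(self):
        self._next = itertools.count(1)  # each namespace counts from 1
        self.local = {}                  # global id -> namespace-local pid

    def attach(self, global_pid):
        self.local[global_pid] = next(self._next)
        return self.local[global_pid]

host = PidNamespace()
container = PidNamespace()

host.attach(100)                 # some pre-existing host process
host.attach(4242)                # the containerized process, seen from the host
inner = container.attach(4242)   # the same process, seen from inside
print(inner)                     # 1: first pid in the new namespace
```

Inside the container the process cannot "see" host PIDs at all; only its own namespace's table exists.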
AUFS
“ „aufs (short for advanced multi layered unification filesystem) implements a union
mount for Linux file systems.
Wikipedia
https://en.wikipedia.org/wiki/Aufs
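The union-mount behaviour can be sketched conceptually. This minimal Python model (not aufs itself) shows top-down lookup across read-only layers with a single writable top layer, which is the copy-on-write behaviour Docker images rely on:

```python
# Conceptual model of a union mount: an ordered stack of read-only
# layers plus one writable top layer. Reads search top-down; writes
# go only to the top layer, leaving the lower layers intact.

class UnionFS:
    def __init__(self, *lower_layers):
        self.layers = [dict(layer) for layer in lower_layers]  # bottom..top
        self.top = {}  # writable layer

    def read(self, path):
        # Search the writable layer first, then lower layers top-down.
        if path in self.top:
            return self.top[path]
        for layer in reversed(self.layers):
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Writes never touch the read-only lower layers.
        self.top[path] = data

rootfs = {"/etc/hostname": "base"}
app_layer = {"/app/run.sh": "echo hi"}
fs = UnionFS(rootfs, app_layer)
fs.write("/etc/hostname", "container-1")
print(fs.read("/etc/hostname"))  # "container-1" from the writable layer
print(fs.read("/app/run.sh"))    # "echo hi" from the app layer
```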
AUFS
A typical Linux system boots with two file systems:
• bootfs (boot file system): contains the bootloader
and the kernel. The bootloader loads the kernel
into memory; once the kernel has booted
successfully, the bootfs is unmounted.
• rootfs (root file system): the standard Linux
directory tree, consisting of /dev, /proc, /bin, /etc
and the other standard directories and files.
AUFS
Thus, across Linux distributions the bootfs is
essentially the same, while the rootfs differs, so
different distributions can share a common bootfs.
Two example rootfs bases:
• Debian is a Unix-like computer operating
system and a Linux distribution.
Size: 136.1 MB
• BusyBox is software that provides several
stripped-down Unix tools in a single executable
file. It was specifically created for embedded
operating systems with very limited resources.
Size: 1.109 MB
AUFS
Two custom images built on these bases:
1. With Apache/Emacs layered over Debian
2. Layered over BusyBox
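Why layering saves disk space can be sketched as follows. Only the Debian and BusyBox base sizes come from the slide; the layer names and the Apache/Emacs layer sizes are made up for illustration:

```python
# Conceptual sketch: images are stacks of layers identified by digest,
# and the daemon stores each layer only once, so images sharing a base
# (e.g. Debian) pay its size a single time.

images = {
    "debian":       [("debian-base", 136.1)],
    "apache-emacs": [("debian-base", 136.1), ("apache", 50.0), ("emacs", 30.0)],
    "busybox-app":  [("busybox-base", 1.109), ("app", 5.0)],
}

def disk_usage(images):
    # Sum each unique layer once, keyed by its digest-like name.
    unique = {digest: size for layers in images.values() for digest, size in layers}
    return sum(unique.values())

naive = sum(size for layers in images.values() for _, size in layers)
shared = disk_usage(images)
print(f"naive total: {naive:.1f} MB, with layer sharing: {shared:.1f} MB")
```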
CONTAINERS VS VMS
DOCKER UNDER THE HOOD
DOCKER UNDER THE HOOD
DOCKER ON LINUX AND MACOS/WINDOWS
DOCKER CONTAINERS IN PRODUCTION
There is currently a pervasive (and faulty)
perception that Docker containers are only
being utilized in dev-test and proof-of-
concept projects. In fact, the question I am
most often asked by IT colleagues and
customers goes like this: “Is anyone using
Docker containers for critical workloads, or
even in production?” The answer is an
unequivocal “Yes” – critical workloads are
being run in Docker containers, and much
more pervasively than is commonly
understood.
Here are a few examples:
• Global financial services corporation ING is
using Docker containers to help accelerate
its continuous delivery process and drive
500 deployments/week, meeting speed to
market goals
• Global investment bank Goldman Sachs uses
Docker containers to centralize application
builds and deployments
• Streaming music leader Spotify uses Docker
containers to make software deployments
repeatable, straightforward, and fault-tolerant
• Application performance management
leader New Relic is using Docker containers
to solve its most challenging deployment
issues
DOCKER’S KEY USE CASES
SIMPLIFYING CONFIGURATION
Cloud Services with built-in Docker support
CODE PIPELINE MANAGEMENT
The immutable nature of Docker images, and the ease
with which they can be spun up, help you achieve zero
change in application runtime environments across dev
through production.
ENV DEV → ENV INT → ENV QA → ENV PRE PROD → ENV PROD
(images promoted through a private Docker Hub)
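The promotion idea can be sketched conceptually (all names and the digest below are hypothetical): the same immutable image artifact moves through every environment, so nothing is rebuilt between dev and production:

```python
# Conceptual sketch (all names hypothetical): one immutable image
# digest is promoted stage by stage; nothing is rebuilt, so the bits
# that passed QA are bit-for-bit the bits that run in production.

envs = ["dev", "int", "qa", "preprod", "prod"]
deployed = {}  # env -> image digest

def promote(digest, pipeline):
    for env in pipeline:
        deployed[env] = digest  # the same artifact at every stage

promote("sha256:abc123", envs)
assert len(set(deployed.values())) == 1  # identical image everywhere
```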
DEVELOPER PRODUCTIVITY
In a developer environment, we have two goals that are at
odds with each other:
1. We want it to be as close as possible to production; and
2. We want the development environment to be as fast as
possible for interactive use.
APP ISOLATION
Two cases to consider are server consolidation
to decrease cost, and a gradual plan to separate a
monolithic application into decoupled pieces.
SERVER CONSOLIDATION
Just like using VMs to consolidate multiple applications,
the application isolation abilities of Docker allow
consolidating multiple servers to save on cost. However,
because it avoids the memory footprint of multiple OSes
and can share unused memory across instances,
Docker provides far denser server consolidation than you
can get with VMs.
MULTI-TENANCY
Using Docker, it was easy and inexpensive to create
isolated environments for running multiple instances of
app tiers for each tenant.
RAPID DEPLOYMENT
Because Docker creates a container for the process rather
than booting up a full OS, startup time drops to seconds.
DOCKER TOOLBOX
DOCKER TOOLBOX
• Docker Machine for running the docker-machine binary
• Docker Engine for running the docker binary
• Kitematic, the Docker GUI
• a shell preconfigured for a Docker command-line
environment
• Oracle VM VirtualBox
DOCKER MACHINE
The Docker VM is a lightweight Linux virtual machine made
specifically to run the Docker daemon on Windows. The
VirtualBox VM runs completely from RAM, is a small ~29 MB
download, and boots in approximately 5 s.

docker-machine create --driver virtualbox my-default
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env my-default

docker-machine --native-ssh create -d virtualbox dev
DOCKER MACHINE
Create a new Docker VM
docker-machine --native-ssh create -d virtualbox dev

List your available machines
docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
dev             virtualbox   Running   tcp://192.168.99.100:2376

Get the environment commands for your new VM
docker-machine env dev --shell cmd
set DOCKER_TLS_VERIFY=1
set DOCKER_HOST=tcp://192.168.99.100:2376
set DOCKER_CERT_PATH=C:\Users\Dzmitry_Skaredau\.docker\machine\machines\dev
set DOCKER_MACHINE_NAME=dev
# Run this command to configure your shell:
# copy and paste the above values into your command prompt
DOCKER MACHINE
List your available machines
docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM
dev    *        virtualbox   Running   tcp://192.168.99.100:2376

Run a container
docker run ^
-d ^
-p 80:80 ^
-v $(pwd)/src/vhost.conf:/etc/nginx/sites-enabled/vhost.conf ^
-v $(pwd)/src:/var/www ^
nginx

pwd: the pwd command tells you which directory you are currently in (pwd stands for "print working directory")
DOCKER MACHINE
Show containers
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ef9b3f99a05f nginx "nginx -g 'daemon off" 13 seconds ago Up 9 seconds 0.0.0.0:80->80/tcp, 443/tcp sad_elion

Find the machine IP
docker-machine ip
192.168.99.100
DOCKERFILE
Dockerfile content
FROM java:8
EXPOSE 8761
VOLUME /tmp
ADD service-discovery-0.1.0.jar app.jar
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

Build new image
docker build -t rnd-saas/service-discovery .

Show images
docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
rnd-saas/service-discovery latest 9f7499191ada 10 seconds ago 722.7 MB
java 8 bdd93cb6443c 4 days ago 641.9 MB
busybox latest c51f86c28340 4 days ago 1.109 MB
nginx latest 914c82c5a678 7 days ago 132.8 MB
ubuntu precise 38f2c35e1b51 13 days ago 136.1 MB
DOCKER COMPOSE
DOCKER COMPOSE
Running multiple containers
• Describe your stack with one file: docker-compose.yml
• Run your stack with one command: docker-compose up
HOW TO RUN WORDPRESS
Dockerfile
FROM orchardup/php5
ADD . /code

docker-compose.yml
web:
  build: .
  command: php -S 0.0.0.0:8000 -t /code
  ports:
    - "8000:8000"
  links:
    - db
  volumes:
    - .:/code
db:
  image: orchardup/mysql
  environment:
    MYSQL_DATABASE: wordpress
DOCKER COMPOSE
The features of Compose that make it effective are:
• Multiple isolated environments on a single host
ISOLATED ENVIRONMENTS
Compose uses a project name to isolate environments from each other.
You can use this project name:
• on a dev host, to create multiple copies of a single environment (e.g. when you want to run a stable copy for each feature branch of a project)
• on a CI server, to keep builds from interfering with each other; you can set the project name to a unique build number
• on a shared host or dev host, to prevent different projects, which may use the same service names, from interfering with each other
DOCKER COMPOSE
The features of Compose that make it effective are:
• Multiple isolated environments on a single host
• Preserve volume data when containers are created
PRESERVE VOLUME DATA
Compose preserves all volumes used by your services. When docker-compose up
runs, if it finds any containers from previous runs, it copies the volumes from
the old container to the new container. This process ensures that any data
you’ve created in volumes isn’t lost.
DOCKER COMPOSE
The features of Compose that make it effective are:
• Multiple isolated environments on a single host
• Preserve volume data when containers are created
• Only recreate containers that have changed
RECREATES ONLY CHANGED CONTAINERS
Compose caches the configuration used to create a container. When you
restart a service that has not changed, Compose re-uses the existing
containers. Re-using containers means that you can make changes to your
environment very quickly.
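Conceptually, this behaviour can be sketched as hashing each service's configuration and recreating a container only when the hash changes. This is an illustration, not Compose's actual code:

```python
import hashlib, json

# Conceptual sketch (not Compose's actual code): hash each service's
# configuration; recreate a container only when the hash differs from
# the one the running container was created with.

def config_hash(service_config):
    # Canonical JSON so dict key ordering doesn't change the hash.
    blob = json.dumps(service_config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

running = {"web": config_hash({"image": "nginx", "ports": ["8000:8000"]})}

def needs_recreate(name, new_config):
    return running.get(name) != config_hash(new_config)

print(needs_recreate("web", {"image": "nginx", "ports": ["8000:8000"]}))  # False
print(needs_recreate("web", {"image": "nginx", "ports": ["9000:9000"]}))  # True
```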
The features of Compose that make it effective are:
• Multiple isolated environments on a single host
• Preserve volume data when containers are created
• Only recreate containers that have changed
• Variables and moving a composition between environments
DOCKER COMPOSE
VARIABLE SUBSTITUTION
Your configuration options can contain environment variables. Compose uses
the variable values from the shell environment in which docker-compose is run.
For example, suppose the shell contains POSTGRES_VERSION=9.3 and you supply
this configuration:
db:
  image: "postgres:${POSTGRES_VERSION}"
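Python's string.Template uses the same ${...} syntax, so the substitution step can be sketched like this (a conceptual illustration, not Compose's implementation):

```python
import os
from string import Template

# Conceptual sketch: Compose resolves ${VAR} references from the shell
# environment before creating containers. string.Template uses the
# same ${...} syntax, so it serves as a stand-in here.

os.environ["POSTGRES_VERSION"] = "9.3"
image = Template("postgres:${POSTGRES_VERSION}").substitute(os.environ)
print(image)  # postgres:9.3
```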
MOVING A COMPOSITION BETWEEN ENVIRONMENTS
A common use case for multiple Compose files is adapting a Compose app to different environments:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
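Conceptually, later -f files are merged over earlier ones. A rough sketch of such a merge (not Compose's actual rules, which also have per-key list semantics; the service content below is made up):

```python
# Conceptual sketch: later -f files override scalar values and extend
# nested mappings from earlier ones.

def merge(base, override):
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)  # recurse into mappings
        else:
            result[key] = value  # scalars and lists: the later file wins
    return result

base = {"web": {"image": "myapp", "ports": ["8000:8000"],
                "environment": {"DEBUG": "1"}}}
prod = {"web": {"environment": {"DEBUG": "0"}, "restart": "always"}}
print(merge(base, prod))
```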
DOCKER SWARM
DOCKER SWARM
Docker Swarm is used to host and schedule a cluster of Docker containers.
SETUP
Since Swarm ships as a standard Docker image with no
external infrastructure dependencies, getting started is a
simple, three-step process:
1. Run one command to create a cluster.
2. Run another command to start Swarm.
3. On each host where the Docker Engine is running, run a command to join said cluster.
RESOURCE MANAGEMENT
Swarm is aware of the resources available in the cluster and will place
containers accordingly:
docker run -d -m 1g redis
To choose a ranking strategy, pass the --strategy flag and a strategy value
to the swarm manage command. Swarm currently supports these values:
• spread
• binpack
• random
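The three strategies can be sketched conceptually (this is not Swarm's code, and real Swarm also weighs CPU and container count): rank the nodes that can satisfy the reservation and pick one according to the strategy:

```python
import random

# Conceptual sketch of the ranking strategies: spread picks the node
# with the most free memory, binpack the fullest node that still fits,
# and random picks any node that fits.

nodes = {"node-1": 4096, "node-2": 1024, "node-3": 2048}  # free memory, MB

def schedule(nodes, mem_needed, strategy):
    fits = {n: free for n, free in nodes.items() if free >= mem_needed}
    if not fits:
        raise RuntimeError("no node can satisfy the reservation")
    if strategy == "spread":
        return max(fits, key=fits.get)   # most free memory
    if strategy == "binpack":
        return min(fits, key=fits.get)   # least free memory that still fits
    return random.choice(list(fits))     # "random"

print(schedule(nodes, 1024, "spread"))   # node-1
print(schedule(nodes, 1024, "binpack"))  # node-2
```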
CONSTRAINTS
In order to meet the specific requirements of each container, their
placement can be fine-tuned using constraints:
docker run -d -e constraint:storage==ssd mysql
Constraints operate on Docker daemon labels. To make the previous
example work, Docker must be started with the --label storage=ssd option.
More advanced expressions are also supported:
docker run --rm -d -e constraint:node!=fed*
docker run --rm -d -e constraint:node==/ubuntu-[0-9]+/
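Conceptually, a constraint expression is a key, an operator, and a pattern matched as a glob, or as a regex when wrapped in slashes. A rough Python sketch of that matching (not Swarm's parser):

```python
import fnmatch, re

# Conceptual sketch of constraint matching: "==" requires the node's
# label value to match the pattern, "!=" forbids a match; patterns are
# globs by default (fed*), or regexes when wrapped in slashes.

def matches(expr, labels):
    key, op, pattern = re.match(r"(\w+)(==|!=)(.+)", expr).groups()
    value = labels.get(key, "")
    if pattern.startswith("/") and pattern.endswith("/"):
        hit = re.fullmatch(pattern[1:-1], value) is not None  # regex form
    else:
        hit = fnmatch.fnmatch(value, pattern)  # glob form, e.g. fed*
    return hit if op == "==" else not hit

node = {"node": "ubuntu-14", "storage": "ssd"}
print(matches("storage==ssd", node))           # True
print(matches("node!=fed*", node))             # True
print(matches("node==/ubuntu-[0-9]+/", node))  # True
```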
AFFINITY
In some cases, the placement of a container must be relative to other containers. Swarm lets you define
those relationships through affinities.
The following will run two Redis servers, while guaranteeing they don’t get scheduled on the same
machine:
docker run -d --name redis_1 -e 'affinity:container!=redis_*' redis
docker run -d --name redis_2 -e 'affinity:container!=redis_*' redis
FAULT-TOLERANT SCHEDULING
At some point, Swarm will be able to reschedule containers upon host failure.
Let’s say you schedule a frontend container with some constraints:
docker run -d -e constraint:storage==ssd nginx
If the host of this container goes down, Swarm will be able to detect the outage
and reschedule the container on another host that satisfies the constraint storage==ssd.
MULTI-HOST DOCKER NETWORKING
NETWORKING
Networking is a feature of Docker Engine that allows you to
create virtual networks and attach containers to them so you
can create the network topology that is right for your application.
NETWORKING
1. Connect containers to each other across different physical or virtual hosts
2. Containers using Networking can be easily stopped, started and restarted
without disrupting the connections to other containers
3. You don’t need to create a container before you can link to it. With
Networking, containers can be created in any order and discover each other using
their container names
NETWORKING
You can create a new network with docker network create. In this example,
we’ll create a network called “frontend” and run an nginx container inside it:
docker network create frontend
docker run -d --net=frontend --name web nginx
Then we could run a web application in a network called “app” and use the
docker network connect command so our Nginx container can forward
connections to it:
docker network create app
docker run -d --name myapp --net=app <my application container>
docker network connect app web
Now Nginx should be able to connect to your application using the hostname “myapp.app”.