introducing & playing with docker | manel martinez | 1st docker crete meetup
TRANSCRIPT
Introducing and playing with Docker
Manel Martinez Gonzalez (@manel_martinezg)
About me
● I am Catalan (from Barcelona)
● 10+ years of experience as a Systems Engineer
● I focus my career on helping companies with:
○ Stabilizing production environments
○ Building Agile-friendly development environments (Continuous Integration and Continuous Delivery)
○ Automating everything
● Discovered Docker in 2014
○ It has been my working tool ever since
About Docker
● A recent product that makes it easy to work with Linux Containers
● Mainly composed of:
○ The Docker daemon, which controls containers
○ Docker images, where we pack (build) our services
○ Docker containers, where we run our services
○ The Docker CLI, which helps us interact with the Docker daemon
About Docker
● Main Docker features:
○ Dockerfile => image descriptor
○ Docker Hub => public & private image repositories
○ Linking => interconnecting containers
○ Labeling => container/image metadata
○ Volumes => persist & share container data
○ Docker Machine => automated server provisioning
○ Docker Compose => automated multi-container app deployment
○ Docker Swarm => clustering solution
● Install the latest daemon & CLI version (Debian- and Red Hat-like distros):
wget -qO- https://get.docker.com/ | sh
Using Docker
● Useful docker commands
docker run => Create a new container
docker stop => Stop a running container
docker start => Start an existing container
docker restart => Restart an existing container
docker ps => List existing containers
docker inspect => Get low-level container/image information
docker rm => Delete an existing container
docker exec => Run a command in a running container
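A typical container lifecycle using these commands might look like the following sketch (the `web` container name and `nginx` image are just illustrative choices):

```shell
# Create and start a detached container from the official nginx image
docker run -d --name web nginx

# List containers and inspect the new one
docker ps
docker inspect web

# Run a one-off command inside the running container
docker exec web nginx -v

# Stop, restart, and finally delete the container
docker stop web
docker start web
docker rm -f web
```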
Using Docker
● Using existing images - the ‘Hello world’ container
docker run ubuntu:14.04 /bin/echo 'Hello world'
● Dockerizing an application (creating an image)
○ Write the Dockerfile and add required image sources
○ Build the image: docker build -t repo/image-name:tag .
○ Push the image: docker push repo/image-name:tag
○ Run the containers: docker run [options] repo/image-name:tag
Dockerfile
● Environment variables
● Volumes
● Ports
● Commands
● And more
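A minimal Dockerfile touching each of these features might look like this sketch (the base image, paths, port, and start script are illustrative):

```dockerfile
# Base image
FROM ubuntu:14.04

# Environment variables
ENV APP_HOME /opt/app

# Add application sources
COPY . $APP_HOME
WORKDIR $APP_HOME

# Volumes: mark a directory whose data should persist
VOLUME ["/opt/app/data"]

# Ports: document which port the service listens on
EXPOSE 8080

# Commands: what runs when the container starts
CMD ["./start.sh"]
```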
Docker Hub
● Public & private Docker image repositories
● Docker images are based on a Dockerfile and source files
● Dockerfile is used to describe the image
● Once the image is built, we push it to a Docker Hub repository
● Then it is ready to be deployed on any environment
Docker Hub
Linking
● Linking is useful to interconnect containers
docker run [options] --link some-mysql:mysql ...
● The link name resolves to the linked container's IP (added to the /etc/hosts file)
172.10.0.7 mysql
● Environment variables and ports of the linked container are exposed as new environment variables
MYSQL_ENV_MYSQL_ROOT_PASSWORD=abc123
MYSQL_PORT_3306_TCP_PORT=3307
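Putting these pieces together, a linked pair of containers can be sketched like this (the container names and password are illustrative):

```shell
# Start a MySQL container named some-mysql
docker run -d --name some-mysql -e MYSQL_ROOT_PASSWORD=abc123 mysql

# Start a second container linked to it; inside, the hostname "mysql"
# resolves to some-mysql's IP, and MYSQL_ENV_* / MYSQL_PORT_* variables
# are injected into the environment
docker run --rm --link some-mysql:mysql ubuntu:14.04 env
```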
Labeling
● Labeling is useful to set metadata on containers / images
docker run [options] --label key="value" ...
● Used to extend Docker functionalities
○ Setting variables that won’t be exposed when linking○ Setting custom information○ Identifying / grouping containers
docker ps --filter "label=key=value"
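For example, labels can tag containers by environment so that a subset can be listed later (the label key and values are illustrative):

```shell
# Start two containers tagged with an environment label
docker run -d --label com.example.env=prod nginx
docker run -d --label com.example.env=staging nginx

# List only the production containers
docker ps --filter "label=com.example.env=prod"
```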
Volumes
● Containers are volatile; all their data is deleted when you delete the container
● Volumes help us to persist data
docker run [options] --volume=/path/on/server:/path/on/container ...
● Or to share data across containers
docker run [options] --volumes-from [some container] ...
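Both variants can be sketched as follows (paths, container names, and password are illustrative):

```shell
# Persist MySQL data on the host, surviving container deletion
docker run -d --name db -e MYSQL_ROOT_PASSWORD=abc123 \
  -v /srv/mysql:/var/lib/mysql mysql

# Data-only container pattern: one container owns a volume...
docker run --name appdata -v /shared ubuntu:14.04 true

# ...and other containers mount it with --volumes-from
docker run --rm --volumes-from appdata ubuntu:14.04 ls /shared
```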
Docker Machine
● Binary used to provision remote Docker hosts and set up communication with your local Docker CLI
curl -L https://github.com/docker/machine/releases/download/v0.4.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine
● Can manage multiple servers on different cloud providers (or your own infrastructure): Amazon EC2, DigitalOcean, Azure, Google Compute Engine, etc
docker-machine create -d digitalocean --digitalocean-access-token=XXXX server1
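Once the host is provisioned, the local Docker CLI can be pointed at it; a sketch, assuming docker-machine v0.4-era commands (the access token is a placeholder):

```shell
# Provision a new Docker host on DigitalOcean
docker-machine create -d digitalocean --digitalocean-access-token=XXXX server1

# Point the local Docker CLI at the daemon on server1
eval "$(docker-machine env server1)"

# This now lists containers running on server1
docker ps
```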
Docker Compose
● Binary used to manage multiple containers and volumes
curl -L https://github.com/docker/compose/releases/download/1.4.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
● Based on a compose YAML definition file
$ docker-compose up -d
Creating test_db_1…
Creating test_wordpress_1…
wordpress:
  image: wordpress
  links:
    - db:mysql
  ports:
    - 8080:80
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example
Docker Swarm
Docker Swarm
● A Docker client (the swarm manager) that sends requests to the appropriate Docker daemons (swarm agents) running in a cluster
● Useful to manage all Docker hosts and containers from a single Docker CLI
● Used with Machine and Compose to automatically provision hosts and containers
● Create a swarm manager using Machine:
docker-machine create -d digitalocean --swarm --swarm-master --swarm-discovery token://d78a10ca90563d464e19e2246404d22b swarm-master
● Create a swarm agent using Machine:
docker-machine create -d digitalocean --swarm --swarm-discovery token://d78a10ca90563d464e19e2246404d22b swarm-agent-00
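With both machines up, the local CLI can target the whole cluster through the manager; a sketch, assuming the discovery token was generated beforehand with `swarm create`:

```shell
# Generate a discovery token for a new cluster (run once)
docker run --rm swarm create

# Point the local CLI at the swarm manager rather than a single daemon
eval "$(docker-machine env --swarm swarm-master)"

# Cluster-wide views: all nodes and their resources, all containers
docker info
docker ps
```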
Docker Swarm
Scheduling rules decide on which hosts to deploy containers, based on filters. For example:
● Constraint filter: deploy containers on hosts that meet a condition
docker run [options] -e constraint:storage==disk …
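Constraints match against labels set on the Docker daemon itself, so each host's daemon has to be started with the corresponding label; a sketch (the `storage=disk` label is illustrative):

```shell
# On the host: start the daemon with a label describing its storage type
docker daemon --label storage=disk

# Against the swarm manager: this container is only scheduled
# on hosts whose daemon carries storage=disk
docker run -d -e constraint:storage==disk mysql
```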
● Container affinity filter: deploy containers next to other containers
docker run -d -p 80:80 --name frontend nginx
docker run -d -e affinity:container==frontend logger
Docker Swarm
● Label affinity filter: deploy containers next to other containers using labels
docker run -d -p 80:80 --label com.example.type=frontend nginx
docker run -d -e affinity:com.example.type==frontend logger
● Port filter: deploy containers on hosts where exposed ports are available
docker run -d -p 80:80 nginx => deployed on host1
docker run -d -p 80:80 nginx => deployed on host2
docker run -d -p 80:80 nginx => deployed on host3
● Dependency filter: deploy inter-dependent containers together (links, volumes)
Docker Swarm
● Strategies rank nodes based on their available CPU and RAM
● One of these strategies can be set:
○ Spread: deploy containers on nodes that have the fewest containers
■ Best container distribution - if a node fails, fewer resources are impacted
○ Binpack: deploy containers on nodes that have the most containers
■ Less fragmentation - fewer nodes are required
○ Random: choose a node randomly
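The strategy is chosen when the swarm manager is created; with Machine this can be sketched as follows (assuming the `--swarm-strategy` flag is available in your docker-machine version, and reusing the discovery token from the earlier example):

```shell
# Create a swarm master that packs containers tightly (binpack)
# instead of the default spread strategy
docker-machine create -d digitalocean --swarm --swarm-master \
  --swarm-strategy binpack \
  --swarm-discovery token://d78a10ca90563d464e19e2246404d22b swarm-master
```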
Demo time