TRANSCRIPT
Containers through the ages
Christoph Glaubitz, Cloud Architect at SysEleven GmbH
ContainerDays Hamburg, 28.06.2016
WHAT CAN YOU EXPECT FROM THIS TALK?
A little bit of history on containers
How we used containers in the past
What we did to manage our infrastructure
chroot isolates runtime environments, but it does not isolate process space and devices, and does nothing about resource control.
In 2000, FreeBSD jails were introduced. chroot was extended by further isolation mechanisms, like process namespaces. Processes in one jail can only see processes in the same jail.
My first contact with containers was around 2007
https://en.wikipedia.org/wiki/Star_Trek:_First_Contact#/media/File:Star_Trek_08-poster.png
When I worked in the Tivoli Storage Manager team at IBM, we had to support AIX WPARs and Solaris Zones.
The ideas are still the same, even in the age of application containers.
Using the same kernel. Isolating environments as strongly as possible by using the mentioned namespaces and cgroups.
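On Linux, that sharing of one kernel is directly visible: the kernel exposes each process's namespace memberships under /proc/&lt;pid&gt;/ns. A small sketch (Linux-only; on other systems it just returns an empty result):

```python
import os

def process_namespaces(pid="self"):
    """Return the namespace IDs of a process as the Linux kernel
    exposes them under /proc/<pid>/ns (empty dict on non-Linux)."""
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):
        return {}
    return {
        name: os.readlink(os.path.join(ns_dir, name))
        for name in sorted(os.listdir(ns_dir))
    }

# Processes in the same container share these IDs; a containerized
# process shows e.g. a different 'pid:[...]' value than the host.
print(process_namespaces())
```

Two processes are "in the same container" for a given resource exactly when the corresponding ID matches; cgroups then add the resource-control half that chroot never had.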
I think the key is the easy way to create and deploy images.
https://www.flickr.com/photos/smemon/
Unfortunately it is also very easy to build crappy images!!
https://www.flickr.com/photos/57402879@N00/
In my experience, deployments are more likely to fail because of misconfigured next hops (like the DB) than because of crappy software.
I strongly encourage you to read up on this.
https://charity.wtf/2016/05/31/wtf-is-operations-serverless/
https://charity.wtf/2016/05/31/operational-best-practices-serverless/
"Microservices: because solving business problems is hard but building loosely coupled fault-tolerant distributed systems is easy."
https://twitter.com/neil_conway/status/743086761493008384
… but most important: provide, maintain, and monitor the infrastructure into which our customers can deploy their code.
In the past, Virtuozzo was basically:
an image format
local storage
network isolation
a Linux-based container runtime (but maintained outside of mainline)
… treating containers like usual virtual machines:
with a complete Linux running inside
starting with the init system
sure, except for the kernel
The target distribution has to be supported by Virtuozzo to set up the network, hostname, etc. This is extensible by shell scripts.
But with a huge performance benefit over VMs
https://www.flickr.com/photos/17612257@N00/
From the early days, we have decent trending and alerting for all the services running in the containers.
We even built a web frontend, which gathered data from multiple Nagios instances to give us a single overview.
We run a pool of hardware nodes…
… on which we schedule the containers of all customers.
https://www.flickr.com/photos/prinsotel/
We have a look into the sheet and select a host that does not run the same kind of instance for this customer …
… with a recommender system to return the best hardware for the requested container.
https://www.flickr.com/photos/jm3/
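That "sheet" logic boils down to a tiny anti-affinity recommender: exclude hosts already running the same kind of instance for this customer, then prefer the one with the most spare capacity. A toy sketch (the host names and the free-space metric are invented for illustration):

```python
def recommend_host(hosts, customer, kind):
    """Pick the host with the most free capacity that does not already
    run an instance of this (customer, kind) pair -- simple anti-affinity."""
    candidates = [h for h in hosts
                  if (customer, kind) not in h["instances"]]
    if not candidates:
        raise RuntimeError("no host satisfies the anti-affinity rule")
    return max(candidates, key=lambda h: h["free_gb"])

hosts = [
    {"name": "hw01", "free_gb": 120, "instances": {("acme", "app")}},
    {"name": "hw02", "free_gb": 80,  "instances": set()},
    {"name": "hw03", "free_gb": 200, "instances": {("acme", "db")}},
]

# hw01 is excluded (it already runs an acme app container); of the
# remaining hosts, hw03 has the most free space.
print(recommend_host(hosts, "acme", "app")["name"])  # → hw03
```

Schedulers like Kubernetes express the same idea as (anti-)affinity rules; here it was a spreadsheet plus a human.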
Hostnames are partly calculated. customer.project is pre-filled, but the name app2 has to be selected by hand.
Everything works via phone and ticketing system.
https://www.flickr.com/photos/nekudo/
… and we added a Puppet master to our infrastructure and registered all the containers to it. Based on this, we automated a lot, like provisioning basics to the containers and registering services to the monitoring.
We wrote a lot of glue code
https://www.flickr.com/photos/samcatchesides/
This also triggers Puppet to install the required software in the container, set up the configuration, and register the services to our monitoring.
We hand over exactly this container to the customer.
https://www.flickr.com/photos/lindzgraham/
Customers deploy their software to the admin server. From there it will be rsynced to the app server.
https://www.flickr.com/photos/auxesis/
Same thing for dev or test setups.
https://www.flickr.com/photos/christianjann/
Often a new container is just a clone of an old one!
https://www.flickr.com/photos/arenamontanus/
There is
in it.
https://www.flickr.com/photos/trevorandmarjee/
So we keep the containers alive, no matter what happens. In the worst case, we clone one back from the nightly backup.
https://www.flickr.com/photos/vagawi/
Updates have to go to all containers…
https://www.flickr.com/photos/bovinity/
… rather than building one new image and replacing the old containers.
https://www.flickr.com/photos/nicowa/
Rollbacks to a defined set of installed software and deployed software are nearly impossible.
https://www.flickr.com/photos/thejesse/
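With immutable images the rollback problem largely disappears: a release is just a tag, and rolling back means starting the previous tag again. A toy sketch of that idea (no real container runtime involved; names are invented):

```python
class Service:
    """Tracks which image tag a service runs; deploy and rollback are
    just pointer moves through the release history."""
    def __init__(self, name, tag):
        self.name = name
        self.history = [tag]   # every tag ever deployed, in order

    @property
    def running(self):
        return self.history[-1]

    def deploy(self, tag):
        # In reality: pull the image, start new containers, retire old ones.
        self.history.append(tag)

    def rollback(self):
        if len(self.history) < 2:
            raise RuntimeError("nothing to roll back to")
        self.history.append(self.history[-2])

svc = Service("shop", "shop:1.0")
svc.deploy("shop:1.1")
svc.rollback()
print(svc.running)  # back on shop:1.0
```

Because the image bundles the installed software and the deployed code together, "a defined set of installed and deployed software" is exactly what the old tag is.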
And we did not want to build another proprietary platform again!
https://www.flickr.com/photos/simuh/
The core concept of running the managed setups is prettymuch the same as before.
https://www.flickr.com/photos/yoroy/
But with lessons learned from the old platform!
https://www.flickr.com/photos/pictoquotes/
We enable customers to do the relevant tasks on their own and get the full power out of the cloud.
Sure, the containers run in VMs. But IMHO there is no container vs. VM; VMs are just one possible infrastructure for containers.
Maybe in a far
https://www.flickr.com/photos/79909830@N04/
But there are many things to think about!
https://www.flickr.com/photos/dharmabum1964/
SOME RESOURCES
https://charity.wtf/2016/05/31/wtf-is-operations-serverless/
https://charity.wtf/2016/05/31/operational-best-practices-serverless/
https://twitter.com/neil_conway/status/743086761493008384
http://kubernetes.io/
https://docs.docker.com/swarm/
https://github.com/chrigl/heat-examples
https://chrigl.de/slides/sysconf15-docker/#/ecosystem
https://chrigl.de/slides/sysconf15-docker/#/resources
THANKS! QUESTIONS?
Contact me: [email protected]
Get Awesome Hosting: SysEleven.de