docker meetup talk - chicago march 2014
DESCRIPTION
Slides from Chicago Docker Meetup #2 at Mediafly
TRANSCRIPT
Run your Business in the Cloud.
copyright 2014
Oh, hello
During Business Hours++
Ryan Koop @ryankoop Director of Product & Marketing, Co-founder
Ryan is responsible for product development and manages teams for public relations, international events, and content marketing. His role spans the technical product development, customer support, business development and thought leadership needs of a growing company. Before CohesiveFT, Ryan worked at a trading platform software company in the US derivatives markets.
After Hours: Ryan Koop's Chicago District Golf Association handicap card - Royal Fox CC (Men), USGA handicap index 18.9, 12 scores posted, effective 10/15/2013, 2013 Gold Member - www.cdga.org
About Us
• Cohesive Flexible Technologies Corp. (CohesiveFT)
• Founded by IT and capital markets professionals with years of experience in operations, enterprise software and client-facing services
• VNS3 product launched in 2008 with multiple product revisions each year
• Customers have secured 150M+ virtual device hours in public, private, & hybrid clouds with our solution
• Offices in Chicago, London, Belo Horizonte and Palo Alto
• Deliver hybrid IaaS cloud use cases such as Cloud VPN, Cloud WAN and Cloud Partner Networks
• Provide VNS3, a network routing and security virtual machine delivered as part of the application deployment in virtualized infrastructures.
• Extend existing enterprise networks and applications to public, private and hybrid clouds.
• Federate physical, virtual and cloud infrastructure on a common network platform, interface and API
• Enable enterprises to run business operations in the cloud helping extend both customer facing systems and internal operational platforms
Who We Are
What We Do
Public Cloud Solution Partner
Our Customers Run Their Businesses in the Cloud
700+ customers in 20+ countries • 200+ Self Service Customers • 15+ SI Resellers • 5+ ISV OEM
Including Industry Leaders • Global Mutual Fund Company • Global ERP provider • Global BPMS provider • Global Cloud-based Threat Detection • Global Fashion Brand • Global Toy Manufacturer • US National Sports Association
References Available Upon Request
VNS3 Allows Cloud Production Use Cases
Hybrid Cloud Cloud AD Cloud Migration Cloud WAN Partner/Customer Network
App Modernization Capacity Expansion Cloud DR Cloud Federation
Can my cloud-based systems be made HIPAA / PCI compliant?
Can I attest to the security of my data?
Can I continue to use my current NOC and monitoring tools?
How do I connect and secure my cloud servers?
Can I have high availability and still benefit from cloud pricing?
How can I avoid vendor lock-in?
Enterprises Want to Know…
Everywhere these cloud applications go, they need connectivity, integration and security.
This creates the market for application network services (Layers 3-7) for applications deployed to public cloud.
Connectivity Integration Security
VNS3 Virtualizes 6 Key Network Functions
• Allows control, mobility & agility by separating network location from network identity
• Control over end-to-end encryption, IP addressing and network topology
Router Switch Firewall
VPN Concentrator IPsec/SSL
Protocol Redistributor Scriptable SDN
VNS3 allows customers to extend their network to any cloud.
Interoperability is Key to Cloud Leverage
Docker and CohesiveFT
Docker Containers Run Inside the Network Device
Router Switch Firewall Protocol Redistributor
VPN Concentrator
Scriptable SDN
VNS3 Core Components
Proxy Reverse Proxy Content Caching Load Balancer IDS Custom Container
✓ Deployed as part of customer’s cloud-based application.
✓ Patented system for network control in the cloud.
✓ Platform for customer and partner cloud network innovation.
Docker is an open source project, released in March 2013, that automates the deployment of applications in Linux containers (originally LXC, now libcontainer). It is an engine that allows users to encapsulate any application or set of applications as a lightweight, portable, self-sufficient container. Increasingly, Docker is becoming an application delivery solution.
Docker offers a different granularity of virtualization that allows for greater isolation between applications.
Docker Overview
[Diagram: VM-based stack - Server Hardware -> Cloud Provider OS/Hypervisor -> multiple VMs, each with its own Guest OS, bins/libs and AppStack, alongside a VNS3 VM]
[Diagram: Container-based stack - Server Hardware -> Cloud Provider OS/Hypervisor -> a single VM running VNS3 and Docker, with containers sharing bins/libs and each carrying an AppStack]
Docker Version 0.9 - LXC vs libcontainer
March 10, 2014 - Docker version 0.9 replaces LXC with (docker.io) libcontainer as the default execution environment.
Version 0.9 Implications
1. libcontainer does the same thing as LXC - it is still an interface to the underlying kernel-based container system
2. Docker controls libcontainer; it did not control LXC
3. Added support for OpenVZ, systemd-nspawn, libvirt-lxc, libvirt-sandbox, qemu/kvm, BSD Jails, Solaris Zones, and chroot
4. Docker out of the box can now manipulate namespaces, control groups, capabilities, AppArmor profiles, network interfaces and firewall rules (all from within the Docker container)
5. Backward compatible with the previous LXC driver
6. libcontainer supports container systems on kernels other than the Linux kernel - FreeBSD, NetBSD, OpenBSD, Solaris, OpenSolaris and Illumos, plus support for OpenVZ and qemu/kvm
7. Not decoupled from the kernel (that would be virtualization), but the added support opens the door for Windows and OS X Docker support
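For anyone who wants the old behaviour back, 0.9 keeps an escape hatch: the daemon can be told which execution driver to use. A configuration sketch, assuming the daemon flag as introduced in 0.9:

```shell
# Default in 0.9: the native (libcontainer) execution driver
docker -d --exec-driver=native

# Fall back to the previous LXC driver
docker -d --exec-driver=lxc
```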
Docker Container Network and VNS3
VNS3 is a network appliance that runs in public clouds. That means there are multiple network interfaces. VNS3 bridges a separate (and customizable) Docker network subnet to the VNS3 Manager instance’s primary network interface (usually eth0). Default docker subnet is 172.17.0.0/16. We allow our users to change the default docker address block to any private IP subnet from a /24 (254 addresses) to a /30 (4 addresses).
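As a back-of-the-envelope check on those subnet sizes, Python's standard `ipaddress` module can count the addresses for each prefix (`container_capacity` is a hypothetical helper for illustration, not part of VNS3):

```python
import ipaddress

def container_capacity(cidr):
    """Return (total addresses, assignable host addresses) for a subnet.

    hosts() excludes the network and broadcast addresses, matching the
    slide's figures: a /24 gives 254 usable hosts, a /30 gives 4 total
    addresses (2 usable).
    """
    net = ipaddress.ip_network(cidr)
    hosts = sum(1 for _ in net.hosts())
    return net.num_addresses, hosts

for prefix in (24, 28, 30):
    total, hosts = container_capacity(f"172.17.0.0/{prefix}")
    print(f"/{prefix}: {total} addresses, {hosts} usable hosts")
```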
[Diagram: VNS3 Manager network stack on the cloud provider's hypervisor - Physical NIC (54.10.201.73) -> VM virtual interface eth0 (10.1.106.89), which VNS3 bridges/maps to the Docker interface docker0 (172.17.0.1) serving the 172.17.0.0/24 container subnet]
Challenges for an ISV Offering Docker
Layer 4-7 network services that customers want to add to the Manager instance need to be an intimate part of the VNS3 mesh network without disrupting our transport device.
VNS3 is shipped to all customers as an appliance. VNS3 controls are surfaced to the customer via our UI and API. We do not allow command-line access due to concerns over IP and tampering.
"Privileged" mode allows you to run some containers with (almost) all the capabilities of their host machine with regard to kernel features and device access.
Instance Size - Docker Memory Profiling
Containers based on a Node-RED image available in the Docker Index (source: CFT CTO Chris Swan's blog):
138204 KB used - no container
179816 KB used - first container added (+41M)
203252 KB used - second container added (+23M)
226276 KB used - third container added (+22M)
There is an expected initial overhead of running Docker; after that it is fairly lightweight. Isolation by containers is cheaper than isolation by virtual machines. Our customers need to make an economic trade-off between running a larger VNS3 instance size or multiple VMs. We are seeing this is not an all-or-nothing decision: some VMs will move to Docker and others will remain as is.
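The per-container cost in those readings can be recovered mechanically; a small Python sketch using the slide's numbers (assumed to be KB figures as reported by `free`):

```python
# Memory readings (KB) from the slide, in the order containers were added:
readings = [138204, 179816, 203252, 226276]

# The delta between successive readings is the cost of each added container.
deltas = [b - a for a, b in zip(readings, readings[1:])]
for i, d in enumerate(deltas, start=1):
    print(f"container {i}: +{d} KB (~{round(d / 1024)} MB)")
```

Rounding the deltas to MB reproduces the slide's +41M, +23M, +22M annotations.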
Future Plans with Docker
We might follow Deis' lead, to a point.
- Deis, the open source PaaS, is available as a series of Docker images.
- Each component of Deis is delivered as a separate image; the images then connect with one another to provide the PaaS system.
- Each individual component can be swapped or upgraded independently as needed for easier deployment and management.
Each process running in VNS3 is put in a Container -> The loosely coupled Application Appliance.
Demo
Demo Topology - Before Docker
• VNS3 Manager - Public IP: 107.22.16.203, Overlay IP: 172.31.1.5 (US East)
• VNS3 Overlay Network: 172.31.0.0/22
• Primary DB - Overlay IP: 172.31.1.1
• Wordpress & Web Server
• Nginx - Overlay IP: 172.31.1.9
• Active IPsec tunnel to Cisco ASA firewall/IPsec - Public IP: 50.16.146.76
• CohesiveFT Office NOC, Chicago - LAN IP: 192.168.5.1, LAN: 192.168.5.0/24
Demo Topology - With Docker
• VNS3 Manager - Public IP: 107.22.16.203, Overlay IP: 172.31.1.5 (US East)
• VNS3 Overlay Network: 172.31.0.0/22
• Primary DB - Overlay IP: 172.31.1.1
• Wordpress & Web Server
• Active IPsec tunnel to Cisco ASA firewall/IPsec - Public IP: 50.16.146.76
• CohesiveFT Office NOC, Chicago - LAN IP: 192.168.5.1, LAN: 192.168.5.0/24
Container Network Setup
To start using Docker you must first set up a Docker subnet where your containers will run. The default VNS3 Docker subnet is 172.0.10.0/28. VNS3 allows you to choose a custom address block. Make sure it will not overlap with the Overlay Subnet or any subnets you plan on connecting to VNS3. The Docker subnet can be thought of as a VLAN segment bridged to the VNS3 Manager’s public network interface.
The Container Networking Page shows the available container IP addresses for the chosen Container Network. IP addresses listed as reserved are either used by Docker (for routing, bridging, and broadcast) or are being used by a currently running container.
To change the Container Network first enter a new network subnet in CIDR notation.
Click Validate to ensure the subnet accommodates the Container Network requirements.
Click Set once validation is passed.
You will be prompted with a popup warning that a Container Network change will require a restart of any running containers. Click OK.
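A rough Python sketch of what a validation step like this might check (hypothetical logic, not VNS3's actual implementation, assuming the /24-to-/30 size limits stated earlier and the demo topology's 172.31.0.0/22 overlay):

```python
import ipaddress

# Overlay subnet from the demo topology; a change of overlay would
# change this constant.
OVERLAY = ipaddress.ip_network("172.31.0.0/22")

def validate_container_network(cidr):
    """Stand-in for the Validate button: size and overlap checks."""
    net = ipaddress.ip_network(cidr, strict=True)
    if not 24 <= net.prefixlen <= 30:
        return False, "prefix must be between /24 and /30"
    if net.overlaps(OVERLAY):
        return False, "overlaps the Overlay Subnet"
    return True, "ok"

print(validate_container_network("172.0.10.0/28"))   # the VNS3 default
print(validate_container_network("172.31.1.0/24"))   # collides with overlay
```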
Container Images: Upload a Container
To Upload a Container Image click on the Images left column menu item listed under the Container heading.
Click Upload Image.
On the resulting Upload Container Image window, enter the following:
- Name
- Description
- URL - the publicly accessible URL of the .tar.gz Container Image file
Click Upload.
Once the Container Image has finished the import process, you will be able to use the action button to edit and delete the Image or allocate (launch) a Container.
Container Images: Allocate a Container
To launch a Container click the Actions drop down button next to the Container Image you want to use and click Allocate.
On the resulting pop up window enter the following:
- Name of the Container
- Command used on initiation of the Container
- Description
Click Allocate.
You will be taken to the Containers page, where your newly created Container will list its status.
Access Consideration: Public Internet
Accessing a Container from the Public Internet will require additions to the AWS Security Group associated with the VNS3 Manager as well as VNS3 Firewall.
The following example shows how to access an Nginx server running as a Container listening on port 80 (substitute port 22 if the Container is running SSHD).
AWS Security Group Allow port 80 from your source IP (possibly 0.0.0.0/0 if the Nginx server is load balancing for a public website).
VNS3 Firewall - Enter rules to port-forward incoming traffic to the Container Network and masquerade outgoing traffic off the VNS3 Manager’s public network interface.
# Let the Docker subnet access the Internet via the Manager's public IP
-o eth0 -s <Manager Private IP> -j MASQUERADE
# Port forward 9080 to the Nginx Docker container
PREROUTING_CUST -i eth0 -p tcp -s 0.0.0.0/0 --dport 9080 -j DNAT --to <Container Network IP>:80
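For readers more comfortable with raw iptables, the VNS3 rules above correspond roughly to the following nat-table commands (a sketch: the eth0 interface name, the default 172.0.10.0/28 subnet, and the container address 172.0.10.2 are illustrative assumptions):

```shell
# Masquerade outbound container traffic off the Manager's public interface
iptables -t nat -A POSTROUTING -o eth0 -s 172.0.10.0/28 -j MASQUERADE

# Forward inbound TCP 9080 on eth0 to the Nginx container's port 80
# (172.0.10.2 is a hypothetical container address)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 9080 \
  -j DNAT --to-destination 172.0.10.2:80
```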