TRANSCRIPT
Secure Kubernetes Container Workloads with Production-Grade Networking
Cynthia Thomas, Irena Berezovsky, Tim Hockin
CIA IT operations run top-secret apps for their agents, most of which require isolation
- Antoni is in Ops and wants to help CIA embrace DevOps
- Berta is a Dev eager to design efficiently and deliver excellent apps
Antoni Berta
1. New project defined: Developer needs an environment
2. Dev asks SysAdmin for some resources
3. SysAdmin installs Server OS and asks Network people for a VLAN (ewww!)
4. Network people ask Security team to open a port in a firewall rule
∞. Someone plugs into the wrong port or gets the wrong requirements: start over!
The world before Neutron: can I plug in your cable?
CIA IT takes weeks, even months, to deliver isolated resources for the various projects at CIA.
Servers and VLANs and firewalls, oh my!
Antoni & Berta were doing it the hard way
● OpenStack core project since Folsom
● Solution-agnostic Tenant and Admin API
● Pluggable framework
● Provides an extensible API to build rich topologies (vendor extensions)
● Advanced services support, e.g. LBaaS, VPNaaS, FWaaS
Neutron for higher layer network services
Operator (networking, security, etc) versus App Developer
● OpenStack has reduced the time to deliver compute resources
● Security policies allow the CIA to keep business units separate
● Each department admin can manage their own resources
OpenStack Networking on the fly at CIA
[Diagram: VM stack, Hardware, Host OS, Hypervisor, then Guest OS + Libs + App repeated per VM]
Can we do better?
● Spawning a VM is slow and expensive
● There is a lot of management overhead
● VMs are not portable: run on a specific Hypervisor
If only there was a way to virtualize an OS to enable multiple workloads to run on a single OS...
Along came Docker
Containers are an alternative to VMs
● Bundle your app & deps, but not the OS
Faster and lower overhead than VMs
● O(milliseconds) to spawn
Developer-focused
● Enables fast iteration, fewer non-app concerns
Ridiculously simple UX
It’s the technology of the decade!
[Diagram: container stack, Hardware, OS, then Libs + App repeated per container]
CIA developers demand containers
Launch in milliseconds!
Dev-Prod parity, on my laptop!
MUST HAVE!
But it is very chaotic -- they need help managing it all...
Kubernetes changed everything
The Kubernetes API is app-centric
● De-emphasizes infrastructure and operations concerns
● Those can exist, but are not the primary focus
● Integrates with existing infrastructure and ops, but doesn't replace it
Networking is infrastructure, security is ops
We still need to address the concerns of ops!
Kubernetes network model
Assumes a single, shared network space
● No noun for Network (yet?)
Network plugins decide what technology
● veth, VXLAN, OVS, etc.
All connectivity is enabled by default
Implicitly single tenant
● Also reflected in services like DNS
Compare to the Docker model:
● Noun for Network
● App-centric networks
Namespaces: Kubernetes
Scopes for named objects within a cluster
● Pods, Services, etc. are all namespaced
Logical grouping of related things
● Could be 1 user
● ...or 1 app
● ...or 1 tier of an app
No relationship to nodes or networks
● All Namespaces exist on all nodes
● Network is not segmented
Seems like an obvious hook for networking
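A Namespace is itself just an API object; as a minimal sketch (cia-spy-app is the example namespace name used elsewhere in this deck), creating one looks like:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cia-spy-app
```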
Kubernetes NetworkPolicy
API to lock down the network
● Describe the graph of your app
● Specify which connectivity to allow
Applies per Namespace
● Exists alongside the apps it describes
● Default-deny plus explicit allow rules
Network infrastructure can enforce it
● Many vendors have implementations
Does not cover egress (yet?)
Kubernetes NetworkPolicy
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: front-to-mid
  namespace: cia-spy-app
spec:
  podSelector:
    matchLabels:
      role: middleware
  ingress:
  - ports:
    - protocol: TCP
      port: 6379
    from:
    - podSelector:
        matchLabels:
          role: frontend

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: mid-to-db
  namespace: cia-spy-app
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - ports:
    - protocol: TCP
      port: 3306
    from:
    - podSelector:
        matchLabels:
          role: middleware
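The semantics behind these policies (once any policy selects a pod, traffic to it is denied unless some rule explicitly allows it) can be sketched in a few lines of Python. This is an illustrative model of the evaluation logic, not Kubernetes or Kuryr code:

```python
# Toy model of NetworkPolicy semantics: default-deny for selected pods,
# with explicit allow rules keyed on source pod labels and port.

def selects(selector, labels):
    """A matchLabels selector matches if all its key/values appear in labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def allowed(policies, src_labels, dst_labels, port):
    """Return True if traffic src -> dst:port is permitted."""
    matching = [p for p in policies if selects(p["podSelector"], dst_labels)]
    if not matching:
        return True  # no policy selects the destination: default allow
    for p in matching:
        for rule in p["ingress"]:
            if port in rule["ports"] and any(
                selects(sel, src_labels) for sel in rule["from"]
            ):
                return True
    return False  # selected by a policy but no rule matched: default deny

# The two policies from the slides, reduced to this toy schema
policies = [
    {"podSelector": {"role": "middleware"},
     "ingress": [{"ports": [6379], "from": [{"role": "frontend"}]}]},
    {"podSelector": {"role": "db"},
     "ingress": [{"ports": [3306], "from": [{"role": "middleware"}]}]},
]

print(allowed(policies, {"role": "frontend"}, {"role": "middleware"}, 6379))  # True
print(allowed(policies, {"role": "frontend"}, {"role": "db"}, 3306))          # False
```

The frontend can reach the middleware on 6379 but cannot reach the database directly, which is exactly the graph the two YAML objects describe.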
Where Neutron is ahead of k8s
Neutron                                        | Kubernetes
Multi-tenant environment                       | Single tenant
Rich network topologies with overlapping IPs   | Flat, shared network with one IP per pod
Security Groups, Port Security (ARP spoofing)  | NetworkPolicy (ingress only)
Port Quality of Service                        | -
Admin- and Tenant-facing API                   | Primarily application-centric API
Container Challenges
● A lot of new products and companies have emerged in the container orchestration and integration ecosystem
● With multi-host, multi-orchestration environments, networking becomes critical
● Run containers in VMs for better isolation and security
● Multi-tenancy: host or cluster per tenant
● VMs and containers share the same network
What is Kuryr?
● Neutron as a production-ready networking abstraction that containers need
● Kuryr translates container orchestration events into Neutron entities, performs API calls and manages the response to the orchestrator
Kuryr as a translator between k8s and Neutron
● Map container networking abstractions to the Neutron API
● Allow container, bare-metal, and VM networking under the same API
● Implement all the common code for Neutron vendors, allowing them to provide advanced container networking with just a binding script
Kubernetes    | Neutron
Namespace     | Network, Subnet
Pod           | Port
Service       | Load Balancer
External IP   | Floating IP
NetworkPolicy | Security Groups
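The mapping above is essentially a dispatch table from Kubernetes API events to Neutron resources. The sketch below illustrates that idea only; the names and event shape are hypothetical, not Kuryr's actual code:

```python
# Illustrative sketch of the Kuryr translation idea: a k8s API watcher
# receives events and turns them into Neutron resource operations.

K8S_TO_NEUTRON = {
    "Namespace":     ["network", "subnet"],
    "Pod":           ["port"],
    "Service":       ["load_balancer"],
    "ExternalIP":    ["floating_ip"],
    "NetworkPolicy": ["security_group"],
}

def translate(event):
    """Map an ADDED/DELETED k8s event to the Neutron operations to perform."""
    verb = {"ADDED": "create", "DELETED": "delete"}[event["type"]]
    return [f"{verb}_{resource}" for resource in K8S_TO_NEUTRON[event["kind"]]]

print(translate({"type": "ADDED", "kind": "Namespace"}))
# → ['create_network', 'create_subnet']
```

In the real project each operation is a Neutron API call, and the watcher also feeds the response (e.g. the allocated port) back to the orchestrator via the CNI driver.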
Example: CIA Security
Antoni can satisfy CIA security requirements with Kubernetes & Kuryr:
● Kubernetes made it easy for the app devs to express the application in terms of the required deployment
● With NetworkPolicy, devs can specify the intended application connectivity
● With Kuryr mapping Kubernetes requests to Neutron constructs, and NetworkPolicy realized as Neutron security groups, true isolation and security are achieved via the Kubernetes API
Kubernetes + Kuryr + MidoNet: Scalable Neutron
Neutron plugin scaling with ease, and flexible API for fine-grain security policies
● Event-based design (receives events from k8s-api)
● Compatible with Kubernetes >= 1.2
● API watcher + CNI driver
● Asynchronous event loop based on the asyncio Python 3.4 library
● No kube-proxy
Example: CIA MidoNet deployment
Antoni uses MidoNet as his OpenStack Neutron plugin for production-grade networking
● He knows he can scale compute for the CIA with confidence, backed by an HA solution
● MidoNet Manager gives Antoni visibility into every network flow and each applied security policy
Example: CIA MidoNet deployment with kuryr-k8s
● Antoni is trying the kuryr-k8s tech preview solution with MidoNet:
https://docs.midonet.org/
● Today, he can launch a script to automatically deploy the k8s-master and k8s-worker to try MidoNet with k8s
● Antoni wants native API calls for pods and VMs while using the same operator tools
MidoNet-enhanced Security
Neutron Security Groups
● white-list of allowed traffic
● port-level firewall
MidoNet implements SG+:
● low-level constructs called chains and rules
● richer feature set for matching/filtering and actions
Future of networking in Kubernetes
Multi-tenancy is probably unavoidable
● Ripples are deep and wide
Possible evolution:
● Network objects in the API
● More fine-grained policy
● Egress policy
● L7 policy
● QoS/shaping
● Multi-tenant DNS (and other services)
Kuryr-Kubernetes status
● Current
○ Early stage
○ CNI driver and k8s API watcher in progress
● Future work
○ K8s NetworkPolicy support
○ High availability
○ Kuryr-OpenShift
○ Bridging OpenStack VMs and kuryr-k8s
MidoNet
Community Site
www.midonet.org
Project Git Repo
https://github.com/midonet/midonet
Join Slack
slack.midonet.org
Try MidoNet with one command:
$> curl -sL quickstart.midonet.org | sudo bash
Get Involved!
Kuryr
Community Site
https://wiki.openstack.org/wiki/Kuryr
Project Git Repo
https://github.com/openstack/kuryr
IRC weekly meeting
https://wiki.openstack.org/wiki/Meetings/Kuryr
Kubernetes
Community Site
http://kubernetes.io/community/
Project Git Repo
https://github.com/kubernetes
Join Slack
slack.k8s.io