Dive into CNI: Network Plugins for Kubernetes 林哲緯, Intern, Linker Networks

Upload: inwin-stack

Post on 21-Jan-2018


TRANSCRIPT

Page 1: Network plugins for kubernetes

Dive into CNI: Network Plugins for Kubernetes

林哲緯, Intern, Linker Networks

Page 2: Network plugins for kubernetes

Who am I?

• Intern, Linker Networks Inc.

• github.com/John-Lin

• @johnlin__

2

Page 3: Network plugins for kubernetes

Outline

• CNI

• CNI Introduction

• How to Build?

• How to Use?

• Linen CNI

• Linen CNI Introduction

• Kubernetes & Linen CNI

• Distinguishing between OVN-Kubernetes and Linen CNI

3

Page 4: Network plugins for kubernetes

CNI

4

Page 5: Network plugins for kubernetes

What is CNI?

• CNI - the Container Network Interface

• An open-source project supported by the CNCF (Cloud Native Computing Foundation), with two main repositories

• containernetworking/cni: libraries for writing plugins to configure network interfaces

• containernetworking/plugins: additional CNI network plugins

• Supports rkt, Docker, Kubernetes, OpenShift, and Mesos

5

Page 6: Network plugins for kubernetes

What is CNI?

• CNI (Container Network Interface) is an API for writing plugins to configure network interfaces in Linux containers

6

Page 7: Network plugins for kubernetes

CNI Spec

• 3 commands: ADD, DEL, and VERSION

• Configuration on stdin, results on stdout

• Runtime parameters via environment variables: CNI_ARGS & CAP_ARGS

7

Page 8: Network plugins for kubernetes

How to Build?

• parseConfig: parses the network configuration from stdin

• cmdAdd is called for ADD requests (when a pod is created)

• cmdDel is called for DELETE requests (when a pod is deleted)

• Add your code to the cmdAdd and cmdDel functions.

• Simple CNI code sample at: https://github.com/containernetworking/plugins/tree/master/plugins/sample

8

type PluginConf struct { … }
func parseConfig(stdin []byte) (*PluginConf, error)
func cmdAdd(args *skel.CmdArgs) error
func cmdDel(args *skel.CmdArgs) error
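A minimal runnable version of parseConfig, assuming only the standard configuration fields (the real sample plugin defines additional plugin-specific fields):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PluginConf mirrors the network configuration a plugin receives on
// stdin; these three fields are common to every CNI network config.
type PluginConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

// parseConfig unmarshals the stdin bytes into a PluginConf.
func parseConfig(stdin []byte) (*PluginConf, error) {
	conf := &PluginConf{}
	if err := json.Unmarshal(stdin, conf); err != nil {
		return nil, fmt.Errorf("failed to parse network configuration: %v", err)
	}
	return conf, nil
}

func main() {
	conf, err := parseConfig([]byte(`{"cniVersion":"0.3.1","name":"mynet","type":"bridge"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(conf.Name, conf.Type) // prints: mynet bridge
}
```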

Page 9: Network plugins for kubernetes

CNI Quick Start

$ cat mybridge.conf
{
    "name": "mynet",
    "type": "bridge",
    "ipam": {
        "type": "host-local",
        "subnet": "10.15.20.0/24"
    }
}

9

Page 10: Network plugins for kubernetes

CNI Quick Start

$ sudo ip netns add ns1

Or

$ sudo docker run --name cnitest --net=none -d busybox

$ sudo CNI_COMMAND=ADD \
    CNI_CONTAINERID=ns1 \
    CNI_NETNS=/var/run/netns/ns1 \
    CNI_IFNAME=eth2 \
    CNI_PATH=`pwd` \
    ./bridge < mybridge.conf

10

Page 11: Network plugins for kubernetes

Bridge

11

Page 12: Network plugins for kubernetes

CNI Plugins

• bridge: creates a bridge and adds the host and the container to it

• IPAM : IP address allocation

• host-local : maintains a local database of allocated IPs

• DHCP : Runs a daemon on the host to make DHCP requests on behalf of the container

• Flannel: responsible for providing a layer 3 IPv4 network between multiple nodes in a cluster

• A huge variety of plugin types, such as loopback, ptp, ipvlan, macvlan, etc.

12
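The host-local idea above, a small local database of allocated IPs, can be sketched as a toy in-memory allocator. This is not the real plugin (which persists reservations to disk and handles ranges, gateways, and broadcast addresses); it only illustrates the bookkeeping:

```go
package main

import (
	"fmt"
	"net"
)

// Allocator hands out addresses from a subnet and remembers which
// ones are taken, mimicking host-local's database of allocated IPs.
type Allocator struct {
	subnet *net.IPNet
	used   map[string]bool
}

func NewAllocator(cidr string) (*Allocator, error) {
	_, subnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	return &Allocator{subnet: subnet, used: map[string]bool{}}, nil
}

// Allocate returns the next free address, skipping the network address.
func (a *Allocator) Allocate() (net.IP, error) {
	ip := a.subnet.IP.Mask(a.subnet.Mask)
	for {
		ip = nextIP(ip)
		if !a.subnet.Contains(ip) {
			return nil, fmt.Errorf("subnet %s exhausted", a.subnet)
		}
		if !a.used[ip.String()] {
			a.used[ip.String()] = true
			return ip, nil
		}
	}
}

// nextIP returns a copy of ip incremented by one.
func nextIP(ip net.IP) net.IP {
	next := make(net.IP, len(ip))
	copy(next, ip)
	for i := len(next) - 1; i >= 0; i-- {
		next[i]++
		if next[i] != 0 {
			break
		}
	}
	return next
}

func main() {
	a, _ := NewAllocator("10.15.20.0/24")
	for i := 0; i < 3; i++ {
		ip, _ := a.Allocate()
		fmt.Println(ip) // prints 10.15.20.1, 10.15.20.2, 10.15.20.3
	}
}
```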

Page 13: Network plugins for kubernetes

3rd Party Plugins

• Project Calico - a layer 3 virtual network

• Weave - a multi-host Docker network

• Multus - a Multi plugin

• CNI-Genie - generic CNI network plugin

• Silk - a CNI plugin designed for Cloud Foundry

• Linen - designed for overlay networks and compatible with OpenFlow protocol through Open vSwitch

• More than 10 third-party plugins!

13

Page 14: Network plugins for kubernetes

Linen CNI

14

Page 15: Network plugins for kubernetes

What is Linen CNI?

A third-party CNI plugin designed for overlay networks and compatible with the OpenFlow protocol through Open vSwitch

15

Page 16: Network plugins for kubernetes

Overlay Network

16

• Underlay network: built using physical devices and links

• Overlay network: a new virtual network topology created on top of the underlay

• Tunneling technologies: GRE tunnel, VxLAN tunnel, MPLS and VPN

Page 17: Network plugins for kubernetes

Comparison of multi-host networking

17

Comparison of multi-host overlay networking solutions

• Calico: pure Layer-3 solution; supports TCP, UDP, ICMP & ICMPv6

• Flannel: VxLAN or UDP channel; supports all protocols

• Weave: VxLAN or UDP channel; supports all protocols

• Docker Overlay Network: VxLAN; supports all protocols

Reference: Battlefield: Calico, Flannel, Weave and Docker Overlay Network

Page 18: Network plugins for kubernetes

Why Open vSwitch?

18

• Multi-host overlay networking

• Provide flexible network management

• Boosts packet processing performance and throughput

Page 19: Network plugins for kubernetes

Multi-host Overlay Networking

19

• All containers can communicate with all other containers

• All nodes can communicate with all containers (and vice-versa)

Page 20: Network plugins for kubernetes

Network Management

20

• Supports an SDN controller to manage flows on the switches

Page 21: Network plugins for kubernetes

Performance

21

• Open vSwitch with the Data Plane Development Kit (OvS-DPDK)

• Intel DPDK accelerated switching and packet processing

Page 22: Network plugins for kubernetes

Linen CNI Overview

22

Linen CNI is

• designed to meet the requirements of overlay networks and compatible with the OpenFlow protocol

• inspired by the Kubernetes OVS networking document

• a chained plugin that depends on the bridge plugin

Page 23: Network plugins for kubernetes

Linen CNI Usage

23

On Host1:

$ ip netns add ns1
$ ip netns exec ns1 ip link
1: lo: <LOOPBACK> ...

$ CNI_PATH=`pwd` NETCONFPATH=/root ./cnitool \
    add mynet /var/run/netns/ns1
$ ip netns exec ns1 ip link
1: lo: <LOOPBACK> ...
3: eth0@if97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 ...

Page 24: Network plugins for kubernetes

24

Page 25: Network plugins for kubernetes

Linen CNI Usage

25

On Host2:

$ ip netns add ns2
$ ip netns exec ns2 ip link
1: lo: <LOOPBACK> ...

$ CNI_PATH=`pwd` NETCONFPATH=/root ./cnitool \
    add mynet /var/run/netns/ns2
$ ip netns exec ns2 ip link
1: lo: <LOOPBACK> ...
3: eth0@if100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 ...

Page 26: Network plugins for kubernetes

26

Page 27: Network plugins for kubernetes

Linen CNI Usage

27

# ON HOST 1
$ ip netns exec ns1 ip address
3: eth0@if97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 ...
    inet 10.244.1.17/16 scope global eth0 ...

# ON HOST 2
$ ip netns exec ns2 ping 10.244.1.17
PING 10.244.1.17 (10.244.1.17) 56(84) bytes of data.
64 bytes from 10.244.1.17: icmp_seq=1 ttl=64 time=0.089 ms
64 bytes from 10.244.1.17: icmp_seq=2 ttl=64 time=0.037 ms

Ping to verify network connectivity.

Page 28: Network plugins for kubernetes

Kubernetes & Linen CNI

28

Page 29: Network plugins for kubernetes

Kubernetes & Linen CNI

29

• Management Workflow

• Packet Processing

Page 30: Network plugins for kubernetes

Management Workflow

30

• linen-cni: executed by the container runtime to set up the network stack for containers

• flax daemon: a DaemonSet that runs on each host to discover newly joined nodes and manipulate the ovsdb

Page 31: Network plugins for kubernetes

Packet Processing

31

• The Docker bridge is replaced with a Linux bridge (kbr0)

• An OVS bridge (obr0) is created and added as a port to the kbr0 bridge

• All OVS bridges across all nodes are linked with VxLAN tunnels

Page 32: Network plugins for kubernetes

Installation on K8S

32

• Open vSwitch is required

• kubelet settings

kubelet ... \
    --network-plugin=cni \
    --cni-conf-dir=/etc/cni/net.d \
    --cni-bin-dir=/opt/cni/bin

Page 33: Network plugins for kubernetes

Installation on K8S

33

• Create a configuration list file in /etc/cni/net.d; the file must be named linen.conflist

• Make sure the linen, bridge and host-local binaries are in /opt/cni/bin

• (Optional) Apply a DaemonSet, flaxd.yaml, to discover newly joining nodes

Page 34: Network plugins for kubernetes

Network configuration reference

34

• ovsBridge: name of the ovs bridge to use/create

• vtepIPs: list of the VxLAN tunnel end point IP addresses

• controller: sets the SDN controller's IP address and port number

{
    "name": "mynet",
    "cniVersion": "0.3.1",
    "plugins": [
        {
            //… bridge configurations
        },
        {
            "type": "linen",
            "runtimeConfig": {
                "ovs": {
                    "ovsBridge": "br0",
                    "vtepIPs": [
                        "172.120.71.50"
                    ],
                    "controller": "192.168.2.100:6653"
                }
            }
        }
    ]
}

Page 35: Network plugins for kubernetes

Distinguishing between OVN-Kubernetes and Linen CNI

35

Page 36: Network plugins for kubernetes

Linen CNI

36

Page 37: Network plugins for kubernetes

OVN-Kubernetes Overlay

37

• K8S Switches (1 per node): intra-node networking

• K8S Router: Cross-node networking

• Join Switch

• External Router: access to the external network (NAT)

• External Switch

Page 38: Network plugins for kubernetes

Network Models

38

Comparison of multi-host overlay networking solutions

• Calico: Layer-3 solution; performance: high; complexity: high

• OVN-Kubernetes: Layer-3 solution; performance: high; complexity: high

• Flannel: VxLAN or UDP channel; performance: medium; complexity: low

• Linen: VxLAN; performance: medium; complexity: low

Page 39: Network plugins for kubernetes

Takeaway

39

More network virtualization projects:

• https://github.com/John-Lin/linen-cni

• https://github.com/John-Lin/tinynet

Contact me:

• @johnlin__

• SDN-DS.TW: https://www.facebook.com/groups/sdnds.tw/
