Known Basics of NFV Features

Post on 13-Apr-2017


Raul Leite, Sr. Solution Architect / Red Hat

@sp4wnr0ot
https://sp4wnr0ot.blogspot.com.br

Known Basics of NFV Features
#vbrownbag


agenda

● NFV architecture/concepts

● NFV bottlenecks

● sr-iov

● pci passthrough

● hugepages

● cpu pinning / numa

● dpdk


● NFV architecture


NFV - Network Functions Virtualization


NFV – basic concepts

● runs on independent, common (off-the-shelf) hardware
● automated network operation
● flexible application development/deployment and high reliability
● provides the same service quality and safety as the traditional telecom network


NFV – Architecture


NFV bottlenecks

● packet loss
● hypervisor overhead
● low routing/traffic throughput
● CPU
● VM/instance resource allocation and scheduling


NFV – workload example

● Some NFV applications may have been developed to make specific use of certain CPU instructions.

Example:
● A VPN appliance that requires a high-performance cryptography library using Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI).
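One way to verify that a compute node actually exposes AES-NI is to check the CPU flags before scheduling such a workload there; a minimal sketch (Linux-only, reading /proc/cpuinfo):

```shell
# Check whether the host CPU advertises the AES-NI instruction set (Linux)
if grep -q -m1 '\baes\b' /proc/cpuinfo; then
    echo "AES-NI supported"
else
    echo "AES-NI not found"
fi
```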


pci passthrough


pci passthrough

● Allocates the PCIe NIC entirely to a dedicated VM

● No sharing of the NIC

● If your hardware does not have an IOMMU (known as "Intel VT-d" on Intel-based machines and "AMD I/O Virtualization Technology" on AMD-based machines), you will not be able to assign devices with the KVM hypervisor.
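On the host, the IOMMU typically also has to be enabled on the kernel command line before device assignment works; a sketch using the same grubby tooling the deck uses later, assuming an Intel system:

```shell
# Enable the IOMMU for device assignment (Intel; use amd_iommu=on on AMD hosts)
grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"
# After a reboot, confirm the IOMMU initialized
dmesg | grep -i -e DMAR -e IOMMU
```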


pci passthrough

# grep -i nic_PF /etc/nova/nova.conf
pci_alias={ "vendor_id": "8086", "product_id": "1528", "name": "nic_PF" }

# openstack flavor create --ram 4096 --disk 100 --vcpus 2 m1.small.pci_passthrough
# openstack flavor set --property "pci_passthrough:alias"="nic_PF:1" m1.small.pci_passthrough

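With the alias and flavor in place, an instance can be booted against that flavor to receive the dedicated PF; a sketch, where the image and network names (`rhel7`, `private`) are placeholders for your environment, not from the original deck:

```shell
# Boot a guest that receives one dedicated PF via the "nic_PF" alias
# (image and network names are examples)
openstack server create --flavor m1.small.pci_passthrough \
    --image rhel7 --network private pci-passthrough-vm
```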


sr-iov


sr-iov (single-root input/output virtualization)


● Allows a device, such as a network adapter, to separate access to its resources among various PCIe hardware functions: a physical function (PF) and one or more virtual functions (VFs)

● Improves manageability and performance

● Enables network traffic to bypass the software layer of the hypervisor and flow directly between the VF and the virtual machine

● The PCI SR-IOV specification indicates that each device can have up to 256 VFs; the number actually available depends on the SR-IOV device.
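Before Neutron can hand VFs to instances, they have to be created on the host; a sketch via sysfs, where `ens1f0` is a placeholder interface name for the PF:

```shell
# Create 8 virtual functions on the PF (interface name is an example)
echo 8 > /sys/class/net/ens1f0/device/sriov_numvfs
# Confirm the VFs appeared on the PCI bus
lspci -nn | grep -i "Virtual Function"
```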


sr-iov

# lspci -nn | grep -i 82576

05:00.0 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10c9] (rev 01)
05:10.0 Ethernet controller [0200]: Intel Corporation 82576 Virtual Function [8086:10ca] (rev 01)

# neutron net-create nfv_sriov --shared --provider:network_type vlan --provider:physical_network physnet_sriov --provider:segmentation_id=2150

# neutron subnet-create --name nfv_subnet_sriov --disable-dhcp --allocation-pool start=10.0.5.2,end=10.0.5.100 nfv_sriov 10.0.5.0/24

# neutron port-create nfv_sriov --name sriov-port --binding:vnic-type direct

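The port created above can then be attached to a new instance at boot time; a sketch, with the image and flavor names as placeholders:

```shell
# Look up the SR-IOV port's UUID and boot an instance attached to it
# (image and flavor names are examples)
port_id=$(neutron port-show sriov-port -f value -c id)
nova boot --flavor m1.small --image rhel7 --nic port-id="$port_id" sriov-vm
```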


hugepages


hugepages

● Linux uses a mechanism in the CPU architecture called "Translation Lookaside Buffers" (TLB) to manage the mapping of virtual memory pages to actual physical memory addresses.

● The TLB is a limited hardware resource, so mapping a large amount of physical memory with the default page size consumes TLB entries and adds processing overhead.

● Huge pages reduce the number of TLB entries needed, boosting CPU performance.

● The TLB caches the results of page-table walks; with fewer, larger pages, more of the working set stays cached.
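The host's current huge-page state can be inspected directly from /proc/meminfo (Linux); counts will vary per host:

```shell
# Show huge-page counters and the configured huge-page size
grep -i '^huge' /proc/meminfo
```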


hugepages

Host# grubby --update-kernel=ALL --args="hugepagesz=2M hugepages=2048"
# echo 4 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

# nova flavor-key m1.small.performance set hw:mem_page_size=2048



numa / cpu pinning


numa

● Each node (or CPU) has its own memory bank

● memory access management is now distributed

● memory proximity

● performance and latency characteristics differ depending on the core a process is executing on and where the memory it is accessing is located.


numa / cpu pinning

● Pins vCPUs to physical cores 1 -> 1
● Improves the VM's processing performance
● Related to NUMA and PCI passthrough

$ numactl --hardware
node 0 cpus: 0 1 | 2 3
node 1 cpus: 4 5 | 6 7

Host processes: cores -> 0,1,4,5
Guest: cores -> 2,3,6,7

/etc/nova/nova.conf
vcpu_pin_set=2,3,6,7



numa / cpu pinning

$ nova flavor-create small.numa auto 2048 20 2
$ nova flavor-key small.numa set hw:cpu_policy=dedicated
$ nova flavor-key small.numa set aggregate_instance_extra_specs:pinned=true

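The `aggregate_instance_extra_specs:pinned=true` key only matches hosts belonging to an aggregate that carries that metadata (and requires the AggregateInstanceExtraSpecsFilter scheduler filter); a sketch, with the aggregate and host names as examples:

```shell
# Create an aggregate for pinned workloads and tag it (names are examples)
nova aggregate-create pinned-aggregate
nova aggregate-set-metadata pinned-aggregate pinned=true
nova aggregate-add-host pinned-aggregate compute-1
```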


numa: nova-scheduler

● /etc/nova/nova.conf

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,NUMATopologyFilter



dpdk


dpdk

● Data Plane Development Kit (DPDK) greatly boosts packet-processing performance and throughput, leaving more CPU time for data-plane applications.
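A typical first step with DPDK is rebinding a NIC from its kernel driver to a userspace-capable driver; a sketch using DPDK's dpdk-devbind.py helper, reusing the PCI address from the earlier lspci output purely as an example:

```shell
# Load the vfio-pci driver and rebind the NIC to it for DPDK use
# (PCI address 05:00.0 is an example)
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 05:00.0
# Verify which devices are now bound to DPDK-compatible drivers
dpdk-devbind.py --status
```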


THANK YOU!
