Distributed and Trustable SDN-NFV-enabled Network Emulation on Testbeds
and Cloud Infrastructures
22/03/2021
Candidate: Giuseppe Di Lena
Supervisors:
Thierry Turletti, Directeur de recherche, INRIA
Frédéric Giroire, Directeur de recherche, CNRS
Chidung Lac, Ingénieur-chercheur, Orange Labs
Jury members:
Guillaume Urvoy-Keller, Professeur, Université Côte d'Azur
Marcelo Dias de Amorim, Directeur de recherche, CNRS
Stefano Secci, Professeur des Universités, Cnam
Mathieu Bouet, Ingénieur, Thales
Luigi Iannone, Maître de conférence, Telecom ParisTech
Context
• Next generation networks (5G, massive IoT connections)
• PhD funded by I/O Lab
• Two core elements of these new networks:
  • Network Function Virtualization (NFV)
  • Software Defined Networking (SDN)
Network Function Virtualization
• Decoupling of network elements from the underlying hardware
• Advantages:
  • flexibility
  • cost savings
  • scalability
Software Defined Networking
• Decoupling of the network control function from the network forwarding function
• Advantages:
  • centralized management
  • programmable configuration
  • dynamic routing
SDN and NFV
• NFV and SDN are independent of each other but are complementary
• A symbiosis between them can improve resource management and service orchestration:
  • increased efficiency and lower costs
  • faster innovation and time to market
  • no vendor lock-in
  • complex network services
Challenges
• Evaluation of SDN and NFV solutions
• New protocols and standardization
• Provision of resiliency, scalability, etc.
• Resource allocation
Contributions
• Distrinet Tool (Chapter 3)
• CloudTrace Tool (Chapter 5)
• Placement Algorithms for Distributed Network Emulation (Chapter 4)
• Bandwidth-optimal Failure Recovery in SDN/NFV Networks (Chapter 6)
Distrinet Tool
Emulation
• Evaluate new networking ideas
• Model realistic scenarios
• Find the limits of a system or a solution before implementing it
Mininet
• SDN emulator: Mininet [1,2]
• Uses OpenFlow/Open vSwitch and shell subprocesses to create a virtual network
• Creation of large networks in a few seconds
_______________________
[1] B. Lantz et al. "A network in a laptop: rapid prototyping for software-defined networks." ACM SIGCOMM Workshop, 2010.
[2] https://github.com/mininet/mininet
Mininet Limitations
• Mininet works well when the virtual hosts and the virtual switches do not require a lot of resources
• When the physical host is overloaded, Mininet can return wrong results
• To run large experiments, we need to distribute the load
Distributed Emulation
• Tools exist to carry out distributed emulation:
  • Maxinet [3]
  • Mininet Cluster Edition (CE) [4]
• Mininet CE directly extends Mininet and distributes the experiments via GRE or SSH tunnels
• Maxinet creates multiple Mininet experiments on different hosts, and connects vSwitches on different hosts with GRE tunnels
_______________________
[3] Wette, Philip, et al. "Maxinet: Distributed emulation of software-defined networks." IFIP Networking Conference, 2014.
[4] Lantz, Bob, and Brian O'Connor. "A mininet-based virtual testbed for distributed SDN development." SIGCOMM CCR, 2015.
Maxinet Limitations
• Placement algorithms do not take the physical infrastructure into consideration
• It is not possible to place a vHost and a vSwitch on different hosts if they are connected by a vLink
Mininet Cluster Edition Limitations
• Placement algorithms do not take the physical infrastructure into consideration
• It is not possible to limit a vLink if its vNodes are assigned to two different hosts
[Figure: H1 and H2 connected by a 10 Mbps vLink, placed on Host 1 and Host 2 joined by 1 Gbps links. Iperf between H1 and H2 should return 10 Mbps; with this placement, it returns ~500 Mbps.]
Contribution: Distrinet
• Proposition of a new tool, Distrinet:
  • compatibility with Mininet
  • emulation of any topology without constraints
  • correct distribution of the experiment
  • creation of an isolated environment for each vNode
  • automatic cloud deployment (Amazon AWS)
_______________________
[J1] "Distrinet: a Mininet Implementation for the Cloud". ACM Computer Communication Review, 2021.
[C1] "Mininet on steroids: exploiting the cloud for Mininet performance". In: 2019 IEEE 8th International Conference on Cloud Networking (CloudNet).
[D1] "Demo Proposal - Distrinet: A Mininet Implementation for the Cloud". In: Proceedings of CoNEXT 2019.
[W1] "Distributed Network Experiment Emulation". Global Experimentation for Future Internet - Workshop, Nov. 2019.
Distrinet Features
Distrinet Architecture
(General deployment / AWS infrastructure)
• Ansible: configuration of the hosts
• LXD/LXC: running the containers
• Boto3: deployment on Amazon AWS
Mininet Distribution
• Async SSH for vNode management
• Veth interfaces and VxLan tunnels to create the vLinks
[Figure: Distrinet architecture. The Distrinet client drives the mn process through pipes; the Distrinet master reaches each worker over SSH (optionally through an SSH jump host), with one PTY per vNode. On the workers, LXC containers (C1, C2, ...) are attached through Linux bridges and veth peer interfaces, and connected across workers with VxLan interfaces over the physical interfaces and physical switch.]
Experiment Description
• Gros cluster: 36 cores and 96 GB RAM, 10 Gbps link
• Network Intensive Experiment:
  • run Iperf in a virtual network (Linear 2, 10, 20, etc.)
  • single host vs two hosts
• CPU Intensive Experiment:
  • Hadoop task in a virtual network (Linear 2, 10, 20, etc.)
  • single host vs multiple hosts (up to 100 hosts)
Network Intensive Experiment
• Distrinet results are the closest to the ones expected with Mininet
• Mininet CE (GRE) is very close, but it cannot limit all the vLinks
[Figure: linear topology with hosts h1–h10 and switches s1–s10, split between Host 1 and Host 2.]
CPU Intensive Experiment
• Hadoop workload:
  • calculation of π using a quasi-Monte Carlo method (CPU intensive task)
  • each vHost requires 2 vCores and 6 GB RAM
  • each vSwitch requires 1 vCore and 3.5 GB RAM
  • linear topology
• h1: Hadoop master host
CPU Intensive Experiment
(Distributed emulation vs single-host emulation)
• The behavior of a single machine is hardly predictable when overloaded
• 10 vHosts and 10 vSwitches per physical machine in the distributed emulation
Takeaways
• Proposal of Distrinet, a distributed extension of Mininet
• Distrinet manages the physical infrastructure with Ansible and creates isolated vNodes with LXC/LXD
• It uses Boto3 to create environments in AWS
• Compared with the main existing tools, Distrinet shows better reliability
• Distrinet can emulate a network with 3000+ vNodes
Emulation in Public Clouds
• In public cloud platforms, the network is managed by the cloud provider
• The users have no access to the network
• Objective: check the network before running an emulation or porting delay-sensitive applications
• CloudTrace [5] creates regional or multiregional experiments to test the network delay in Amazon AWS
_______________________
[5] https://github.com/Giuseppe1992/CloudTrace
Placement Algorithms for Distributed Network Emulation
Distributed Network Emulation
[Figure: a network to emulate (vHost 1, vHost 2, and VNF 1 connected through vS1, vS2, vS3; each vNode requires 2 cores and 2 GB RAM, except VNF 1 with 4 cores and 32 GB RAM) mapped onto a physical cluster of three hosts (Physical host 1: 8 cores/16 GB RAM; Physical hosts 2 and 3: 16 cores/32 GB RAM).]
Mininet CE and Maxinet
• Implementation of different algorithms
• Mininet CE:
  • Round Robin
  • Switch Bin Packing
  • Random
• Maxinet:
  • Multilevel Recursive Bisectioning (METIS)
• Physical topology not taken into account
Virtual Network Embedding
• Input: Virtual Network, Physical Infrastructure
• Output: a feasible mapping that minimizes the number of hosts used
Related Work & Contributions
• Virtual Network Embedding is a classical problem [6]
• Tackling the VNE problem with specific settings:
  • multi-dimensional (RAM, CPU)
  • with an unusual objective: minimization of the number of hosts used
  • timing constraints (the emulation should start quickly)
• Integer Linear Program (ILP) to find the optimal solution
• Three algorithms proposed:
  • Greedy Partition
  • Divide and Swap
  • K-Balanced
_______________________
[6] A. Fischer et al. "Virtual network embedding: A survey". In: IEEE Communications Surveys & Tutorials, 2013.
[C2] "A Right Placement Makes a Happy Emulator: a Placement Module for Distributed SDN/NFV Emulation". IEEE ICC, 2021.
[JS] "Placement Module for Distributed SDN/NFV Network Emulation". Submitted to Computer Networks Journal, 2021.
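The flavour of a greedy placement heuristic can be sketched as a first-fit-decreasing packing over two resource dimensions (an illustrative simplification, not the thesis algorithm: node ordering refinements, the swap step, and link constraints are omitted):

```python
# Hypothetical sketch: first-fit-decreasing placement of vNodes onto hosts,
# trying to minimize the number of physical hosts used (CPU and RAM dimensions).
def greedy_placement(vnodes, hosts):
    """vnodes: {name: (cores, ram)}; hosts: {name: (cores, ram)}.
    Returns {vnode: host}, or None if some vNode does not fit anywhere."""
    # Place the largest vNodes first to reduce fragmentation.
    order = sorted(vnodes, key=lambda v: vnodes[v], reverse=True)
    free = {}                      # remaining capacity of the hosts already opened
    mapping = {}
    for v in order:
        c, r = vnodes[v]
        # First fit: try the hosts already in use before opening a new one.
        target = next((h for h in free
                       if free[h][0] >= c and free[h][1] >= r), None)
        if target is None:
            target = next((h for h in hosts if h not in free
                           and hosts[h][0] >= c and hosts[h][1] >= r), None)
            if target is None:
                return None        # infeasible instance
            free[target] = hosts[target]
        free[target] = (free[target][0] - c, free[target][1] - r)
        mapping[v] = target
    return mapping

# The toy instance of the previous figure (names are illustrative).
vnodes = {"vHost1": (2, 2), "vHost2": (2, 2), "VNF1": (4, 32),
          "vS1": (2, 2), "vS2": (2, 2), "vS3": (2, 2)}
hosts = {"h1": (8, 16), "h2": (16, 32), "h3": (16, 32)}
placement = greedy_placement(vnodes, hosts)
```

The real placement module must additionally respect link capacities between hosts, which is what makes the problem NP-hard rather than a plain bin packing.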
Experiments: K-Fat Tree in Cluster
• Physical: Nancy cluster. Each vHost and vSwitch: 2 cores and 8 GB memory, bandwidth 0.2 Mbps
• The ILP cannot find a solution for k > 2
• The proposed algorithms return a solution in less than a second for k ≤ 6, and in a reasonable amount of time for k ≤ 12
Simulation Scenarios
• Performance comparison with the Mininet CE and Maxinet algorithms
• Physical infrastructures:
  • Gros Cluster (Star Topology)
  • Rennes Cluster (Random Topology)
• Virtual networks:
  • Fat Tree
  • Randomly generated
Homogeneous Infrastructures
[Figure: a vFatTree network (cores c1–c4, ..., switches s1–s4, ...) and the star network of the Gros cluster (Grid5000): nodes Gros 1 to Gros n connected by 25 Gbps links, each with 32 vCPU and 96 GB RAM.]
Heterogeneous Infrastructures
[Figure: the Grid5000 Rennes cluster with heterogeneous nodes (Parapide [1-2]: 8 vCPU/24 GB RAM; Parapluie [1-8]: 24 vCPU/48 GB RAM; Parasilo [1-5], Paravance [1-5] and Paravance [41-45]: 32 vCPU/128 GB RAM) connected by 1, 10 and 40 Gbps links, and a random vNetwork (h1–h4 requiring from 1 vCPU/2 GB RAM to 8 vCPU/16 GB RAM, with vLinks from 1 to 5 Gbps).]
Placement Simulations
• 4 types of scenarios with heterogeneous and homogeneous vNetworks and physical infrastructures
• 5,000+ virtual topologies, 70,000+ tests
• The proposed algorithms solved almost all the placement problems
• Heterogeneous topologies (whether virtual or physical) are much harder to solve
Resource Overloading
• CPU, RAM, or link overloading
[Figure: H1 and H2 connected by two 2 Gbps vLinks, placed on Host 1 and Host 2 joined by 1 Gbps physical links: 4 Gbps assigned on 1 Gbps links, i.e., a 300% overcommitment of the two physical links.]
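The overcommitment above can be checked mechanically by summing, per physical link, the capacities of the vLinks routed over it (a toy check; the link names and the routing mapping are illustrative assumptions):

```python
# Hypothetical sketch: compute link overcommitment from a vLink-to-physical-path mapping.
def link_overcommitment(vlinks, phys_capacity, routing):
    """vlinks: {vlink: Gbps demanded}; phys_capacity: {plink: Gbps};
    routing: {vlink: [physical links it traverses]}.
    Returns {plink: overcommitment %} for the overloaded physical links only."""
    load = {p: 0.0 for p in phys_capacity}
    for vl, path in routing.items():
        for p in path:
            load[p] += vlinks[vl]            # every vLink adds its full capacity
    return {p: 100.0 * (load[p] - phys_capacity[p]) / phys_capacity[p]
            for p in phys_capacity if load[p] > phys_capacity[p]}

# The example of the figure: two 2 Gbps vLinks between H1 and H2,
# both crossing the two 1 Gbps physical links between Host 1 and Host 2.
vlinks = {"H1-H2-a": 2.0, "H1-H2-b": 2.0}
phys = {"host1-sw": 1.0, "sw-host2": 1.0}
routing = {"H1-H2-a": ["host1-sw", "sw-host2"],
           "H1-H2-b": ["host1-sw", "sw-host2"]}
over = link_overcommitment(vlinks, phys, routing)
# Each physical link carries 4 Gbps on a 1 Gbps capacity: 300% overcommitment.
```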
Overloading Analysis (1/2)
[Figure: link overcommitment per placement algorithm (Metis, Random, RoundRobin, SwitchBin); histograms (log scale) of the number of links vs. the link overcommitment percentage. Instances with overcommitment — Gros: 0.17%, 0.82%, 0.17%, 0.17%; Rennes: 21.87%, 69.58%, 58.53%, 51.26%.]
Overloading Analysis (2/2)
[Figure: CPU overcommitment per placement algorithm (Metis, Random, RoundRobin, SwitchBin); histograms (log scale) of the number of hosts vs. the CPU overcommitment percentage. Instances with overcommitment — Gros: 37.26%, 43.53%, 5.85%, 54.99%; Rennes: 54.43%, 54.69%, 26.28%, 58.13%.]
Overloaded Experiments
• We present 3 experiments with CPU, RAM, and link overloading
• We show the behaviour of the emulated network in case of overloading
• We consider the Gros cluster with 20 physical nodes
CPU Intensive Experiment
• vFatTree where each vHost requires 24 vCPU and 32 GB of RAM
[Figure: vFatTree K=4 — job completion time (seconds) and probability to crash per placement algorithm (GreedyP, Metis, Random, RoundRobin, SwitchBin).]
• CPU overloading slows down the emulation, but without causing crashes
Memory Intensive Experiment
• vFatTree where each vHost requires 12 vCPU and 50 GB of RAM
[Figure: vFatTree K=4 with swap enabled and with swap disabled — job completion time (seconds) and probability to crash per placement algorithm (GreedyP, Metis, Random, RoundRobin, SwitchBin).]
• Memory overloading slows down the emulation and generates crashes
Network Intensive Experiment
• Each vLink is set at 500 Mbps
[Figure: vFatTree K=4 and K=6 — throughput (Mbps) against the expected value per placement algorithm (GreedyP, KBalanced, DivideSwap, Metis, Random, RoundRobin, SwitchBin).]
• Link overloading massively degrades networking performance in the emulation
Takeaways
• Comparison of the algorithms with the ILP:
  • the algorithms return a solution in a few seconds
  • the ILP is not able to find a feasible solution in most cases
• Comparison with the Maxinet and Mininet CE algorithms:
  • the proposed algorithms solved almost all the instances
  • the other algorithms often return overloaded solutions, leading to wrong emulation results
Bandwidth-optimal Failure Recovery in SDN/NFV Networks
Context
• ISP network dimensioning problem, with protection against Shared Risk Link Group (SRLG) failures in SDN/NFV networks
• Shared Risk Link Group: a set of links that can fail simultaneously in the network
Motivation
• Nodes and links are failure-prone [7]
• Network failures such as (multiple) link or node failures may have a significant impact on the QoS
• Failures tend to be correlated with each other (→ SRLGs) [8]
_____________________
[7] D. Turner et al. "California fault lines: understanding the causes and impact of network failures". ACM SIGCOMM, 2010.
[8] S. Kandula et al. "Shrink: a tool for failure diagnosis in IP networks". ACM SIGCOMM Workshop, 2005.
Related Work & Approach
_______________________
[9] D. Xu et al. "Failure protection in layered networks with shared risk link groups". IEEE Network, 2004.
[10] S. Rai et al. "IP resilience within an autonomous system: current approaches …". IEEE Communications Magazine, 2005.
[11] P. Fonseca and E. Mota. "A survey on fault management in software defined networks". IEEE Communications Surveys, 2017.
• Network failures are widely investigated [9,10]
• But SDN opens new opportunities with fast rerouting [11]
• Idea: use multiple routing configurations to the extreme, with a completely different routing for each demand in response to an SRLG failure situation:
  • optimal routing can be obtained in each failure situation
  • a flow not affected by a failure can be rerouted
  • in the worst case, we may reroute all the flows
Related Work
• The idea of using a set of pre-configured network configurations to achieve failure recovery is not new
• A small set of backup routing configurations is used in the case of a single link or node failure [12,13]
• OpenFlow Fast Failover Group Tables help to quickly select a preconfigured backup path in case of link failure [14]
_______________________
[12] A. Kvalbein et al. "Fast IP network recovery using multiple routing configurations". IEEE INFOCOM, 2006.
[13] A. Kvalbein et al. "Post-failure routing performance with multiple routing configurations". IEEE INFOCOM, 2007.
[14] A. Sgambelluri et al. "OpenFlow-based segment protection in Ethernet networks". Optical Communications and Networking, 2013.
Example (1)
Example (2)
Contributions
• Scalable exact and approximate methods for the global rerouting problem in SDN/NFV-enabled networks with SRLG constraints
• Demonstration of the applicability of our proposed protection method in Mininet
• Discussion of the technical choices to be taken into account by the network operator in order to put our proposed technique into practice
_______________________
[J2] "Design of Robust Programmable Networks with Bandwidth-optimal Failure Recovery Scheme". Computer Networks Journal, 2021.
[C3] "Bandwidth-optimal Failure Recovery Scheme for Robust Programmable Networks". CloudNet, 2019.
[P1] "Poster: design of survivable SDN/NFV-enabled networks with bandwidth-optimal failure recovery". IFIP Networking, 2019.
Problem Statement
• Input: a network G and a set of demands, each with a source, a destination, the total amount of bandwidth, and an ordered sequence of network functions
• Output: a path for each demand and each SRLG failure situation
• Objective: minimize the total amount of bandwidth required to guarantee the protection
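As a toy illustration of the output shape (not the optimization model of the thesis: bandwidth and the NFV chains are ignored, and a hop-count shortest path replaces the optimal routing), one path can be precomputed per demand and per SRLG by removing the failed links and re-running a path search:

```python
from collections import deque

# Hypothetical sketch: one precomputed path per demand and per SRLG failure situation.
def bfs_path(links, src, dst):
    """Shortest path in hops over an undirected link list, or None if disconnected."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    parent, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                       # rebuild the path backwards
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None

def global_rerouting(links, demands, srlgs):
    """Returns {(demand, srlg): path}; the srlg key None is the no-failure case."""
    table = {}
    for d, (s, t) in demands.items():
        for name, failed in [(None, set())] + list(srlgs.items()):
            alive = [l for l in links if l not in failed]
            table[(d, name)] = bfs_path(alive, s, t)
    return table

# Illustrative 4-node network with one demand and one SRLG (both failing links share a risk).
links = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")]
demands = {"d1": ("A", "C")}
srlgs = {"SRLG1": {("A", "B"), ("B", "C")}}
routes = global_rerouting(links, demands, srlgs)
```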
Example
[Figure: demands between H1–H4 traversing a Firewall, an Intrusion Detection System, and a Video Optimizer, with a path for each demand in each SRLG scenario.]
Problem Complexity
• Dimensioning problem for an ISP that wants to minimize resource usage
• An NP-hard problem
Optimization Model
• Problem modeled as an ILP (Integer Linear Program)
• Column Generation as a tool to speed up the computation:
  • decompose the original problem into a restricted master problem and several subproblems (pricing problems)
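A generic restricted master / pricing decomposition of this flavour (illustrative notation, not the exact formulation of the thesis; here a column $p$ is a candidate path, $c_p$ its bandwidth cost, and the constraints $i$ cover the demands in each failure situation) reads:

```latex
\min \sum_{p \in \mathcal{P}'} c_p \, x_p
\quad \text{s.t.} \quad
\sum_{p \in \mathcal{P}'} a_{ip} \, x_p \ge b_i \;\; \forall i,
\qquad x_p \ge 0,
```

where $\mathcal{P}' \subseteq \mathcal{P}$ is the current restricted set of columns. The pricing problem searches for a column with negative reduced cost, $\bar{c}_p = c_p - \sum_i \pi_i \, a_{ip} < 0$, with $\pi$ the dual values of the master; when no such column exists, the linear relaxation of the restricted master is optimal.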
Simulation Results
• Comparison of Global Rerouting (GR), Dedicated Path Protection (DP), and No Protection (NP)
• Global Rerouting requires only between 30% and 60% additional bandwidth
• Dedicated Path Protection requires almost 3 times more bandwidth
• Huge savings in terms of capital expenditure (CAPEX)
Main Issue
After a failure, we may have a complete rerouting of all demands.
Is it possible to put the global rerouting protection scheme into practice?
Emulation Testbed
• Mininet used to emulate the network, and OpenDaylight used as the SDN controller
• Testbed environment:
  • Dual Intel Xeon E5-2630
  • 128 GB of RAM
  • OpenDaylight Oxygen
  • Mininet 2.2.2
Experimental Evaluation
• Possible implementations:
  • Full: re-installation of all the rules in the switches by the controller
  • Delta: installation by the controller of the rules only for the flows which have changed
  • Notify: pre-installation of all the rules in the switches, and a notification sent to the switches whose links are down
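The Notify variant can be mimicked with a set of pre-installed per-failure tables plus a movable "GoTo" pointer (an illustrative sketch with made-up table contents, not the OpenDaylight implementation):

```python
# Hypothetical sketch of the Notify scheme: all per-failure rule tables are
# pre-installed in each switch; recovery only moves the T0 "GoTo" pointer.
class NotifySwitch:
    def __init__(self, tables):
        # tables: {"T1": rules for no failure, "T6": rules for link L5 down, ...}
        self.tables = tables
        self.goto = "T1"          # T0 entry: the no-failure table is active

    def notify(self, table):
        # Controller notification after a failure: a single tiny update.
        self.goto = table

    def forward(self, dst):
        # Lookup goes through T0 to the active table; no rule re-installation.
        return self.tables[self.goto][dst]

s1 = NotifySwitch({"T1": {"d": "S"}, "T6": {"d": "E"}})
before = s1.forward("d")          # next hop before the failure
s1.notify("T6")                   # a link goes down -> switch is notified
after = s1.forward("d")           # rerouted without installing new rules
```

This is why Notify trades larger flow tables for the fastest recovery: the failure-time work is one pointer update per switch instead of a rule push.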
Full Implementation
[Figure: switches S1–S3, each with a rule blob per failure situation (None, L1, ..., L5) stored at the controller; when a link fails, the controller re-installs the full set of rules for that failure situation in every switch.]
Delta Implementation
[Figure: same scenario; when a link fails, the controller installs only the rules that changed with respect to the current configuration of each switch.]
Notification Implementation
[Figure: all the per-failure rule tables T1–T6 are pre-installed in the switches, and table T0 holds a single GoTo entry pointing to the active table; when a link fails, the controller only sends a notification (e.g., T0: GoTo T6) so that lookups are redirected to the table of that failure situation.]
Results
• Tradeoff: the reduction of the recovery time comes at the cost of larger flow tables on the switches
• Small overhead compared with a classical Dedicated Path Protection scheme
• Bandwidth-optimal failure recovery achievable thanks to SDN
• Polska network: 12 switches, 12 hosts, 18 links, 1 controller
Takeaways
• Study of the ISP network dimensioning problem with protection against one Shared Risk Link Group failure
• Optimization model based on a column generation approach
• Evaluation of several implementation options
• Bandwidth-optimal failure recovery achieved thanks to SDN
Conclusions and Future Directions
Conclusions (1)
• Distrinet Tool:
  • proposal of Distrinet, showing its benefits and architectural choices
  • comparison of Distrinet with the main existing tools (Maxinet and Mininet CE), showing its better reliability
• Placement Algorithms for Distributed Network Emulation:
  • comparison of the proposed algorithms with the ones implemented in Maxinet and Mininet CE
  • analysis of the Virtual Network Embedding problem applied to the distributed network emulation scenario
  • analysis of overloaded experiments (CPU, RAM, link)
Conclusions (2)
• Bandwidth-optimal Failure Recovery:
  • ISP network dimensioning problem with protection against one Shared Risk Link Group failure
  • optimization model based on a column generation approach
  • evaluation of several implementation options that make Global Rerouting feasible in SDN networks
Future Research Directions
• Distrinet Tool and Placement Algorithms:
  • synchronization issues when distributing the emulation (short term)
  • models for SFC failures and network slicing
• Bandwidth-optimal Failure Recovery:
  • virtual function failures (short term)
  • dynamic SFC placement
  • Global Rerouting in edge computing and massive IoT scenarios
  • integration with AI/ML techniques for large topologies
Publications

Distrinet Tool
[J1] G. Di Lena, A. Tomassilli, D. Saucez, F. Giroire, T. Turletti, and C. Lac. "Distrinet: a Mininet Implementation for the Cloud". ACM Computer Communication Review, 2021.
[C1] G. Di Lena, A. Tomassilli, D. Saucez, F. Giroire, T. Turletti, and C. Lac. "Mininet on steroids: exploiting the cloud for Mininet performance". In: 2019 IEEE 8th International Conference on Cloud Networking (CloudNet).
[D1] G. Di Lena, A. Tomassilli, D. Saucez, F. Giroire, T. Turletti, and C. Lac. "Demo Proposal - Distrinet: A Mininet Implementation for the Cloud". In: Proceedings of the 15th International Conference on Emerging Networking EXperiments and Technologies (CoNEXT), 2019.
[W1] G. Di Lena, A. Tomassilli, D. Saucez, F. Giroire, T. Turletti, C. Lac, and W. Dabbous. "Distributed Network Experiment Emulation". Global Experimentation for Future Internet - Workshop, Nov. 2019.

Placement Algorithms for Distributed Network Emulation
[JS] G. Di Lena, A. Tomassilli, D. Saucez, F. Giroire, T. Turletti, and C. Lac. "Placement Module for Distributed SDN/NFV Network Emulation". Submitted to Computer Networks Journal, 2021.
[C2] G. Di Lena, A. Tomassilli, F. Giroire, D. Saucez, T. Turletti, and C. Lac. "A Right Placement Makes a Happy Emulator: a Placement Module for Distributed SDN/NFV Emulation". IEEE International Conference on Communications (ICC), 2021.

Bandwidth-optimal Failure Recovery in SDN/NFV Networks
[J2] A. Tomassilli, G. Di Lena, F. Giroire, I. Tahiri, D. Saucez, S. Perennes, T. Turletti, R. Sadykov, F. Vanderbeck, and C. Lac. "Design of Robust Programmable Networks with Bandwidth-optimal Failure Recovery Scheme". Computer Networks Journal, 2021.
[C3] A. Tomassilli, G. Di Lena, F. Giroire, I. Tahiri, D. Saucez, S. Perennes, T. Turletti, R. Sadykov, F. Vanderbeck, and C. Lac. "Bandwidth-optimal Failure Recovery Scheme for Robust Programmable Networks". In: 2019 IEEE 8th International Conference on Cloud Networking (CloudNet), 2019.
[P1] A. Tomassilli, G. Di Lena, F. Giroire, I. Tahiri, D. Saucez, S. Perennes, T. Turletti, R. Sadykov, F. Vanderbeck, and C. Lac. "Poster: design of survivable SDN/NFV-enabled networks with bandwidth-optimal failure recovery". IFIP Networking Conference (IFIP Networking), 2019.
Thank You!