
Source: dpnm.postech.ac.kr/thesis/19/thesis_Jonghwan.pdf (retrieved 2020-08-11)

Doctoral Dissertation

End-to-End Latency Measurement in a

Multi-access Edge Computing Environment

Jonghwan Hyun (현 종 환)

Department of Computer Science and Engineering

Pohang University of Science and Technology

2019


MEC 환경에서의 종단간 지연 측정 방법

End-to-End Latency Measurement in a

Multi-access Edge Computing Environment


End-to-End Latency Measurement in a

Multi-access Edge Computing Environment

by

Jonghwan Hyun

Department of Computer Science and Engineering

Pohang University of Science and Technology

A dissertation submitted to the faculty of the Pohang University of Science and Technology in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science and Engineering

Pohang, Korea

06. 18. 2019

Approved by

James Won-Ki Hong (Signature)

Academic advisor


End-to-End Latency Measurement in a

Multi-access Edge Computing Environment

Jonghwan Hyun

The undersigned have examined this dissertation and hereby certify that it is worthy of acceptance for a doctoral degree from POSTECH

06. 18. 2019

Committee Chair James Won-Ki Hong (Seal)

Member Jae-Hyoung Yoo (Seal)

Member Jong Kim (Seal)

Member Young-Joo Suh (Seal)

Member Hongtaek Ju (Seal)


DCSE 20111088. 현종환 (Jonghwan Hyun). End-to-End Latency Measurement in a Multi-access Edge Computing Environment (MEC 환경에서의 종단간 지연 측정 방법). Department of Computer Science and Engineering, 2019, 74 p. Advisors: James Won-Ki Hong, Prof. Jae-Hyoung Yoo. Text in English.

ABSTRACT

After the worldwide deployment of 4G LTE, 5G was defined by the IMT-2020 standard and commercially launched in April 2019. Compared to 4G LTE, 5G aims at higher performance, including higher data rates, reduced latency, energy saving, cost reduction, higher system capacity, and massive device support. Multi-access Edge Computing (MEC) is one of the key enabling technologies for 5G to achieve low latency: it brings cloud computing resources and application servers down to the edge. Based on this enhanced performance, 5G enables various mission-critical services that are not possible in the current 4G mobile network, such as autonomous driving, factory automation, and VR/AR. Each of these services has its own required latency and data rate and is generally hosted in the MEC to meet those requirements. The latency and data rate requirements can be described as QoS-related metrics in a Service Level Agreement (SLA) between each service provider and its end-users. Network operators need to measure the QoS-related metrics of each service to ensure that they are meeting the SLA in the MEC environment. They also need to prevent SLA violations, and when a violation happens, its root cause must be detected promptly and precisely to alleviate the situation.

To solve this problem, we propose a new latency management method for the 5G MEC environment. As a basic requirement, the proposed method measures the end-to-end latency of each service, including both network latency and server latency, and detects SLA violations. It adopts in-band OAM methods to accurately measure the end-to-end latency of actual data packets with packet-level granularity. It can also detect the root cause of a violation by measuring both network latency and processing latency at servers. It is based on the NFV environment and considers service function chaining: it can measure the hop-by-hop latency of each network device and VNF for each packet, achieving fine-grained latency measurement. We validated the proposed method by conducting several experiments with different scenarios.


Contents

I. Introduction 1

1.1 Motivation and Problem Statement . . . . . . . . . . . . . . . . . . 1

1.1.1 Towards 5G era . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.1.2 The need for measuring accurate end-to-end latency . . . . 2

1.2 Research Goals and Approach . . . . . . . . . . . . . . . . . . . . . 4

1.3 Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

II. Background and Related Work 6

2.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.1.1 P4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.1.2 eXpress Data Path . . . . . . . . . . . . . . . . . . . . . . . 8

2.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.2.1 Measurement approaches . . . . . . . . . . . . . . . . . . . 9

2.2.2 In-band Network Telemetry (INT) . . . . . . . . . . . . . . 10

2.2.3 In-band OAM . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.2.4 INT Solutions and Collectors . . . . . . . . . . . . . . . . . 12

III. Design 15

3.1 INT Management System . . . . . . . . . . . . . . . . . . . . . . . 15

3.1.1 INT-capable Data Plane . . . . . . . . . . . . . . . . . . . . 15

3.1.2 Control plane . . . . . . . . . . . . . . . . . . . . . . . . . . 20


3.1.3 INTCollector . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3.2 Enhancing INT Specification for MEC Environment . . . . . . . . 27

3.2.1 Problems when applying INT to MEC environment . . . . 27

3.3 Mechanism of the end-to-end latency measurement . . . . . . . . . 32

IV. Implementation 36

4.1 Data plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

4.1.1 Ingress Parser . . . . . . . . . . . . . . . . . . . . . . . . . . 36

4.1.2 Ingress Pipeline . . . . . . . . . . . . . . . . . . . . . . . . . 37

4.1.3 Egress Parser . . . . . . . . . . . . . . . . . . . . . . . . . . 39

4.1.4 Egress Pipeline . . . . . . . . . . . . . . . . . . . . . . . . . 40

4.2 Control plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

4.3 INTCollector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4.3.1 Fast path . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4.3.2 Normal path . . . . . . . . . . . . . . . . . . . . . . . . . . 47

4.3.3 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

V. Validation 49

5.1 Testbed setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

5.2 INT processing overhead in the data plane . . . . . . . . . . . . . . 50

5.3 Scenario 1: Comparison with active measurement method . . . . . 51

5.4 Scenario 2: Detecting link congestion . . . . . . . . . . . . . . . . . 54

5.5 Scenario 3: Detecting overloaded host . . . . . . . . . . . . . . . . 56

5.6 Scenario 4: Measuring in-server packet forwarding . . . . . . . . . 58


VI. Conclusion 62

6.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

6.2 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

6.3 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

6.3.1 Improving packet processing performance of BMv2 software switch . . . . . . 64

6.3.2 Design and implement time synchronization algorithm using INT . . . . . . 65

6.3.3 Enhance the collector logic to support bigger hops . . . . . 65

Summary (in Korean) 66

References 68


List of Tables

1.1 Typical latency and data rate requirements for different services in 5G [1] . . . . . . 3

2.1 INT solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

3.1 INT metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24


List of Figures

2.1 P4 switch architecture [2] . . . . . . . . . . . . . . . . . . . . . . . 7

2.2 INT working process . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.3 INT header format [3] . . . . . . . . . . . . . . . . . . . . . . . . . 11

3.1 INT Service Architecture . . . . . . . . . . . . . . . . . . . . . . . 21

3.2 INTCollector Architecture . . . . . . . . . . . . . . . . . . . . . . . 23

3.3 A problem when applying INT to MEC . . . . . . . . . . . . . . . 28

3.4 Another problem when applying INT to MEC . . . . . . . . . . . . 29

3.5 INT header with seq no field . . . . . . . . . . . . . . . . . . . . . 30

3.6 INT anchor switch operations . . . . . . . . . . . . . . . . . . . . . 31

3.7 End-to-end latency measurement in 5G using INT . . . . . . . . . 32

3.8 Telemetry report aggregation process . . . . . . . . . . . . . . . . . 34

3.9 Mechanism to measure server latency using proposed method . . . 35

4.1 Ingress Parser Implementation . . . . . . . . . . . . . . . . . . . . 37

4.2 Ingress Pipeline Implementation . . . . . . . . . . . . . . . . . . . 38

4.3 Egress Parser Implementation . . . . . . . . . . . . . . . . . . . . . 40

4.4 Egress Pipeline Implementation . . . . . . . . . . . . . . . . . . . . 41

4.5 ONOS Subsystems with INT Control Plane Implementation . . . . 43

4.6 INT management application GUI . . . . . . . . . . . . . . . . . . . 44

4.7 INTCollector implementation . . . . . . . . . . . . . . . . . . . . . 45


4.8 INTCollector parsing procedure . . . . . . . . . . . . . . . . . . . . 46

5.1 Testbed setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

5.2 Latency in the data plane with and without INT processing . . . . 51

5.3 Service function chain configuration . . . . . . . . . . . . . . . . . 51

5.4 End-to-end latency measurement result in Scenario 1 . . . . . . . . 52

5.5 End-to-end latency measurement result in Scenario 1 (excluding latency from VNFs) . . . . . . 53

5.6 Service function chain configuration in Scenario 2 . . . . . . . . . . 54

5.7 End-to-end latency measurement result in Scenario 2 . . . . . . . . 55

5.8 End-to-end latency measurement result in Scenario 2 (excluding latency from VNFs) . . . . . . 56

5.9 Switch latency measurement result in Scenario 2 . . . . . . . . . . 57

5.10 Queue length in each port on the path in Scenario 2 . . . . . . . . 57

5.11 Topology and packet path in Scenario 3 . . . . . . . . . . . . . . . 58

5.12 End-to-end latency measurement result in Scenario 3 . . . . . . . . 58

5.13 End-to-end latency measurement result in Scenario 3 (excluding latency from VNFs) . . . . . . 59

5.14 Topology and packet path in Scenario 4 . . . . . . . . . . . . . . . 59

5.15 End-to-end latency measurement log in Scenario 4 . . . . . . . . . 60

5.16 End-to-end latency measurement result in Scenario 4 . . . . . . . . 60


I. Introduction

1.1 Motivation and Problem Statement

1.1.1 Towards 5G era

After the worldwide deployment of 4G LTE, 5G has been standardized to meet the IMT-2020 requirements set by the ITU-R [4] and was commercially launched in April 2019. Compared to 4G LTE, 5G aims at higher performance, including higher data rates, reduced latency, energy saving, cost reduction, higher system capacity, and massive device connectivity. 5G adopts many new technologies to meet these performance targets: mmWave [5], massive MIMO [6], SDN (Software-Defined Networking) [7], NFV (Network Function Virtualization) [8], 5G SON [9], and MEC (Multi-access Edge Computing) [10].

5G has three major use case classes: enhanced mobile broadband (eMBB [11], up to 20 Gbps peak downlink data rate), ultra-reliable low latency communication (uRLLC [12], less than 1 ms), and massive machine-type communication (mMTC [13], 1M devices per km²).

Many 5G use cases, such as autonomous driving, factory automation, and VR/AR, require uRLLC. MEC is one of the key technologies for 5G to achieve the low latency needed for uRLLC. The traditional cloud computing architecture in the mobile network cannot meet this requirement: application servers are hosted in remote data centers, so the distance between the radio access network (RAN) and the centralized data center adds delay. By bringing the cloud down to the edge, latency and network congestion can be drastically reduced while improving application performance.

There are several options for where to deploy MEC hosts, and it is the role of the UPF (User Plane Function) to steer user plane traffic to the target MEC applications in the data network. The network operator decides where to place the physical computing resources based on business and technical parameters such as site availability, CAPEX and OPEX, and the supported applications and their requirements. The ETSI white paper suggests four feasible physical deployment options: collocated with the base station, collocated with a transmission node, collocated with a network aggregation point, or collocated with the core network functions (i.e., in the same data center) [10]. Considering performance alone, collocating MEC hosts with the base station is ideally the best option. However, MEC needs to host various services and applications, so one or two servers per site will not be sufficient. It is therefore infeasible to collocate MEC hosts with base stations at the early stage of 5G deployment. Instead, deploying MEC hosts at network aggregation points, where each host covers several base stations, is a viable starting point.

1.1.2 The need for measuring accurate end-to-end latency

With higher data rates and reduced latency, 5G enables various mission-critical services that are not possible in the current 4G mobile network. Table 1.1 lists typical latency and data rate requirements for different mission-critical services. Each service has its own required latency and data rate, and those parameters are specified as QoS-related metrics in a Service Level Agreement (SLA) [14]. Network operators need to measure those QoS-related metrics for each service to ensure that they are meeting the SLA. They also need to avoid SLA violations, and when a violation happens, its root cause must be discovered promptly and precisely to alleviate the situation.

Table 1.1: Typical latency and data rate requirements for different services in 5G [1]

Use case                        Latency        Data rate
Factory Automation              0.25 - 10 ms   1 Mbps
Intelligent Transport Systems   10 - 100 ms    10 - 700 Mbps
Robotics and Telepresence       1 ms           100 Mbps
Virtual Reality (VR)            1 ms           1 Gbps
Health Care                     1 - 10 ms      100 Mbps
Serious Gaming                  1 ms           1 Gbps
Smart Grid                      1 - 20 ms      10 - 1500 Kbps
Education and Culture           5 - 10 ms      1 Gbps
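For example, an operator-side check of measured end-to-end latency against such per-service bounds can be sketched as follows (a minimal illustration: the service keys, the enforced upper bounds taken from Table 1.1, and the helper function are all hypothetical, not part of the proposed system):

```python
# Hypothetical sketch: flag latency SLA violations against per-service
# requirements mirroring Table 1.1 (worst-case upper bound per service).
SLA_LATENCY_MS = {
    "factory_automation": 10.0,  # 0.25 - 10 ms: enforce the upper bound
    "vr": 1.0,
    "health_care": 10.0,
    "smart_grid": 20.0,
}

def violates_sla(service: str, measured_latency_ms: float) -> bool:
    """Return True if a measured end-to-end latency exceeds the service's SLA bound."""
    bound = SLA_LATENCY_MS.get(service)
    if bound is None:
        raise KeyError(f"no latency SLA registered for service {service!r}")
    return measured_latency_ms > bound

# A VR packet measured at 1.8 ms violates its 1 ms latency SLA.
print(violates_sla("vr", 1.8))        # True
print(violates_sla("smart_grid", 5))  # False
```

In practice such a check would run in the collector, over latencies measured per packet rather than per service aggregate.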

In this thesis, we propose a new latency management method for the 5G MEC environment. The proposed method measures the end-to-end latency of each service, including both network latency and server latency, and detects SLA violations. Since MEC is based on the NFV environment [10] and considers service function chaining, the proposed method can measure the hop-by-hop latency of each network device and VNF (Virtual Network Function) [15] for each packet, achieving fine-grained latency measurement.


1.2 Research Goals and Approach

In this section, the research problems and goals are briefly listed. This thesis tries to answer the following key questions:

• How can we measure the E2E (end-to-end) latency of different services with different SLAs?

• How can we measure the hop-by-hop latency of each service packet?

• How can we detect latency SLA violations?

• How can we find the root cause of an SLA violation?

To answer these questions, this thesis develops an end-to-end latency management method for the 5G environment with the following features:

• Monitor the E2E latency of services

• Support hop-by-hop latency measurement, including SFCs (Service Function Chains) [16]

• Provide hop-by-hop latency measurement data to detect flows that violate the latency SLA and to discover the root cause of detected violations

1.3 Organization

This thesis is organized as follows: Chapter II introduces the background and three different measurement approaches with their pros and cons. Chapter III describes the proposed end-to-end latency measurement method for the MEC environment. Chapter IV explains the implementation details for realizing the proposed measurement method. Chapter V presents the validation results of the proposed method. Chapter VI concludes this thesis and states future work.


II. Background and Related Work

In this chapter, we first present the background relevant to our work: P4 and XDP (eXpress Data Path). Second, we survey three different approaches to network monitoring and show that the in-band measurement method is the most appropriate technique for our approach. After that, we present two different in-band measurement methods, In-band Network Telemetry (INT) and iOAM. Finally, we survey related work on existing INT implementations and collectors.

2.1 Background

2.1.1 P4

P4 [17] is a high-level language for programming protocol-independent packet processors; it is used to program how packets are processed in the data path, e.g., in P4-supported switches. In traditional switches, the specification dictates what a switch can and cannot do. For example, an OpenFlow [18] switch has a fixed set of functions defined in the OpenFlow specifications [19]. This leads to very complex switch designs as the OpenFlow specification grows and supports more network protocols. P4 solves this problem by making switch data paths directly programmable, which allows programmers to decide how a switch processes packets using custom actions and packet formats.

Figure 2.1 shows one common architecture of a P4 switch. The general procedure for a pipeline is as follows:


Figure 2.1: P4 switch architecture [2]

1. The Parser extracts bits from packet headers (e.g., Ethernet, IPv4, or TCP).

2. Packets pass through the Ingress pipeline for packet header modifications and egress port selection.

3. Packets are pushed to Queues/Buffers for queueing, packet scheduling, and packet replication.

4. Packets pass through the Egress pipeline for further modifications.

5. Packets pass through the Deparser for serialization and then exit the switch.

P4 is designed around two key ideas: (1) match+action tables for simplification and compatibility, and (2) pipeline processing for improved throughput. The match/action tables reside in the Ingress and Egress pipelines. P4 packet processing can be described as a flow of packets running through these tables. A table compares the matching fields of its flow entries with packet fields or metadata to determine the appropriate action (adding, removing, or changing fields, or passing through the table without any action). After the modification, the packet passes to the next table for the next match+action process.

P4 defines two types of metadata for each packet; these carry intermediate data generated during the execution of a P4 program. Standard metadata is a common metadata structure defined for every packet in the programmable data plane. User-defined metadata is custom metadata defined in a P4 program.
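The match+action flow described above can be sketched as a toy Python model (an illustration of the concept only, not P4 code; the table, field names, and actions are invented for the example):

```python
# Toy model of a P4-style match+action table: each entry matches on header
# fields and applies an action that rewrites fields or metadata.
from typing import Callable

Packet = dict  # header fields and metadata as a flat dict (illustrative)

class MatchActionTable:
    def __init__(self) -> None:
        self.entries: list[tuple[dict, Callable[[Packet], None]]] = []

    def add_entry(self, match: dict, action: Callable[[Packet], None]) -> None:
        self.entries.append((match, action))

    def apply(self, pkt: Packet) -> None:
        for match, action in self.entries:
            if all(pkt.get(k) == v for k, v in match.items()):
                action(pkt)  # first matching entry wins
                return
        # no entry matched: the packet passes through unmodified

def set_egress(port: int) -> Callable[[Packet], None]:
    return lambda pkt: pkt.__setitem__("egress_port", port)

# An ingress pipeline would chain several such tables, as in Figure 2.1.
forwarding = MatchActionTable()
forwarding.add_entry({"ipv4_dst": "10.0.0.2"}, set_egress(2))

pkt = {"ipv4_dst": "10.0.0.2", "ttl": 64}
forwarding.apply(pkt)
print(pkt["egress_port"])  # 2
```

A real P4 target compiles tables like this into hardware or software pipeline stages; the controller populates the entries at runtime.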

2.1.2 eXpress Data Path

eXpress Data Path (XDP) [20] is a kernel framework that allows packet processing inside the Linux kernel. Unlike kernel modules, which can affect the stability of the kernel, XDP is safe and secure. XDP programs process packets immediately after they arrive at a NIC in order to achieve high throughput. In this way, XDP avoids the cost of kernel networking stack processing and also avoids kernel/user-space switching [21].

However, XDP has a limitation: its programming capability is restricted to ensure the safety of the kernel. One such restriction is the limited number of instructions. Thus, XDP should be used only for tasks that are simple but require real-time processing.

Besides XDP, there are other frameworks for fast packet processing, such as the Data Plane Development Kit (DPDK) [22]. However, XDP has several advantages over DPDK:

• XDP does not require a dedicated CPU core for packet polling.

• XDP does not need to allocate large pages.

• XDP can work in conjunction with the kernel networking stack.

• XDP does not require special hardware NICs with XDP-supported drivers (although it does require them for higher performance).

2.2 Related Work

2.2.1 Measurement approaches

Operations, Administration and Maintenance (OAM) is a well-known term that refers to a toolset for fault detection and isolation and for performance measurement [23]. There are three different types of measurement approaches: passive measurement, active measurement, and in-band measurement.

Passive measurement uses passive probes to monitor performance metrics. The monitored metrics include packet and byte counters, queue status, and latency statistics. It is based solely on observations of an undisturbed and unmodified packet stream of interest, which means that passive measurement must not add, change, or remove packets, fields, or field values anywhere along the path. While it can monitor the performance of live user traffic, it does not give network-wide information about network paths or dropped packets [24].

Active measurement carries OAM and telemetry information within packets dedicated to OAM. Examples of active measurement are ping and traceroute. While it is not affected by data traffic and can explicitly control the generation of packets for measurement scenarios, it cannot capture the exact behavior of the network (e.g., a packet scheduling algorithm that leverages the 5-tuple or MAC address), and it generates additional traffic in the network.

In-band measurement embeds telemetry information into a data packet as it traverses the network. The embedded telemetry information is associated with the packet that carries it. Moreover, it does not require extra packets to be sent and hence does not change the packet traffic mix within the network. However, it increases the data packet size, which in turn increases traffic volume and can cause fragmentation. IPv4 route recording [25], in-situ OAM [26], and INT [27] are examples of in-band measurement.

In this thesis, the in-band measurement approach is adopted to measure the end-to-end latency of data packets, since it associates each packet with its measurement data and thus allows us to accurately measure the latency of each packet.

2.2.2 In-band Network Telemetry (INT)

In-band Network Telemetry (INT) [3] is a network monitoring framework designed to collect and report network state directly from the data plane. At every switch on the flow path, INT attaches information to the packets that pass through the switch (Figure 2.2). At the penultimate switch (the sink switch in INT), the information is extracted, encapsulated in a report packet called a telemetry report [28], and then sent to a collector. The information can be any data provided by the switches, such as timestamps, hop latency, or link utilization. INT parameters, such as which information to collect, are included in the INT header as "telemetry instructions". Usually, the telemetry instructions are controlled by a central controller. The structure of the INT report packet and other details of INT can be found in the specification [3]. It provides various encapsulation options: VXLAN-GPE, NSH, GRE, Geneve, and INT over L4.

Figure 2.2: INT working process

Figure 2.3: INT header format [3]

INT provides real-time, fine-grained, end-to-end network monitoring. INT enables many advanced applications such as network troubleshooting, advanced congestion control, advanced routing, and network data plane verification [3]. INT performs all monitoring functions in the data plane, so a network with INT enabled should still be able to run at full line rate. Usually, INT is implemented on a programmable data plane, which means the network functions (switches) can be re-programmed. When a packet with an INT header is forwarded to a legacy switch that cannot recognize the INT header, the INT header portion is ignored: since the switch does not parse beyond the L4 header, where the INT header is located, the packet is treated as a normal packet without the switch knowing of the existence of the INT header.
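The INT working process in Figure 2.2 can be modeled in a few lines of Python (a simplified sketch, not the on-wire format; the metadata field names and the two-hop timestamps are illustrative):

```python
# Simplified model of INT: each transit switch pushes its metadata onto the
# packet's INT stack; the sink pops the stack and derives per-hop latency.
def transit_hop(packet, switch_id, ingress_ts_us, egress_ts_us):
    """A switch appends the metadata requested by the telemetry instruction."""
    packet["int_stack"].append({
        "switch_id": switch_id,
        "ingress_ts_us": ingress_ts_us,
        "egress_ts_us": egress_ts_us,
    })

def sink_report(packet):
    """The sink extracts the INT stack into (switch, hop latency) pairs
    for the telemetry report sent to the collector."""
    return [
        (md["switch_id"], md["egress_ts_us"] - md["ingress_ts_us"])
        for md in packet["int_stack"]
    ]

pkt = {"payload": b"...", "int_stack": []}
transit_hop(pkt, switch_id=1, ingress_ts_us=100, egress_ts_us=130)  # 30 us hop
transit_hop(pkt, switch_id=2, ingress_ts_us=145, egress_ts_us=160)  # 15 us hop
print(sink_report(pkt))  # [(1, 30), (2, 15)]
```

In the real framework the stack is carried in the INT header after L4, and the sink also strips it before delivering the packet to the end host.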

2.2.3 In-band OAM

In-band OAM (also known as in-situ OAM) [26] is another in-band OAM method, developed by the IETF In-situ OAM working group. In-situ OAM also provides real-time telemetry of individual packets and flows. The telemetry information is embedded within the data packets as part of an existing or additional header. It provides various encapsulation options: IPv6, VXLAN-GPE, NSH, GRE, and Geneve. The collected data can be exported via IPFIX/NetFlow/Kafka. Data fields include node ID, ingress/egress interface ID, timestamp, and queue length.
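As a rough picture of how such per-hop trace data could be laid out and parsed, here is a Python sketch using the standard struct module (the field widths and ordering are assumptions made for illustration, not the exact encoding defined by the IETF specification):

```python
import struct

# Illustrative per-hop in-situ OAM trace entry: node ID, ingress/egress
# interface IDs, queue length, and timestamp (widths chosen for the example).
HOP_FMT = "!IHHIQ"  # node_id, if_in, if_out, queue_len, timestamp_ns

def pack_hop(node_id, if_in, if_out, queue_len, ts_ns):
    """Serialize one hop's trace fields into a fixed-size entry."""
    return struct.pack(HOP_FMT, node_id, if_in, if_out, queue_len, ts_ns)

def unpack_trace(blob):
    """Split a concatenated trace back into per-hop tuples."""
    size = struct.calcsize(HOP_FMT)
    return [struct.unpack(HOP_FMT, blob[i:i + size])
            for i in range(0, len(blob), size)]

trace = pack_hop(1, 10, 11, 0, 1_000) + pack_hop(2, 3, 4, 7, 2_500)
print(unpack_trace(trace))  # [(1, 10, 11, 0, 1000), (2, 3, 4, 7, 2500)]
```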

2.2.4 INT Solutions and Collectors

IntMon [29] focuses on INT implementations in the data plane and an INT controller service in the ONOS (Open Network Operating System) controller. A simple collector for INT reports is included in IntMon. However, it cannot query the history of network information because the IntMon collector does not store historical data. Moreover, it has a low processing rate and limited scalability: the IntMon collector is implemented as an ONOS application, so every single report packet must be processed in the ONOS core.

The Prometheus INT exporter [30] is another collector for INT. For every INT report packet, the exporter extracts network information into metrics and pushes the metrics to a gateway. A central Prometheus database server periodically scrapes the latest data from the gateway. The Prometheus INT exporter has two problems. First, a high overhead is incurred for processing and sending data to the gateway for every INT report. Second, the Prometheus database stores only the latest data from the gateway at each scrape. All other data are discarded, even though network events, such as short traffic bursts, can occur between two scrapes.

Netcope [31] implemented a 100G INT sink on an FPGA that strips INT headers and data and exports them to the Flowmon Collector. However, only two types of INT data (switch ID and timestamp) are allowed, and the INT data stack length is limited to two.

There are also commercial solutions from Barefoot [32] and Broadcom [33].

Table 2.1 summarizes the characteristics of these INT solutions and the proposed architecture.


Table 2.1: INT solutions

Feature                                        INT Functions     Target              Collector            Open Source
IntMon [29]                                    Src/Sink/Transit  P4 devices          ONOS                 O
Prometheus INT Collector [30]                  N/A               P4 devices          Prometheus           X
Barefoot Deep Insight [32]                     Src/Sink/Transit  P4 devices          Deep Insight         X
NETCOPE 100G INT [31]                          Sink              FPGA                Flowmon              X
Broadcom In-band Telemetry in Trident 3 [33]   Src/Sink/Transit  Broadcom Trident 3  BroadView Analytics  X
Proposed Architecture                          Src/Sink/Transit  P4 devices          INTCollector         O

III. Design

In this chapter, we first present the general INT management system design,

including an INT-capable data plane, a control plane, and a collector. We

then describe how we enhanced the existing INT specification to be adapted

to the MEC environment. Finally, we present the design of the end-to-end latency

measurement method for the MEC environment.

3.1 INT Management System

3.1.1 INT-capable Data Plane

The design of our INT-capable data plane is based on the P4 abstract for-

warding model [17]. Regarding the architecture, the processing flow of an INT-

capable data plane contains three principal parts: Parser, Ingress pipeline and

Egress pipeline. The Parser parses packet headers, including INT headers. The

Ingress pipeline performs packet forwarding and populates INT metadata for a

packet. Finally, the Egress pipeline adds INT data to the packet.

We defined two roles for an INT-capable data plane based on the INT speci-

fication [3]: source/sink and transit. A transit switch performs the INT operations:

parsing INT headers and adding the INT data specified in the header. A source/sink

switch includes the capability of a transit switch; additionally, it works as a first-

hop and last-hop switch. A source/sink switch adds an INT header to a packet

coming from a host (including a VNF) and removes the INT header and INT data from a packet

that is going to be forwarded to a host. It also generates and sends telemetry

report packets [28] to the collector. The INT management system assigns a role

to each switch and populates corresponding table entries.

INT metadata are defined as user-defined metadata for INT processing.

They consist of a switch_id, a source flag, and a sink flag. switch_id is assigned by

the controller and is fed as one of the INT data items defined in the specification.

The source and sink flags are 1-bit fields that indicate whether a packet is being

forwarded in the first-hop or last-hop switch, respectively. The mirror_id field is

defined in the standard metadata and is used as an identifier of a cloned packet

in the data plane. By assigning a specific value to that field, the INT-capable

data plane identifies cloned INT packets during INT processing. This behavior is

described in detail in the following sections.

Parser

The main role of the Parser is identifying the existence of INT headers in

incoming packets. An INT header can be identified by the DSCP value in the IPv4

header, since INT over TCP/UDP encapsulation changes the DSCP value to a

predefined value. For packets with an INT header, the Parser parses the INT header and data.
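As a rough illustration of this check (outside the P4 data plane), the DSCP test can be sketched in Python; the specific DSCP value is an assumption for illustration, not a value mandated by the specification:

```python
# Sketch of the parser's INT-presence check: INT over TCP/UDP marks packets
# with a predefined DSCP value, so the parser only needs to read the IPv4 TOS byte.
INT_DSCP = 0x17  # hypothetical predefined value; a real deployment chooses its own

def has_int_header(ipv4_header: bytes) -> bool:
    """Return True if the packet's DSCP field matches the predefined INT marker."""
    tos = ipv4_header[1]  # byte 1 of the IPv4 header: DSCP (upper 6 bits) + ECN (lower 2 bits)
    return (tos >> 2) == INT_DSCP
```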

Ingress pipeline

When packet parsing is completed, packets are fed to the Ingress pipeline for

packet forwarding, i.e., egress port selection. Basic forwarding Match/Action

tables (e.g., L2 switching and L3 routing) are implemented in the Ingress pipeline.

INT processing in the Ingress pipeline starts after a packet passes through

forwarding tables so that it can use egress port information. INT sink operation

is executed if a host is connected to the egress port.

Algorithm 1 shows the processing algorithm in the Ingress pipeline. INT-related

packet metadata (pkt_meta) are populated after a packet passes through the forward-

ing tables. First, the Ingress pipeline determines whether the switch is a source (first-

hop switch in the path) or a sink (last-hop switch in the path), and sets the

corresponding flag in the metadata (source or sink, respectively). Since the

control plane has network topology information, it identifies the ports that hosts are

connected to and populates the source and sink tables accordingly. As a packet

passes through those tables, the source flag is set if the packet came from such a port,

and the sink flag is set if the packet is going to be forwarded to such a port. In

addition, if the switch is a sink switch, the packet is cloned and mirror_id in the

metadata is populated with the ID given by the controller. After completing the

Ingress pipeline, packets are sent to Queues/Buffers and then to the Egress pipeline.

Egress pipeline

The Egress pipeline adds an INT header and INT data to a packet, mainly

because many INT data items (e.g., hop latency, queue occupancy) become available

only in Egress. First, Egress checks the source flag in the packet metadata. If the

source flag is set, it checks whether the packet header matches an entry in the

watchlist tables, which determine which data packets to monitor by matching

packet header fields [28]. If the packet header matches an entry in the table, an

INT header is inserted into the packet with parameters sent from the control plane.

Second, if an INT header exists in the packet header, the INT information of the switch is attached to

Algorithm 1 INT processing in Ingress pipeline

Require: pkt_hdr - parsed packet header, mirror_id - ID of cloned INT packet
Ensure: pkt_hdr, pkt_meta

function Ingress(pkt_hdr)
    Apply basic forwarding table                      ▷ Set egress port
    if pkt_hdr matches source_table then
        Set pkt_meta.source flag                      ▷ The packet is coming from a host
    end if
    if pkt_hdr matches sink_table then
        Set pkt_meta.sink flag                        ▷ The packet is going to be forwarded to a host
    end if
    if pkt_meta.sink then
        Clone pkt and set mirror_id to the cloned pkt ▷ Using a pre-defined mirror_id
    end if
end function

Algorithm 2 INT processing in Egress pipeline

Require: pkt_hdr, pkt_meta, mirror_id
Ensure: pkt_hdr

function Egress(pkt_hdr, pkt_meta)
    if pkt_meta.source and pkt_hdr matches watchlist_table then
        Add INT header                                ▷ The packet is going to be monitored
    end if
    if pkt_hdr has INT header then
        Add INT data
        if pkt_hdr is cloned and pkt_meta.mirror_id is mirror_id then
            Encapsulate pkt into a telemetry report   ▷ Outer ETH/IP/UDP header is added
        end if
        if pkt_meta.sink then
            Remove INT header and data from pkt_hdr
            Restore original header
        end if
    end if
end function

the end of the INT data. The type of INT data is determined by the value of the

instruction field in the INT header. Finally, the packet is sent to the Deparser and

then out of the switch.

If a switch is the last hop switch before the destination host, an entire packet

with its INT data is cloned. INT data of the sink switch is attached to the cloned

packet and is encapsulated within a telemetry report header. The encapsulated

packet is forwarded to an external collector. To restore the original packet,

the INT header and INT data are removed from the packet, which is then

forwarded to the destination. In this way, the INT monitoring process is transparent

to end hosts.

The Ingress and Egress pipelines are composed of Match/Action tables. Although

P4 supports if-else branching, most of the decision logic is implemented

with tables for the sake of simplicity. Entries in those tables are populated by

the control plane right after the switch connects to it.

3.1.2 Control plane

The INT management system (Figure 3.1) controls INT-related behavior of

INT-capable switches. INT-related behavior includes installing target flows to

monitor, specifying types of INT data to collect and configuring the collector

information for sink switches. The proposed system is composed of INTIntent,

INT Service, an INT driver interface, and a control application.

• INTIntent is a network-level abstraction that carries information for controlling

INT-related behavior. With this abstraction, an application can

Figure 3.1: INT Service Architecture
easily populate the tables of all INT-capable switches in the network with

monitoring rules, without knowing anything about the network it is monitoring,

such as the network topology or the data plane structure of each switch. INTIntent

consists of traffic slices (as a 5-tuple) and network states to monitor

(defined in the INT specification).

• INT Service is an implementation of a pipeline-agnostic northbound API.

It orchestrates generation and collection of INT data. The API includes

functions for starting/stopping INT, adding/removing INTIntent, and setting/getting

INTConfig. INT Service decomposes INTIntent into flow entries

and populates those entries in the INT-capable pipelines. It also configures

INT-related parameters, such as the external collector IP/port. It assigns

a role to each INT-capable switch, either src/sink or transit, and populates

flow rules according to the role.

• INT driver interface defines a common interface for managing heterogeneous

INT-capable switches. It defines the common INT-related behavior

that all INT-capable switches should support, such as adding target flows

to monitor or configuring the collector IP address and port number. Each

INT-capable switch implements the driver interface according to its own

data plane structure.

• INT control application is a web GUI. It is used to specify which flows

and which network state to monitor. This information is translated into an

INTIntent which is then sent to the INT Service.
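To make the abstraction concrete, the shape of an INTIntent can be sketched as a plain data structure; the field names here are illustrative, not the exact API of the implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    """Traffic slice: the flow to monitor."""
    src_ip: str
    dst_ip: str
    proto: int
    src_port: int
    dst_port: int

@dataclass(frozen=True)
class INTIntent:
    """Network-level monitoring request, decomposed by INT Service into table entries."""
    slice: FiveTuple
    # network states to collect, drawn from the INT specification
    metadata: tuple = ("switch_id", "hop_latency", "queue_occupancy")

intent = INTIntent(FiveTuple("10.0.0.1", "10.0.0.2", 17, 5000, 6000))
```

An application only fills in this structure; INT Service translates it into per-switch table entries, so the application never touches topology or pipeline details.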

3.1.3 INTCollector

The role of INTCollector is to collect INT data from the INT-capable data plane

in the form of telemetry reports. It also extracts and filters useful network

information from the collected data into INT metric values, and then stores those

metric values in a database. The SDN controller can query network information from

the database and use it to understand and control the network.

Figure 3.2 presents the design of INTCollector. The rest of this section

explains in detail how INTCollector works.

Figure 3.2: INTCollector Architecture
Metrics

A metric is a data structure to represent network information. Since storing

raw INT data is inefficient for processing and querying in databases, the data in

telemetry reports are re-organized and defined as a metric with a metric key and

a metric value. A metric key is a tuple of (IDs, measurement) or (ids, m).

• IDs: A tuple of one or several characteristics of flows, networks, or switches

that do not change with time (e.g., a tuple of switch ID = 2 and egress port

ID = 1).

• measurement : The type of INT data (e.g., switch ID and egress port ID

together identify a network link and utilization of this link is a measurement

that changes over time).

• metric value: the value of a measurement of one metric key at a certain

time. For example, hop latency of (sw id = 4, queue id = 1) is 1.2 ms at

the time point of 10 s.
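The key/value scheme above can be sketched as a small in-memory store; the names are illustrative:

```python
# Metric store sketch: a metric key is ((ids...), measurement); each key maps to a
# (timestamp, metric value) pair, mirroring the example in the text.
metrics: dict = {}

def record(ids: tuple, measurement: str, t: float, value: float) -> None:
    """Store the metric value of (ids, measurement) observed at time t."""
    metrics[(ids, measurement)] = (t, value)

# hop latency of (sw_id=4, queue_id=1) is 1.2 ms at the time point of 10 s
record((4, 1), "hop_latency", 10.0, 1.2)
```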

Metrics can be divided into three types: flow metric, switch metric, and flow-switch

metric. Flow metrics include values that are related to flow identification

and timestamps. Switch metrics include values that are related to the identification

of a switch or switch components and timestamps. Flow-switch metrics

include values that are related to the identification of both a flow and a switch or

switch components, and timestamps.

The INT specification [3] defines nine fields of INT data: four identification

fields (switch ID, ingress port ID, egress port ID, and queue ID), a time field

(timestamp) and four measurement fields (hop latency, queue occupancy, queue

congestion and link utilization). From these nine fields, we define six metrics

(Table 3.1).

Table 3.1: INT metrics

IDs                | Measurement          | Metric Type
<5-tuple>          | Flow path            | Flow
<5-tuple>          | Flow latency         | Flow
<5-tuple + sw_id>  | Flow per-hop latency | Flow-switch
<sw_id, queue_id>  | Queue occupancy      | Switch
<sw_id, queue_id>  | Queue congestion     | Switch
<sw_id, egress_id> | Link utilization     | Switch

Processing flows

INTCollector has two processing paths: a fast path and a normal path. The

fast path processes every INT report. Thus, the fast path is required to achieve

a high packet processing rate. The normal path processes events sent from the

fast path and stores INT metric values in the database.

In the fast path, INT telemetry report packets are passed to the INT parser,

which deserializes the packets to extract the INT header and INT data. The event

detector converts INT data into network metric values and detects network events

by comparing them with the latest values stored in the metric tables. If a network

event is detected, it is sent to the event processor in the normal path. Finally,

the metric tables store the latest metric value for each metric key according to the

metric type.

In the normal path, the event processor processes network events sent from

the fast path. The Exporter gets metric values from two sources: from the event

processor in the normal path and from tables in the fast path. It then sends these

values to the database.

Event detection mechanism

The event detector helps to detect network events from INT data. Most of

the time, the INT data from several consecutive telemetry report packets will

not change significantly (e.g., hop latency of a port in a switch may remain the

same or change very little over several consecutive telemetry reports). Instead

of storing network metrics for each report, the event detector filters important

network events to reduce the number of metric values that need to be stored.

We define an event as INT data that contains either a new metric key or a

significant change in the value of an existing metric. Let M be the set of

all (IDs, measurement) or (ids, m) pairs in the collector, and let V_{ids,m}(t) be the

metric value of (ids, m) at time t. A new event happens when at least one of the

following conditions occurs:

• There is a new (ids, m) ∉ M. For example, a new flow generates events for

flow path, flow latency, and flow per-hop latency.

• ∃(ids, m) ∈ M which satisfies |V_{ids,m}(t2) − V_{ids,m}(t1)| > T(m), where t1 and

t2 are timestamps with t2 > t1 and T(m) is a threshold for the measurement

m. For example, a significant increase in the hop latency of (switch 1, port 2)

generates an event.

IDs uniquely identify a certain metric value among a set of measurement

results with the same measurement type.

Using a threshold for event detection significantly reduces the amount of

data to be stored in the database, with a trade-off in accuracy. With a smaller

threshold T, more accurate metric values are collected and event detection

becomes more sensitive to value changes. With a larger threshold, the number

of events and metric values is reduced, but so is the accuracy.
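The two event conditions above can be sketched in Python (outside the actual fast path); the threshold values here are illustrative assumptions:

```python
latest: dict = {}             # (ids, m) -> last stored metric value V_{ids,m}
T = {"hop_latency": 0.5}      # per-measurement thresholds T(m); illustrative values

def detect_event(ids: tuple, m: str, value: float) -> bool:
    """Return True when a report constitutes an event worth exporting."""
    key = (ids, m)
    if key not in latest:                         # condition 1: new (ids, m) not in M
        latest[key] = value
        return True
    if abs(value - latest[key]) > T.get(m, 0.0):  # condition 2: |V(t2) - V(t1)| > T(m)
        latest[key] = value
        return True
    return False                                  # small change: filtered out
```

Note that the stored value is updated only on an event, so small drifts accumulate until they cross the threshold rather than being silently absorbed.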

Exporter

The Exporter sends metric values to the database in two different ways: it

either periodically pushes the latest values from metric tables or pushes the values

when a new event happens. The Exporter pushes metric values into the database

periodically for two reasons: to update the live status of a metric (especially for flow

and flow-switch metrics) and to keep the latest value up to date even when no

network event occurs.

Database

The database stores historical INT metric values. A network controller can

query network information from the database. The database should support a

high write throughput because multiple instances of INTCollector are expected

to send data to the same database instance. Because INT metrics have

their own timestamps and INTCollector needs to push event data, the database

should also support custom timestamps and a push mechanism.

3.2 Enhancing INT Specification for MEC Environment

In this section, we discuss why the INT specification needs to be enhanced

for the MEC environment and how we enhanced it.

3.2.1 Problems when applying INT to MEC environment

Figure 3.3 shows the problematic situation when applying INT to the MEC

environment. INT is transparent to end-hosts in principle, which means that

INT data (INT headers and metadata) is not present in packets forwarded to

end-hosts. The INT sink switch performs this role, extracting the INT data,

Figure 3.3: A problem when applying INT to MEC
encapsulating it in a telemetry report, and sending it to a remote collector. However,

the INT specification does not consider the case where a packet carrying INT data

passes through VNFs located in the middle of a service function chain. When a

packet with INT data is forwarded to a VNF, the VNF may not understand the

INT data unless INT is deployed on it. Since INT data is located after the L4

(TCP/UDP) headers, it is treated as packet payload by VNFs. In this case, a

VNF may not work as expected, because the INT data occupies the application

payload position. For example, an IPS (Intrusion Prevention System) inspects the first

few bytes of a packet payload to find malicious data and intrusions. If INT data

is present in that packet, the IPS does not understand it and treats it as

packet payload. As a result, the IPS may fail to detect malicious data in the packet

even if the packet contains it. VNFs that inspect the packet payload (e.g.,

IPS, IDS (Intrusion Detection System), and DPI (Deep Packet Inspection)) may

suffer from this problem. Widely used open-source IDS/IPS

and DPI software (e.g., Suricata [34], Snort [35], Bro [36], nDPI [37], Netifyd [38]) does not

recognize INT data. To prevent this, each VNF needs to be treated as an

end-host, and INT data needs to be extracted before packets are forwarded to VNFs.

Figure 3.4: Another problem when applying INT to MEC
This approach causes another problem in the INT process, which is illustrated

in Figure 3.4. When a packet is forwarded through a certain service function chain,

a partial telemetry report packet is generated each time it is forwarded to a VNF.

As a result, a single data packet generates a number of telemetry report packets,

each containing only partial information. The collector is not aware of this and

considers each report packet an end-to-end measurement result.

To solve this problem, we propose enhancements to the INT specification

so that it can be applied to the MEC environment. We enhanced the INT

specification as follows:

• Add a sequence number in the INT header.

• Define an INT anchor switch to keep INT header fields when packets are

forwarded to VNFs.

Figure 3.5: INT header with seq no field

First, we added a sequence number to the INT header, as depicted in Figure

3.5, using the 16-bit reserved field to carry it. The sequence number is unique

within a flow (packets sharing the same 5-tuple) and is incremented by 1 for

each packet in the flow. It is allocated by the INT source switch. The collector

uses it as a key to find the partial report packets generated by a single data

packet. To perform this operation, the sequence number must be kept when a

packet with an INT header passes through VNFs.

We defined an INT anchor switch to keep the INT header fields when packets are

forwarded to VNFs (Figure 3.6). An INT anchor switch operates in source mode

or sink mode, depending on the packet path.

Figure 3.6: INT anchor switch operations
When the anchor switch forwards packets to a VNF, it operates in sink

mode. It first extracts the INT data, encapsulates it in a telemetry report, and

sends it to the collector. It also keeps the sequence number in a switch register,

using a hash of the 5-tuple and the egress port number as the register key.

When the anchor switch receives packets from a VNF, it operates in source

mode. It adds an INT header to the packet and restores the sequence number

read from the register, using a hash of the 5-tuple and the ingress port number

as the lookup key.

With these enhancements, partial report packets can be aggregated into a

single INT data record.
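The register save/restore can be sketched as follows; the dict stands in for the switch register and Python's hash for the data-plane hash function, so this is a model of the behavior, not the P4 implementation:

```python
seq_register: dict = {}  # register index (hash) -> saved sequence number

def anchor_sink(five_tuple: tuple, egress_port: int, seq_no: int) -> None:
    """Sink mode: before the packet enters the VNF, remember its sequence number
    (INT extraction and telemetry reporting are omitted here)."""
    seq_register[hash(five_tuple + (egress_port,))] = seq_no

def anchor_source(five_tuple: tuple, ingress_port: int) -> int:
    """Source mode: when the packet returns from the VNF, restore the sequence
    number for the re-inserted INT header."""
    return seq_register[hash(five_tuple + (ingress_port,))]
```

The port toward the VNF is the same physical port on which the packet returns, so the sink-mode key and the source-mode key coincide.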

3.3 Mechanism of the end-to-end latency measurement

In this section, we propose the mechanism of end-to-end latency measurement

in a 5G MEC environment.

Figure 3.7: End-to-end latency measurement in 5G using INT (AU: Access Unit, CU: Central Unit, UPF: User Plane Function, CN: Core Network, OLT: Optical Line Terminal, ONT: Optical Network Terminal)
Figure 3.7 depicts the range and the mechanism of the end-to-end latency

measurement. We use an in-band OAM method to measure the end-to-end

latency of each packet accurately. In the rest of the thesis, INT is used to

describe the framework and the measurement process, but iOAM can also be

used. Latency measurement starts from the base station. To be precise, we

assume that an INT-capable switch is installed at each base station, between the

access unit (AU) and the central unit (CU). The platform adds flow entries

to the INT watchlist [3] in the switches so that INT instructions are added to each

packet that matches an entry in the watchlist. Each hop adds its own INT

metadata to the packet and forwards it to the next hop until the packet reaches a

host. Since a multi-access edge computing environment can also be connected to

other types of access networks than 5G fronthaul, the figure also depicts

PON as an example. The proposed method covers the wired networks in the network

operator's management domain.

By applying the enhanced INT proposed in the previous section, the end-to-end

latency of each data packet, including SFC, can be collected. When a packet of

a flow enters the network, the INT source switch adds an INT header and assigns

a sequence number to it. Each report packet that this data packet generates

also contains an INT header with the assigned sequence number. In the collector,

partial report packets can be aggregated using the 5-tuple in the inner IP header

and the sequence number in the INT header as a key. The remaining hop count

field in the INT header [3], which is decremented by 1 at each hop, is used as an index

to order the aggregated report packets. In this way, end-to-end

measurement data can be reconstructed from partial report packets. Figure

3.8 depicts how the collector aggregates partial report packets: it looks

up report packets with the same 5-tuple and

INT sequence number.
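The aggregation logic can be sketched as follows; the field names are illustrative:

```python
from collections import defaultdict

# (five_tuple, seq_no) -> [(remaining_hop_count, int_data), ...]
pending = defaultdict(list)

def on_report(five_tuple: tuple, seq_no: int, remaining_hops: int, int_data) -> None:
    """Buffer one partial telemetry report under its aggregation key."""
    pending[(five_tuple, seq_no)].append((remaining_hops, int_data))

def assemble(five_tuple: tuple, seq_no: int) -> list:
    """Return the hop-ordered end-to-end INT data for one data packet.
    The remaining hop count decreases along the path, so sort it descending."""
    parts = sorted(pending.pop((five_tuple, seq_no)), reverse=True)
    return [data for _, data in parts]
```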

Server latency measurement

The server latency, i.e., the processing time of each VNF and the application,

contributes a large portion of the end-to-end latency, so it also needs to be measured

accurately. The proposed method can also be used to measure server latency. The

Figure 3.8: Telemetry report aggregation process
mechanism is shown in Figure 3.9. When a packet is forwarded to a VNF in a

service function chain, it is forwarded back to the same switch. We exploit

this characteristic to measure server latency with our method: by subtracting

the egress timestamp of the packet forwarded to the VNF from the ingress

timestamp of the returning packet, the server latency can be obtained.
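The computation itself reduces to one subtraction between two switch timestamps; this sketch assumes both timestamps come from the same switch clock and are in microseconds:

```python
def server_latency_us(egress_ts_us: int, ingress_ts_us: int) -> int:
    """Time spent inside the VNF/server: ingress timestamp of the returning
    packet minus egress timestamp of the packet sent toward the VNF."""
    return ingress_ts_us - egress_ts_us
```

Because both timestamps are taken by the same switch, no clock synchronization between devices is needed for this measurement.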

Although our method can accurately measure the end-to-end latency of each

packet, it is insufficient for finding the root cause of an SLA violation. Server-side

information is also required to correctly diagnose the root cause, and it can be

collected by installing an agent on each VNF. The agent accesses network

socket-level information to associate each packet with the specific process that

handles it. The information includes the process's CPU and memory

usage, in/out packet rate, I/O bytes/sec, and page faults/sec. It also includes

Figure 3.9: Mechanism to measure server latency using proposed method
application-specific variables. For example, in the case of a web server, requests/sec,

the type of request (GET, POST, PUT, etc.), connection attempts/sec,

and responses/sec can be collected.

IV. Implementation

In this chapter, we describe the implementation details for realizing the latency

measurement system design proposed in the previous chapter.

4.1 Data plane

The data plane implementation is written in the P4 language, a domain-specific

language for programmable data planes. The implementation is based on TNA

(Tofino Native Architecture), the switch architecture for the Tofino switching

ASIC [39]. The architecture is composed of an Ingress Parser, Ingress Pipeline,

Ingress Deparser, Egress Parser, Egress Pipeline, and Egress Deparser.

4.1.1 Ingress Parser

The Ingress Parser parses the packet from the RX MAC, converts it into a PHV (Packet

Header Vector) representation, attaches parser metadata, and sends it to the Ingress

Pipeline. Fig. 4.1 depicts the packet parsing process in the Ingress Parser. The parser

first parses the Ethernet header at the very beginning of the incoming packet.

After the Ethernet header, either an MPLS header or an IPv4 header can follow,

since SFC is implemented using MPLS in this thesis. Based on the ETH_TYPE

field value in the Ethernet header, the parser parses the MPLS header or the IPv4 header.

After parsing an MPLS header, it then parses the IPv4 header. Lastly, the parser parses

the L4 header, either TCP or UDP, according to the IP_PROTO field value. The

Figure 4.1: Ingress Parser Implementation

parser then finishes packet parsing and forwards the PHV to the Ingress Pipeline. If a packet header does not match this parsing process (e.g., an LLDP packet, which does not have an IPv4 header, or an ICMP packet, which has neither a TCP nor a UDP header), parsing finishes at the last matched state and the partial result is forwarded to the Ingress Pipeline. The Ingress Parser does not parse INT-related headers because the Ingress Pipeline does not perform any operation on them.
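As a rough illustration, the parsing sequence above can be modeled as a small state machine. The following Python sketch is not the actual P4 parser; the packet representation and field names are simplified assumptions, but the decision order mirrors the text:

```python
# Simplified model of the Ingress Parser state machine described above.
# Constants are the standard EtherType / IP protocol numbers.
ETH_TYPE_MPLS = 0x8847
ETH_TYPE_IPV4 = 0x0800
IP_PROTO_TCP = 6
IP_PROTO_UDP = 17

def parse_ingress(pkt):
    """pkt is a dict of raw header fields; returns the list of parsed headers."""
    parsed = ["ethernet"]
    eth_type = pkt["eth_type"]
    if eth_type == ETH_TYPE_MPLS:
        parsed.append("mpls")          # SFC is implemented with MPLS
        eth_type = ETH_TYPE_IPV4       # the MPLS header is followed by IPv4 here
    if eth_type == ETH_TYPE_IPV4:
        parsed.append("ipv4")
        if pkt.get("ip_proto") == IP_PROTO_TCP:
            parsed.append("tcp")
        elif pkt.get("ip_proto") == IP_PROTO_UDP:
            parsed.append("udp")
    # A non-matching packet (e.g., LLDP or ICMP) stops at the last
    # matched state and the partial result is forwarded as-is.
    return parsed
```

For example, an LLDP packet whose EtherType matches neither branch ends parsing right after the Ethernet state, exactly as described above.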

4.1.2 Ingress Pipeline

In the Ingress Pipeline (Fig. 4.2), the switch first checks whether the packet has an IPv4 header and whether the DSCP field in the IPv4 header is set. It checks the DSCP field because it is used as an indicator of INT header presence. If a packet does not match these conditions, it follows the normal forwarding path (L2 forwarding in this implementation). If a packet matches these conditions, it then checks for the existence of an MPLS


Figure 4.2: Ingress Pipeline Implementation

header, which indicates that the packet belongs to a certain service function chain. If it does not have an MPLS header, it is matched against flow entries in the MPLS add table, which adds an MPLS header with the appropriate MPLS label for the given service type. The MPLS forward table decides an egress port for the packet based on the ingress port number and the MPLS label. If the packet is forwarded to the end of the chain, the MPLS egress table matches the packet and removes the MPLS header. Lastly,


the INT role table assigns an INT role to the packet to specify how the switch should behave (one of the following roles: INT Source, INT Sink, INT Anchor Source, and INT Anchor Sink). The assigned role is stored in the user-defined packet metadata and forwarded to the Egress Pipeline for INT processing. When the assigned role is either INT Sink or INT Anchor Sink, the packet is cloned and forwarded to the Egress Pipeline. In this case, the original packet is sent to the next hop while the cloned packet is encapsulated and sent to the collector. Note that the INT role assignment must be performed at this stage because it needs both ingress and egress port information, and the egress port for a packet is decided only after passing the MPLS forward table.

The Ingress Deparser simply deparses the packet header and re-builds the entire packet.
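The table sequence above can be summarized with a short control-flow sketch. The Python below is an illustration only: the table contents, service name, and port numbers are made up, and dict lookups stand in for match-action tables:

```python
# Simplified control flow of the Ingress Pipeline described above.
# `tables` is a dict of dicts standing in for the match-action tables.
def ingress_pipeline(pkt, tables):
    # Packets without an IPv4 header or a non-zero DSCP follow L2 forwarding.
    if not (pkt.get("has_ipv4") and pkt.get("dscp", 0) > 0):
        return {"action": "l2_forward"}
    if not pkt.get("has_mpls"):
        # MPLS add table: push an MPLS label chosen by service type.
        pkt["mpls_label"] = tables["mpls_add"][pkt["service"]]
        pkt["has_mpls"] = True
    # MPLS forward table: egress port from (ingress port, MPLS label).
    pkt["egress_port"] = tables["mpls_forward"][(pkt["ingress_port"],
                                                 pkt["mpls_label"])]
    if tables["mpls_egress"].get(pkt["egress_port"]):
        pkt["has_mpls"] = False      # end of chain: pop the MPLS header
    # INT role is assigned last because it needs both ports.
    role = tables["int_role"].get((pkt["ingress_port"], pkt["egress_port"]),
                                  "transit")
    pkt["int_role"] = role
    # Sink roles clone the packet: original to next hop, clone to collector.
    pkt["clone"] = role in ("sink", "anchor_sink")
    return {"action": "forward", "pkt": pkt}
```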

4.1.3 Egress Parser

Fig. 4.3 depicts the packet parsing process in the Egress Parser. The states up to the L4 headers are the same as in the Ingress Parser. After parsing the L4 headers, it checks whether the DSCP value in the IPv4 header has the pre-defined value that indicates the existence of an INT header (0x17 in this implementation). If it matches, the parser then parses the INT shim header and the INT header. INT metadata is parsed only when the packet is not cloned and the switch acts as either INT Sink or INT Anchor Sink. In these cases, INT data is extracted from the packet and the original packet is restored, so the INT metadata portion is parsed and discarded.


Figure 4.3: Egress Parser Implementation

4.1.4 Egress Pipeline

In the Egress Pipeline (Fig. 4.4), the switch first checks whether the packet has an IPv4 header and whether its DSCP field indicates an INT header. If not, it then checks whether the role assigned to the packet is either INT Source or INT Anchor Source. If so, the INT add hdr table is applied and INT headers are added to the packet, along with parameters (e.g., the types of metadata to collect, the maximum available hop count, etc.). Otherwise, egress processing is finished. After that, the pipeline checks whether the packet is a clone. For a cloned packet, it adds the INT metadata to the packet, encapsulates it with the INT report


Figure 4.4: Egress Pipeline Implementation


headers plus outer Ethernet, IPv4, and UDP headers, and then emits the packet to be routed to the collector. For the original packet, it checks the anchor role of the packet. If the switch acts as an INT Anchor Sink, the INT anchor sink table is applied to extract the INT header (specifically, the INT sequence number and remaining hop count values) and store them in a switch register. The key is a hash of the 5-tuple together with the checksum (UDP) or sequence number (TCP). If the switch acts as an INT Anchor Source, the INT anchor source table is applied to restore the INT header field values (specifically, the INT sequence number and remaining hop count fields) from the switch register, using the same key. In either case, INT metadata is added to the packet, and the pipeline checks whether the switch acts as an INT Sink. For the INT Sink role, the INT headers and metadata are removed by applying the INT remove hdr table.

The Egress Deparser then re-assembles the modified packet headers and emits the entire packet to the TX MAC.
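The anchor store/restore logic can be sketched as follows. A Python dict stands in for the switch register (a real Tofino register is a fixed-size array), and while the key construction follows the description above, the specific hash function is an assumption for illustration:

```python
import zlib

# Illustrative model of the INT Anchor Sink / Anchor Source register logic.
REGISTER = {}

def flow_key(five_tuple, csum_or_seq):
    """Hash of the 5-tuple plus the UDP checksum or TCP sequence number."""
    data = ",".join(map(str, five_tuple)) + "," + str(csum_or_seq)
    return zlib.crc32(data.encode())

def anchor_sink(five_tuple, csum_or_seq, int_seq_no, remaining_hops):
    """Before the packet enters a VNF: stash the INT state in the register."""
    REGISTER[flow_key(five_tuple, csum_or_seq)] = (int_seq_no, remaining_hops)

def anchor_source(five_tuple, csum_or_seq):
    """When the same packet returns from the VNF: restore the INT state."""
    return REGISTER.pop(flow_key(five_tuple, csum_or_seq), None)
```

Because the UDP checksum (or TCP sequence number) survives the VNF for a payload-preserving function, the same key re-identifies the packet on its way back, which is what lets the anchor switch stitch the two telemetry segments together.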

4.2 Control plane

The control plane of the proposed latency measurement method is implemented as part of ONOS, an open-source SDN controller that supports P4. Figure 4.5 depicts the ONOS subsystem components; the control plane implementations in this thesis are included as a core service (INT service) and an on-platform application (INT control application).

For an INT-capable device to be managed by ONOS, it needs to implement the INT driver interface fitted to its data plane so that ONOS can control INT behavior through the unified driver interface. Since the implementation details


Figure 4.5: ONOS Subsystems with INT Control Plane Implementation

of each INT-capable device differ, the INT driver interface needs to be implemented by pipeline developers. ONOS identifies INT-capable devices by whether a device driver has implemented the INT driver interface. When an INT-capable switch is connected to an ONOS instance, the initialization process starts. First, ONOS identifies the role of each device, either source/sink or transit. A switch is identified as a source/sink switch if a host is connected to it; otherwise, the switch is identified as a transit switch. Second, ONOS populates the flow tables of the switch with flow rules that are independent of specific INT intents. This step populates the transit table and the source/sink tables. Since the transit table is implemented to match an instruction bit combination in the INT header, its table entries need to be installed beforehand. For the source/sink tables, a packet from a host is marked as an INT source packet by setting an INT source bit in the packet metadata. In the same way, a packet that is going to be forwarded to a host is marked as an INT sink packet by setting an INT sink bit in the


metadata.

Figure 4.6: INT management application GUI

When a collector configuration (e.g., collector IP address and port number) is provided by the management application (Figure 4.6), it is converted into a table entry and installed on all source/sink switches. When an INTIntent is given from the control application, it is converted into table entries that add an INT header to packets matching the given traffic match condition. Removal of an INTIntent works in the same way.


Figure 4.7: INTCollector implementation

4.3 INTCollector

4.3.1 Fast path

The INTCollector fast path is implemented in C and accelerated by XDP

for higher performance (Figure 4.7). The fast path XDP program is attached

to one or several NICs that receive INT report packets. XDP has a channel to

communicate with the normal path in user space.

In our implementation, INTCollector supports IPv4 with INT carried inside TCP/UDP (Figure 4.8). An INT telemetry report encapsulates an INT report (inner) inside a UDP packet (outer). The first parser phase deserializes the outer headers. If the classification does not match, the packet is not a telemetry report and potentially


Figure 4.8: INTCollector parsing procedure

belongs to another application; thus, the packet is passed on. In the inner parsing phase, if there is an unmatched classification (which indicates a packet error), the packet is dropped. The detailed report format can be found in the specifications [3, 28].

For report aggregation, we used a hash table as temporary storage. After a report packet is parsed, the parsed INT header and data are stored in the Anchor reports table, with the 5-tuple from the inner IP header and the sequence number as the key. For each report packet matching this key, the collector updates the table entry until all partial report packets have arrived. When the end-to-end data is complete, it is forwarded to the next step.
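The aggregation step can be sketched as follows. The class name, the fixed expected-segment parameter, and the hop-metadata shape are illustrative assumptions, not the actual INTCollector code:

```python
# Illustrative sketch of report aggregation: partial telemetry reports that
# share (inner 5-tuple, sequence number) are merged until the expected
# number of segments has arrived.
class Aggregator:
    def __init__(self, expected_segments):
        self.expected = expected_segments   # e.g., number of anchor segments
        self.pending = {}                   # the "Anchor reports" hash table

    def add_report(self, five_tuple, seq_no, hop_metadata):
        """Returns the full end-to-end hop list once all parts are in."""
        key = (five_tuple, seq_no)
        entry = self.pending.setdefault(key, [])
        entry.append(hop_metadata)
        if len(entry) == self.expected:
            del self.pending[key]           # complete: forward downstream
            return [hop for part in entry for hop in part]
        return None
```

The sequence number added to the INT header (Chapter III) is what makes this key unique per packet, so segments from different packets of the same flow are never merged together.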

We also used hash tables to store metric values. In these tables, INTCollector stores only the latest values along with a timestamp (for threshold detection, only the last event values with a timestamp are stored). Push metrics are generated by the event detector in the fast path; in other cases, push metric values are read periodically from the metric tables in the fast path.

4.3.2 Normal path

The normal path is implemented in Python for ease of implementation and of interaction with the remote database. We used the BPF Compiler Collection (BCC) [40] to connect with the fast path and to manage the fast path XDP program. The implementation of the normal path depends on the type of database. As the real-time database, we used InfluxDB [41], a high-performance time-series database that supports pushing and custom timestamps.

4.3.3 Database

The database stores the collected INT data. A network controller can query network information from the database. The database should support a high write throughput because multiple instances of INTCollector are expected to send data to the same database instance. Because INT metrics carry their own timestamps and INTCollector needs to push event data, the database should support custom timestamps and a push mechanism. There are two methods for sending data from the collector to the database: pushing and pulling. Pushing means INTCollector sends data to the database whenever it wants; pulling means the database decides when to get information by sending a request to INTCollector. From the database's view, pulling is easier to implement and more robust. However, a database that supports pushing is more suitable because INTCollector uses event detection. Grafana [42] is used as a GUI to access and analyze network metrics.
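For illustration, a latency metric with a custom timestamp can be encoded in the InfluxDB line protocol (measurement, tag set, field set, nanosecond timestamp) before being pushed over the HTTP write API. The measurement and tag names below are invented, not those used by INTCollector:

```python
# Minimal encoder for the InfluxDB line protocol:
#   measurement,tag1=v1 field1=v1 timestamp_ns
# Sorting keys gives a deterministic output; real line protocol also
# requires escaping of spaces/commas, omitted here for brevity.
def to_line_protocol(measurement, tags, fields, ts_ns):
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {ts_ns}"
```

Supplying the packet's own INT timestamp as `ts_ns` (rather than the write time) is what the "custom timestamp" requirement above refers to.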


V. Validation

In this chapter, we validate the proposed latency measurement methods. Our validation results show that the proposed method is capable of 1) measuring the end-to-end and hop-by-hop latency of each packet at nanosecond scale, 2) detecting congestion on a specific link and host overloads from the measurement result, and 3) monitoring packets being forwarded through a service function chain inside a single server.

5.1 Testbed setup

Figure 5.1: Testbed setup

For the validation, we set up a hardware testbed composed of two programmable switches and five servers (Figure 5.1). We used Wedge100BF-32X programmable switches, each equipped with a Barefoot Tofino 3.2T switching chipset and 32 100G QSFP28 ports [43]. The switches are connected by a 100G QSFP28 passive direct attach copper (DAC) cable. The servers are equipped with


two Intel Xeon X5690 processors, 48 GB RAM, and Broadcom 10GbE SFP+ and Mellanox ConnectX-3 NICs, and run Ubuntu 18.04 64-bit with kernel version 4.15. The servers are connected to the switches using a 100G breakout cable configured as 4 x 10G. The controller node is connected to the switches' management ports.

5.2 INT processing overhead in the data plane

INT requires additional processing in the data plane, e.g., parsing INT headers, matching INT instruction bits, and executing actions to add the INT header and INT data. We first measured the increase in data plane latency caused by INT processing. For the evaluation, we set up a Wedge 100B programmable switch and connected a host. When the switch receives a packet from the host, it sends the packet back to the host. We then captured the timestamp of each packet at the host with nanosecond precision to measure the end-to-end latency. In this way, we could measure the latency accurately, since no time synchronization with nanosecond precision is required. The data plane consists of a basic L2 forwarding table and the INT-related tables.

In the first case (without INT), we removed the INT-related parsers and tables so that INT functionality does not affect the end-to-end latency. In the second case (with INT), we enabled all INT functionality together with the basic L2 forwarding tables. Each measurement sent 10,000 packets, and we calculated the average end-to-end latency over all packets. Figure 5.2 shows the evaluation result. Enabling INT functionality in the data plane adds 0.692 µs on average, which is 1.587% of the total processing time in the data plane. Since an actual data plane is


Figure 5.2: Latency in the data plane with and without INT processing

much more complex (e.g., switch.p4, which implements a traditional switch, has more than 35 functions, of which INT is only one [44]), the effect of adding INT monitoring capabilities in the data plane is negligible.

5.3 Scenario 1: Comparison with active measurement method

We first compared the measurement result with an active measurement method (ping).

Figure 5.3: Service function chain configuration


We configured a service function chain as in Figure 5.3 and generated a single UDP flow through the chain. We also sent ping packets to VNF2 and configured the switches to forward the ping requests and responses through the chain. The measurement result is depicted in Figure 5.4.

Figure 5.4: End-to-end latency measurement result in Scenario 1

The average end-to-end latency from our measurement method and from ping is 0.192 ms and 0.250 ms, respectively. Ping reports a higher latency than the proposed method because it also encompasses delay in the sender's networking stack, which cannot be measured by the proposed method. While ping measures the latency only once per second, the proposed method measures the latency of every packet, which provides much higher resolution on the network status. The number of measurement data points in this validation is 90 from ping and 2691 from the proposed method. Our method also provides a hop-by-hop latency measurement


result, which cannot be obtained with ping. 97.26% of the latency is caused by the two VNFs, and the rest is caused by the network devices. To see the hop latencies of each switch, we excluded the latency data from the VNFs and depict the rest of the data in Figure 5.5.

Figure 5.5: End-to-end latency measurement result in Scenario 1 (excluding latency from VNFs)

The latency caused by the network elements is only 0.00526 ms on average. Figure 5.5 decomposes the latency by component; the latency values are stacked along the packet path. Hop latency at the same switch can differ, as in SW2 (labels 3 and 5). This difference comes from the interface speed. In the former case (label 3), the switch received the packet from the 100G interface, which is connected to SW1. In the latter case (label 5), the switch received the same packet from the 10G interface, which is connected to a server. Consequently, the hop latency caused by the same packet differs (0.426 µs vs. 1.209 µs on average), and


our method can measure such a difference accurately.

5.4 Scenario 2: Detecting link congestion

In this scenario, we generated heavy background traffic (6 Gbps from 350 s to 375 s, 10 Gbps from 500 s to 660 s, and 12 Gbps from 675 s to 685 s and from 700 s to 710 s) on the link between the switches and validated that our method correctly detects link congestion. For ease of measurement, we configured the link speed between the switches to 10G (Figure 5.6).

Figure 5.6: Service function chain configuration in Scenario 2

The measurement result in this scenario is depicted in Figure 5.7. Most of the latency is caused by the two VNFs, the same as in the previous scenario. Unlike the previous result, however, dark blue lines appear in the middle of the graph (labeled as 6: SW2 -> SW1).

To see the hop latencies of each switch precisely, we excluded the latency data from the VNFs and depict the rest of the data in Figure 5.8.

Figure 5.7: End-to-end latency measurement result in Scenario 2

Since the background traffic is transmitted from SW2 to SW1, the corresponding link and queue in that direction are affected in terms of latency. The average latency was 5.178 µs when the link was not congested. When the background traffic is generated, the hop latencies from SW2 to SW1 increase to as much as 66.467 µs, more than ten times higher than in the normal state.

Figure 5.9 shows the hop latency measurement result for the switches. While most packets are processed in around 1.21 µs in each switch, processing takes 3.985 µs in the congested situation. This portion of the delay can be considered queueing delay. To verify this, we measured the queue length of each switch using the proposed method (Figure 5.10). In the same period in which SW2's hop latency is increased, the queue length of the switch is also increased, which causes the higher queueing delay.
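As a back-of-the-envelope check on this interpretation, the delay expected from a given queue backlog can be estimated from the link rate: a packet entering a queue of depth Q bytes on a link of rate R bit/s waits roughly Q x 8 / R seconds before transmission. The queue depth in the example is a hypothetical value chosen for illustration, not a measured one:

```python
# Rough queueing-delay estimate: backlog (bytes) drained at the line rate.
def queueing_delay_us(queue_depth_bytes, link_rate_bps):
    return queue_depth_bytes * 8 / link_rate_bps * 1e6

# e.g., a ~3.5 KB backlog on a 10G link corresponds to roughly 2.8 us of
# waiting, on the order of the extra per-switch latency observed above.
```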


Figure 5.8: End-to-end latency measurement result in Scenario 2 (excluding latency from VNFs)

5.5 Scenario 3: Detecting overloaded host

In this scenario, we sent heavy traffic from TG2 to VNF2 to incur high processing overhead on the VNF (Figure 5.11). At time 200 s, we started sending 10 Gbps of traffic from TG2 to VNF2 until time 275 s and measured the end-to-end latency of the flows being forwarded through the specified chain.

Figure 5.12 shows the measurement result. The average end-to-end latency was 0.194 ms before the heavy traffic was generated. The server latency increased to 11.576 ms on average during that period.

Figure 5.13 decomposes the latency by component, excluding the latency from the VNFs. The result is similar to Figure 5.5, showing that the increased latency comes solely from VNF2's processing latency.


Figure 5.9: Switch latency measurement result in Scenario 2

Figure 5.10: Queue length in each port on the path in Scenario 2


Figure 5.11: Topology and packet path in Scenario 3

Figure 5.12: End-to-end latency measurement result in Scenario 3

5.6 Scenario 4: Measuring in-server packet forwarding

In this section, we measured the end-to-end latency when packets are forwarded inside a server (Figure 5.14). Since the most popular software switch that supports a programmable data plane is the Behavioral Model version 2 (BMv2)


Figure 5.13: End-to-end latency measurement result in Scenario 3 (excluding latency from VNFs)

Figure 5.14: Topology and packet path in Scenario 4

[45], and it shows very low throughput [46], we could not build a service function chain through the virtualized host. While leaving the performance improvement of the BMv2 software switch as future work, we demonstrated that the proposed method also measures the hop-by-hop latency of packets that are forwarded


inside the host.

Figure 5.15: End-to-end latency measurement log in Scenario 4

Figure 5.15 shows the aggregated output from INTCollector. While partial reports are generated and sent to the collector, they are aggregated into a single end-to-end measurement result in the collector. This demonstrates that the proposed method also works correctly inside a server.

Figure 5.16: End-to-end latency measurement result in Scenario 4

Figure 5.16 shows the end-to-end latency measurement result. Compared to the other measurement results from the hardware testbed, the switch hop latency is at millisecond scale and unstable.


VI. Conclusion

This chapter summarizes the overall content of the thesis and lists its contributions. It also discusses several research topics as future work.

6.1 Summary

In this thesis, a method to measure end-to-end and hop-by-hop latency was presented. In Chapter I, the research motivation and problem statement were introduced. In Chapter II, the background and three different measurement approaches were described. In Chapter III, the proposed end-to-end latency measurement method for the MEC environment was introduced. Chapter IV described the implementation details for realizing the proposed measurement method. Lastly, Chapter V showed the validation results of the proposed method in four different scenarios.

This thesis proposed an enhanced INT that can be applied to the MEC environment by introducing a sequence number field in the INT header, the INT anchor switch, and its processing logic. The sequence number in the INT header is used as a key in the collector to aggregate partial telemetry reports into a single end-to-end measurement record. The INT anchor switch keeps the sequence number in registers and restores it to the same packet when the packet is forwarded back from the VNF. In this way, the proposed method can measure not only the hop-by-hop latency of network elements but also the VNF processing latency. We implemented the proposed


latency measurement system on both hardware programmable switches and software switches. We conducted several experiments, and the results showed that the proposed system precisely measures the end-to-end latency of data packets and detects link congestion and overloaded VNFs, and also provides hop-by-hop latency measurement results inside a host.

6.2 Contributions

The following are the key contributions expected from this thesis.

First of all, this thesis provides an accurate end-to-end latency measurement method. In the 5G era, many new latency-critical services are being introduced, and network operators need to monitor the actual latency of those services. This thesis provides an end-to-end latency measurement method for actual data packets to verify ultra-low latency requirements. It also provides hop-by-hop latency measurement results to correctly identify congested links or devices. End-to-end latency measurement in an NFV environment is also made possible by this thesis.

This thesis also provides a VNF processing latency measurement method at the network level. It can measure VNF processing latency in the network without additional monitoring agents on the VNF instances. It also provides hop-by-hop measurement information inside the virtualized host to monitor packet behavior precisely inside the host.

This thesis also enhanced the INT specification to be applicable to the MEC environment. This enhancement can be applied not only to user plane data traffic but also to control plane signaling traffic, since control functions in 5G are also being virtualized and run in an NFV environment. Moreover, it is not limited to 5G MEC but can be widely deployed in any NFV environment.

Lastly, the implementations in this thesis will be published as open-source projects. The INT data plane implementation in the P4 language and the INT service and applications have already been merged as open-source software under the ONOS project. INTCollector is also published as an open-source project. Researchers in this field can benefit from these projects in their own research.

6.3 Future Work

The development of the proposed latency measurement system has been completed. However, many features still remain to be developed to overcome the limitations of the current work and to make better use of the measurement data. This section introduces further issues for consideration.

6.3.1 Improving packet processing performance of BMv2 software switch

One of the limitations mentioned in this thesis is that the packet processing performance of the BMv2 software switch is very poor. This is because BMv2 is implemented in C++ and performs packet processing in user space. There are other backends with higher performance, such as eBPF and XDP, but their functionalities are mostly limited to packet filtering. We could improve the packet processing performance of the BMv2 software switch by offloading packet processing blocks to XDP while keeping complicated operations in user space.

6.3.2 Design and implement time synchronization algorithm using INT

Precise time synchronization is required for the proposed measurement method. GPS-based time synchronization can provide sub-100-nanosecond accuracy. However, it requires additional equipment, such as an outdoor satellite antenna, which incurs additional cost and places an extra burden on the physical infrastructure. NTP (Network Time Protocol) [47] and PTP (Precision Time Protocol) [48] are time synchronization protocols for LANs and WANs. However, NTP provides only millisecond-level accuracy, while INT can measure nanosecond-level latency. PTP can achieve nanosecond-level accuracy, but high precision is tightly coupled to proprietary implementations [49], and network devices may not support the protocol. In this thesis, we measured the time difference using INT and compensated the end-to-end latency measurement result in the collector. We can develop an algorithm for accurate time synchronization using INT.
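As a sketch of the compensation performed in the collector: given a pair of timestamps taken in each direction between two devices, the relative clock offset can be estimated under the same symmetric-path assumption NTP makes, and subtracted from the raw one-way latency. The function and variable names below are illustrative, not part of the actual INTCollector code:

```python
def estimate_offset(t1, t2, t3, t4):
    """NTP-style estimate of clock B's offset relative to clock A.

    t1: departure timestamp at A (A's clock)
    t2: arrival timestamp at B   (B's clock)
    t3: departure timestamp at B (B's clock)
    t4: arrival timestamp at A   (A's clock)
    Assumes the forward and reverse path delays are symmetric.
    """
    return ((t2 - t1) + (t3 - t4)) / 2

def compensate_latency(arrival_ts_b, departure_ts_a, offset_b):
    """One-way latency A -> B from raw INT timestamps, with B's clock
    offset removed before subtracting A's departure time."""
    return (arrival_ts_b - offset_b) - departure_ts_a

# Example: B's clock runs 500 ns ahead of A's; true one-way delay 100 ns.
offset = estimate_offset(0, 600, 600, 200)    # -> 500.0 ns
latency = compensate_latency(600, 0, offset)  # -> 100.0 ns
```

Turning this into an actual synchronization algorithm would mean feeding offsets estimated from INT timestamps back to the devices, rather than only correcting measurements in the collector.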

6.3.3 Enhance the collector logic to support more hops

Since XDP runs inside the kernel, the stack size and program size are very limited. The current implementation can handle only 7 hops of measurement data. By optimizing the aggregation logic and data structures, we can collect end-to-end measurement data spanning more than 7 hops.
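The aggregation constraint can be illustrated as follows. This Python sketch mimics the bounded-buffer style an XDP program must follow (a fixed MAX_HOPS limit instead of unbounded iteration); the metadata layout is hypothetical:

```python
MAX_HOPS = 7  # bound imposed by XDP's stack and program-size limits

def aggregate(hop_metadata):
    """Aggregate per-hop INT metadata into per-hop and end-to-end latency.

    hop_metadata: list of (ingress_ts, egress_ts) pairs, ordered from
    the first hop to the last, all in the same (synchronized) timebase.
    """
    if not hop_metadata or len(hop_metadata) > MAX_HOPS:
        raise ValueError("hop count outside the fixed-size kernel buffer")
    per_hop = [egress - ingress for ingress, egress in hop_metadata]
    # End-to-end latency spans first ingress to last egress, so it also
    # includes the link delays between hops.
    end_to_end = hop_metadata[-1][1] - hop_metadata[0][0]
    return per_hop, end_to_end
```

Raising the bound is then a matter of shrinking the per-hop record and aggregation state so that a larger fixed buffer still fits within the kernel's verifier limits.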


Summary (요약문)

One of the key features of the 5G mobile network is Ultra-Reliable Low Latency Communication (URLLC), which aims to provide latency of less than 1 ms. A key technology enabling this is Multi-access Edge Computing (MEC). MEC reduces latency by applying NFV (Network Function Virtualization) and distributed cloud technology, which moves services previously provided by clouds located outside the operator's core network down to the base-station level.

Building on this ultra-low latency, the 5G environment enables various services that were impossible in the current 4G environment (e.g., autonomous driving, factory automation, AR/VR). These services have different performance requirements, such as latency and throughput, depending on their characteristics, and from the network operator's perspective, the latency of each service must be measured accurately in order to satisfy each service's performance requirements.

Using INT (In-band Network Telemetry), an in-band network monitoring technique, the end-to-end latency of each packet can be measured accurately. However, when monitoring with INT in an MEC environment, a VNF that analyzes packet payloads may malfunction because it interprets the INT header and telemetry data as payload. If, to prevent this, the switch attached to each VNF operates as a sink switch, the telemetry data is split into as many pieces as there are VNFs while a single packet is delivered, so the end-to-end latency cannot be measured.


This thesis proposes a method that solves the above problems and accurately measures end-to-end latency even in an MEC environment. We assign a sequence number to the INT header and propose an Anchor switch function that preserves the sequence number in a switch register so that a packet keeps the same sequence number even after passing through a VNF. Using the proposed measurement method, we can accurately measure not only the end-to-end latency of each service but also per-hop latency and the packet processing time in each VNF. The proposed method was implemented in the data plane using the P4 language, and a control plane to manage it was implemented in the ONOS open-source SDN controller. In addition, an XDP-based INTCollector was implemented to quickly process the generated measurement data on x86-based Linux servers. The proposed method was validated through various scenarios in a testbed composed of programmable switches and servers, and in each scenario it was verified that the proposed method accurately measures end-to-end and per-hop latency. We also showed that the additional latency incurred by implementing the INT functionality in the data plane is about 1.6%, which does not significantly affect end-to-end latency.


References

[1] Imtiaz Parvez, Ali Rahmati, Ismail Guvenc, Arif I. Sarwat, and Huaiyu Dai. A survey on low latency towards 5G: RAN, core network and caching solutions. IEEE Communications Surveys & Tutorials, 20(4):3098–3130, 2018.

[2] The P4 Language Consortium. The P4 Language Specification 1.1.0. https://p4lang.github.io/p4-spec/docs/P4-16-v1.1.0-spec.pdf.

[3] The P4.org Applications Working Group. In-band Network Telemetry (INT) specification v1.0. https://github.com/p4lang/p4-applications/blob/master/docs/INT.pdf.

[4] ITU-R. IMT Vision – Framework and overall objectives of the future development of IMT for 2020 and beyond. Recommendation ITU-R M.2083-0, 2015.

[5] Theodore S. Rappaport, Shu Sun, Rimma Mayzus, Hang Zhao, Yaniv Azar, Kevin Wang, George N. Wong, Jocelyn K. Schulz, Mathew Samimi, and Felix Gutierrez. Millimeter wave mobile communications for 5G cellular: It will work! IEEE Access, 1:335–349, 2013.

[6] Erik G. Larsson, Ove Edfors, Fredrik Tufvesson, and Thomas L. Marzetta. Massive MIMO for next generation wireless systems. arXiv preprint, 2013.

[7] Open Networking Foundation. SDN architecture. ONF TR-502, 2014.


[8] ETSI. Network Functions Virtualisation: An Introduction, Benefits, Enablers, Challenges & Call for Action. NFV White Paper, Oct 2012.

[9] 3GPP. Telecommunication management; Self-Organizing Networks (SON); Concepts and requirements. 3GPP TS 32.500, Feb 2016.

[10] ETSI. MEC in 5G networks. ETSI White Paper, pages 1–28, 2018.

[11] GSMA. Road to 5G: Introduction and migration. https://www.gsma.com/futurenetworks/5g/road-to-5g-introduction-and-migration-whitepaper/.

[12] Mehdi Bennis, Merouane Debbah, and H. Vincent Poor. Ultra-reliable and low-latency wireless communication: Tail, risk, and scale. Proceedings of the IEEE, 106(10):1834–1853, 2018.

[13] Carsten Bockelmann, Nuno Pratas, Hosein Nikopour, Kelvin Au, Tommy Svensson, Cedomir Stefanovic, Petar Popovski, and Armin Dekorsy. Massive machine-type communications in 5G: Physical and MAC-layer solutions. IEEE Communications Magazine, 54(9):59–65, 2016.

[14] Pankesh Patel, Ajith H. Ranabahu, and Amit P. Sheth. Service level agreement in cloud computing. 2009.

[15] Network Functions Virtualisation. An introduction, benefits, enablers, challenges & call for action. In White Paper, SDN and OpenFlow World Congress, 2012.


[16] Joel Halpern and Carlos Pignataro. Service Function Chaining (SFC) architecture. Technical report, 2015.

[17] Pat Bosshart, George Varghese, David Walker, Dan Daly, Glen Gibb, Martin Izzard, Nick McKeown, Jennifer Rexford, Cole Schlesinger, Dan Talayco, and Amin Vahdat. P4: Programming Protocol-Independent Packet Processors. ACM SIGCOMM Computer Communication Review, 44(3):87–95, 2014.

[18] Nick McKeown, Tom Anderson, Hari Balakrishnan, Guru Parulkar, Larry Peterson, Jennifer Rexford, Scott Shenker, and Jonathan Turner. OpenFlow: Enabling innovation in campus networks. ACM SIGCOMM Computer Communication Review, 38(2):69–74, 2008.

[19] Open Networking Foundation. OpenFlow Switch Specification Version 1.5.1. https://www.opennetworking.org/wp-content/uploads/2014/10/openflow-switch-v1.5.1.pdf.

[20] Tom Herbert and Alexei Starovoitov. eXpress Data Path (XDP): Programmable and high performance networking data path. https://github.com/iovisor/bpf-docs/blob/master/Express_Data_Path.pdf.

[21] Brenden Blanco. eXpress Data Path: Getting Linux to 20 Mpps. Linux Meetup Santa Clara, July 2016.

[22] Intel. Data Plane Development Kit (DPDK). https://dpdk.org.


[23] T. Mizrahi, N. Sprecher, E. Bellagamba, and Y. Weingarten. An overview of operations, administration, and maintenance (OAM) tools. RFC 7276, IETF, https://www.ietf.org/rfc/rfc7276.txt, June 2014.

[24] Tal Mizrahi, Vitaly Vovnoboy, Moti Nisim, Gidi Navon, and Amos Soffer. Network telemetry solutions for data center and enterprise networks. 2018.

[25] Jon Postel et al. RFC 791: Internet Protocol. 1981.

[26] F. Brockners, S. Bhandari, S. Dara, C. Pignataro, H. Gredler, J. Leddy, S. Youell, D. Mozes, T. Mizrahi, P. Lapukhov, et al. Requirements for in-situ OAM. In Working Draft, Internet-Draft draft-brockners-inband-oam-requirements-03. 2017.

[27] Changhoon Kim, Anirudh Sivaraman, Naga Katta, Antonin Bas, Advait Dixit, and Lawrence J. Wobker. In-band network telemetry via programmable dataplanes. Demo paper at ACM SIGCOMM, 2015.

[28] The P4.org Applications Working Group. Telemetry Report Format Specification v1.0. https://github.com/p4lang/p4-applications/blob/master/docs/telemetry_report.pdf.

[29] N. Van Tu, J. Hyun, and J. W. K. Hong. Towards ONOS-based SDN monitoring using in-band network telemetry. 19th Asia-Pacific Network Operations and Management Symposium (APNOMS), pages 76–81, Seoul, Korea, Sept 2017. IEEE.


[30] Serkant Uluderya. Prometheus INT exporter. https://github.com/serkantul/prometheus_int_exporter.

[31] Pavel Benacek, Viktor Pus, Michal Kekely, Lukas Richter, Pavel Minarik, and Jan Pazdera. 100G In-Band Network Telemetry with P4 and FPGA. The 4th P4 Workshop, May 2017.

[32] Barefoot Deep Insight. https://www.barefootnetworks.com/products/brief-deep-insight/.

[33] Broadcom Trident 3 In-band Telemetry. https://people.ucsc.edu/~warner/Bufs/Trident3-telemetry.pdf.

[34] Suricata - Open Source IDS / IPS / NSM engine. https://suricata-ids.org/.

[35] Snort - Network Intrusion Detection & Prevention System. https://www.snort.org/.

[36] The Zeek Network Security Monitor. https://www.zeek.org/.

[37] nDPI - Open and Extensible LGPLv3 Deep Packet Inspection Library. https://www.ntop.org/products/deep-packet-inspection/ndpi/.

[38] Network Intelligence - Simplified. https://www.netify.ai/.

[39] Tofino - World's fastest P4-programmable Ethernet switch ASICs. https://barefootnetworks.com/products/brief-tofino/.

[40] BPF Compiler Collection (BCC). https://github.com/iovisor/bcc.


[41] InfluxDB: Scalable datastore for metrics, events, and real-time analytics. https://github.com/influxdata/influxdb.

[42] Grafana: The open platform for beautiful analytics and monitoring. https://grafana.com.

[43] WEDGE 100BF-32X 100GbE Data Center Switch. https://www.edge-core.com/productsInfo.php?id=335.

[44] Switch.p4. https://github.com/p4lang/switch.

[45] BMv2. https://github.com/p4lang/behavioral-model.

[46] Thomas Kohler, Ruben Mayer, Frank Durr, Marius Maaß, Sukanya Bhowmik, and Kurt Rothermel. P4CEP: Towards in-network complex event processing. In Proceedings of the 2018 Morning Workshop on In-Network Computing, pages 33–38. ACM, 2018.

[47] David L. Mills. Internet time synchronization: The network time protocol. IEEE Transactions on Communications, 39(10):1482–1493, 1991.

[48] John Eidson and Kang Lee. IEEE 1588 standard for a precision clock synchronization protocol for networked measurement and control systems. In Sensors for Industry Conference, 2002. 2nd ISA/IEEE, pages 98–105. IEEE, 2002.

[49] Ryan Zarick, Mikkel Hagen, and Radim Bartos. Transparent clocks vs. enterprise ethernet switches. In 2011 IEEE International Symposium on Precision Clock Synchronization for Measurement, Control and Communication, pages 62–68. IEEE, 2011.


Acknowledgements

I am deeply grateful to my advisors, Professor James Won-Ki Hong and Professor Jae-Hyoung Yoo, for their devoted guidance throughout my degree program. Thanks to Professor Hong, I was able to explore a variety of research topics and carry out my research in an excellent research environment. Professor Yoo, drawing on his rich practical experience, guided my research in the right direction. I thank both professors once again for helping me set the topic of this dissertation and complete it successfully.

I also extend my deep gratitude to the senior and junior members of the DPNM Laboratory, with whom I have shared both joys and hardships while doing research together over the past eight years.

Finally, I dedicate this dissertation to my beloved parents, who have supported and encouraged me unwaveringly throughout my long years of study.


Curriculum Vitae

Personal Information

Name: Jonghwan Hyun

Position: Ph.D. student

Laboratory: Distributed Processing & Network Management Lab.

Department: Computer Science and Engineering

Education

Degree Year University Department

M.S. & Ph.D. 2011-2019 POSTECH, Pohang, Korea Computer Science and Engineering

B.S. 2005-2011 POSTECH, Pohang, Korea Computer Science and Engineering

Research Areas of Interest

P4, Internet Traffic Monitoring & Analysis, SDN, NFV, OpenFlow, Network and Systems

Management, Network Security

Research / Project Experiences

1. Development of Core Technologies for Programmable Switch in Multi-Service

Networks

Funded by Institute for Information & Communications Technology Promotion (2017

– 2020)

This research project aims at providing multi-service networks on programmable switches. The project includes extending the P4 language and the corresponding compiler, designing a new programmable switch machine model and a multi-service network structure, and developing network monitoring, control, and management technologies. In this project, I proposed an In-band Network Telemetry (INT) based network monitoring architecture and implemented it in ONOS.

2. Korea-US Collaborative Research on SDN/NFV Security/Network Management

and Testbed Build


Funded by Institute for Information & Communications Technology Promotion (2015

– 2017)

This research project aims at studying SDN/NFV-based WAN network stability and service management, and at constructing an SDN/NFV-based WAN testbed between Korea and the US. In this research, I proposed an application-aware traffic engineering mechanism running on SDN-based networks to provide application-specific QoS, and I proposed a dynamic failover management algorithm for SDN-based virtual networks.

3. A Research on 4G and IMS Network Architecture Analysis in IPv6 Environment

Funded by Korea Internet & Security Agency (2014 - 2015)

This research aims at analyzing the effect of IPv6 adoption on 4G and IMS networks and finding ways to deploy existing IPv4 security devices. I surveyed the IPv6 transition trends and plans of domestic and foreign mobile network operators and analyzed the structure of 4G (LTE-A) and IMS (IP Multimedia Subsystem) networks in an IPv6 environment. I also studied how to use existing security devices in the IPv6 environment.

4. A Research on the Network Service Implementation Using LISP

Funded by KT Corporation (2014)

This research aims at developing network services and carrier-grade SDN technology using LISP. As a new SDN protocol, LISP can be used to implement existing network services at a lower cost. In this research, four different network services were implemented: VM live migration, vertical handover, traffic engineering, and disaster recovery. In this work, I constructed a LISP testbed to develop, implement, and test LISP-based network services.

5. A Research on Real-time Processing of High-Speed Application Level Internet

Traffic

Funded by Korea Internet & Security Agency (2013 - 2014)

This research aims at developing a high-performance VoLTE traffic classification

method in the LTE core network. I proposed a scalable traffic analysis architecture,

which relies only on cheap commodity servers, by exploiting distribution and parallelization technology. I also designed and implemented a VoLTE traffic analysis application for extracting several QoS-related network metrics, which can be used for detecting malicious attacks.

6. Intelligent Network Configuration Technology for Efficient SDN Resource

Management

Funded by KT Corporation (2013 - 2014)

This research aims at developing a dynamic traffic engineering algorithm and network topology construction based on SDN. The algorithm can optimize data flows, simplifying network management and configuration according to business needs. In this research, I proposed a fast failover method for data center networks using OpenFlow. I also constructed an SDN testbed to implement and evaluate the proposed algorithms.

7. Security Research for Mobile Cloud Service

Funded by KT Corporation (2010)

This research project aims at surveying and implementing security techniques for mobile cloud services. To improve the security level of mobile cloud services, monitoring and analyzing the abnormal behavior of each mobile virtual instance is required. In this research, I proposed and implemented an abnormal behavior detection method on the host using a machine learning algorithm.

8. Design and Implementation of Smart Blackbox Content Delivery System

In this research, I proposed an architecture for a black box content distribution infrastructure enabling a black box content delivery service. On top of this architecture, I proposed a secure communication mechanism between the vehicle black box and the black box content management server, including device registration and authentication using asymmetric key cryptography.

Publications

International Journal Papers

1. Jonghwan Hyun, Nguyen Van Tu, Jae-Hyoung Yoo, James Won-Ki Hong,


“Real-time and Fine-grained Network Monitoring using In-band Network

Telemetry”, International Journal of Network Management (IJNM), in press.

International Conference Papers

1. Nguyen Van Tu, Jonghwan Hyun, Ga Yeon Kim, Jae-Hyoung Yoo, James Won-

Ki Hong, "INTCollector: A High-performance Collector for In-band Network

Telemetry", 14th International Conference on Network and Service Management

(CNSM 2018), Rome, Italy, Nov.5-9, 2018, pp. 10-18.

2. Jonghwan Hyun, Tu Van Nguyen, James Won-Ki Hong, "Towards Knowledge-

Defined Networking using In-band Network Telemetry", 16th IEEE/IFIP

Network Operations and Management Symposium (NOMS 2018), Taipei,

Taiwan, April 23-27, 2018, pp. 1-7.

3. Jonghwan Hyun, Youngjoon Won, Kenjiro Cho, Romain Fontugne, Jaeyoon

Chung, James Won-Ki Hong, "High-end LTE Service Evolution in Korea: 4

Years of Nationwide Mobile Network Measurements", 13th International

Conference on Network and Service Management (CNSM 2017), Tokyo, Japan,

Nov.26-30, 2017, pp. 1-7.

4. Nguyen Van Tu, Jonghwan Hyun, James Won-Ki Hong, "Towards ONOS-

based SDN Monitoring using In-band Network Telemetry", 19th Asia-Pacific

Network Operations and Management Symposium (APNOMS 2017), Seoul,

Korea, Sep. 27-29, 2017, pp. 76-81.

5. Seyeon Jeong, Doyoung Lee, Jonghwan Hyun, Jian Li, James Won-Ki Hong,

"Application-aware Traffic Engineering in Software-Defined Network", 19th

Asia-Pacific Network Operations and Management Symposium (APNOMS

2017), Seoul, Korea, Sep. 27-29, 2017, pp. 315-318.

6. Jonghwan Hyun, James Won-Ki Hong, "Knowledge-Defined Networking using

In-band Network Telemetry", 19th Asia-Pacific Network Operations and

Management Symposium (APNOMS 2017), Seoul, Korea, Sep. 27-29, 2017, pp.

54-57.

7. Kyungchan Ko, Dongho Son, Jonghwan Hyun, Jian Li, Yoonseon Han, James


Won-Ki Hong, "Dynamic Failover for SDN-based Virtual Networks", 3rd IEEE

Conference on Network Softwarization (NetSoft 2017), Bologna, Italy, July 3-7,

2017, pp. 1-5.

8. Jonghwan Hyun, Youngjoon Won, David Sang-Chul Nahm, James Won-Ki

Hong, "Measuring Auto Switch Between Wi-Fi and Mobile Data Networks in an

Urban Area", 12th International Conference on Network and Service

Management (CNSM 2016), Montreal, Quebec, Canada, Oct. 31 - Nov. 4, 2016,

pp. 287-291.

9. Jonghwan Hyun, Jae-Hyoung Yoo, James Won-Ki Hong, "Measurement and

Analysis of Application-Level Crowd-Sourced LTE and LTE-A Networks", 2nd

IEEE Conference on Network Softwarization (NetSoft 2016), Seoul, Korea, June

6-10, 2016, pp. 269-276.

10. Yoonseon Han, Jonghwan Hyun, and James Won-Ki Hong, "Graph abstraction

based virtual network management framework for SDN", in 2016 IEEE

Conference on Computer Communications Workshops (INFOCOM WKSHPS):

Student Activities (INFOCOM’16 Student Activities), San Francisco, USA, Apr.

2016, pp. 884–885.

11. Jonghwan Hyun, Youngjoon Won, Eunji Kim, Jae-Hyoung Yoo and James

Won-Ki Hong, "Is LTE-Advanced Really Advanced?", 15th IEEE/IFIP Network

Operations and Management Symposium (NOMS 2016), Istanbul, Turkey, April

25-29, 2016, pp. 703-707.

12. Yoonseon Han, Jonghwan Hyun, Taeyeol Jeong, Jae-Hyoung Yoo and James

Won-Ki Hong, "A smart home control system based on context and human

speech", 2016 18th International Conference on Advanced Communication

Technology (ICACT), Pyeongchang Kwangwoon Do, South Korea, Jan.31-

Feb.3, 2016, pp. 1-2.

13. Jonghwan Hyun, Jian Li, Hwankuk Kim, Jae-Hyoung Yoo and James Won-Ki

Hong, "IPv4 and IPv6 Performance Comparison in IPv6 LTE Network", 17th

Asia-Pacific Network Operations and Management Symposium (APNOMS

2015), Pusan, Korea, Aug. 19-21, 2015, pp. 145-150.


14. Taeyeol Jeong, Jian Li, Jonghwan Hyun, Jae-Hyoung Yoo and James Won-Ki

Hong, "Experience on the Development of LISP-enabled Services: an ISP

Perspective", 1st IEEE Conference on Network Softwarization (NetSoft 2015),

UCL, UK, April 13-17, 2015, pp. 1-9.

15. Jonghwan Hyun, Jian Li, ChaeTae Im, Jae-Hyoung Yoo and James Won-Ki

Hong, "A High Performance VoLTE Traffic Classification Method using

HTCondor", 14th IFIP/IEEE International Symposium on Integrated Network

Management (IM 2015), Ottawa,Canada, May 11-15, 2015, pp. 518-524.

16. Jonghwan Hyun, Jian Li, ChaeTae Im, Jae-Hyoung Yoo and James Won-Ki

Hong, “A VoLTE Traffic Classification Method in LTE Network”, 16th Asia-

Pacific Network Operations and Management Symposium (APNOMS 2014),

Hsinchu, Taiwan, Sep. 17-19, 2014.

17. Yoonseon Han, Sin-seok Seo, Jian Li, Jonghwan Hyun, Jae-Hyoung Yoo, James

Won-Ki Hong, “Software Defined Networking-based Traffic Engineering for

Data Center Networks”, 16th Asia-Pacific Network Operations and Management

Symposium (APNOMS 2014), Taiwan, Sep. 17-19, 2014.

18. Jian Li, JongHwan Hyun, Jae-Hyoung Yoo, Seongbok Baik, James Won-Ki

Hong, “Scalable Failover Method for Data Center Networks Using OpenFlow”,

6th IEEE/IFIP International Workshop on Management of the Future Internet

(ManFI 2014), Krakow, Poland, May 5, 2014.

19. Taehyun Kim, Yeongrak Choi, Seunghee Han, Jae Yoon Chung, Jonghwan

Hyun, Jian Li, and James Won-Ki Hong, “Monitoring and Detecting Abnormal

Behavior in Mobile Cloud Infrastructure” , 2012 IEEE/IFIP International

Workshop on Cloud Management (CloudMan 2012), Maui, Hawaii, USA, April

20, 2012, pp. 1303-1310.

Domestic Journal Papers

1. Doyoung Lee, Seyeon Jeong, Jonghwan Hyun, Jian Li, James Won-Ki Hong,

“Application-aware Traffic Engineering in SDN”, KNOM Review, Vol. 19, No.

02, December 2016, pp. 1-12.


2. KyungChan Ko, Dongho Son, Jonghwan Hyun, Jian Li, Yoonseon Han, James

Won-Ki Hong, “A Study of Dynamic Failover for SDN-based Virtual Networks”,

KNOM Review, Vol. 19, No. 01, August 2016, pp. 12-21.

3. Sin-seok Seo, Jian Li, Jonghwan Hyun, Jae-Hyoung Yoo, James Won-Ki Hong,

Seongbok Baik, Chan-Kyu Hwang, Young-Woo Lee, “Traffic Engineering for

Data Center Networks using Software Defined Networking”, KNOM Review,

Vol. 16, No. 2, Dec. 2013, pp. 12-25.

4. Jonghwan Hyun, Yongfeng Huang, Jin Xiao, James Won-Ki Hong, “Event-

based Estimation of Perceived Video Quality for Video Streaming”, KNOM

Review, Vol. 15, No. 2, Dec. 2012, pp. 43-53.

Domestic Conference Papers

1. Jonghwan Hyun, Jae-Hyoung Yoo, James Won-Ki Hong, "End-to-end Latency

Measurement in MEC Environment", 29th Joint Conference on Communications

and Information (JCCI 2019), Gangneung, Korea, May 1-3, 2019, pp. 1-2.

2. Jonghwan Hyun, Gayeon Kim, James Won-Ki Hong, "A Research on ONOS-

based In-band Network Telemetry Management Architecture", 28th Joint

Conference on Communications and Information (JCCI 2018), Yeosu, Korea,

May 2-4, 2018.

3. Jonghwan Hyun, James Won-Ki Hong, “A Study on In-band Network

Telemetry Collection and Management”, 2017 Korean Network Operations and

Management Conference (KNOM Conference 2017), Gwangju, Korea, June 2-

3, 2017, pp. 11-12.

4. Jonghwan Hyun, James Won-Ki Hong, “Survey on Network Virtualization

Technologies”, Korean Network Operations and Management Conference

(KNOM Conference 2016), Chuncheon, Korea, May 12-13, 2016, pp. 93-94.

5. Junemuk Choi, Yoonseon Han, Jonghwan Hyun, James Won-Ki Hong, “SDN

Traffic Engineering Technique based on Machine Learning”, Korean Network

Operations and Management Conference (KNOM Conference 2016),

Chuncheon, Korea, May 12-13, 2016, pp. 95-96.


6. Jonghwan Hyun, Jian Li, ChaeTae Im, Jae-Hyoung Yoo, James Won-Ki Hong,

“Connection Procedure Analysis of the Smartphone for the Different IP Version

in IPv6-only LTE Network”, Korean Institute of Communication Sciences

Winter Conference (KICS Winter), JeongSeon, Korea, Jan. 21-23, 2015.

7. Jonghwan Hyun, Jian Li, ChaeTae Im, Jae-Hyoung Yoo, James Won-Ki Hong,

“A research on the VoLTE traffic classification method in the mobile networks”,

Korean Network Operations and Management Conference (KNOM 2014),

Daejeon, Korea, May 15-16, 2014, pp. 62-66. (in Korean)

8. Jian Li, Jonghwan Hyun, Jae-Hyoung Yoo, James Won-Ki Hong, ”A Failover

Method for Large Data Center Networks Using OpenFlow”, Korean Network

Operations and Management Conference (KNOM 2014), Daejeon, Korea, May

15-16, 2014, pp. 38-42 (in Korean)

9. Jonghwan Hyun, Jae Yoon Chung, Jian Li, and James Won-Ki Hong, "A

Method for Guaranteeing Data Integrity/Transmission of Car Dashboard

Camera”, Korean Network Operations and Management Conference (KNOM

Conference 2013), DaeGu, Korea, May 9-10, 2013, pp. 140-141.

10. Taehyun Kim, Jae Yoon Chung, Jonghwan Hyun, Jian Li, James Won-Ki Hong,

"Abnormal Behavior Monitoring and Detection on Mobile Cloud", Korean

Network Operations and Management Conference (KNOM Conference 2012),

Jeju, Korea, May 3-4, 2012, pp. 130-134. (in Korean)

11. Jae Yoon Chung, Jonghwan Hyun, and James Won-Ki Hong, "Abnormal

Behavior Detection System for Cloud Service", Korean Institute of

Communication Sciences Winter Conference (KICS Winter Conference 2012),

Yongpyung, Korea, Feb. 8-10, 2012. (in Korean)

Domestic Patents

1. James Won-Ki Hong, Jae-Hyoung Yoo, Jonghwan Hyun, “Network Device and

Method and System for Controlling Network Monitoring Using the Same”,

Patent No. 10-2018-0170025, Dec. 27, 2018. (Applicant: POSTECH)


Talks & Demos

1. Jonghwan Hyun, “Tutorial: P4 INT”, 2019 1st SDN/NFV Forum P4 Working

Group Meeting, Seoul, Korea, April 19, 2019.

2. Jonghwan Hyun, “Current Status of P4 Support in ONOS”, 2018 2nd

SDN/NFV Forum P4 Working Group Meeting, Seoul, Korea, Oct. 12, 2018.

3. Jonghwan Hyun, Nguyen Van Tu, James Won-Ki Hong, “In-band Network

Telemetry Management Architecture: ONOS INT Service and XDP”, 5th P4

Workshop, Stanford, CA, June 5, 2018.

Work Experience

Open Networking Foundation

Menlo Park, CA

Visiting Scholar

June 2017 – June 2018

- Developed In-band Network Telemetry management service, which orchestrates the

generation and collection process of telemetry data

- Developed dynamic interface configuration and Q-in-Q VLAN termination feature in

ONOS and corresponding ONOS system test scripts in TestON

Teaching Assistantship

Teaching Assistant (Dept. of CSE, POSTECH)

Courses: CSED312 Operating System; CSED490K Internet of Things; CSED702D Internet Traffic Monitoring and Analysis; CSED702E Open Networking System; CSED702M Advanced Network Management

Terms: 09/2012 – 12/2012; 09/2014 – 12/2014; 09/2015 – 12/2015; 09/2015 – 12/2015; 09/2016 – 12/2016; 02/2019 – 06/2019


Awards

Student Travel Grant, CNSM 2016, 11/2016

Student Travel Grant, APNOMS 2014, 09/2014

Skills

Programming Experiences C, C++, Python, Java, HTML, PHP, P4

System Experiences Windows, Linux, FreeBSD, Solaris, Android

References

Prof. James Won-Ki Hong

Department of Computer Science and Engineering

Pohang University of Science and Technology, Pohang, Korea

Email: [email protected]

Prof. Jae-Hyoung Yoo

Graduate School of Information Technology

Pohang University of Science and Technology, Pohang, Korea

E-mail: [email protected]

Prof. Youngjoon Won

Department of Information System

Hanyang University, Seoul, Korea

Email: [email protected]

Bill Snow

Chief Development Officer

Open Networking Foundation, Menlo Park, CA, USA

Email: [email protected]


I delegate to POSTECH all rights to use the contents of this dissertation for academic and educational purposes.