FP7 CONCERTO Deliverable D6.3
Project acronym: CONCERTO
Project full title: Content and cOntext aware delivery for iNteraCtive multimEdia
healthcaRe applications
Grant Agreement no.: 288502
Deliverable 6.3
CONCERTO Simulator:
Final Architecture and Numerical Result
Contractual date of delivery to EC 31/10/2014
Actual date of delivery to EC 24/12/2014
Version number 1.0
Lead Beneficiary CNIT
Participants TCS, VTT, CEFRIEL, CNIT, BME, KU
Estimated person months: 28
Dissemination Level: PU
Nature: R
Total number of pages 62
Keywords list:
Simulation platform, software modules, OMNeT++.
Ref. Ares(2015)54043 - 08/01/2015
Executive Summary
This document concludes the set of deliverables concerning the development, integration and validation of the CONCERTO solutions through the project simulator. In particular, starting from the preliminary information included in D6.1, a complete overview of the simulation platform is provided, focusing on the relationship with the CONCERTO architecture (WP2) and presenting the structure and functionalities of the most important modules. Input and output information is specified for each module, as well as a reference to the CONCERTO documents containing the technical description of the corresponding algorithms. A selection of the optimization techniques developed in WP3, WP4 and WP5 has been included in the common simulator, addressing the interactions among the different units and evaluating their combined performance in different application scenarios.
Different levels of optimization have been considered, performed by units operating at different points of the communication chain (e.g. Coordination Centre, 4G Radio Access Network, Ambulance, User Equipment) and in different layers of the protocol stack. This approach makes it possible to evaluate the incremental advantages offered by the proposed system, in relation to the stakeholders that will adopt the CONCERTO solutions in the near future, i.e. emergency and healthcare management authorities, 4G operators and/or equipment manufacturers.
As anticipated in D6.1 and D2.4, two CONCERTO use cases have been analysed with particular attention and validated
through the common simulation platform. In this document we report the most significant numerical results obtained in
the selected use cases, outlining the enhancements achievable with respect to the current state-of-the-art communication
systems.
TABLE OF CONTENTS
EXECUTIVE SUMMARY .......................................................... 2
1. INTRODUCTION ............................................................ 6
2. FINAL SIMULATOR ARCHITECTURE ............................................ 7
2.1 GENERAL DESCRIPTION .................................................... 7
2.2 MACRO MODULE DESCRIPTION ............................................... 8
2.2.1 Hospital 1 ........................................................... 8
2.2.2 Hospital 2 ........................................................... 8
2.2.3 Emergency Area ....................................................... 9
2.2.4 Core Network ........................................................ 10
2.2.5 Servers and clients ................................................. 10
3. FINAL SIMULATOR MODULE DESIGN .......................................... 12
3.1 H.264 AVC ENCODER ..................................................... 12
3.2 H.264 AVC DECODER ..................................................... 12
3.3 H.264 MVC ENCODER ..................................................... 12
3.4 H.264 MVC DECODER ..................................................... 12
3.5 MPEG-DASH (HTTP) MODULE ............................................... 13
3.6 APPLICATION CONTROLLER ................................................ 13
3.7 MULTI-SOURCE MANAGEMENT MODULE (COORDINATION CENTER) .................. 13
3.8 MULTI-SOURCE MANAGEMENT MODULE (AMBULANCE) ............................ 14
3.9 RTP MODULE ............................................................ 14
3.10 UDP-LITE MODULE ...................................................... 15
3.11 PL-FEC MODULE ........................................................ 15
3.12 ANYCAST ROUTING MODULE ............................................... 16
3.13 IP ROUTING MODULE .................................................... 16
3.14 BINDINGTABLE ......................................................... 17
3.15 DISTANCE INFO MODULE ................................................. 17
3.16 802.11G/N MAC LAYER .................................................. 18
3.17 CROSS-LAYER SIGNALLING FRAMEWORK ..................................... 18
3.18 4G RRM MODULE ........................................................ 19
3.19 PHY – CHANNEL CODEC MODULE ........................................... 20
3.20 PHY – MIMO MODEM AND FRAME DE-ASSEMBLER MODULES ...................... 21
3.21 RADIO CHANNEL MODULE ................................................. 21
4. SIMULATION RESULTS ..................................................... 23
4.1 KPI EVALUATED THROUGH SIMULATIONS ..................................... 23
4.1.1 End-to-end quality .................................................. 23
4.1.2 Transmission Quality ................................................ 23
4.2 SIMULATED SCENARIOS ................................................... 23
4.2.1 Ambulance and emergency area ........................................ 23
4.2.1.1 Simulation Parameters ............................................. 24
4.2.1.2 Use Case 1: Numerical Results and comparison ...................... 26
4.2.2 Emergency area with multiple casualties ............................. 30
4.2.2.1 Simulation Parameters ............................................. 32
4.2.2.2 Use Case 2: Numerical Results ..................................... 34
4.2.2.3 Use Case 2: Numerical Result Comparison ........................... 51
5. CONCLUSION ............................................................. 57
APPENDIX: MESSAGE TABLE ................................................... 58
6. REFERENCES AND GLOSSARY ................................................ 60
6.1 REFERENCES ............................................................ 60
6.2 GLOSSARY .............................................................. 60
LIST OF FIGURES
Figure 1 – Conceptual matching between the simulator and the overall CONCERTO architecture. .......... 7
Figure 2 – The structure of hospital_1 macro module. .......... 8
Figure 3 – The structure of hospital_2 macro module. .......... 9
Figure 4 – The structure of the Em_area macro module. .......... 9
Figure 5 – Core network structure. .......... 10
Figure 6 – General architecture of the video sources (e.g. mobile phone, smart camera, etc.), single- and multi-access client terminals (e.g. remote doctor’s PC, tablet, etc.). .......... 11
Figure 7 – Structure of the phy layer module. .......... 11
Figure 8 – Simulated scenario. .......... 23
Figure 9 – Scheme of the simulation campaign realized for the Use Case 1. .......... 26
Figure 10 – Sim.UCI – End-to-end PSNR for RTP-based simulations. Comparison of results obtained with and without PL-FEC protection. .......... 27
Figure 11 – Sim.UCI – End-to-end SSIM for RTP-based simulations. Comparison of results obtained with and without PL-FEC protection. .......... 28
Figure 12 – Sim.UCI. PSNR and SSIM quality comparison for all the simulations in Use Case 1. .......... 28
Figure 13 – Sim.UCI. Transmission delay comparison for all the simulations in Use Case 1. .......... 29
Figure 14 – Sim.UCI. TCP receiving rate, retransmissions and retransmission timeouts for Sim.UCI.00.HTTP (left) and Sim.UCI.11.HTTP (right). .......... 29
Figure 15 – Sim.UCI. HTTP buffer status and delay for Sim.UCI.00.HTTP (left) and Sim.UCI.11.HTTP (right). .......... 30
Figure 16 – The simulated emergency area, corresponding to the video acquisition campaign performed at the Hospital of Perugia, Italy. .......... 30
Figure 17 – Logical phases of the Use Case 2 simulations, representing different events and situations within the emergency area. .......... 31
Figure 18 – Optimization techniques considered for the Use Case 2 simulations. .......... 31
Figure 19 – Scheme of the simulation campaign realized for the Use Case 2. .......... 34
Figure 20 – Sim.UCII.000. Target source rate specified to the APP Controller and achieved rate during Phase 1. Case of 2 videos (left) and 4 videos (right). .......... 36
Figure 21 – Sim.UCII.000. Target source rate specified to the APP Controller and achieved rate during Phase 2. .......... 36
Figure 22 – Sim.UCII.000. Rate allocated by the LTE eNodeB to the ambulance and to the additional users in the cell (left) and time spent in TX queue at the LTE MAC layer (right). Transmission of 2 videos during Phase 1 (the case with 4 videos provides similar results). .......... 36
Figure 23 – Sim.UCII.000. Video quality at the doctor's UE within the remote hospital. PSNR values are reported on the left, while SSIM are on the right. Plots (a)-(d) refer to Phase 1, while plots (e) and (f) to Phase 2. The curves reported in (a) and (b) correspond to the case of 2 videos in Phase 1, while (c) and (d) to the case of 4 transmitted videos. .......... 37
Figure 24 – Sim.UCII.000. End-to-end delay in Phase 1 (left) and Phase 2 (right). The case with 2 videos in Phase 1 has been reported, while similar results have been obtained also in the case with 4 videos. .......... 38
Figure 25 – Sim.UCII.001. Target source rate specified to the APP Controller and achieved rate during Phase 1. Case of 2 videos (left) and 4 videos (right), with GBR = 3Mbps at the LTE eNodeB. .......... 39
Figure 26 – Sim.UCII.001. Target source rate specified to the APP Controller and achieved rate during Phase 1. Case of 2 videos (left) and 4 videos (right), with GBR = 4.5Mbps at the LTE eNodeB. .......... 39
Figure 27 – Sim.UCII.001. Target source rate specified to the APP Controller and achieved rate during Phase 2, with GBR = 3Mbps (left) and GBR = 4.5Mbps (right) at the LTE eNodeB. .......... 39
Figure 28 – Sim.UCII.001. Rate allocated by the LTE eNodeB to the ambulance and to additional users in the cell. Transmission of 2 videos during Phase 1, GBR=3Mbps (left) and 4.5Mbps (right). Similar results have been obtained in the case with 4 videos in Phase 1. .......... 40
Figure 29 – Sim.UCII.001. Time spent in TX queue at the LTE MAC layer. The case of 2 videos in Phase 1 has been reported, with GBR=3Mbps (left) and GBR=4.5Mbps (right). Similar results have been obtained in the case with 4 videos in Phase 1. .......... 40
Figure 30 – Sim.UCII.001. End-to-end delay in Phase 1 (left) and Phase 2 (right). The 2 video case has been reported, with a GBR = 3Mbps at the LTE radio link. Similar results have been obtained in the case with 4 videos in Phase 1. .......... 40
Figure 31 – Sim.UCII.001. End-to-end delay in Phase 1 (left) and Phase 2 (right). The 2 video case has been reported, with a GBR = 4.5Mbps at the LTE radio link. Similar results have been obtained in the case with 4 videos in Phase 1. .......... 40
Figure 32 – Sim.UCII.001, GBR = 3Mbps at the LTE eNodeB. Video quality at the doctor's UE within the remote hospital. PSNR values are reported on the left, while SSIM are on the right. Plots (a)-(d) refer to Phase 1, while plots (e) and (f) to Phase 2. The curves reported in (a) and (b) correspond to the case of 2 videos in Phase 1, while (c) and (d) to the case of 4 transmitted videos. .......... 41
Figure 33 – Sim.UCII.001, GBR = 4.5Mbps at the LTE eNodeB. Video quality at the doctor's UE within the remote hospital. PSNR values are reported on the left, while SSIM are on the right. Plots (a)-(d) refer to Phase 1, while plots (e) and (f) to Phase 2. The curves reported in (a) and (b) correspond to the case of 2 videos in Phase 1, while (c) and (d) to the case of 4 transmitted videos. .......... 42
Figure 34 – Sim.UCII.100. Target source rate specified to the APP Controller and achieved rate during Phase 1. Case of 2 videos (left) and 4 videos (right). .......... 44
Figure 35 – Target source rate specified to the APP Controller and achieved rate during Phase 2. .......... 44
Figure 36 – Sim.UCII.100. Rate allocated by the LTE eNodeB to the ambulance and to the additional users in the cell (left) and time spent in TX queue at the LTE MAC layer (right). Transmission of 2 videos during Phase 1 (the case with 4 videos provides similar results). .......... 44
Figure 37 – Sim.UCII.100. Video quality at the doctor's UE within the remote hospital. PSNR values are reported on the left, while SSIM are on the right. Plots (a)-(d) refer to Phase 1, while plots (e) and (f) to Phase 2. The curves reported in (a) and (b) correspond to the case of 2 videos in Phase 1, while (c) and (d) to the case of 4 transmitted videos. .......... 45
Figure 38 – Sim.UCII.100. End-to-end delay in Phase 1 (left) and Phase 2 (right). The case with 2 videos in Phase 1 has been reported, while similar results have been obtained also in the case with 4 videos. .......... 46
Figure 39 – Sim.UCII.111. Target source rate specified to the APP Controller and achieved rate during Phase 1. Case of 2 videos (left) and 4 videos (right), with GBR = 3Mbps at the LTE eNodeB. .......... 47
Figure 40 – Sim.UCII.111. Target source rate specified to the APP Controller and achieved rate during Phase 1. Case of 2 videos (left) and 4 videos (right), with GBR = 4.5Mbps at the LTE eNodeB. .......... 47
Figure 41 – Sim.UCII.111. Target source rate specified to the APP Controller and achieved rate during Phase 2, with GBR = 3Mbps (left) and GBR = 4.5Mbps (right) at the LTE eNodeB. .......... 47
Figure 42 – Sim.UCII.111. Rate allocated by the LTE eNodeB to the ambulance and to the additional users in the cell. Transmission of 2 videos in Phase 1, GBR=3Mbps (left) and 4.5Mbps (right). Similar results have been obtained in the case with 4 videos in Phase 1. .......... 48
Figure 43 – Sim.UCII.111. Time spent in TX queue at the LTE MAC layer. The case of 2 videos in Phase 1 has been reported, with GBR=3Mbps (left) and GBR=4.5Mbps (right). Similar results have been obtained in the case with 4 videos in Phase 1. .......... 48
Figure 44 – Sim.UCII.111, GBR = 3Mbps at the LTE eNodeB. Video quality at the doctor's UE within the remote hospital. PSNR values are reported on the left, while SSIM are on the right. Plots (a)-(d) refer to Phase 1, while plots (e) and (f) to Phase 2. The curves reported in (a) and (b) correspond to the case of 2 videos in Phase 1, while (c) and (d) to the case of 4 transmitted videos. .......... 49
Figure 45 – Sim.UCII.111, GBR = 4.5Mbps at the LTE eNodeB. Video quality at the doctor's UE within the remote hospital. PSNR values are reported on the left, while SSIM are on the right. Plots (a)-(d) refer to Phase 1, while plots (e) and (f) to Phase 2. The curves reported in (a) and (b) correspond to the case of 2 videos in Phase 1, while (c) and (d) to the case of 4 transmitted videos. .......... 50
Figure 46 – Sim.UCII.111. End-to-end delay in Phase 1 (left) and Phase 2 (right). The 2 video case has been reported, with a GBR = 3Mbps at the LTE radio link. Similar results have been obtained in the case with 4 videos in Phase 1. .......... 51
Figure 47 – Sim.UCII.111. End-to-end delay in Phase 1 (left) and Phase 2 (right). The 2 video case has been reported, with a GBR = 4.5Mbps at the LTE radio link. Similar results have been obtained in the case with 4 videos in Phase 1. .......... 51
Figure 48 – Use Case 2: comparison of the average PSNR achieved for the received videos in the different cases. .......... 53
Figure 49 – Use Case 2: comparison of the average SSIM achieved for the received videos in the different cases. .......... 54
Figure 50 – Use Case 2: average throughput of the ambulance and FTP users (aggregate) for each simulated scheme. .......... 55
Figure 51 – Use Case 2: 3rd quartile (i.e., 75th percentile) of the packet delay CDF for each simulated scheme. .......... 55
LIST OF TABLES
Table 1 – Message table. .......... 59
1. Introduction
This document focuses on the final architecture of the CONCERTO simulator, detailing the integration into the simulator of the enhanced functionalities previously developed in WP3, WP4 and WP5 of the CONCERTO project, according to the system architecture defined in WP2. It also collects extensive comparative numerical results on the CONCERTO solutions obtained through simulations. The final version of the CONCERTO simulator includes more modules than originally planned, as the consortium decided to extend the combinations of processing algorithms validated through the software tool. For example, an RTP-to-HTTP streaming transcoding module has been inserted in the Coordination Center to allow TCP-based video communication within the hospital and to compare it with the solution based on end-to-end RTP connections. Similarly, a joint adaptation and aggregation unit based on a quality-fair policy has been included within the ambulance, to compare some algorithms developed in WP4 with equal-rate mechanisms.
The CONCERTO simulator, developed in the event-driven OMNeT++ platform, consists of general macro-modules representing well-defined entities, i.e., the hospitals, the emergency area that includes the ambulance and the local visual camera network, as well as the core network and the server/client terminals. Each of these macro-modules comprises one or more specific modules, for which a schematic description is provided in this document. Each description includes the basic input/output information and a reference to the technical document where the module is detailed.
The simulator currently supports the emulation of different optimization strategies for two main use cases considered in
the CONCERTO project, i.e.,
• “Ambulance and Emergency Area” (Use case 1)
• “Emergency area with multiple casualties” (Use case 2)
In the first scenario, the transmission of multiple video streams from an ambulance on the move is addressed. In particular, two ambient videos are multiplexed with an ultrasound medical stream, in order to enable remote support by specialists at the hospital. In this scenario, we first show the impact of packet-level FEC (PL-FEC) on the RTP transmission from the ambulance to the CC, highlighting the quality improvements due to efficient packet loss recovery. Then, multiple communication schemes are simulated assuming different levels of optimization within the ambulance and at the eNodeB. Quality-fair multiple video adaptation is compared to equal-rate schemes, and the advantages of a guaranteed bit-rate policy are shown with respect to more traditional multiuser scheduling. In Use case 1 we also evaluated the possibility of adopting HTTP streaming of the medical video contents on the link between the CC and the doctor’s UE. This solution is compared with a communication scheme entirely based on RTP/UDP.
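As an aside, the recovery principle behind packet-level FEC can be illustrated with a minimal single-parity XOR code: one redundancy packet per block of k equal-length source packets allows any single loss within the block to be rebuilt without retransmission. This is only a conceptual sketch; the actual PL-FEC module uses the codes described in the referenced WP deliverables, and the function names and equal-length-packet assumption below are ours.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets, k):
    """Group packets into blocks of k and append one XOR parity packet per
    block, so any single loss inside a block can be recovered."""
    blocks = []
    for i in range(0, len(packets), k):
        block = packets[i:i + k]
        parity = block[0]
        for p in block[1:]:
            parity = xor_bytes(parity, p)
        blocks.append((block, parity))
    return blocks

def recover(block, parity, lost_index):
    """Rebuild the single lost packet from the survivors and the parity."""
    rebuilt = parity
    for j, p in enumerate(block):
        if j != lost_index:
            rebuilt = xor_bytes(rebuilt, p)
    return rebuilt
```

Note the trade-off this makes visible: one parity packet per block costs 1/k extra rate and repairs at most one loss per block, which is why the simulations compare quality with and without PL-FEC protection.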
For the use case “Emergency area with multiple casualties”, we emulate the main emergency scenario addressed by the video acquisition campaign performed at the Hospital of Perugia. The scenario includes two distinct aid operational stages, i.e., a first-aid stage where multiple static ambient outdoor cameras acquire videos carrying different visual information, and an indoor phase where one injured person is loaded into the ambulance. In the latter case, two indoor ambient videos and a diagnostic ultrasound video sequence are transmitted from the ambulance.
We consider different degrees of optimization for both stages, ranging from radio resource management, i.e., QoS-based optimization, to application-based enhancements, i.e., QoE-aware solutions. Specifically, we evaluate four strategies, including
• a benchmark solution, where the transmitting unit at the ambulance equally divides the rate available on the wireless link among all (or a subset of) the videos of the camera visual network, whereas the LTE eNB allocates the radio resources according to a conventional proportional fair strategy;
• a fully optimized strategy, where the ambulance divides the negotiated guaranteed bit-rate, provided by an optimized RRA at the eNB, among the cameras selected by a soft camera ranking algorithm, in order to provide a high quality to the diagnostic video and a lower but fair video quality to the ambient videos.
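The contrast between the two policies above can be sketched in a few lines. The real quality-fair and soft-ranking algorithms are specified in the WP4 deliverables; the fixed diagnostic share, the two-camera selection, and all function names below are illustrative assumptions only.

```python
def equal_rate_split(total_rate_kbps, n_videos):
    """Benchmark: divide the available link rate equally among all videos,
    regardless of their content or diagnostic relevance."""
    return [total_rate_kbps / n_videos] * n_videos

def ranked_split(total_rate_kbps, ranking, diagnostic_share=0.5, n_selected=2):
    """Illustrative optimized policy: reserve a fixed share of the guaranteed
    bit-rate for the diagnostic stream, then split the remainder equally among
    the n_selected highest-ranked ambient cameras; the rest are not sent."""
    diag_rate = total_rate_kbps * diagnostic_share
    # Indices of the ambient cameras, best-ranked first.
    selected = sorted(range(len(ranking)),
                      key=lambda i: ranking[i], reverse=True)[:n_selected]
    per_camera = total_rate_kbps * (1 - diagnostic_share) / n_selected
    return diag_rate, {i: per_camera for i in selected}
```

With a 3 Mbps budget and three ambient cameras, the benchmark gives each video 1 Mbps, while the ranked policy concentrates half the budget on the diagnostic stream and the rest on the two best-ranked cameras, which is the qualitative behaviour the Use Case 2 comparisons quantify.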
Several numerical evaluations are reported, showing the significant enhancements obtained in the received visual and video quality, and the large reduction in end-to-end delay achieved by the proposed mechanisms with respect to the benchmark.
A third use case, namely “Ubiquitous tele-consultation” (Use case 3), has been implemented, but its simulations are still ongoing; the corresponding results will be reported in a second issue of this deliverable.
The remainder of this document is organized as follows: in Section 2, we first introduce the peculiarities of each macro-module, whereas in Section 3 all significant specific modules of the simulator are schematically described. In Section 4 we present the detailed numerical evaluation for each use case, according to the main defined KPIs, both QoS-based and QoE-aware. Finally, in Section 5, we provide the conclusions of the deliverable.
2. Final simulator architecture
2.1 General description
An important objective of WP6 has been the development of a joint CONCERTO simulator, with the purpose of validating the work of the different technical work-packages. In order to provide realistic results, the simulation framework OMNeT++ [1] has been selected as the common basis on which to build the CONCERTO simulator. OMNeT++ is a modular, discrete-event simulation framework that provides basic modules and functionalities to build network simulators. OMNeT++ maintains a simulated clock, decoupling simulation time from wall-clock time and thus avoiding real-time issues.
A complete protocol stack has been implemented, from the application to the physical layer. The functionalities and algorithms integrated in the simulator have been provided by WP3, WP4 and WP5 and consist of a selection of the outcomes of the aforementioned work-packages. To this purpose, the preliminary simulator architecture described in D6.1 has been extended. The CONCERTO simulator has a modular structure: several macro modules compose the simulator, and each of them is, in turn, composed of several modules and sub-modules. While macro modules typically represent the areas of the simulation scenario (i.e. the hospital, the emergency area, the LTE cell, etc.), the sub-modules implement the different functionalities and algorithms. The mapping between the final CONCERTO architecture and the simulator structure is depicted in Figure 1.
Figure 1 – Conceptual matching between the simulator and the overall CONCERTO architecture.
2.2 Macro module description
In this section, each macro-module composing the simulator architecture is illustrated in detail, updating and completing the preliminary description provided in D6.1.
2.2.1 Hospital 1
The purpose of the hospital_1 macro module is to enable the evaluation of tele-consultation and tele-diagnosis applications in scenarios where the staff of one hospital (hospital_1) needs assistance from the specialist(s) of another hospital (hospital_2). The structure of hospital_1 is rather simple, since it basically reproduces the transmission of video and other healthcare contents from the hospital to remote specialists. To this purpose, a wired video and data server is connected to the core network. A simple gateway module (hosp1_ctrl_manager) has been introduced to enable routing functionalities, should additional devices and medical equipment be connected to the network from hospital_1.
Figure 2 – The structure of hospital_1 macro module
2.2.2 Hospital 2
The structure of hospital_2 is depicted in Figure 3. A local wireless area network has been introduced to mimic
indoor mobile access through portable devices. The doctor’s mobile terminal (wireless_client[0]) is connected to the
core network through a WiFi access point (bs_2). The transmission across the wireless channel is simulated by a radio
channel module (WIFI_RadioChannel). Other wired clients (wired_client[0]) may also be present in the hospital.
Multisource management and video ranking/prioritization algorithms are implemented in the Coordination_Center
(CC). Video and medical streams from the remote sources are collected by the CC, then stored and forwarded to the
final user (the doctor). Control signals originating from ranking, aggregation and adaptation techniques are transmitted
from the hospital to the remote devices to optimize the monitoring and telemedicine tasks, through DDE-based
mechanisms. The Coordination_Center also contains an HTTP (MPEG-DASH) server that handles incoming HTTP
requests inside the hospital_2 network. In this way, the video streams from the emergency_area can be transcoded into
MPEG-DASH format and stored in the CC for later playback via HTTP, so that a wider audience can be reached
without dedicated streaming servers. For example, the wireless_client inside the hospital_2 topology includes both an
MPEG-DASH client and an RTP/UDP client.
Figure 3 - The structure of hospital_2 macro module
2.2.3 Emergency Area
The emergency area (Em_area), as depicted in Figure 4, includes the ambulance, the local camera network and the
medical video sources (video_server[i]). Although not explicitly represented in the figure, some video sources are
deployed across the entire emergency area, while others are placed on board the ambulance.
Figure 4 – The structure of the Em_area macro module.
The ambulance is connected to the IPv6 core network through a 4G LTE wireless link, capable of guaranteeing good
connectivity in a wide range of mobile scenarios. Multimedia information flows from the local video and medical
sources to the remote hospital, passing through the aggregation server within the ambulance, while feedback and
control signalling
follow the reverse path, from the Coordination Center to the on-site equipment. In Use Case 1, the ambulance is
typically on the move and the different video sources are directly connected to the onboard processing unit, while in
Use Case 2 a stationary emergency installation is addressed, in which the medical devices can also be in the proximity
of the ambulance. In the latter case, a wireless local area network is deployed to allow short-range communication of
video and data contents among the different pieces of equipment. For this reason, as depicted in Figure 4, the
ambulance is provided with two radio interfaces, namely IEEE WiFi (mac[1]/phy[1]) and 4G LTE (lte_mac[0]/phy[0]). The data
streams collected onboard the ambulance are multiplexed and transmitted through a single link to the LTE access
network, in order to facilitate the management of the emergency communication by the 4G service provider.
To coordinate the stream aggregation and transmission, a multi-source management unit is included in the ambulance
processing equipment. This module is in charge of jointly selecting the target bit-rate for the multiple video encoders
available on-site. The rate adaptation can be performed following different strategies, as reported in section 4.2.2 and
described in more detail in D4.3. For this purpose, the multi-source management units located at the Coordination
Center and within the ambulance exchange control information and feedback through the DDE mechanism (see
D2.3 for more details about the exchanged messages).
2.2.4 Core Network
Figure 5 depicts the core network module. The module functionalities aim to reproduce a realistic IPv6 network,
enabling the bi-directional message exchange between the hospitals, the emergency area and the video servers/clients.
The main module, IP Network, manages the routing of the incoming packets to deliver each of them to the correct
destination. The IP network module simulates the transmission of video and data streams across a realistic set of routers
characterized by random delays and packet loss probability.
The core network also comprises the Anycasting_routing module and the Feedback_Aggr_Server.
Figure 5 - Core network structure.
2.2.5 Servers and clients
Each device acting as an information source or an information sink is implemented following the ISO/OSI protocol
stack. This approach makes it possible to evaluate the performance of novel solutions and smart algorithms while taking
into account the most important features of real communication standards and their fundamental mechanisms. In Figure 6,
the general architecture of a video source and a client terminal is reported. Moreover, some of the modules of Figure
6 are in turn compound modules constituted by simpler basic elements. For example, the phy module is
composed of five sub-modules, as reported in Figure 7. In Chapter 3, we provide a detailed description of the main
modules included in the simulator.
Figure 6 – General architecture of the video sources (e.g. mobile phone, smart camera, etc.), single- and multi-access
client terminals (e.g. remote doctor’s PC, tablet, etc.).
Figure 7 – Structure of the phy layer module.
We have also implemented a multi-access mobile terminal, aiming to evaluate the benefits of adaptive, flow-aware
communication techniques relying on overlapping heterogeneous radio coverage. The multi-access smartphone has
two interfaces: one is connected to the LTE RadioChannel and the other one is connected to a WIFI RadioChannel
supported by a Wi-Fi Access Point (AP). In addition, this smartphone entity has a new sub-module called BindingTable,
which stores and manages the state of the address bindings based on a simple model of RFC 6089.
We conclude this section by observing that the smart functionalities addressed in CONCERTO will be located in
different modules of the communication chain. For example, the multi-source management (MSM) module will
typically operate between the observer_unit at the client side and the controller_video at the streaming server.
Similarly, the Application Controller will be part of the controller_video within the server. In contrast, the radio
resource management (RRM) module will be inserted in the bs_controller, mainly operating on the MAC and PHY
layers of the 4G base station (bs_1). The IP content-aware cross-layer-scheduler will be distributed in the servers within
the IPv6 core network, while the cross-layer signalling framework will implement novel solutions for information
exchange between the observer_unit at the client side and multiple elements along the communication chain.
3. Final simulator module design
In this chapter, the final modules included in the CONCERTO simulator are described, updating and completing the
preliminary information provided in D6.1. After a short description of the main modules’ functionalities, the list of
input/output information is provided, with reference to both data and control signals. Finally, proper references to the
technical documents describing the implemented algorithms and models are reported for the reader’s convenience.
3.1 H.264 AVC Encoder
Module description
This module is part of the application layer and is in charge of encoding video data according to the H.264/AVC standard.
The module can read either raw videos or already compressed ones. If the simulation parameter that specifies the file to
read refers to an already compressed video, the module will read the file and simply send the video frame by frame to the
lower layers. In the case of a raw video (YUV 4:2:0), the module receives the compression
parameters from the application controller and generates an AVC bitstream to send to the application controller and to the lower layers.
Input data
• Raw videos to transmit.
• Already compressed videos to transmit
• Compression parameters from application controller
Output data
• Encoded AVC streams for the application controller and lower layers.
3.2 H.264 AVC Decoder
Module description
This module is part of the application layer and is in charge of decoding video data according to the H.264/AVC
standard. As for the encoder module, the decoder can work in two different ways according to the simulation parameters.
The module receives encoded AVC streams from the RTP module and can either decode the received streams to generate
raw videos or simply recompose complete AVC videos starting from the received frames. In both cases, once the videos
are entirely recomposed, they can be viewed through a video player by the end user.
Input data
• Encoded AVC streams from RTP module.
Output data
• Raw videos or AVC videos for the end user.
3.3 H.264 MVC Encoder
Module description
This module is part of the application layer and is in charge of encoding, into a single video, sequences captured
simultaneously by multiple cameras, according to the MVC amendment to the H.264/AVC standard. The module reads raw
videos (YUV 4:2:0) or already coded videos and generates an MVC bitstream to send to the application controller and to the
lower layers. In the case of raw videos, it also receives the compression parameters from the application controller module.
Input data
• Raw or AVC coded videos to transmit.
• Compression parameters from application controller
Output data
• Encoded MVC streams for the application controller and lower layers.
3.4 H.264 MVC Decoder
Module description
This module is part of the application layer and is in charge of decoding multi-camera video data according to the
H.264/MVC standard. The module receives encoded MVC streams from the RTP module and
generates raw videos or AVC videos for the end user.
Input data
• Encoded MVC streams from RTP module.
Output data
• Raw videos or AVC videos for the end user.
3.5 MPEG-DASH (HTTP) module
Module description
The MPEG-DASH module is part of the application layer and contains an HTTP server and an HTTP client that communicate
on top of the TCP protocol. The server side is basically a simple web server handling incoming HTTP requests;
furthermore, it holds the generated MPEG-DASH files as a storage base. The client side contains an MPEG-DASH client,
which has functionalities similar to those of a web browser. The main responsibilities of the client include the initialisation of
the HTTP/TCP session, playlist file parsing and the scheduling of new HTTP requests. Since the client implementation
does not contain an MPEG-DASH compatible player, the received video data is saved into file(s).
Reference for Technical Details
The MPEG-DASH functionalities implemented in the system simulator are described in more detail in WP4
deliverable D4.3 [14].
Input data
• Encoded / raw (YUV) video(s)
Output data
• MPEG-DASH compatible
o playlist file (.mpd)
o initialization file (.mp4)
o video segments (.m4s)
3.6 Application Controller
Module description
The application controller is part of the application layer and is the module in charge of providing the required inputs to
the RTP module and to the encoder modules (AVC or MVC). In particular, the application controller uses the expected
throughput and loss probability to dynamically adapt the compression rate of the video stream to be transmitted,
providing adapted frames to the RTP module. Moreover, it controls the FEC that the RTP module will use. The
application controller is compatible with AVC, SVC and MVC streams.
Input data
• Data stream to transmit (from AVC/SVC/MVC encoder)
• Expected throughput (through the cross layer signaling framework)
• Expected error probability (through the cross layer signaling framework)
Output data
• Frames adapted to the expected throughput to RTP module
• FEC parameters to RTP module
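As an illustration, the interplay between expected throughput, loss probability and FEC overhead can be sketched as below. The loss thresholds, rate bounds and the function name `adapt_stream` are hypothetical choices for this sketch, not the project's actual adaptation policy.

```python
def adapt_stream(expected_throughput_bps, loss_prob,
                 min_rate_bps=200_000, max_rate_bps=4_000_000):
    """Pick a target encoder rate and an FEC code rate from cross-layer inputs.

    The loss-probability thresholds below are illustrative only."""
    if loss_prob < 0.01:
        code_rate = 1.0          # clean channel: no FEC redundancy
    elif loss_prob < 0.05:
        code_rate = 5 / 6        # light protection
    else:
        code_rate = 2 / 3        # heavy protection
    # The source rate is the share of the expected throughput left after
    # accounting for FEC redundancy, clamped to the encoder's range.
    target = expected_throughput_bps * code_rate
    target = max(min_rate_bps, min(max_rate_bps, target))
    return target, code_rate
```

The key design point the module embodies is that the encoder rate and the FEC overhead must be chosen jointly, since both compete for the same expected throughput.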
3.7 Multi-Source Management Module (Coordination Center)
Module Description
The Coordination Center (CC) is developed with the aim of managing the multiple sources of information destined for end-users
located in the hospital; as shown in Figure 3, it acts as the main interface between Hospital 2 and the outer
world. In this regard, different functionalities are implemented in this module. In particular, the Coordination Center is
capable of storing all the received information flows. This functionality has been implemented in order to meet the
hospital internal regulations, which require that all the incoming data be recorded. The received
information is then forwarded, based on specific selection criteria, to the destination user. Alternatively, the stored videos can
be transcoded in order to perform TCP-based video communication within the hospital and compare it with the solution
based on end-to-end RTP connections.
The video selection criteria are applied based on the video content and the video typology. For the
video sources deployed across the emergency area (outdoor ambient videos in the following), the video selection is
mainly based on camera ranking algorithms, which define the sub-set of cameras capable of providing the
best visual quality of the filmed scene in the emergency area.
The implemented camera ranking strategies are mainly based on the camera pose (position and orientation) and on further
camera parameters such as resolution and frame rate. This information, in conjunction with the position of the point of
interest, is used to define a priority policy among the different cameras. The resulting priority weights are forwarded
to the ambulance through the DDE (see 3.17): in this regard, the Coordination Center is a DDE feedback
event producer.
In turn, the ambulance takes advantage of this information in order to guarantee the best quality to the prioritized
videos during the radio resource allocation process (see 3.8).
For the videos acquired directly on the ambulance, the Coordination Center provides the priority information used to
define the number of videos to be transmitted and to guarantee the best quality to the medical data.
Reference for Technical Details
The Coordination Center module functionalities are described in more detail in WP4 deliverables D4.2 (Chapter 4)
and D4.3 (Chapter 2).
Input Data:
• Outdoor ambient video pose (position and orientation)
• Indoor ambient video typology (ambient or medical)
• Position of the point of interest
Output Data:
• Video camera priority weights
3.8 Multi-Source Management Module (Ambulance)
Module Description
The multi-source management module in the Ambulance is in charge of determining the bit-rate to
be allocated to each video, when multiple video sources are multiplexed by the ambulance and sent through a single
uplink LTE channel.
This task is performed dynamically (e.g. every two seconds) according to the content characteristics of each video
scene intended for transmission, while respecting the uplink transmission bit-rate granted to the ambulance
equipment. The objective of this module is to provide a high quality of experience for diagnostic video sequences and
a lower, but fair, quality for less critical ambient videos.
Reference for Technical Details
The multi-source management module functionalities are described in more detail in WP4, deliverable D4.3, section
2.3.
Input Data:
• Set of video cameras to be transmitted
• Video camera priority weights
• Three parameters describing the rate-quality model of each video scene
• Minimum and maximum allowed encoding rate
• Available uplink throughput (i.e. guaranteed bit-rate if GBR-LRE algorithm is applied at eNodeB)
Output Data:
• Target source bit-rate for each video
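The allocation principle (grant each source its minimum rate first, then share the remaining uplink budget according to the priority weights, up to each source's maximum rate) can be sketched as follows. This is a simplified stand-in for the actual algorithm of D4.3 section 2.3: the three-parameter rate-quality model is replaced here by plain priority weights, and all names are illustrative.

```python
def allocate_rates(total_bps, videos):
    """videos: list of dicts with 'weight', 'r_min', 'r_max' (bit/s).

    Returns one target rate per video, summing to at most total_bps."""
    rates = [v['r_min'] for v in videos]          # minimum quality first
    budget = total_bps - sum(rates)
    assert budget >= 0, "uplink cannot carry even the minimum rates"
    active = list(range(len(videos)))
    # Distribute the remaining budget in proportion to the priority weights;
    # budget freed by sources hitting r_max is redistributed on the next pass.
    while budget > 1e-9 and active:
        wsum = sum(videos[i]['weight'] for i in active)
        leftover = 0.0
        for i in list(active):
            share = budget * videos[i]['weight'] / wsum
            take = min(share, videos[i]['r_max'] - rates[i])
            rates[i] += take
            leftover += share - take
            if rates[i] >= videos[i]['r_max'] - 1e-9:
                active.remove(i)
        budget = leftover
    return rates
```

A diagnostic video would simply carry a larger weight than an ambient one, so it receives a proportionally larger slice of whatever throughput the eNodeB grants.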
3.9 RTP module
Module description
The RTP module is part of the transport layer and implements the RTP protocol. This module is in charge of opening
RTP connections to transfer data. It receives the data to stream (images and videos) from the application layer and builds
the packets to be transmitted according to the codec in use and the received inputs on Forward Error Correction
(FEC).
In reception, the RTP module extracts the data included in the packets sent by the UDP-Lite module and it recomposes the data
stream for the application layer.
The RTP module supports the following modes:
• the default mode (RFC 3640 [6]);
• the H.264 mode (RFC 3984 [7]) ;
• a Systematic (fully transparent for users unaware of the FEC feature) mode based on Reed-Solomon (RS)
codes;
• a Non-systematic (information bits are used to generate a non-systematic payload) mode based on rate
compatible punctured convolutional (RCPC) codes.
Input data
• Parameters for the FEC from Application Controller module
• Images from the application layer to be sent (in transmission)
• Packets from the UDPlite module in reception
Output data
• Packets to be sent to the UDPlite module (in transmission)
• Recomposed data stream to the application layer (in reception)
3.10 UDP-lite module
Module description
The UDP-Lite module implements the transport layer of the simulator, following the Lightweight User Datagram
Protocol as described in RFC 3828 [13]. The protocol is similar to classic UDP, with the difference that damaged
packets are not necessarily discarded. Indeed, UDP-Lite makes it possible to define a partial checksum coverage,
whereas in classic UDP the checksum covers the full packet.
Input data
UDP-lite module receives:
• in transmission, packets from application layer (RTP module or HTTP module);
• in reception, packets from the IP module.
Output data
UDP-lite module outputs are:
• in transmission, UDP-Lite packets to the IP module;
• in reception, RTP or HTTP packets to the application layer.
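The effect of partial coverage can be illustrated with the RFC 1071 Internet checksum: under UDP-Lite, a bit error beyond the coverage boundary leaves the packet valid. This is a simplified model — the real UDP-Lite checksum also covers a pseudo-header and the UDP-Lite header itself — and the function names are illustrative.

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement sum over 16-bit words."""
    if len(data) % 2:
        data += b'\x00'
    total = sum(int.from_bytes(data[i:i + 2], 'big')
                for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udplite_valid(packet: bytes, coverage: int, stored_checksum: int) -> bool:
    """Only the first `coverage` bytes are protected, so damage beyond the
    coverage boundary does not invalidate the packet. Classic UDP is the
    special case coverage == len(packet)."""
    return internet_checksum(packet[:coverage]) == stored_checksum
```

This is exactly why UDP-Lite suits the simulator's video traffic: a frame with a corrupted payload can still reach the decoder instead of being dropped at the transport layer.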
3.11 PL-FEC Module
Module description
Packet-level FEC functionalities have been included in the CONCERTO simulator. The PL-FEC module operates at the
transport layer, on RTP/UDP packets. PL-FEC solutions based on both binary and nonbinary LDPC codes have been
designed and implemented in the module, as well as iterative and maximum likelihood (ML) decoding functionalities.
The module is able to provide different levels of stream protection, as multiple code rates are supported. Multiple Source
Block (SB) management is supported, as well as the possibility to manage simultaneously encoded flows from different
sources.
Four binary GeIRA LDPC codes have been designed, with the following parameters:
- n=8192, k=4096 (Rc=1/2), optimized for iterative decoding
- n=8192, k=4096 (Rc=1/2), optimized for ML decoding
- n=6144, k=4096 (Rc=2/3), optimized for iterative decoding
- n=6144, k=4096 (Rc=2/3), optimized for ML decoding
Moreover, three nonbinary LDPC codes have been designed over GF(4), optimized for ML decoding, and inserted in the
simulator:
- n=4096, k=2048 (Rc=1/2)
- n=3072, k=2048 (Rc=2/3)
- n=2458, k=2048 (Rc=5/6)
Reference for Technical Details
The implemented encoding and decoding processes, as well as the source and repair packet formats, are described in
D4.2 (Chapter 5). The nonbinary code design principles and performance evaluation are described in D4.3 (Chapter 3).
Input data
The input information consists of configuration information and data packets to be processed:
• Code type (binary, nonbinary)
• Code rate (1/2, 2/3, 5/6)
• Source packets from upper layers (Encoder)
• Priority of incoming source packets (Encoder)
• Source and repair packets from lower layers (Decoder)
• Decoder type, iterative or ML (Decoder)
• Max number of iterations for iterative decoding (Decoder)
In addition, some specific information must be provided to configure the TX and RX source block managers:
• SB symbol length
• UDP destination ports to be encoded together
• UDP destination port for repair packets
• Encoding and decoding max tolerable delay (TX and RX trigger)
Output data
The output information consists of processed packets and feedback information:
• Processed source packets and repair packets (Encoder)
• Packet loss rate estimation (Decoder)
• Recovered source packets (Decoder)
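The source-block/repair-packet mechanism can be illustrated with a toy single-parity code: one XOR repair packet per source block, able to recover one lost source packet. This is only a didactic stand-in — the module itself uses the GeIRA and nonbinary LDPC codes listed above, which protect much larger blocks against many losses.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_repair(source_packets):
    """One repair packet = XOR of all (equal-length) source packets."""
    repair = source_packets[0]
    for p in source_packets[1:]:
        repair = xor_bytes(repair, p)
    return repair

def decode_block(received, repair, k):
    """received: dict {index: packet} of the source packets that arrived.

    Recovers at most one missing packet out of k, like a rate k/(k+1)
    single-parity code."""
    missing = [i for i in range(k) if i not in received]
    if not missing:
        return dict(received)
    if len(missing) > 1:
        raise ValueError("single-parity repair corrects at most one loss")
    acc = repair
    for p in received.values():
        acc = xor_bytes(acc, p)   # XOR out the arrived packets
    out = dict(received)
    out[missing[0]] = acc
    return out
```

The LDPC codes of the module follow the same encode/transmit/recover pattern, but with many repair packets per block and rates of 1/2, 2/3 or 5/6 instead of k/(k+1).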
3.12 Anycast Routing Module
Module description
The Anycast Routing Module is a special network layer function performing a unique type of IPv6 routing: anycasting
provides a set of interfaces (each in a different location) which share the same IP address if they supply the same
network service, so that the users of these services only need to know a single address to reach the service. The anycast
routing mechanism transmits a datagram to an anycast address and provides best-effort delivery of the datagram to at
least one, and preferably only one, of the servers that accept datagrams for the anycast address. In tomorrow's networks
the role of anycasting will probably grow, since mobility (service discovery in new networks during roaming) and
multimedia delivery to mobile devices (feedback routing) are becoming elementary services.
Anycasting is applied in CONCERTO to always provide the fastest delivery of every single feedback message from the
mobile terminals towards the system controller, either directly or through intermediate aggregators.
Input data
• IP addresses currently in use
• Transport layer packets
• Data layer packets
Output data
• Signalling messages with anycast routing information (including metrics) to the closest anycast capable router
• Network layer packets to the Transport and Data layer
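A minimal model of the anycast decision — several servers advertise the same address, and the datagram is forwarded towards the instance with the best metric — could look like this; the entry format is a hypothetical simplification of a real routing table.

```python
def pick_anycast_next_hop(anycast_addr, routing_entries):
    """routing_entries: iterable of (address, next_hop, metric) tuples.

    Several servers may accept datagrams for the same anycast address;
    best-effort delivery forwards towards the lowest-metric instance."""
    candidates = [(metric, next_hop)
                  for addr, next_hop, metric in routing_entries
                  if addr == anycast_addr]
    if not candidates:
        raise LookupError("no route to anycast address")
    return min(candidates)[1]
```

In the CONCERTO setting, the "instances" would be the system controller and the intermediate feedback aggregators, so each mobile terminal's feedback always takes the fastest available path.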
3.13 IP Routing Module
Module description
The IP Routing module is one of the sub-modules of the Core Network module and allows the introduction of random
losses and delays for each packet that passes through it.
This module aims to model a generic IP network composed of a number of IP routers: this is achieved by introducing loss
and delay, which reproduce the effects of crossing multiple IP routers and also introduce the impact of enhanced
network-layer functionality (such as support for QoS guarantees relying on a proportional model for delay differentiation).
Delays are modelled with a shifted Gamma distribution [5] whereas the losses are modelled with a uniform distribution.
The input and output of this module are basically IP packets, while some configuration parameters are supported in
order to model the behaviour of the IP network in the different application scenarios. Some of them refer to the number
and characteristics (i.e. introduced loss and delay) of the router interfaces and others to the proper functionality of the
simulation module (i.e. routing table and connection topology).
With regard to the first set, the number of nodes to be crossed and the statistical parameters of the distributions used to
model loss and delay play a critical role in the impact of the IP network on the simulation chain, according to the
scenarios to be investigated.
The number of nodes crossed, in particular, is retrieved from the Distance Info Module (DIM) based on the distance
between the source and the destination address of the packet that passes through the IP network. If no match is found
within the DIM for the pair of source-destination addresses, a default value is used. As a consequence, the loss and delay
introduced by the IP network model reflect the complexity and characteristics of the modelled network scenario.
The latter set is also related to the specific location of the entities that compose the architecture of the project.
Input data
The input data for the IP Routing module is basically an IP message containing either:
• data (i.e. audio, video, other type of file)
• cross layer signalling (e.g. for joint optimization)
• network management and protocol information (e.g. for session management, cooperative networking,
caching, mobility management)
• feedback about end-to-end metrics (e.g. RTT, jitter, packet loss, latency, available bandwidth, throughput,
QoE)
Output data
Considering that the main role of the IP Routing module is to introduce losses and delays, the output data are the same as
the input data (at least for the current version of the module). Indeed, the IP Network neither generates IP messages nor
receives them as their final destination.
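The behaviour described in this section — a per-hop uniform loss draw plus a shifted-Gamma delay sample, with the hop count taken from the DIM — can be sketched as below. The shape, scale and shift values are illustrative placeholders, not the calibrated parameters of the simulator.

```python
import random

def traverse_ip_network(hops, loss_prob, shape=2.0, scale=0.005,
                        shift=0.002, rng=random):
    """Return (delivered, delay_s) for a packet crossing `hops` routers.

    Per hop: drop with probability loss_prob (uniform draw); otherwise
    add a shifted-Gamma delay sample."""
    delay = 0.0
    for _ in range(hops):
        if rng.random() < loss_prob:
            return False, delay               # packet lost at this router
        delay += shift + rng.gammavariate(shape, scale)
    return True, delay
```

Increasing the hop count returned by the DIM for a given source-destination pair thus directly increases both the loss probability and the mean end-to-end delay seen by that flow.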
3.14 BindingTable
Module description
There is an end-user terminal entity called smartPhone in the simulation chain, which models a multi-access mobile
device. The smartPhone has two interfaces to communicate: one faces the 4G network and one the Wi-Fi network.
Using the BindingTable module, the multi-access device is able to control which interface to use on a per-flow basis.
The BindingTable has two modes. It can bind the full communication with a correspondent node to an interface, or it
can bind individual communication flows based not only on the IP addresses, but on the port numbers as well. The
networkL module, which is responsible for sending the messages to the appropriate lower-layer module, can ask the
BindingTable module whether the current packet is bound or not.
At the same time, the core_network has to know the correct direction of a packet addressed to the smartPhone. Therefore,
the BindingTable sends binding messages to the IP routing module, so that the packets are always routed to the correct
interface.
Input data
The BindingTable has no input data.
Output data
Binding messages containing the source IP address, and source and destination ports (if needed).
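A minimal sketch of the two binding modes described above, with per-flow bindings taking precedence over whole-node bindings, could look like the following; the interface names and the default interface are assumptions, and the class is only loosely modelled on the RFC 6089 flow-binding idea.

```python
class BindingTable:
    """Per-node or per-flow interface bindings (simplified RFC 6089 model)."""

    def __init__(self, default_iface="lte0"):
        self.default_iface = default_iface
        self.node_bindings = {}   # correspondent IP -> interface
        self.flow_bindings = {}   # (src_port, dst_ip, dst_port) -> interface

    def bind_node(self, dst_ip, iface):
        """Mode 1: bind all traffic towards a correspondent node."""
        self.node_bindings[dst_ip] = iface

    def bind_flow(self, src_port, dst_ip, dst_port, iface):
        """Mode 2: bind one flow, identified by addresses and ports."""
        self.flow_bindings[(src_port, dst_ip, dst_port)] = iface

    def lookup(self, src_port, dst_ip, dst_port):
        """Flow bindings take precedence over whole-node bindings."""
        key = (src_port, dst_ip, dst_port)
        if key in self.flow_bindings:
            return self.flow_bindings[key]
        return self.node_bindings.get(dst_ip, self.default_iface)
```

The networkL module would call `lookup` for each outgoing packet, while the binding messages sent to the IP routing module would mirror the same entries for the downlink direction.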
3.15 Distance Info Module
Module description
The module called Distance Info Module (DIM) is part of the Core Network module and is responsible for providing
information about the hop count between two distinct IP nodes. For each end-to-end connection, two distance values are
available in order to take into account both communication directions. The DIM is not connected to other modules, as it has no
direct effect on the IP packets. This module is initialized dynamically, but the calculated
values can be modified through the provided functions or by an external resource.
The module uses two main resources. Nodes are stored in a <string, int> map, where the string variable is their IP
address and the int variable is their distance to the routing module. There is also a distance matrix stored in a two-
dimensional vector structure, which is filled in by the initialization method. Elements are identified by the nodes'
position in the map (e.g., the distance between the second and the fourth node in the map is stored in the (2, 4) position
of the matrix).
Input data
The main input for the module is the topology extraction provided by the cTopology class. The DIM has no gates and
therefore no messages need to be handled. Should it nevertheless receive a message, the message is simply deleted. The
module does not use self-messages either.
Output data
The main output of the module is the distance matrix. The DIM has no gates and therefore no messages need to be
generated. The module does not use self-messages either.
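The two data structures described above (an IP-to-index map plus a distance matrix, with a default hop count for unknown pairs) can be sketched as follows; the default value of 4 is an illustrative assumption.

```python
class DistanceInfoModule:
    """Hop counts between IP nodes; two values per pair, one per direction."""

    DEFAULT_HOPS = 4   # used when no match exists for a src/dst pair

    def __init__(self, addresses):
        self.index = {ip: i for i, ip in enumerate(addresses)}  # IP -> position
        n = len(addresses)
        self.matrix = [[self.DEFAULT_HOPS] * n for _ in range(n)]

    def set_hops(self, src, dst, hops):
        """Distances are directional: (src, dst) and (dst, src) differ."""
        self.matrix[self.index[src]][self.index[dst]] = hops

    def hops(self, src, dst):
        try:
            return self.matrix[self.index[src]][self.index[dst]]
        except KeyError:
            return self.DEFAULT_HOPS   # unknown pair: fall back to default
```

The IP Routing module would query `hops` for each packet and draw that many per-hop loss/delay samples, which is how the DIM shapes the modelled network without ever touching the packets itself.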
3.16 802.11g/n MAC layer
Module description
The MAC module is largely based on legacy code modified for the CONCERTO simulation chain. The IEEE
802.11g/n MAC layer is implemented based on the IEEE WLAN standard, with minor enhancements and modifications.
The first enhancement is related to an additional frame check sequence (FCS), so that two checksums are calculated
over the IEEE 802.11 data frame. The standardized way to calculate the 32-bit cyclic redundancy check
(CRC) is to cover the whole frame, and the 802.11g/n MAC module can also use this standardized FCS.
However, the MAC module also implements a modified frame structure in which two FCSs are calculated:
the first FCS is calculated over the whole frame, while the additional FCS is generated only from the IEEE 802.11
header part, excluding the data part received from the upper layers. This makes it possible to detect whether a possible bit error
resides in the MAC header or in the payload data. In the former case, the packet is always dropped. In the latter
case, if the frame includes video stream data, it can be passed to the upper layers for further processing. However, if the
frame includes, for instance, a signalling message, it is dropped regardless of where the bit errors reside.
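The dual-FCS decision rule above can be sketched with CRC-32 as follows; the frame layout and function names are simplifying assumptions, not the module's actual frame format.

```python
import zlib

def make_frame(header: bytes, payload: bytes):
    """Attach a header-only FCS and a whole-frame FCS (both CRC-32)."""
    return header, payload, zlib.crc32(header), zlib.crc32(header + payload)

def receive_frame(header, payload, hdr_fcs, frame_fcs, is_video):
    """Dual-FCS rule: drop on header errors; pass damaged payloads only
    when the frame carries video stream data."""
    if zlib.crc32(header + payload) == frame_fcs:
        return "accept"                      # frame intact
    if zlib.crc32(header) != hdr_fcs:
        return "drop"                        # bit errors in the MAC header
    return "pass_damaged" if is_video else "drop"
```

Combined with UDP-Lite's partial checksum, this lets a video frame with payload errors travel all the way up to an error-resilient decoder instead of being discarded at layer 2.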
RTS/CTS (Request to Send / Clear to Send) for improved collision avoidance is not implemented. Bit-rate
adjustment is carried out by the PHY instead of the MAC, because the MAC is implemented to be scalable across different
PHY models, including models beyond the IEEE 802.11 standards.
The same MAC module can act both as a client and as an access point. When in access point mode, the module transmits
beacons for the clients. When a client receives a beacon for the first time and is not yet connected to any access point, it
performs an L2 association with that access point.
Reference for Technical Details
The MAC layer is studied in WP5. More details on it are provided in the three deliverables of the work package.
Input data
The input data for the 802.11g/n module is an IP Message received from the upper layer or a MAC frame
(Ieee80211Frame) received from the physical layer. The IP packets or MAC frames can include any payload data.
Messages received are:
• IEEE80211Frames (from lower layer)
• IPMessages (from upper layer)
• Beacons (clients)
• Authentication messages (access points)
Output data
If the message (IpMessage) is received from upper layers (IP), the MAC module encapsulates the IP packet to a MAC
frame (Ieee80211Frame) and passes the frame to PHY for transmission. If the message is received from the PHY, the
MAC module decapsulates the IP packet (IpMessage) from the frame and passes the packet to the IP Network.
Messages sent are:
• IEEE80211Frames (to lower layer)
• IPMessages (to upper layer)
• Beacons (access points)
• Authentication message (clients)
3.17 Cross-layer signalling framework
Module description
The cross-layer signalling framework proposed by the project and introduced in deliverable D2.3 comprises two sub-
modules, Distributed Decision Engine (DDE) Framework and Network Information Service (NIS), which complement
each other. The DDE is a collection of components for the short-term storage and delivery of events between different
entities, while the NIS is used to collect information about the characteristics and services of the serving network and of
the other available networks in range.
The cross-layer signalling framework has been included in the simulation chain by implementing the DDE functionalities.
The DDE server is deployed on the core network side of the simulation chain. In the simulation chain, the efficiency and
scalability of the DDE are not measured; the DDE is used only as a supporting functionality, providing cross-layer
information from any entity producing it to any entity subscribing to it. Thus, the DDE implementation in the simulation chain
is kept simple and it is located in the middle of the transmission chain. The functionality of the cross-layer signalling
framework is distributed among different modules in the OMNeT++ model. The event producer is included in the observer_unit,
and each module which produces cross-layer information has an interface implemented to communicate with the cross-
layer signalling framework. Event consumers are implemented through interfaces in each module foreseen to use cross-
layer information.
Reference for Technical Details
More details of the cross-layer signalling framework can be found in the WP2 deliverables, in particular D2.3.
Input data
The input information consists of DDE messages, containing
• Event ID, type, and time-to-live
• Event registration/subscription info (e.g. DDE Event Producer/Consumer ID, Event ID)
• Events containing cross-layer and control data from DDE Event Producers
• Event data filters and access policies for controlling the information flows
Output data
The output information consists of DDE messages, containing
• Event ID and type
• Event registration/subscription status info
• Events containing information obtained through DDE. The information may be processed (e.g. averaged,
aggregated, filtered) prior to its transmission.
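The producer/consumer interaction described above can be sketched as a minimal publish/subscribe broker. This is an illustrative toy, assuming hypothetical class, method and event names; it does not reflect the actual DDE API.

```python
import time
from collections import defaultdict

class DDEBroker:
    """Minimal publish/subscribe event broker (illustrative, not the real DDE API)."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # event_id -> [(consumer_id, callback, filter)]
        self.store = {}                       # event_id -> (payload, expiry time)

    def subscribe(self, event_id, consumer_id, callback, event_filter=None):
        # Register a consumer for a given event ID, optionally with a data filter.
        self.subscribers[event_id].append((consumer_id, callback, event_filter))

    def publish(self, event_id, payload, ttl=1.0):
        # Short-term storage: keep the event until its time-to-live expires.
        self.store[event_id] = (payload, time.time() + ttl)
        for consumer_id, callback, event_filter in self.subscribers[event_id]:
            if event_filter is None or event_filter(payload):
                callback(event_id, payload)

    def query(self, event_id):
        # Return a stored event if it has not expired yet.
        entry = self.store.get(event_id)
        if entry and entry[1] > time.time():
            return entry[0]
        return None

broker = DDEBroker()
received = []
broker.subscribe("CQI_REPORT", "rrm_unit", lambda eid, p: received.append(p))
broker.publish("CQI_REPORT", {"user": 3, "cqi": 11}, ttl=5.0)
```

In the simulation chain the broker role is played by the DDE server in the middle of the transmission chain, while producers and consumers correspond to the per-module interfaces mentioned above.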
3.18 4G RRM Module
Module description
The 4G RRM Module is part of the “Base Station Controller” module and supports LTE-based uplink wireless transmission, which is specifically considered in the CONCERTO use cases. Two different Radio Resource Allocation (RRA) strategies are available: the first is a QoS-aware and channel-aware solution, based on the LRE algorithm described in D5.3; the second is a channel-aware proportional-fairness solution used as a benchmark for the final results.
Both algorithms determine the power and the physical resource blocks (PRBs) to be allocated to the LTE users with non-empty queues, ensuring contiguous PRB allocation as required by the SC-FDMA technology. Open-loop LTE power control is considered to partially compensate for the average attenuation.
Unlike the proportional-fairness solution, the QoS-aware LRE strategy is able to differentiate among traffic classes. Specifically, it distinguishes between the two LTE macro-classes, i.e. guaranteed bit-rate (GBR) and non-GBR user flows. Resources are allocated so as to achieve the prescribed rate of GBR users in the short/medium term and to provide a fair best-effort service to non-GBR users.
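As a rough illustration of the contiguity constraint, the sketch below greedily assigns each user one contiguous run of PRBs according to a proportional-fairness metric. It is a toy model under simplifying assumptions (fixed run length, one run per user, no power control) and is not the LRE algorithm of D5.3.

```python
def pf_contiguous_allocation(rates, avg_throughput, prbs_per_user):
    """Greedy contiguous PRB allocation with a proportional-fairness metric.

    rates[u][p]: instantaneous achievable rate of user u on PRB p.
    avg_throughput[u]: long-term average throughput of user u.
    prbs_per_user: length of the contiguous run each user receives (toy assumption).
    Returns {user: (first_prb, last_prb)}.
    """
    n_prbs = len(rates[0])
    free = [True] * n_prbs
    allocation = {}
    # Serve users with the lowest average throughput first (fairness bias).
    for u in sorted(range(len(rates)), key=lambda u: avg_throughput[u]):
        best_start, best_metric = None, -1.0
        for start in range(n_prbs - prbs_per_user + 1):
            run = range(start, start + prbs_per_user)
            if not all(free[p] for p in run):
                continue  # SC-FDMA requires a contiguous, unallocated run
            # Proportional-fairness metric: instantaneous rate over average rate.
            metric = sum(rates[u][p] for p in run) / avg_throughput[u]
            if metric > best_metric:
                best_metric, best_start = metric, start
        if best_start is not None:
            allocation[u] = (best_start, best_start + prbs_per_user - 1)
            for p in range(best_start, best_start + prbs_per_user):
                free[p] = False
    return allocation
```

For example, with two users whose channels peak on opposite halves of the band, each receives the contiguous run where its channel is strongest.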
In order to validate the algorithms, dummy users are emulated inside this module in addition to the CONCERTO users. The dummy users can carry either best-effort traffic or GBR video streaming traffic. In the latter case, realistic H.264 video sources encoded at 200 kbps are used, whereas best-effort users are modelled as infinite-buffer traffic sources.
The radio resource allocation algorithm for LTE is integrated with an AMC scheme based on the computation of the average mutual information per bit, which maximizes the achievable throughput while fulfilling the target BER/BLER requirements. The AMC scheme takes as input the RB and power allocation of the different users and provides the optimal CQI as output.
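A simplified view of this AMC step: compute the post-processing SNRs of the allocated subcarriers, map each to a mutual information per bit, average, and pick the highest CQI whose spectral efficiency the average supports. The MI curve, the abbreviated CQI table and the back-off margin below are illustrative placeholders, not the actual curves of [9] or the full 15-entry LTE table.

```python
import math

# Illustrative CQI table: (cqi_index, spectral efficiency in bits/s/Hz).
# The real LTE table has 15 entries; only a few are shown here.
CQI_TABLE = [(1, 0.15), (4, 0.60), (7, 1.48), (10, 2.73), (13, 3.90), (15, 5.55)]

def mib(snr_linear):
    # Placeholder mutual-information-per-bit curve: normalized capacity capped at 1.
    return min(1.0, math.log2(1.0 + snr_linear) / 6.0)

def select_cqi(subcarrier_snrs, backoff=0.85):
    """Pick the highest CQI supported by the average mutual information per bit.

    backoff is a hypothetical margin accounting for the BLER target.
    """
    avg_mi = sum(mib(s) for s in subcarrier_snrs) / len(subcarrier_snrs)
    eff = 6.0 * avg_mi * backoff  # map MI back to an effective spectral efficiency
    best = CQI_TABLE[0][0]
    for cqi, se in CQI_TABLE:
        if se <= eff:
            best = cqi
    return best
```

The averaging over subcarriers is what makes the scheme robust to frequency-selective fading: a single deep fade lowers the effective efficiency only in proportion to its MI contribution.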
Reference for Technical Details
The 4G RRM functionalities implemented in the system simulator are described in more detail in WP5 deliverable D5.3 and references therein. The AMC scheme is based on the technique proposed in [9], extended to support turbo codes and the uplink.
Input data
The dynamic input information consists of
• Information from IP layer of each CONCERTO user, containing
o Amount of data in each queue
o Head-of-line packet waiting time in each queue
• Information from PHY layer of each CONCERTO and dummy user, containing
o Instantaneous channel gain for each sub-carrier
o Average channel gain (path loss + shadowing)
• Allocated subcarriers and related transmission power (AMC)
• Target BER/BLER across the LTE radio link (AMC)
Additional configuration data are:
• Bandwidth
• RRA strategy
• Number of dummy users
• Percentage of GBR-dummy users (the others are considered best-effort)
• Rate requirements for GBR-dummy users and GBR-CONCERTO users
• Power control cell-specific parameters
• Maximum Transmission Power
• Initial GBR users load
Output data
The output information consists of information to the PHY layer of each CONCERTO user, containing
• Allocated subcarriers and related transmission power
• Optimal CQI according to the LTE standard (AMC)
3.19 PHY – Channel Codec Module
Module description
Forward Error Correction (FEC) at the physical layer is implemented within the Channel Codec module. Several encoding and decoding schemes have been included in the common simulation chain. In particular, in addition to rate-compatible punctured convolutional (RCPC) codes and irregular repeat-accumulate low-density parity-check (IRA LDPC) codes, the Channel Codec module has been enriched with the turbo coding supported by LTE, including the CRC check mechanism foreseen by the standard. At the decoder side, the BCJR algorithm [4] is used to decode convolutional codes, a full turbo decoder is available for LTE communications, and an iterative scheme based on belief propagation is applied to LDPC codes.
The Channel Codec module constitutes a key element for the AMC solution for LTE (see the description of the 4G
RRM module).
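The rate-compatible puncturing behind RCPC codes (and, in a similar spirit, LTE rate matching) can be illustrated as follows: the output of a low-rate mother code is thinned according to a puncturing pattern, and the punctured positions are restored as zero LLRs (erasures) before decoding. The patterns below are illustrative examples, not the ones used in the simulator.

```python
def puncture(coded_bits, pattern):
    """Keep only the coded bits whose pattern entry is 1.

    The pattern is repeated cyclically; e.g. [1, 1, 0] turns a rate-1/3
    mother code into a rate-1/2 code (2 of every 3 bits survive).
    """
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)] == 1]

def depuncture_llrs(received_llrs, pattern, n_coded):
    """Re-insert zero LLRs (erasures) at the punctured positions."""
    out, it = [], iter(received_llrs)
    for i in range(n_coded):
        out.append(next(it) if pattern[i % len(pattern)] == 1 else 0.0)
    return out
```

Because a zero LLR carries no information, the decoder of the mother code can be reused unchanged for every punctured rate, which is what makes the family rate-compatible.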
Reference for Technical Details
The turbo coding implementation follows the specifications provided in the LTE standard [10], while the LDPC
solution is based on the work in [9].
Input data
The input information consists of configuration information and data packets to be processed:
• Code type (Convolutional, LDPC, Turbo)
• Code rate (5 values are supported with CC and LDPC codes, 15 values are supported with LTE turbo codes,
according to the CQI foreseen by the standard)
• Data packets from upper layers (Encoder)
• LLR information from the Modem Module (Decoder)
• Max number of iterations for the decoding algorithm (Decoder)
Output data
The output information consists of processed data packets and feedback information:
• BLER/BER estimation (Decoder)
• Encoded packets (Encoder)
• Decoded packets (Decoder)
3.20 PHY – MIMO Modem and Frame De-Assembler Modules
Module description
The multiple-input multiple-output (MIMO) Modem module implements the modulation and demodulation features supported by the physical layer. The modem functionalities allow the behaviour of both 4G LTE uplink and WiFi radio links to be simulated. Multiple-antenna architectures are supported at both the transmitter and the receiver side (typically 1, 2 or 4 antennas). For the CONCERTO simulations, a maximum ratio combining (MRC) scheme was selected at the receiver side. The MIMO Modem module works in strict synergy with the Frame De-Assembler module, which implements orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) insertion and provides the main functionalities required by the OFDMA and SC-FDMA schemes. In general, the constellations supported for each subcarrier are BPSK, QPSK, 8PSK, 16QAM and 64QAM. When LTE links are considered, only QPSK, 16QAM and 64QAM (the latter for the downlink) are possible. Moreover, single-antenna LTE terminals and a two-antenna eNodeB are assumed.
The MIMO modem module constitutes a key element for the AMC solution for LTE (see the description of the 4G
RRM module).
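The core OFDM operation performed by the Frame (De-)Assembler, i.e. an IDFT plus cyclic prefix insertion at the transmitter and the inverse steps at the receiver, can be sketched as follows. A plain DFT is used for clarity; the subcarrier count and CP length are illustrative, not LTE values.

```python
import math
import cmath

def idft(freq):
    # Inverse DFT (time samples from subcarrier symbols), with 1/N scaling.
    n = len(freq)
    return [sum(freq[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def dft(time_samples):
    # Forward DFT (subcarrier symbols from time samples).
    n = len(time_samples)
    return [sum(time_samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def ofdm_modulate(symbols, cp_len):
    """One OFDM symbol: IDFT of the subcarrier symbols plus cyclic prefix."""
    time_samples = idft(symbols)
    return time_samples[-cp_len:] + time_samples  # CP = copy of the last cp_len samples

def ofdm_demodulate(samples, n_subcarriers, cp_len):
    """Strip the cyclic prefix and recover the subcarrier symbols via DFT."""
    return dft(samples[cp_len:cp_len + n_subcarriers])
```

The CP turns the multipath channel's linear convolution into a circular one, so each subcarrier experiences a flat gain; this is what makes the per-subcarrier CSI reported by the demodulator meaningful.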
Reference for Technical Details
Standard OFDM-based modulation is implemented, supporting different types of multiple access to the shared channel
(WiFi and LTE).
Input data
The input information consists of configuration information and data packets to be processed:
• Modem type
• Number of antennas
• Constellation size (BPSK, QPSK, 8PSK, 16QAM, 64QAM)
• CP duration
• Number of sub-carriers
• Transmission bandwidth
• Resource block allocation information
• (Max) transmission power
• Antenna gain and cable loss
• Rx noise figure
• Data packets from the Channel Codec module (Modulator)
• Received symbols (Demodulator)
Output data
The output information consists of processed symbols and feedback information:
• Modulated symbols (Modulator)
• LLRs associated with the received symbols (Demodulator)
• Channel state information (CSI), in terms of subcarrier gains and signal-to-noise ratio (Demodulator)
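The MRC scheme mentioned above weights each receive antenna's observation by the conjugate of its channel gain, so that the post-combining SNR equals the sum of the per-antenna SNRs. A minimal sketch for a single received symbol, assuming perfect channel knowledge:

```python
def mrc_combine(received, channel_gains, noise_var=1.0):
    """Maximum ratio combining of one symbol across receive antennas.

    received[i] = channel_gains[i] * symbol + noise, on antenna i.
    Returns the combined symbol estimate and the post-combining SNR gain
    (sum of per-antenna |h|^2 / noise_var).
    """
    num = sum(h.conjugate() * y for h, y in zip(channel_gains, received))
    den = sum(abs(h) ** 2 for h in channel_gains)
    snr_gain = den / noise_var
    return num / den, snr_gain
```

In the noiseless case the estimate is exact, and the SNR gain grows with the number of antennas, which is why MRC was selected for the two-antenna eNodeB configuration.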
3.21 Radio Channel Module
Module description
The Radio Channel module simulates the effects of packet transmission across radio links. Several types of wireless radio channels have been implemented and integrated in the simulator:
1. LTE channel
a. based on the ITU models:
i. ITU Pedestrian and Vehicular A,
ii. ITU Pedestrian and Vehicular B,
iii. ITU Extended Pedestrian A;
b. supported bandwidths: 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz, 20 MHz.
2. WiFi channel, based on the ITU models:
a. ITU Residential,
b. ITU Office,
c. ITU Commercial.
3. Uncorrelated block fading (UBF) channel
a. Fully configurable
The implemented solution makes it possible to simulate several kinds of OFDM signals, such as those supported by LTE and WiFi systems, and accounts for the possible presence of multiple transmitting and receiving antennas. In addition to a path loss depending on the terminal position, Rayleigh-distributed channel gains and log-normal shadowing model the fading effects due to time-varying multipath propagation and to obstacles, respectively. Fast fading samples are derived based on the selected channel model and on the mobility of the terminals, defined by their velocity and direction of motion. Slow fading samples are calculated taking into account their temporal correlation, which depends on the user equipment velocity.
This model achieves a good trade-off between simulation realism and computational burden. In order to lighten the simulation complexity and, consequently, to reduce the simulation time, simplified versions of this module have been implemented and can be adopted in specific parts of the simulator. For example, the radio channels modelling the local/personal links between the cameras and the medical devices within the emergency area are simulated as simple packet erasure channels with a predefined loss probability.
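The fading model described above can be sketched as follows: fast fading is drawn as a zero-mean complex Gaussian gain (so its magnitude is Rayleigh distributed), while log-normal shadowing evolves as a first-order autoregressive (Gudmundson-style) process whose correlation decays with the distance travelled. The numerical parameters (8 dB standard deviation, 50 m decorrelation distance) are illustrative assumptions, not the values used in the simulator.

```python
import math
import random

def rayleigh_gain(rng):
    """One complex fast-fading sample with E[|gain|^2] = 1; |gain| is Rayleigh."""
    return complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))

def shadowing_track(rng, n_steps, sigma_db=8.0, step_m=1.0, corr_dist_m=50.0):
    """Temporally correlated log-normal shadowing via an AR(1) process, in dB.

    sigma_db: shadowing standard deviation (illustrative).
    corr_dist_m: decorrelation distance (illustrative); together with step_m
    (distance moved per sample, set by the terminal velocity) it fixes the
    sample-to-sample correlation.
    """
    rho = math.exp(-step_m / corr_dist_m)   # correlation between consecutive samples
    s = rng.gauss(0, sigma_db)
    track = [s]
    for _ in range(n_steps - 1):
        # The innovation variance keeps the stationary variance equal to sigma_db^2.
        s = rho * s + math.sqrt(1 - rho ** 2) * rng.gauss(0, sigma_db)
        track.append(s)
    return track
```

Faster terminals traverse the decorrelation distance in fewer samples, so both fast and slow fading decorrelate more quickly, matching the velocity dependence described above.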
Reference for Technical Details
Wideband radio channels have been implemented according to the ITU models described in [11] and [12].
Input data
The input information consists of configuration information and modulated symbols to be processed:
• Channel type and model (LTE, WiFi, UBF)
• Transmission bandwidth
• Terminal position, velocity and direction
• Time and frequency block length (with UBF)
• Number of carriers
• Number of TX and RX antennas
• Log-normal fading parameters
• Transmitted symbols
Output data
The output information consists of processed symbols and feedback information:
• Received symbols
• Channel state information