
Politecnico di Milano

School of Industrial and Information Engineering

Master of Science in Telecommunication Engineering

Design and implementation of a LED-bulb tosmartphone VLC system

Supervisor: Prof. Riva Carlo GiuseppeCo-Supervisor: Prof. Capsoni Carlo

Author:Matteo Sanvito Matr. 862684

Academic Year 2018-2019

I would like to open my work by dedicating a few lines to those who have accompanied and supported me throughout these years.

First of all my family, who in all these years literally devoted body and soul to allowing me to live this experience, so defining for my life. Immediately after come Professor Riva and Professor Capsoni, who was first of all a guide for my personal growth, even before being my co-supervisor, and who willingly took charge of me for all this time. I go on to thank the people dearest to me, whose brotherly contribution I could not express in words: my dear girlfriend Bianca and my friends Anna, Leonardo, Matteo and Alessia, Luca and Martina, Miriam and Paco and their families, for whom I feel an indescribable and very deep affection. I then thank my colleagues, who spared no effort so that I could complete my studies without ever letting even a hint of reproach or complaint show. Thank you: it is almost unique to find company (a friendship) like yours. I would like to close by giving thanks for all the friendships born in these years. My coursemates, among them the dearest: Faryal, Francesca L., Francesca M., Giulia, Ilaria, Laetitia, Monica, Parvina, Sara, Alberto, Javier, Leonardo, Lucio, Matteo, Mattia, Reza, Vasile. The very dear friends of the Coro Alpino, among them Gioca, Alfio, Dello, Pigna, Aspro; of the Gruppo AVDIO, above all Obi, Stigia, Dani, Spad, Davide, Guido, Ceci, Shampoo, and of the (Pre) Meeting, from Battagli to the Sardinians across the whole peninsula. The friends (brothers) of the Comunità del CLU, in particular Ezio, Chiara, Laura, Elisa, Elisabetta, Giovanna, Simo Erbeia, Gassi, Caio, Ano, Ismo, Gio Beri, Canaglia. The Brianza friends of the Don Gnocchi (and beyond) Alice, Francesca, Giulia, Letizia, Maria, Vaghi, Pale, Beppe, Street, Sam, Calo, Marco, Alfonso, Bianchi, and the headmaster (almost a father) Franco, and my friends Alain and Gallo. The Benny group, with Ade, Lety, Albert, Ciccio, Pol and the Rimini friends Pelle and Carol, Cri and Andre, Michela, Francesco, Luca and the others, who have always kept me great company with friendship and prayer, together with the very dear Don Andrea. I also thank the professors who more than anyone else have "shaped" me in these years: Ciucci, Bellini, D'Amico, Gentili, Lanzi, Luini, Macchiarella, Marcon, Prati, Riva, Spagnolini and Spalvieri.


Contents

List of Figures
List of Tables

Thesis proposal

1 VLC
  1.1 Related works
  1.2 General system description
    1.2.1 Transmitter
    1.2.2 LED Modulation
    1.2.3 Receiver
  1.3 VLC: will it have a future?
    1.3.1 VLC and RF: the data rate
    1.3.2 VLC and RF: range and localization of information
    1.3.3 VLC and RF: environment and interferences
    1.3.4 VLC and positioning
    1.3.5 Other VLC applications
    1.3.6 VLC and the actual market

2 Using a smartphone as receiver: background
  2.1 Smartphone cameras as VLC receivers
    2.1.1 Rolling shutter
    2.1.2 Signal and frames
  2.2 Android Camera control
    2.2.1 Camera APIs
  2.3 Final considerations

3 The VLC transmitter: background
  3.1 Transmitting with a LED bulb
    3.1.1 LED luminaries
  3.2 Information encoder
    3.2.1 Arduino
  3.3 Modulations

4 System implementation
  4.1 Transmitter
    4.1.1 Information Encoder
    4.1.2 Modulator
    4.1.3 Transducer
  4.2 Receiver
    4.2.1 Application workflow
    4.2.2 From the image to the message
  4.3 Modulation
    4.3.1 Simil-DTMF
  4.4 Conclusion

5 System testing and performance evaluation
  5.1 Experimental set-up
  5.2 Experiments
    5.2.1 Signal propagation
    5.2.2 Different room conditions
  5.3 Conclusions
    5.3.1 System performance

6 Conclusions and Future Work
  6.1 Applications and future works

Appendices
A Transmission circuit derivation
B Transmitter program source

Bibliography


List of Figures

1    System concept design
1.1  Block representation of a common VLC system
1.2  Example of COB LED
1.3  VLC receiver block scheme
2.1  Block representation of CMOS camera components
2.2  Functional scheme of a CMOS sensor
2.3  Rolling shutter process and effect
2.4  Example of rolling shutter affecting the received varying light. The green area represents ∆tRST while the yellow area represents ∆tE,r
2.5  Effect of camera framing over a signal. Information coming during ∆tIFI is lost and cannot be recovered.
2.6  The rolling shutter captures the neon flickering and produces brightness bands on the image
2.7  YUV420 NV21 image format. Each pixel is described by its brightness, while the chrominance components are sampled on groups of 4 pixels and attached at the end of the image.
3.1  High level block scheme of a VLC transmitter.
3.2  The schematics of the LED light bulb circuit. In the picture it is possible to identify the three sections composing the circuit: the Power section (red), the Control section (green), the Load section (blue).
3.3  The voltage/current relation of a diode.
4.1  The block diagram of the implemented transmitter.
4.2  The flow diagram of the transmitter firmware.
4.3  The schematics of the modulator developed for this work. On the left side, the connections to the load section of the light bulb (D1).
4.4  Modified lamp with connections for the transmitter.
4.5  Receiver application workflow.
4.6  FIR representation of the rolling shutter effect on the signal.
4.7  Example of square wave deformation captured in a frame.
4.8  Square wave restoration process: the signal is normalized, then derivatives are computed and, based on that, the original square wave is reconstructed.
4.9  FFT of a received symbol ('a'). The symbol is described by two frequencies (bins 23 and 30) that must be recognized by the receiver.
4.10 Frequency representation of a DTMF. Credits to [39]
4.11 Each frequency couple has been tested. Columns represent, for each couple, the lower frequency.
5.1  The experiment environment and setup.
5.2  Measured received luminance (crosses) and inverse square fitting function (red line) in LOS condition
5.3  Measured received luminance (crosses) and inverse square fitting function (red line) in NLOS condition
5.4  Measures of incident (crosses) and reflected (circles) light at different distances
5.5  Signal projection on the wall for A) Transmitter close to the surface and B) Transmitter far from the surface
5.6  Measurement points on the wall in order to estimate the light dispersion.
5.7  Measured LOS light intensity on the normal to the surface and at 20, 40, 60 cm from the fulcrum for several transmitter distances
5.8  Measured SER against received light intensity and receiver distance from the wall.
5.9  Experiment setup with external light interference.
5.10 Average luminance distribution on each row of the image when aiming at a curved surface (white paper) compared to a glossy surface (white ceramic tiles) with both the TX and the RX 40 cm from the surface.
5.11 Figure A shows an example of different environment reflectivities with the joint perpendicular to camera rows. Figure B shows an example of different environment reflectivities with the joint parallel to camera rows.
A.1  The schematics of the modulator developed for this thesis with voltages (green arrows) and currents (red arrows)


List of Tables

1.1 Wi-Fi data rate summary
1.2 VLC data rate examples
1.3 Data rate examples for VLC system based on smartphone camera
1.4 Common sources of interference for RF
1.5 Common sources of interference for VLC
5.1 SER evaluation against different light and surface conditions.


Sommario

In today's connected age, we are used to having information about everything we come across. For now we do this by exploiting radio-frequency technologies such as Wi-Fi, Bluetooth, NFC and the like.

Visible Light Communication (VLC) is a way of transferring information that is giving very promising results: the idea is to transmit by modulating light instead of radio waves. Examples of light-based communication can be found since the most ancient times, but it was only in 1880 that the first example of communication between two devices by means of light took place.

Several aspects of VLC attract attention: the high transmission speeds reached both in research and by commercial products; the different kinds of interference that VLC systems are subject to compared with radio-frequency systems, and therefore the possibility of installing them in environments such as airplanes and hospitals without compromising the operation of delicate equipment; and the possibility of exploiting LED lighting lamps as transmitters. Moreover, thanks to the spread of smartphones, most people today own a device with fairly high computing power, equipped with a CMOS-based camera: characteristics that make virtually anyone able to receive a VLC signal. Research started exploring this path a few years ago, and this thesis proposes an experimental system that tackles some of the problems affecting the systems proposed so far. To this end, a VLC system based on the signal reflected by surfaces has been developed, using an unmodified Android smartphone, an Arduino-based prototyping board and a slightly modified commercial LED lamp.

Keywords: VLC, CMOS camera, Android, LED, reflected signal


Abstract

In today's connected age, we are used to getting information about everything we interact with. Up to now, we have achieved this by exploiting radio-frequency technologies such as Wi-Fi, Bluetooth, NFC and the like.

Visible Light Communication (VLC) is a way to transfer information that is giving very interesting results: the idea is to send data by modulating light instead of radio waves. Examples of information encoded in light date back to ancient times, but it was only in 1880 that the first device-to-device communication using light took place.

Several aspects of VLC are catching attention: the high data rates reached by both research and commercial products; the very different kinds of interference that VLC systems generate with respect to RF ones, and thus the possibility to deploy them in environments like airplanes and hospitals without disturbing mission-critical devices; and the possibility to exploit LED luminaires as transmitters. Moreover, thanks to the diffusion of smartphones, most people today own a device able to perform quite heavy data processing and equipped with a CMOS camera, characteristics that make virtually everybody able to receive a VLC signal. Research has been investigating this possibility for a few years, and this thesis proposes an experimental system to address some of the issues that affect current lamp-to-camera implementations. To do this, a NLOS VLC system has been developed using an unmodified Android smartphone, an Arduino-based board and a slightly modified commercial light bulb.

Keywords: VLC, CMOS camera, Android, LED, NLOS


Thesis proposal

Wireless communication has reached great diffusion during the last decades, changing people's lives: in the last 30 years houses, schools, malls and offices have been pervaded by Wi-Fi, Bluetooth, RFID and NFC systems for all kinds of purposes, from Internet connection to positioning and access to local services.

In recent years, Visible Light Communication (VLC) has started to gain interest thanks to the diffusion of LED technology, which is already exploited to implement highly efficient lighting systems [1] and is very suitable for modulation, being able to support fast light variations. On this path, very interesting results have been achieved, both in terms of data rate [2] and positioning accuracy [3]. Some products have already been brought to market to provide high-speed data access [4]. Nevertheless, VLC adoption in the market is still slow.

VLC diffusion is a topic that this thesis addresses by developing a solution exploiting something users always carry with them: the smartphone. Most smartphones, in fact, are equipped with high-definition CMOS cameras that use the rolling shutter technique, which consists in acquiring the image one row at a time: varying the light fast enough allows the camera, but not the human eye, to catch the variation.
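The row-by-row sampling just described can be sketched in a few lines. The toy simulation below (not part of the thesis software; all numbers are illustrative) shows why a LED toggling far above the frame rate produces bright and dark bands across the rows of a single frame:

```python
# Illustrative sketch: a rolling-shutter camera reads rows one at a
# time, so each row samples the scene brightness at a slightly later
# instant. A LED switched much faster than the frame rate therefore
# shows up as alternating bright/dark bands across the rows.

def rolling_shutter_rows(n_rows, row_time, led_period):
    """Per-row brightness (1 = LED on, 0 = off) for an ideal
    50% duty-cycle square-wave LED captured row by row."""
    rows = []
    for r in range(n_rows):
        t = r * row_time                       # instant this row is sampled
        phase = (t % led_period) / led_period  # position inside LED cycle
        rows.append(1 if phase < 0.5 else 0)
    return rows

# Example: 1080 rows read in ~33 ms (about 30 fps), LED toggling at 1 kHz.
rows = rolling_shutter_rows(n_rows=1080, row_time=33e-3 / 1080,
                            led_period=1e-3)
# The frame spans ~33 LED periods, so about 33 bright/dark band pairs
# appear in the image even though the eye sees a steadily lit lamp.
transitions = sum(1 for a, b in zip(rows, rows[1:]) if a != b)
```

The same counting argument sets the raw symbol budget per frame: the number of distinguishable bands is bounded by how many LED half-periods fit into one frame readout.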

Many related works ([5], [6], [7] and others) rely on the rolling shutter technique to implement a VLC system by pointing the camera toward the light source and using the bright areas that the luminaires generate on the image; however, the user is forced to point the camera toward the lamp, which in practical applications would be installed on the ceiling, and to keep a fixed (and possibly the shortest) distance between transmitter and receiver, since the higher the distance the smaller the lamp projection on the image.

This thesis investigates the possibility of using the light reflected by the environment, so as not to force the user to keep the smartphone camera aimed at a specific point, allowing a more comfortable user experience. To do this, a low-cost transmitter has been implemented; no modifications have been carried out on the smartphone, but an Android application has been developed to make the device act as a receiver.

System description

In this study, a unidirectional VLC communication system has been developed. The receiver hardware, an Android smartphone, has not been modified, but an Android application has been developed to collect the images from the camera and process the data. Aiming at a device that could easily evolve into a full implementation while remaining as simple as possible for users, the transmitter has been developed by modifying an off-the-shelf commercial LED bulb so that it can be driven by an Arduino-compatible board.

The system has the following structure:

Figure 1: System concept design

and it aims to achieve the following goals:

Data rate

As the related works cited in Chapter 1 suggest, there seems to be no way to reach data rates comparable to Wi-Fi technology; however, the goal of this work is to reach at least the same order of magnitude as systems in similar studies, with the additional benefit, described below, of exploiting a NLOS transmission.

NLOS transmission

One requirement that most of the proposed VLC-to-camera systems have in common is the need for a LOS path: those systems look into each camera frame to locate the source and extract data from that area of the image. This has some advantages, such as the possibility to implement diversity or to improve positioning capabilities, but also some disadvantages, such as a limited maximum operational TX-RX distance, since the farther the receiver the smaller the lamp projection on the image and thus the harder it is for the device to decode data, and the need to align the camera to the transmitter. In this thesis the proposed system exploits a NLOS transmission: as an advantage, the user does not need to search for the transmitter with the camera, but the SNR can be reduced. The SNR reduction of NLOS transmissions with respect to LOS ones at the same distance may look like a strong drawback, but the experimental results presented in Chapter 6 show that LOS and NLOS system performance are actually comparable.
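The LOS/NLOS trade-off can be made concrete with a back-of-the-envelope calculation. The sketch below uses an ideal point source and a single diffuse bounce, with made-up reflectivity, distances and illuminated-patch area; it is only an illustration of the scaling involved, not the link budget of this thesis:

```python
# Rough model (illustrative values, not measured data): in a NLOS link
# the light first travels to a diffusing surface, loses a factor rho
# (surface reflectivity), and is re-radiated toward the camera, so the
# received irradiance picks up a second distance term.
import math

def los_irradiance(p_tx, d):
    # Point-source inverse-square law: irradiance (W/m^2) at distance d.
    return p_tx / (4 * math.pi * d ** 2)

def nlos_irradiance(p_tx, d_tx_wall, d_wall_rx, rho, patch_area):
    # Light reaching an illuminated wall patch is attenuated by the
    # reflectivity rho and re-radiated diffusely; the patch geometry is
    # folded into a single assumed area (hypothetical simplification).
    e_wall = los_irradiance(p_tx, d_tx_wall)
    reflected = rho * e_wall * patch_area          # power leaving the patch
    return reflected / (math.pi * d_wall_rx ** 2)  # on-axis Lambertian term

# Illustrative numbers: 1 W optical, white wall (rho ~ 0.8),
# 40 cm to the wall and 40 cm back, ~0.1 m^2 lit patch.
e_los = los_irradiance(1.0, 0.8)                   # direct path, 80 cm total
e_nlos = nlos_irradiance(1.0, 0.4, 0.4, 0.8, 0.1)
# At such short range the bounce costs well under an order of magnitude,
# which is in line with the comparable LOS/NLOS performance reported
# later in this thesis.
```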

Thesis organization

The work is presented as follows:

• Introduction to VLC
First, optical wireless communications are introduced and compared with other wireless communication systems. Related works are also reviewed here.

• Receiving VLC with a smartphone
The smartphone is analyzed, providing as complete a picture as possible of its potential and critical issues when acting as a VLC receiver.

• VLC transmitter technology
The operation of a commercial LED bulb is presented, with the goal of understanding how it can be used to radiate modulated light. Modulation possibilities for the system are also reviewed in this chapter.

• Implemented system
After the review of transmitter and receiver possibilities, the implemented solution is presented and detailed.

• System testing
Different tests have been performed on the system; results are presented and commented.

• Conclusions and future work
A brief summary of the thesis is given, along with the potential and critical issues of the presented solution. Ideas for further expansions, implementations and improvements are also described.

Chapter 1

VLC

Visible Light Communication is a way to transmit information by exploiting light. Reference [8] reviews the history of VLC, including both ancient and modern solutions; while ancient technologies such as torches and fire signals can be considered an extreme case of the modern VLC definition, since the receiver was a human being, the first real case of device-to-device communication using visible light was the Photophone built by Alexander Graham Bell, which modulated the sunlight reflected by a mirror according to the voice of a speaker and demodulated the received signal using a sort of photoresistor [9]. However, as [8] reports, excluding some laboratory research, it was only at the end of the previous century that optical wavelengths started to be massively used for device-to-device data transmission, in the form of infrared communication; VLC, on the other hand, started to attract interest at the beginning of this millennium, along with the spread of LED bulbs.

From a technological point of view, VLC exploits the wavelength range between 380 nm and 750 nm; the information is sent wirelessly without the need for a waveguide and, in some cases, even without a line of sight between transmitter and receiver [10].

1.1 Related works

This section briefly presents the camera-based VLC systems that inspired this work. The wide diffusion of smartphones [11] boosted this research, since almost everybody today owns a device with high computational capabilities and at least one high-definition CMOS camera.

In 2012 Danakis et al. proposed a VLC system using a LED as transmitter and a smartphone application exploiting the CMOS camera as receiver [12]. Using Manchester coding they were able to transmit up to 3.1 kBd in a NLOS situation, i.e. with the transmitter and the receiver pointing at a reflecting surface, with a maximum distance of 35 cm between the transmitter and the surface, and 9 cm between the surface and the receiver.
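As a side note on the coding used in [12]: Manchester coding maps each bit to a pair of opposite light levels, so every bit contains a mid-bit transition, which keeps the LED's average brightness constant and gives the receiver an embedded clock. A minimal generic sketch (assuming the IEEE 802.3 polarity; not the exact encoder of [12]):

```python
# Minimal Manchester encoder/decoder (IEEE 802.3 convention assumed:
# a '0' bit is sent as high->low, a '1' bit as low->high). Every bit
# spends half its slot on and half off, so the lamp's perceived
# brightness does not depend on the data.

def manchester_encode(bits):
    out = []
    for b in bits:
        out.extend([1, 0] if b == 0 else [0, 1])  # two chips per bit
    return out

def manchester_decode(chips):
    bits = []
    for i in range(0, len(chips), 2):
        pair = (chips[i], chips[i + 1])
        bits.append(0 if pair == (1, 0) else 1)
    return bits

chips = manchester_encode([1, 0, 1, 1])  # -> [0, 1, 1, 0, 0, 1, 0, 1]
```

Note the constant duty cycle: for any input, exactly half of the chips are high, which is what makes the scheme flicker-free to the eye.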

In 2014 Rajagopal et al. implemented a VLC system, Visual Light Landmarks [13], using multiple transmitters and an iPad as receiver, reaching up to 1.25 B/s per transmitter. Their purpose was to implement a VLC-based indoor localization system using several transmitters as landmarks in the same collision domain (up to 7 transmitters have been tested together), dividing the spectrum into 29 frequency bins to broadcast each transmitter ID.

2014 was a great year for positioning, since in September another VLC-based positioning system was presented: Luxapose [7]. The receiver, a Windows-based smartphone with a high-resolution camera, was able to detect the landmarks, i.e. modified commercial light bulbs, and to send the data to a computer for the image processing. They achieved a precision of 10 cm.

In 2015 the RollingLight team [6] developed a LOS light-to-camera system using different smartphones and a computer as receivers and a LED array as transmitter, reaching up to 11.32 B/s. Later in the same year, DynaLight [5] faced the problem of distance in LOS light-to-camera systems: by exploiting a VLC uplink channel, Vasilakis was able to adapt the transmitter throughput to the transmitter-receiver distance, achieving the maximum capacity for each specific situation.

In 2016 Damodaran et al. proposed a LED-to-camera system to reduce the radio-frequency load at exhibitions, transmitting context-related information via Li-Fi instead of Wi-Fi. Their system differed from previous studies in the use of a red LED instead of a white bulb and in the introduction of a color-detection technique at the receiver [14].

1.2 General system description

A basic VLC system implements the same configuration as a radio-frequency wireless system, with slight variations due to the different nature of the transmission. In particular, the following elements can be identified:

Transmitter: the transmitter is composed of a Source Encoder, to translate information into symbols, a Driving circuit, to vary the light emission, and a Light Emitter; the driving circuit is comparable to a modulator, while the light emitter plays the same role as an antenna in a radio-frequency wireless system.

Medium: for VLC, the transmission medium is the air, and its characteristics, as happens for radio frequencies, can affect the result.

Receiver: a very basic receiver can be composed of a photodetector, which translates the received light into electrical pulses, and a decoder; for better results more complex systems can be implemented, for example by introducing an amplifying circuit and a filter.

Figure 1.1: Block representation of a common VLC system

The description of the system implemented for this work is detailed in Chapter 4; here, only a brief and general overview of VLC transmitters and receivers, along with common modulations, is given.

1.2.1 Transmitter

VLC systems can be divided into two macro-categories: Laser-based and LED-based systems; this work focuses only on LED-based systems.

Why LEDs. LEDs (Light Emitting Diodes) enjoy great success and diffusion not only as "service lights" in electronic equipment: in the last few years they have also started to be used to build new, more efficient lighting systems. In fact, high-power COB (Chip On Board) LEDs allowed producers to build very bright, cost-effective, energy-efficient and durable light bulbs; they proved to be a game-changing technology in the lighting field, and today they have almost completely replaced incandescent lamps and have started to invade the halogen market as well, as explained in [1].

From the technological point of view, a COB LED is a high-density array of (very tiny) LEDs, and it can be found in different shapes and sizes; it behaves much like a common LED but, in order to deliver all its brightness, it needs a much higher current flow, which implies the need for a heat dissipation system. An example of a COB LED can be seen in Figure 1.2 (credits: [15]); the small LEDs composing the array can be noticed as well:

Figure 1.2: Example of COB LED

What makes LEDs so interesting from the point of view of Visible Light Communication is that they need a relatively small amount of power to work and they can achieve high modulation speed: for example, a white LED can achieve up to 2 MHz of modulation bandwidth, while blue LEDs can reach up to 20 MHz [16]; the reason for this difference is how colors are rendered by LEDs: the materials composing the diodes vary, and their modulation characteristics change as well. Finally, the bandwidth of visible light is much higher than the RF bandwidth: about 200-300 THz against the 300 GHz available for radio communications [8].

1.2.2 LED Modulation

LED modulation is usually achieved by varying the amount of current flowing through the diode; this allows a more precise control over the emitted light with respect to voltage driving, and it has other advantages. A detailed description of a possible implementation of a modulation circuit is given later.

Here, only a hint of the modulation techniques most used in VLC is given:

OOK: On-Off Keying, in which data bits are represented by the presence (binary 1) or absence (binary 0) of light. It is the simplest modulation.

PPM: Pulse-Position Modulation, in which symbols are represented by the pulse position in each time slot; pulses have equal width and amplitude.

PWM: Pulse-Width Modulation, in which symbols are represented by different pulse durations. The same technique is used to regulate light intensity in dimmable LED bulbs.

OFDM: Orthogonal Frequency Division Multiplexing, consisting of different streams modulated over different frequencies. It is one of the most promising techniques for VLC [17].

CSK: Color-Shift Keying, achieved by using RGB(W) LEDs; it exploits colors to represent symbols.

FSK: Frequency-Shift Keying, in which each symbol (or each bit value, 0 and 1) is represented by a different frequency transmitted for a predefined time slot.

Variations on these techniques allow higher data rates to be achieved, up to some Gb/s.
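As an illustration of the first and last techniques in the list above, the toy functions below generate normalized intensity waveforms for OOK and binary FSK. The sampling rate, bit rate and tone frequencies are arbitrary example values, not the parameters used in this work:

```python
# Toy waveform generators for OOK and binary FSK, sampled at fs.
# Purely illustrative, not the thesis modulator.
import math

def ook(bits, fs, bit_rate):
    """OOK: light fully on for a '1' slot, off for a '0' slot."""
    spb = int(fs / bit_rate)                 # samples per bit slot
    return [float(b) for b in bits for _ in range(spb)]

def fsk(bits, fs, bit_rate, f0, f1):
    """Binary FSK: each bit slot carries a tone at f0 or f1, offset so
    the intensity stays non-negative (light cannot go below 'off')."""
    spb = int(fs / bit_rate)
    out = []
    for i, b in enumerate(bits):
        f = f1 if b else f0
        for n in range(spb):
            t = (i * spb + n) / fs
            out.append(0.5 + 0.5 * math.sin(2 * math.pi * f * t))
    return out

wave = fsk([0, 1], fs=8000, bit_rate=100, f0=500, f1=1000)
# 80 samples per bit slot; every sample stays within [0, 1], i.e. a
# valid normalized light intensity.
```

The 0.5 offset in the FSK generator reflects a general constraint of intensity modulation: the information rides on variations around a positive average brightness, which is also why these schemes can coexist with the lamp's illumination function.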

1.2.3 Receiver

In most VLC systems, the receiver is basically composed of a photodetector, a first amplifier, a filter, a second amplifier and a decoder:

Figure 1.3: VLC receiver block scheme

While the filter and the decoder depend on the chosen modulation, the photodetector is usually a photodiode. Photodiodes are PN junctions very sensitive to photons: when photons reach the device with enough energy, they generate a small current flow across the PN interface; in this way, if an external circuit is connected, a current starts flowing in the device according to the light hitting the photosensor. Other photodetectors are available, such as photoresistors and phototransistors, but they are slower than a photodiode (a photoresistor, or Light-Dependent Resistor, has a switching time of the order of milliseconds) and, as a consequence, are not suitable for VLC purposes.
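For the simplest modulation, OOK, the decoding end of the chain just described can be sketched as a threshold slicer followed by a per-slot majority vote. This is a generic illustration with a hypothetical helper function, not the camera-based receiver developed in this thesis:

```python
# Sketch of an OOK decoder: normalize the photodetector samples with a
# mid-range threshold, then majority-vote each bit slot. Generic
# illustration only; the receiver in this thesis is a camera (Chapter 2).

def decode_ook(samples, samples_per_bit):
    lo, hi = min(samples), max(samples)
    thr = (lo + hi) / 2                       # mid-range slicing threshold
    bits = []
    for i in range(0, len(samples) - samples_per_bit + 1, samples_per_bit):
        slot = samples[i:i + samples_per_bit]
        ones = sum(1 for s in slot if s > thr)
        bits.append(1 if ones > samples_per_bit // 2 else 0)
    return bits

# Noisy example: '1' slots near 0.9, '0' slots near 0.1.
rx = [0.92, 0.88, 0.95, 0.90,
      0.12, 0.08, 0.15, 0.10,
      0.91, 0.89, 0.93, 0.87]
decoded = decode_ook(rx, samples_per_bit=4)  # -> [1, 0, 1]
```

In a real receiver the amplifier and filter stages mentioned above would clean the signal before this slicing step, and the threshold would typically adapt over time rather than being computed from the whole capture.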

1.3 VLC: will it have a future?

After this brief review of how VLC systems work, it is interesting to understand why there is so much interest in this technology, what the possible application fields are, and what the advantages and drawbacks are with respect to current RF technologies. In this section, a comparison is carried out between existing RF-based solutions and VLC possibilities.

1.3.1 VLC and RF: the data rate

One of the main topics to investigate is data transfer speed: can VLC be a realistic complement to Wi-Fi in terms of data rate?

Wi-Fi is a well-established technology, with very good performance and ease of use. To get an idea of the current commercial standards, this comparison table from Intel [18] is very interesting:

IEEE 802.11 Wi-Fi protocol summary

Protocol         Frequency      Channel bandwidth      Maximum data rate (theoretical)
802.11ax         2.4 or 5 GHz   20, 40, 80, 160 MHz    2.4 Gbps
802.11ac wave 2  5 GHz          20, 40, 80, 160 MHz    1.73 Gbps
802.11ac wave 1  5 GHz          20, 40, 80 MHz         866.7 Mbps
802.11n          2.4 or 5 GHz   20, 40 MHz             450 Mbps
802.11g          2.4 GHz        20 MHz                 54 Mbps
802.11a          5 GHz          20 MHz                 54 Mbps
802.11b          2.4 GHz        20 MHz                 11 Mbps
Legacy 802.11    2.4 GHz        20 MHz                 2 Mbps

Table 1.1: Wi-Fi data rate summary


Beside different technology requirements between each protocol, the highest

achievable data rate at the time of writing is 2.4Gbps.

As for VLC, there is no technology standard yet (it will probably be defined by IEEE 802.11bb, expected in 2021), and many claims have been made about its capacity. In this work, the focus will be only on LED-based VLC and only on demonstrated (not merely theoretical) results:

VLC data rate examples

Company or researcher          Product (if available)             Maximum data rate (demonstrated)
University of Edinburgh [19]   Blue LED                           3 Gbps
Firefly                        Firefly SecureLink                 700 Mbps [20]
Oledcomm                       LiFiMAX system                     100 Mbps
Signify                        Philips Li-Fi enabled luminaires   30 Mbps

Table 1.2: VLC data rate examples

Other results have been obtained, both in the laboratory (more promising ones, up to almost 8 Gbps [2] and beyond) and in commercial products, but the data reported here should be enough to understand the inherent capability of this technology.

However, it must be kept in mind that each of these results was obtained with a purpose-built system, with its own transmitter and receiver: this is remarkable because it explains the discrepancy among results. Moreover, it implies that, lacking a standard, there cannot be any interoperability among different systems, which negatively affects the user experience. It must also be pointed out that it is still very hard to find products available on the market. Hopefully, the introduction of a standard for this communication technology will greatly benefit both the user experience and the market share of VLC and, consequently, drive the improvement of the technology itself.

On the other hand, some implementations have been attempted with non-VLC-ready devices, such as smartphones and tablets, obtaining results that are poor compared to the dedicated systems presented before, but still remarkable considering both the starting point and the possible applications: in fact, smartphones and tablets contain no hardware dedicated to VLC, and their cameras have been

used to make them work as receivers. The developer of DynaLight [5] collected the major results of VLC experiments with smartphone cameras:

VLC and smartphones

Project                       Platform   Modulation                    Data rate
CMOS for VLC [12]             Android    OOK and Manchester encoding   Between 1 and 3 kb/s
Visual Light Landmarks [13]   iOS        BFSK                          1.25 B/s
RollingLight [6]              iOS        FSK                           11.32 B/s

Table 1.3: Data rate examples for VLC systems based on smartphone cameras

The DynaLight results should also be added to this table, but the author did not provide a specific data rate: in fact, he implemented a very interesting adaptive system, able to adjust the channel capacity according to the distance between the transmitter and the receiver.

As anticipated, these results are very different from those of dedicated systems: this is mainly due to the limits imposed by mobile operating systems on camera control and by the camera technology itself, as shown in Chapter 2.

From this comparison it is clear that smartphone-based VLC is not meant to replace Wi-Fi, but it can represent an interesting channel for delivering application-specific information.
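To put these numbers in perspective, a back-of-the-envelope upper bound for a rolling-shutter camera receiver can be sketched as rows per frame × frames per second × bits per row. The figures below (1080 rows, 30 fps, one OOK bit per row) are illustrative assumptions, not measurements of any specific device, and they ignore inter-frame gaps, synchronization and coding overhead:

```java
public class RollingShutterBound {
    // Optimistic capacity bound: every sensor row carries one symbol.
    static long upperBoundBitsPerSecond(int rowsPerFrame, int framesPerSecond, int bitsPerRow) {
        return (long) rowsPerFrame * framesPerSecond * bitsPerRow;
    }

    public static void main(String[] args) {
        // Illustrative Full HD sensor at 30 fps, OOK (1 bit per row)
        System.out.println(upperBoundBitsPerSecond(1080, 30, 1)); // 32400
    }
}
```

Even this optimistic bound (about 32 kb/s) is orders of magnitude below Wi-Fi rates, which is consistent with the conclusion above.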

1.3.2 VLC and RF: range and localization of information

One of the main differences between VLC and RF systems is the operational distance in buildings and closed areas. As reference [21] reports, VLC can be

used for long distance applications, such as building-to-building communication,

satellite-to-satellite communication and even satellite-to-ground communication,

as it happens with radio frequencies; on the other hand, obstacles can represent a big challenge for VLC systems: inside a building the maximum range can be limited by the room walls, while radiofrequency-based communications are able to overcome this boundary.

VLC systems rely on photons reaching the receiver, and this has to be kept well in mind: photons can reach the receiving sensor through a direct path (LOS transmission) or a reflected path (NLOS transmission), but they cannot pass through

non-transparent materials: as a consequence, transmitted data remains localized in the room where it is transmitted. In this respect, for completeness, it is interesting to report the case of the company SLUX, which developed a VLC receiver able to exploit the few photons passing under a closed door to communicate across different rooms [10]. Radio frequencies, on the contrary, can also be blocked and attenuated, but in everyday experience and for common applications a single transmitter is enough to provide data access to several rooms, and even to an entire small building.

This difference leads to some advantages and disadvantages for VLC with respect to RF: the disadvantage is, of course, that VLC needs at least one transmitter per room, with requirements varying according to the structure of the room itself and the chosen technology. The advantage is, on the other hand, the confinement of information exactly where it is used, and the fact that there are (almost) no privacy concerns, since transmitted data is localized only inside the room where it is received.

1.3.3 VLC and RF: environment and interferences

This section summarizes the challenges that both VLC and RF systems face under environmental constraints such as interference and electromagnetic compatibility.
In a radiofrequency-based communication system, interference and noise can be generated by several different sources. In Table 1.4, the possible sources have been divided into "environmental" and "technological": the first group collects the sources of interference not related to communication systems (natural or artificial), while the second group collects the interference produced by other RF systems.

Common sources of interference for RF

Environmental:
- Lightning or other electrostatic phenomena
- Solar activity
- Weather events
- Power lines
- Sparks
- Microwaves
- Consumer electronics
- DC motors
- Railways
- ...

Technological:
- Reflective paths
- Broadcast transmitters (TV, radio...)
- Cellular services
- Wireless audio equipment
- Medical equipment
- Wi-Fi/Bluetooth systems
- Systems sharing the same frequencies
- ...

Table 1.4: Common sources of interference for RF

While “technological” interference can be overcome, or at least reduced, with an appropriate system design, “environmental” interference is unavoidable. Moreover, the “technological” sources cannot always be completely removed, for example in the presence of medical equipment, and there are situations where the communication system is itself an interference source for other susceptible equipment, such as in airplanes or in some hospital areas: in these situations, an alternative has to be found.

In Table 1.5 the sources of interference for VLC are presented; the electrical noise due to the receiver circuitry is neglected here, as in the previous table, since it is present in both technologies.

Common sources of interference for VLC

Environmental:
- Solar light
- Non-VLC lighting systems (ambient light)

Technological:
- Nearby VLC transmitters
- Video equipment (monitors, TVs)
- Unpredictable non-static light sources (flashlights, IR remotes)
- Obstacles interrupting the LOS path

Table 1.5: Common sources of interference for VLC

From this comparison, it becomes clear how an accurate system design can overcome almost all of the interference affecting VLC; moreover, the fact that this kind of system does not generate any RF interference makes VLC suitable for use in environments like hospitals, airplanes and, in general, places where RF technology is unwanted or difficult to deploy.

1.3.4 VLC and positioning

Another field in which VLC seems promising is indoor positioning [13] [7], i.e. the capability to locate an object inside a building with high precision. In fact, the main positioning system, GPS, is not suitable for indoor applications, so other systems have been developed, such as Wi-Fi-based and Bluetooth-based positioning.
A review of indoor positioning techniques is presented below, along with a rough estimate of their accuracy:

Wi-Fi based indoor positioning has an accuracy of about 2 meters [22], because of path losses due to walls, metal objects and so on; it is usually used for A-GPS-like purposes rather than actual indoor positioning. Assisted GPS exploits nearby GSM cells to roughly locate the device, using the GSM cell location database, while the GPS computes the accurate location; in the same way, a database of Wi-Fi access point locations is used to determine the approximate position of the device.

Bluetooth positioning by itself has more or less the same accuracy as Wi-Fi, but the capability to estimate both the angle of arrival (AoA) and the angle of departure (AoD) is about to be introduced, so the accuracy could increase to a few centimeters [23].

VLC positioning is very promising: thanks to the abundance of transmitters (as any light bulb can be converted into one), the accuracy needed for triangulation is easy to achieve even without knowing the AoA. Basic implementations ([3] and [24]) achieve an error of less than 3 cm. Simpler implementations and NLOS systems can be used to replicate the behavior of Bluetooth beacon positioning.
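As a sketch of how the abundance of transmitters helps, the snippet below solves a 2D trilateration from three lamps at known positions and measured distances, by linearizing the circle equations. It is a minimal illustration, not one of the cited implementations, and it assumes ideal distance estimates and non-collinear lamps:

```java
public class LampTrilateration {
    // Solve for the receiver position (x, y) given three lamp positions
    // p1, p2, p3 and the measured distances d1, d2, d3.
    // Subtracting the circle equations pairwise yields a 2x2 linear system.
    static double[] locate(double[] p1, double[] p2, double[] p3,
                           double d1, double d2, double d3) {
        double a1 = 2 * (p2[0] - p1[0]), b1 = 2 * (p2[1] - p1[1]);
        double c1 = d1 * d1 - d2 * d2 + p2[0] * p2[0] - p1[0] * p1[0]
                                      + p2[1] * p2[1] - p1[1] * p1[1];
        double a2 = 2 * (p3[0] - p1[0]), b2 = 2 * (p3[1] - p1[1]);
        double c2 = d1 * d1 - d3 * d3 + p3[0] * p3[0] - p1[0] * p1[0]
                                      + p3[1] * p3[1] - p1[1] * p1[1];
        double det = a1 * b2 - a2 * b1;   // non-zero if the lamps are not collinear
        return new double[]{ (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det };
    }

    public static void main(String[] args) {
        // Lamps at (0,0), (4,0), (0,3); receiver actually at (1,1)
        double[] p = locate(new double[]{0, 0}, new double[]{4, 0}, new double[]{0, 3},
                            Math.sqrt(2), Math.sqrt(10), Math.sqrt(5));
        System.out.printf("%.3f %.3f%n", p[0], p[1]); // prints 1.000 1.000
    }
}
```

With ideal distances the receiver position is recovered exactly; in practice the accuracy depends on how well the distances are estimated from the received light.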

1.3.5 Other VLC applications

VLC has great potential in terms of possible applications: Blue Jay Eindhoven [25] developed an autonomous drone able to locate itself inside a hospital using VLC, in order to deliver small objects and medicines to people. Interact, a Signify [26] brand, developed an indoor navigation system for people in retail stores or offices, where VLC can be exploited to provide access to company services such as room and desk booking, or to allow interaction with wireless content-sharing systems.
VLC is also applied to vehicle-to-vehicle (V2V) communications [27], an upcoming technology that should make human driving safer and open the path to improved autonomous driving, achieved by connecting vehicles and infrastructure.
It is also possible to exploit VLC in situations where radio frequencies are forbidden for safety reasons, such as on airplanes, to provide content to the users [28] [29].

1.3.6 VLC and the actual market

After reviewing all these possibilities for VLC, one question is mandatory: if the technology is so promising, what is slowing down its diffusion in the market?

Some big companies have actually started working on commercial implementations, first of all Philips, which is investing considerable effort in products and research ([26], [25]), along with smaller but promising companies such as pureLiFi [4], the already cited SLUX [10] and others; still, there are some practical obstacles that all the players have to overcome:

1. Each vendor has its own technology for both the transmitter and the receiver; this means that at the moment no interoperability is possible, and each customer is tied to its vendor.

2. Most systems need dedicated receivers; consequently, without a receiver from the same vendor as the transmitter, it is impossible to access the data.

3. Current systems are not cheap, mainly due to the lack of a standard and the need to build a dedicated solution for each situation.

Chapter 2

Using a smartphone as receiver: background

In this chapter the technology used to implement the VLC receiver is presented.

Different technologies can be exploited to receive light, such as phototransistors,

photoresistors, photodiodes, and others. The purpose of this work is to integrate

VLC with smartphones without modifying or adding hardware to the devices.

This chapter introduces the reader to the smartphone’s camera technology and

how an Android application can exploit it for VLC purposes. The detailed analysis

of these two topics is necessary to fully understand the choices that have been

made for the implementation.

2.1 Smartphone cameras as VLC receivers

Most smartphone cameras use CMOS image sensors; accordingly, a description of the working principle of CMOS cameras is given in the following, in order to explain how a smartphone camera can be used as a VLC receiver.

Smartphone cameras are systems composed of different elements:

Figure 2.1: Block representation of CMOS camera components

Lens focuses the light on the image sensor.

Color Filter is a matrix of color filters placed in front of the image sensor that lets each photodiode receive only one of the three color components (Red, Green or Blue).

Sensor Array is an array of photodiodes, each one converting the incoming light into an electrical signal.

ADC translates the analog outputs of the photodiodes into digital signals.

When a camera is built, the manufacturer performs a calibration of the system so that non-idealities of lenses and sensors can be compensated in software. This compensation, along with other processing, is applied by the camera's internal circuitry to each image as soon as it is acquired.
In addition, the internal circuitry often has the task of managing the sensor reading time, its gain, the focus and the actual frame rate, even if sometimes (and with some limitations) these settings can be manipulated by an external control.

2.1.1 Rolling shutter

Figure 2.2: Functional scheme of a CMOS sensor

A CMOS image sensor is made of a two-dimensional array of photosensors, each composed of a PN junction (i.e. a photodiode) and an amplifying circuit for the read-out; a controller circuit, which includes an address generator and a row address decoder; and a reading circuit, composed of an ADC and a column scanner.
CMOS image sensors have a peculiar way of working, called rolling shutter, that can be summarized as follows:

1. The address generator generates the Row Reset and Row Set signals for the first row;

2. The corresponding row starts its exposure time ∆tE;

3. Right after sending the previous signals, the address generator generates new Row Reset and Row Set signals for the next row;

4. Once the exposure time of a row has elapsed, the address generator generates the Read-Out signal for that row;

5. The column scanner reads all the columns of that row and sends the result to the ADC;

6. The process is iterated over all the rows of the sensor.

(a) Temporal diagram of rolling shutter pro-cess

(b) Effect of rolling shutter process when thecamera is moving at different speed with re-spect to the subject. Credits to: [30]

Figure 2.3: Rolling shutter process and effect
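The timing implied by the steps above can be sketched as follows: row r starts its exposure r · ∆tRST after row 0 and is read out ∆tE later. The numeric values used below are placeholders, not taken from a real sensor:

```java
public class RollingShutterTiming {
    // Start and end of the exposure window for row r, in microseconds.
    // Each row starts dtRst after the previous one and is exposed for dtE.
    static long[] exposureWindowUs(int row, long dtRstUs, long dtEUs) {
        long start = row * dtRstUs;
        return new long[]{ start, start + dtEUs };
    }

    public static void main(String[] args) {
        // Illustrative values: dtRst = 30 us, dtE = 1000 us
        for (int r = 0; r < 3; r++) {
            long[] w = exposureWindowUs(r, 30, 1000);
            System.out.println("row " + r + ": [" + w[0] + ", " + w[1] + "] us");
        }
    }
}
```

Note that consecutive windows overlap heavily whenever ∆tRST < ∆tE, which is the normal operating condition discussed below.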

This behavior has a relevant drawback when taking pictures: the whole image is not captured at the same time, but each row of pixels is acquired slightly later than the previous one. This causes artifacts when something inside the scene is changing, such as a moving element (Figure 2.3) or a flickering light.

The rolling shutter effect can be mitigated as described in [31], or it can be exploited to achieve very interesting results, such as camera synchronization through strobe lights or flashlights [32], high dynamic range imaging and adaptive exposure [33] and, finally, visible light communication.

Due to the Row Reset and Row Set signal generation operated by the address generator, each row is exposed with a certain delay, named ∆tRST, with respect to the previous one. Consequently, if a light signal reaching the sensor varies during a frame acquisition, its variation can be observed in the resulting image in the form of darker and brighter rows.

Unfortunately, it is impossible to exploit every row of an image to receive a distinct pulse, even if the light variation is shorter than ∆tE: in fact, since ∆tE starts after ∆tRST and ∆tRST < ∆tE, any light variation influences several rows at the same time, limiting the maximum receivable frequency. To better understand this concept, consider the amount of light Ir collected by a single row during its exposure ∆tE,r, which is the same for each row in a frame but starts at a different time:

Ir = ∫∆tE,r L(r,t) · Gr dt (2.1)

where L(r,t) is the incident light and Gr the gain factor of the row, here considered constant over the whole row r. As can be seen in Figure 2.4, each row can acquire the light variation fully or partially, or it can even miss it completely. The result is a difference in the total amount of light Ir captured by each row.

Figure 2.4: Example of rolling shutter affecting the received varying light. The green area represents ∆tRST while the yellow area represents ∆tE,r

Note that the captured incident light depends both on time, since it can vary during the full frame acquisition or even during ∆tE,r, and on the row considered.

In fact, all cameras suffer from a vignetting effect, due both to non-idealities of the lenses and to the incidence angle of the light rays on the sensor, which makes the most peripheral rows capture less light.
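A discrete-time approximation of Eq. (2.1) makes the banding visible: integrating a square-wave light source over each row's exposure window shows that neighbouring rows collect almost the same amount of light, so one light pulse spreads over a band of rows rather than a single row. The timing values are illustrative assumptions:

```java
public class RollingShutterBands {
    // Approximates Eq. (2.1) with a 1-microsecond time step: integrate a
    // square-wave light source over the exposure window of a given row.
    // All times are in microseconds; the gain Gr is taken as 1.
    static double rowLight(int row, long dtRst, long dtE, long halfPeriod) {
        double sum = 0;
        for (long t = row * dtRst; t < row * dtRst + dtE; t++) {
            boolean on = (t / halfPeriod) % 2 == 0;   // square-wave L(t)
            sum += on ? 1.0 : 0.0;
        }
        return sum / dtE;                             // normalized Ir
    }

    public static void main(String[] args) {
        // dtRst = 30 us, dtE = 500 us, light toggling every 1000 us:
        // nearby rows see nearly the same level, so each light pulse
        // appears as a band of bright or dark rows, not a single row.
        for (int r = 0; r < 60; r += 15)
            System.out.printf("row %2d -> %.2f%n", r, rowLight(r, 30, 500, 1000));
    }
}
```

With these values, rows 0 and 15 are fully bright, row 30 catches only the tail of the pulse and row 45 is fully dark, reproducing the bands of Figure 2.4.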

2.1.2 Signal and frames

Both old-fashioned film cameras and digital cameras acquire scenes in a discrete way, by dividing the acquisition time into frames. A frame is a single

image representing a realization of the scene at a specific time: by collecting a

sufficient number of frames per second (fps) a video can be recorded, and the video

appears more and more realistic as the number of frames per second increases.

If the acquisition of each frame could start exactly at the end of the last Row Reset (RST) signal of the previous one, it would be possible to achieve a continuous rolling shutter between frames, giving rise to the ideal situation for implementing a camera-based receiver. Unfortunately, this is not the case: between frames a delay is introduced by different factors, such as sensor read-out speed, output interface transfer rate, amount of data per frame (resolution), and others.

In general, smartphone cameras achieve up to 60 frames per second, but higher-

end models can now achieve higher frame rates, usually exploited to record slow

motion videos.

It is important to explain the difference between the inter-frame interval and the frame duration in order to avoid confusion: even if the frame rate is fixed at, say, 30 frames per second, the frames usually do not last 1/30 of a second.

Frame rate roughly determines the interval between two consecutive frames and

tends to be fixed in cases such as video recording and camera preview. However,

its value can slightly vary according to user settings, camera modes (slow motion,

video, burst, single capture), camera internal processing, and others.

Frame duration also varies, mainly according to the row exposure time ∆tE which, in turn, depends on the light conditions. The total frame duration can be roughly computed as follows:

∆tFrame = (R− 1) · ∆tRST + ∆tROW (2.2)

where R is the number of rows the sensor is composed of, ∆tRST is the Row Reset

signal duration and ∆tROW is the duration of the last row, that is:

∆tROW = ∆tRST + ∆tSET + ∆tE (2.3)

The difference between the frame duration and the inter-frame interval becomes a problem when the device is used to receive a light signal: given a signal duration ∆tS > ∆tFrame and an inter-frame interval ∆tIFI, part of the signal will fall within ∆tIFI.

Figure 2.5: Effect of camera framing over a signal. Informationcoming during ∆tIFI is lost and cannot be recovered.
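Equations (2.2) and (2.3) can be combined in a small sketch that also shows how much of a nominal 30 fps frame slot is "blind" time; the row timings used are illustrative, not measured:

```java
public class FrameTiming {
    // Eq. (2.2)-(2.3): total frame duration from the row timings (microseconds).
    static long frameDurationUs(int rows, long dtRst, long dtSet, long dtE) {
        long dtRow = dtRst + dtSet + dtE;       // duration of the last row, Eq. (2.3)
        return (rows - 1) * dtRst + dtRow;      // Eq. (2.2)
    }

    public static void main(String[] args) {
        // Illustrative values: 1080 rows, dtRst = 10 us, dtSet = 5 us, dtE = 1000 us
        long frame = frameDurationUs(1080, 10, 5, 1000);
        long interFrame = 1_000_000 / 30;       // nominal 30 fps frame slot
        System.out.println("frame lasts " + frame + " us of a " + interFrame
                + " us slot; the remaining " + (interFrame - frame)
                + " us are blind time");
    }
}
```

With these assumed timings the frame covers only about a third of the 33 ms slot, so most of a continuously modulated signal would fall in the blind interval.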

This problem gets worse when the light in the scene framed by the camera is not constant over time: tests carried out during this work have clearly shown that the amount of light within a frame triggers the internal light compensation of the camera, which tries to adjust the gain of the sensor for the next frame by varying the exposure time and, accordingly, the frame duration. A different frame duration makes ∆tIFI vary and shift in time as well; because of that, it is very hard to obtain constant values of frame duration and inter-frame interval while modulating the light in the scene.

2.2 Android Camera control

After having understood how the smartphone hardware works, it is important to

explore the software possibilities for implementing a VLC receiver.

While any smartphone can potentially be used for the purpose we are interested

in, this thesis focuses on Android smartphones, and in particular on the Motorola Moto Z2 Play. It is important to specify the target device now, since each vendor provides different camera controls to developers for its devices. The device used for this thesis exposes the old Android Camera APIs, while only a very basic implementation is provided for the newer Camera2 APIs.

2.2.1 Camera APIs

The Android operating system provides the so-called Hardware Abstraction Layer

(HAL), an interface that allows both the operating system and the developers

to access the hardware functionalities in a standard way, without knowing the

low-level driver implementation provided by the hardware vendors.

Despite the standardization provided by the HAL, the features of the camera

depend on the implementation provided by the vendor for each smartphone model.

As a consequence, two different models from the same manufacturer can allow

different camera features to be controlled, and even more differences are expected

from one brand to another one, thus reducing the standardization possibilities for

developers.

The device used for tests implements advanced camera controls only with the

android.hardware.Camera APIs but not with the modern Camera2 interface.

However, the same results can be obtained with newer devices by porting the

camera-related code to Camera2 APIs.

Camera acquisition

In order to acquire images from the camera by using Camera APIs, it is necessary

to follow these steps:

1. Get the ID of the desired camera;

2. Get an instance of the camera;

3. Get and set the camera parameters;

4. Initialize the surface for the preview;

5. Start the preview.

Once all these steps are completed, the camera is ready to collect pictures or videos, and the preview is available on the screen.

The Camera APIs allow the acquisition of videos and pictures, and Camera2 also makes it possible to use Burst mode, i.e. collecting a set of pictures very close in time: up to ten per second for mid-range devices and up to thirty for higher-end ones.

Camera parameters

With the Camera APIs, the settings of the camera can be adjusted using the Camera.Parameters class. Some standard parameters of interest for VLC purposes are:

Antibanding

Exposure

ISO

Image format

Image size

Camera framerate

Unfortunately, not all the necessary parameters are standardized by these APIs: for example, the ISO setting is missing; however, it is still possible to set non-standard parameters by writing them directly with the Parameters.set() function, if the device allows them. To retrieve all the supported parameters and their supported values, the Camera function getParameters() can be used.

It is important to understand the role of these parameters when exploiting a

smartphone camera for VLC purposes:

Antibanding The antibanding function is very useful when taking indoor pictures with a CMOS image sensor: artificial light sources such as neon lamps can produce a flickering light that is not noticeable by eye but is evidenced by the rolling shutter in the form of bright and dark bands, as in Figure 2.6. The antibanding filter has the role of removing this artifact from the image.

Figure 2.6: The rolling shutter captures the neon flickering and produces brightness bands on theimage

Exposure To understand this concept, consider a grayscale image, where each (digital) pixel is described by an integer value representing the amount of light hitting the (physical) pixel, i.e. the brightness of the pixel. With reference to the structure of a CMOS camera reported in Figure 2.1, the concept can be easily extended to a color image, where each (digital) pixel is described by three integer values representing the amount of light allowed by the color filters to reach the sensor's (physical) pixels.

In particular, each CMOS photodiode receives the incoming light and generates a current (or a voltage, according to the specific technology) that is converted into a digital integer value by the ADC. This value represents the amount of light hitting the pixel: thus, the longer the exposure time, the higher the collected charge and the corresponding integer value, and the brighter the resulting point of the image.

Unfortunately, not all the smartphone manufacturers allow the control of this

parameter.

When the exposure time is not selected manually, the camera runs an algorithm provided by the manufacturer to compute the optimal exposure time. The working principle is that the average amount of light in an image should be equal to a certain value (usually defined as middle gray, or 18% gray) in order to get good results in most situations; thus, the camera computes the amount of light in the current frame and adjusts the exposure time for the next one. Since this process is blind to the subject in the scene and to the intentions of the user, most manufacturers provide another way to adjust the image brightness, the Exposure Compensation: this parameter introduces a bias when the camera computes the amount of light inside a frame. The camera's internal algorithm will compute a different exposure time (and ISO) for the next

frames, so that the images will turn out darker or brighter with respect to the unbiased situation.
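The middle-gray adjustment described above can be sketched, under the simplifying assumption of a linear sensor response, as one step of a feedback loop that rescales the exposure time toward the (possibly biased) target brightness. This is an illustration of the principle, not the manufacturer's actual algorithm:

```java
public class AutoExposure {
    // One step of a simple middle-gray auto-exposure loop: scale the
    // exposure time so the mean luma of the next frame moves toward the
    // target level, assuming brightness is proportional to exposure.
    static double nextExposureUs(double currentUs, double meanLuma,
                                 double targetLuma, double evBias) {
        // evBias mimics Exposure Compensation: +1 EV doubles the target,
        // -1 EV halves it.
        double biasedTarget = targetLuma * Math.pow(2, evBias);
        return currentUs * biasedTarget / meanLuma;
    }

    public static void main(String[] args) {
        // Frame too bright: mean luma 92 on a 0..255 scale,
        // target 18% of 255, roughly 46.
        System.out.println(nextExposureUs(1000, 92, 46, 0)); // 500.0
        // Same frame with +1 EV compensation: the bias cancels the correction.
        System.out.println(nextExposureUs(1000, 92, 46, 1)); // 1000.0
    }
}
```

This feedback is exactly what disturbs a VLC receiver: a modulated light changes the mean luma, so the camera keeps changing the exposure time (and hence the frame duration) unless the parameters are locked.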

ISO In CMOS image sensors, ISO represents the sensitivity of the image sensor, obtained by increasing the electrical gain of the pixels: each photodiode of the sensor is connected to an amplifier that acts on the signal before the ADC. For the same exposure time and exposure compensation, the result is a brighter image, but the ADC also collects more electrical noise.
Conventionally, the sensitivity of the sensor is expressed in ISO steps, since in the past ISO was used to represent the sensitivity of photographic films; ISO steps are relative to one another, each step being double the preceding one. This convention has been kept for digital cameras, even if the meaning and the behavior are slightly different.

Image Format Android allows the developer to choose the image format for the pictures acquired by the camera, and each camera allows a different set of image formats; some of them are standard across all Android devices, such as NV21, while others can be specific to the device.
The image format is the way the image is stored in memory, and this information is needed in order to process the image. Different standards can be used, according to the purpose of the processing; the format described here, NV21, is the one used for this thesis, and it turns out to be very handy for VLC purposes.

NV21 is a YUV-based image format, in particular Y'UV420. YUV is a family of image formats designed to reduce the amount of information needed to represent a picture, exploiting the non-ideal perception of the human eye. In particular, YUV describes an image by three components: the luma Y' and the two chrominance components U and V. The luma plane describes each pixel of the image in terms of brightness; thus, for a 20x20 pixel image the Y plane is composed of 400 values, ordered row by row from the top-left pixel to the bottom-right pixel. The U and V components are subsampled versions of the original image, obtained by assigning the same U and V values to each group of 4 pixels (a 2x2 matrix): for a 20x20 pixel image, the U and V planes are composed of 100 values each, arranged from the top-left 2x2 group of pixels to the bottom-right group. The resulting information loss is not perceived by the human eye, and the memory occupancy is halved with respect to a full-resolution YUV representation.

Figure 2.7: YUV420 NV21 image format. Each pixel is described by itsbrightness, while the chrominance components are sampled ongroups of 4 pixels and attached at the end of the image.
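The NV21 layout described above can be captured with a few index computations; the helper names below are hypothetical, but the offsets follow the Y'UV420 layout of the format (full-resolution Y plane followed by interleaved V/U bytes at quarter resolution):

```java
public class Nv21Layout {
    // Total buffer size of a W x H NV21 frame: W*H luma bytes plus
    // W*H/4 V bytes and W*H/4 U bytes interleaved after the Y plane.
    static int bufferSize(int w, int h) { return w * h + (w * h) / 2; }

    // Offset of the luma byte of pixel (x, y) in the buffer.
    static int lumaIndex(int w, int x, int y) { return y * w + x; }

    // Offset of the V byte shared by the 2x2 block containing (x, y);
    // in NV21 each chroma pair is stored as V then U.
    static int vIndex(int w, int h, int x, int y) {
        return w * h + (y / 2) * w + (x / 2) * 2;
    }

    public static void main(String[] args) {
        // 20x20 example from the text: 400 luma + 100 V + 100 U bytes
        System.out.println(bufferSize(20, 20));    // 600
        System.out.println(lumaIndex(20, 19, 19)); // 399, bottom-right pixel
        System.out.println(vIndex(20, 20, 0, 0));  // 400, first chroma byte
    }
}
```

For VLC processing only the brightness matters, and with NV21 the luma is simply the first width × height bytes of the buffer: this contiguity is what makes the format handy.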

Image Size Image size is as simple as it sounds: it describes the number of rows and columns the image is composed of. Usually, the image size is determined according to the aspect ratio (the ratio between width and height). The most common aspect ratio today is 16:9, which for a Full HD image means 1920x1080 pixels, but other aspect ratios and resolutions are available according to the device capabilities.
Note that the image size can differ between the camera preview and the captured image: for responsiveness purposes, the bandwidth dedicated to the preview can be limited, so only reduced resolutions may be available.

Camera Framerate The framerate is the selected number of frames the camera acquires in a second. Usually, standard-quality videos are acquired at no less than 30 frames per second, but higher-end cameras can work at up to 240 frames per second. The preview framerate is usually limited with respect to the maximum camera capabilities, since the purpose of the preview is mainly to help the user aim at the subject and set up the camera.
The actual framerate may not be perfectly constant: it can be affected by the exposure time, the camera's internal readout and processing, and other factors.

2.3 Final considerations

The rolling shutter technology offers an interesting opportunity to enable camera-

equipped devices to receive VLC: the scanning process operated row-by-row by

the camera allows the devices to acquire fast light variations without the need

of a dedicated circuit. Moreover, the controller of the camera allows the system

to act on some parameters, i.e. ISO and exposure settings, in order to adapt the

receiver to different light conditions. However, the availability of these parameters

depends on the specific vendor implementation of the APIs.

Despite the great potential, some drawbacks are introduced by current camera technology itself, since it is not (at the moment) designed for VLC purposes. First of all, the discretization of time into frames that are relatively distant from each other causes the loss of many signal samples, thus limiting the maximum data rate. Chapter 4 explains how the implemented solution copes with these technology limitations to maximize the data rate.

Chapter 3

The VLC transmitter: background

In this chapter the technology used in this work to implement the VLC transmitter

is presented.

In Section 3.1 the transmitter hardware is presented, with specific reference to the commercial LED bulb used for this experiment. Section 3.3 follows with a summary of the possible modulations considered for this work.

3.1 Transmitting with a LED bulb

VLC aims to integrate two completely different applications: the lighting of an environment and data transmission. In fact, a very simple VLC transmitter can be implemented with just an encoder, which translates the information into an electrical stimulus, and a light source, such as a LED:

Figure 3.1: High level block scheme of a VLC transmitter.

Starting from this basic structure it is possible to change either the encoder sec-

tion, in order to achieve different data rates and modulations, or the light source,

to achieve different light levels and colors.

3.1.1 LED luminaires

In this section the LED bulb technology is presented. Most commercial LED luminaires have similar circuits, with small variations essentially due to the constraints (for example in terms of energy consumption, brightness or bulb shape) that the manufacturer wants to satisfy, so the following description can be considered valid in most cases without loss of generality.

Figure 3.2: The schematics of the LED light bulb circuit. In the picture it is possible to identify the three sections composing the circuit: the Power section (red), the Control section (green), the Load section (blue).

The circuit is composed of three main sections:

- Powering circuit, with the current-rectifying circuit;

- Control circuit, composed of an integrated circuit and the components needed for its operation;

- Load circuit, composed of the actual load (COB LEDs) and the protection circuit.

The integrated circuit is the core of the whole system: in fact, it has the role of providing a PWM (Pulse-Width Modulation) output current given an AC input and of reacting to different bulb conditions, as will be investigated later.
The bulbs analyzed for this work use the BP9916C integrated circuit produced by Bright Power Semiconductor [34]. As reported in the datasheet [35], this device


implements most of the functionalities needed to get the bulb working properly, giving only a few knobs to tune the behavior according to the specific application: in fact, the integrated circuit is able to react to short circuits, LED failures, strong temperature variations, and other conditions that can affect the proper operation of the system, and it allows the manufacturer to adjust some parameters, such as the output current and the PWM period, by varying a resistor.

Driving a commercial LED bulb

There are three main methods to drive LEDs: voltage driving, current driving

and pulse-width modulation (PWM).

Voltage driving consists in providing LEDs with a fixed voltage. Given the component voltage/current relation, once a sufficient voltage is applied to the component a current starts to flow; however, it is very hard to achieve a perfectly constant voltage and a small voltage variation can cause a big difference in the current flow; moreover, component conditions such as temperature can affect the voltage/current curve.

Figure 3.3: The voltage/current relation of a diode.

Current driving is a safer way to drive a LED: a fixed amount of current is provided to the component, letting it reach the voltage necessary to make the current flow. This configuration allows a safer control of the LED, but it is not often used for high-power LEDs because of the amount of current needed to power them.


PWM driving (pulse-width modulation) is used to control high-power and COB LEDs: a square wave is used to feed the current to the LED and, by varying the duty cycle of the signal, it is possible to achieve light dimming.

In general, COB-based LED bulbs rely on the PWM driving technique, which consists in feeding the LEDs with a square wave. This approach allows the controller to vary the signal period according to the LED conditions, such as temperature or possible failures, and to dim the emitted light. Because of this, when designing the transmitter it must be considered that most of the controllers available on the market implement systems to protect the LED bulb from short or open circuits and overheating, and to extend the lifespan of the LEDs.

3.2 Information encoder

The information encoder of a transmitter is the device that performs the information encoding, i.e. the translation of the message into symbols chosen according to the used modulation.
In order to perform the encoding, the encoder accesses a lookup table where all the possible input values are mapped to the corresponding output symbols; thus, given an input message, the encoder translates each message element into the corresponding symbol by checking the lookup table, and the resulting sequence is sent to the next section of the transmitter.

3.2.1 Arduino

Arduino [36] is an open-source platform mainly intended to develop electronic projects in an easy way, letting the user focus more on the software logic rather than on the design of a suitable circuit for the microcontroller: in fact, an Arduino board already provides a microcontroller with a power supply system and a set of interfaces that can be enabled and used with just a few lines of code. Moreover, Arduino, and the Arduino-compatible board Luigino328 [37] in particular, provides a set of 14 digital input/output pins, 6 of which can provide PWM output, a C-based programming language, a 32KB non-volatile memory to store the program, and a USB interface to exchange data with a computer thanks to a serial interface already integrated in the Arduino Integrated Development Environment (IDE). All these features make the Luigino328 suitable for the encoder implementation.

In this section some features of the device are highlighted because of their relevance for the system implementation that will be presented in Chapter 4.

Luigino328 digital interface

The Luigino328 used here provides up to 14 interfaces that can be used either as digital input or digital output pins.
When using a digital pin as input, the voltage between the input pin and the ground pin should be higher than 3V (0.6·VCC) to be read as a logical high by the microcontroller; on the other hand, when a digital pin is set to work as a digital output, it is driven by the software to assume either 0V or 5V, with a maximum output current of 40mA. Note that there are no protections installed on the board for those pins, so it is necessary to design a proper circuit in order not to damage the device.

Memory

Arduino boards store the program in the flash memory, which can be written only when the program is uploaded to the board; when the program is running, variables are copied to the SRAM to be manipulated. However, the SRAM is limited to 2KB, so it is not possible to store large amounts of data, such as big lookup tables, as if they were common variables, since they would be loaded into the SRAM at runtime and the program would get stuck for lack of memory.

In order to avoid this problem, the ATmega328 provides the possibility to store some data in the program memory, which can contain up to 32KB, by using the PROGMEM compiler directive when declaring a (constant) variable to store it in the flash memory: when the program is loaded to the board, the variables declared with that keyword are not loaded into the SRAM.

In order to manipulate the values at runtime it is possible to load them individually into the SRAM by addressing the variables directly, using the functions provided by the avr/pgmspace.h library.

Serial interface

The Luigino328 can create a serial connection with the Arduino IDE, so that it is possible to exchange data with the running program, for example to send custom commands or messages.


The serial interface is bi-directional and it can be used through the USB cable that provides power to the board. The serial connection must be enabled in the setup() method of the program; then it can be read programmatically, or it can trigger the program when some data is received. It is also possible to transmit data from the board to the computer using the serial communication.

3.3 Modulations

In this section the possible types of modulations suitable for this work are reviewed, taking into account both the transmitter structure and capabilities, and the application constraints.

OOK

On-Off Keying is a technique often used in light-based transmission, consisting in transmitting an entire binary sequence bit by bit, producing light when a bit is high and nothing when a bit is low. When using this modulation, each symbol of the original message is mapped into a sequence of bits by the information encoder and then transmitted to the receiver bit after bit.

Despite its simplicity, it is not always the best solution for VLC communication systems: in fact, long runs of high values followed by low values can produce flickering in the light perceived by the human eye, especially at lower bitrates. However, this modulation has been tried with good results when combined with Manchester encoding [12].
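As an aside, the Manchester scheme mentioned above can be sketched in a few lines. The bit-to-chip convention below (1 → low-high, 0 → high-low, IEEE 802.3 style) is an assumption for illustration and is not necessarily the one used in [12]:

```python
# Hypothetical sketch of OOK with Manchester encoding.
# Assumed convention (IEEE 802.3 style): bit 1 -> (0, 1), bit 0 -> (1, 0),
# so every bit period contains exactly one transition and the chip stream
# is DC-balanced, which mitigates perceived flicker.

def manchester_encode(bits):
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(chips):
    bits = []
    for i in range(0, len(chips), 2):
        bits.append(1 if (chips[i], chips[i + 1]) == (0, 1) else 0)
    return bits

msg = [1, 0, 1, 1, 0]
chips = manchester_encode(msg)
assert manchester_decode(chips) == msg
# DC balance: half of the chips are "on" regardless of the message content
assert sum(chips) == len(chips) // 2
```

The DC balance shown by the last assertion is the property that keeps the average light level constant whatever the payload.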

M-FSK

Frequency-Shift Keying consists in mapping two symbols to two different frequencies. For the scope of this work, FSK can be used to send bit sequences by mapping high and low values to two different frequencies: this would reduce the flickering of the light, but the need to send a minimum number of periods to decode them effectively through the FFT would make the transmission very slow.

An improvement is represented by M-FSK, which exploits M different frequencies to represent M different symbols of the message. From a theoretical point of view this solution allows the transmission of more than the whole Extended-ASCII character set by sending one symbol per frame to a Full-HD camera, but in practice it is barely possible to use even the standard ASCII code. This is because a guard distance must be added between frequencies, and the actually usable spectrum is relatively limited: for frequencies below 2 kHz the light oscillation is disturbing for the human eye, while frequencies over 10 kHz have proven to be barely perceivable by the testing device.

DTMF

A variation of M-FSK is DTMF (Dual Tone Multi Frequency): with this technique each symbol is represented by the combination of two different frequencies, so that, for M frequencies, the number of possible couples is M² − M (where M is subtracted to remove the symbols composed by the repetition of the same frequency twice).

Actually, it has to be considered that the two frequencies are sent in parallel, so the couples containing the same frequencies but in a different order must be discarded, and this leads to the total amount of couples:

Couples_M = (M² − M) / 2    (3.1)

Finally, it has to be considered that, when transmitting square waves with LEDs, it is better not to choose frequencies that are odd multiples of each other: in fact, an ideal square wave is defined as

s(t) = (4/π) · [sin(ωt) + (1/3) sin(3ωt) + (1/5) sin(5ωt) + (1/7) sin(7ωt) + ...]    (3.2)

where ω = 2πf_0, so that in the frequency domain the square wave is represented by the odd harmonics of f_0.

Choosing frequencies that are odd harmonics of each other can lead to false positives for certain frequency couples, or make the receiver believe that a couple of frequencies has been transmitted when a single square wave was received; because of this, the modulation alphabet should not contain odd harmonics in the same couple, and the receiver should implement a protection against harmonic false positives.
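The two constraints above — the unordered-couple count of Eq. 3.1 and the exclusion of odd-harmonic pairs — can be checked programmatically. The frequency alphabet below is purely illustrative:

```python
from itertools import combinations

def usable_couples(freqs):
    """Return the frequency couples that survive the odd-harmonic constraint."""
    couples = list(combinations(sorted(freqs), 2))
    # Drop pairs where f2 is an odd harmonic of f1 (f2 = 3*f1, 5*f1, ...),
    # since a single square wave at f1 already contains those components.
    return [(f1, f2) for f1, f2 in couples
            if not (f2 % f1 == 0 and (f2 // f1) % 2 == 1 and f2 // f1 > 1)]

freqs = [2000, 3000, 4000, 6000, 9000]  # hypothetical alphabet in Hz
M = len(freqs)
# Eq. 3.1: all unordered pairs
assert len(list(combinations(freqs, 2))) == (M * M - M) // 2  # 10 couples
pairs = usable_couples(freqs)
# (2000, 6000) and (3000, 9000) have ratio 3 (odd harmonic) and are dropped
assert (2000, 6000) not in pairs and (3000, 9000) not in pairs
assert len(pairs) == 8
```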

Some of these modulations have already been proposed in related works: [12], for example, exploits OOK with Manchester encoding to achieve a high speed with a sufficient level of redundancy; [13] uses BFSK to avoid the flickering that OOK can produce.


DTMF is an interesting technique, but it can hardly be implemented “as is” because of the limits introduced by the LED technology, such as the difficulty of rendering a perfectly shaped sine wave.
In Chapter 4 the chosen modulation is presented and the adaptations applied to make it suitable for the system are detailed.


Chapter 4

System implementation

After the review of the technologies considered for this thesis, this chapter details the implemented solution. First, it is shown how the transmitter has been designed by interfacing the Luigino328 with a commercial light bulb based on the BP9916C through an electronic circuit; then, the receiver implementation is presented. Finally, the designed modulation is detailed.

4.1 Transmitter

The VLC transmitter includes a hardware component, composed of the LED bulb, the Luigino328 board and the interfacing circuit, and a software component, constituted by a script loaded on the Luigino328 to encode and transmit the messages coming from the computer.

Figure 4.1: The block diagram of the implemented transmitter.

- The information encoder is implemented with a Luigino328; it also provides the interface with the computer, which acts as the information source of the system

- The modulator has the role of modulating the transducer light according to the signal generated by the information encoder


- The transducer is realized by adding a connector to a LED bulb in order to interface it with the modulator.

4.1.1 Information Encoder

A specific software has been developed to make the Luigino328 board act as an information encoder. The role of this software is to receive the message from the computer (the information source), encode it into symbols, and output the resulting signal towards the modulator.
The message is composed of a string of ASCII characters, while the output signal is a sequence of square waves sent to the modulator using one of the digital interfaces of the board.

Encoding workflow

The workflow of the encoder can be summarized as follows:

1. The program waits for a string from the computer

2. Once a string is received, the software reads the first character in the buffer

3. A delimiter is transmitted

4. The first character is passed to the encoding function

5. The values representing the character are read from the lookup table

6. The output generation function is called with the values found

The encoding workflow is represented in Figure 4.2.


Figure 4.2: The flow diagram of the transmitter firmware.


Encoding process

The encoding function implements the conversion of a character coming from the serial interface into a symbol to be transmitted. As will be detailed later in this chapter, the chosen modulation is a variation of the DTMF, specifically developed and optimized for this application. At this stage, it is enough to say that each symbol is represented by two frequencies f_1 and f_2, each one transmitted for an amount of time corresponding to half the camera frame time. From the software point of view each symbol is described by:

- The two half-period durations given by the two frequencies, i.e. t_1 = (1/2) · (1/f_1) and t_2 = (1/2) · (1/f_2)

- The number of half-periods in half a frame duration, i.e. #reps_1 = (t_frame/2) · (1/t_1) and #reps_2 = (t_frame/2) · (1/t_2).

All the values are computed offline, so that the microprocessor does not perform multiplications or divisions at runtime, thus reducing the latency. The t_i and #reps_i values are stored in a lookup table uploaded to the flash memory by exploiting the avr/pgmspace.h library presented in Chapter 3, in order to overcome the RAM limitations.

The two couples of values (t_1, #reps_1 and t_2, #reps_2) are passed to the sender() function, which generates the square waves to be given in output by running a set of for-loops according to the values extracted from the lookup table. A first for-loop is repeated #reps_1 times, and at each iteration it keeps the output pin in the HIGH (even iterations) or LOW (odd iterations) state for a time equal to t_1. The next loop performs the same task, generating the second square wave by using #reps_2 and t_2 as parameters.
Everything is repeated a few times by a third for-loop, to ensure that the whole symbol is captured in a single camera frame at least once, avoiding the problem of the synchronization with the receiver.

Appendix B reports the most relevant sections of the transmitter source code.
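The timing logic described above can also be simulated offline. The following is a hypothetical Python sketch (not the actual Appendix B firmware), assuming a 30 fps camera:

```python
# Hypothetical sketch of the offline precomputation and of the sender loop.
# A 30 fps camera (t_frame ~ 33.3 ms) is assumed for illustration.

F_FRAME = 30.0
T_FRAME = 1.0 / F_FRAME

def precompute(f):
    """Offline lookup-table values: half-period duration and repetitions."""
    t_half = 1.0 / (2.0 * f)                 # t_i = (1/2) * (1/f_i)
    reps = round((T_FRAME / 2.0) / t_half)   # half-periods in half a frame
    return t_half, reps

def sender(f1, f2, repetitions=2):
    """Return a list of (pin_level, duration) pairs, mimicking the nested
    for-loops that toggle the output pin (no real-time delays here)."""
    timeline = []
    for _ in range(repetitions):             # outer loop: repeat the symbol
        for t, reps in (precompute(f1), precompute(f2)):
            for i in range(reps):            # HIGH on even, LOW on odd steps
                timeline.append((1 if i % 2 == 0 else 0, t))
    return timeline

tl = sender(1800, 3000)
total = sum(d for _, d in tl)
# each repetition spans about one frame time (half a frame per frequency)
assert abs(total - 2 * T_FRAME) < 1e-3
```

Precomputing `t_half` and `reps` offline mirrors the thesis' design choice of avoiding runtime divisions on the microcontroller.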

4.1.2 Modulator

The modulator must adapt the encoder output to the transducer, in order to make the light vary according to the signal generated by the microprocessor.


As anticipated in Chapter 3, the light bulb circuit implements protections against unexpected behavior of the light bulb itself, and the modulator must not activate them. The BP9916C, in particular, reacts by varying the PWM period: if a short circuit is detected, the working frequency drops to 3 kHz, which falls in the range of frequencies useful for VLC transmissions.

Interfacing with the encoder

In order to cope with the encoder specifications, an opto-coupler is used to isolate the light bulb section of the modulator from the Luigino328 outputs. An opto-coupler consists of an LED on the input side and a phototransistor on the output side: the light produced by the input signal generates a current flow through the phototransistor, while keeping the input and the output isolated up to tens of volts.

In a commercial implementation, keeping the high-power section of the transmitter circuit isolated from the encoder produces an interesting benefit: a VLC-ready lamp built with this design can host both the modulator and the transducer in the same package, i.e. the light bulb case, providing only a 2-pin, low-voltage connector to safely and easily connect an external encoder. Then, a few encoders can be deployed in a room or in a building to drive several VLC-ready luminaires.

Interfacing with the LED bulb

Making the transducer follow the signal coming from the Luigino328 by directly acting on the main AC power of the bulb (powering it on or off according to the signal) is not convenient, since the integrated circuit requires a startup time to reach the steady working condition, limiting the maximum frequency available for transmission.

The other possibility is to act between the control section and the load section of the bulb. The developed solution exploits a MOSFET transistor to force the current towards a fake load circuit instead of the lamp's LEDs, in order to keep both the voltage and the current absorption at a level that does not trigger the integrated circuit protections. In particular, a resistive load is connected in parallel to the load section, and the MOSFET transistor diverts the current toward this load according to the signal generated by the Luigino328, shutting down the LEDs.

The circuit has been designed so as to preserve the normal behavior of the luminaire: when no information is transmitted, the transistor keeps the resistive load isolated from the bulb, so that the light is steadily on; on the contrary, when information is transmitted, the MOSFET reacts to the square wave by diverting the current from the LEDs to the resistive load. The logical NOT behavior so obtained does not prevent the proper operation of the transmitter: in fact, since the modulation is frequency-based, the resulting signal is the original signal with a phase shift of π.
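The claim that this inversion is harmless to a frequency-based modulation can be verified numerically: the magnitude spectrum of a square wave and of its logical NOT differ at most in the DC bin. A naive DFT (an illustrative stand-in for the receiver's FFT) keeps the sketch self-contained:

```python
import math

def dft_mag(x):
    """Naive DFT magnitude, enough for a short illustrative signal."""
    N = len(x)
    mags = []
    for k in range(N):
        re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        mags.append(math.hypot(re, im))
    return mags

N = 64
square = [1.0 if (n // 4) % 2 == 0 else 0.0 for n in range(N)]  # period of 8 samples
inverted = [1.0 - s for s in square]  # the logical-NOT output of the MOSFET stage

m1, m2 = dft_mag(square), dft_mag(inverted)
# Every non-DC bin is identical: the inversion only flips the phase by pi
assert all(abs(a - b) < 1e-9 for a, b in zip(m1[1:], m2[1:]))
```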

The modulator schematics are presented in Figure 4.3, while the circuit derivation is reported in Appendix A.

Figure 4.3: The schematics of the modulator developed for this work. On the left side, the connections to the load section of the light bulb (D1).

4.1.3 Transducer

The transducer section is composed of the light bulb presented in Chapter 3, with a connector soldered in parallel to the LEDs, as in Figure 4.4, where it is possible to see the two wires that have been soldered to connect the encoder. Two other wires have been added to connect the LED section for testing purposes.


Figure 4.4: Modified lamp with connections for the transmitter.

4.2 Receiver

The receiver has been implemented with an Android smartphone equipped with a 12-Megapixel CMOS camera, specifically a Motorola Moto Z2 Play. A native Android application has been developed using Microsoft Visual Studio 2017 as the development environment.

The Android application provides a complete testing environment. Thus, it is not intended for an end-user but rather for a technical user, who can make use of the following features:

- Different ways to operate:

  – Stand-alone VLC receiver and decoder

  – Stand-alone VLC receiver and decoder, plus the possibility to send the result of the Fast Fourier Transform (FFT) live processing to a computer via Wi-Fi

  – VLC receiver, plus the possibility to transmit the raw VLC signal live to a computer via Wi-Fi, without performing the decoding

- All the camera parameters are available to the user and can be edited at runtime

- The received signal can be processed and decoded, and the resulting message can be examined

4.2.1 Application workflow

This section focuses only on the receiver-related features, from the camera output to the symbol decoding. All the other features developed for testing purposes are not investigated here, along with the computer-side scripts implemented to interact with the Android application.

The main application workflow is the following:

Camera API → Preview Callback → Signal Processing → Result Activity, with optional send-to-PC branches after the acquisition, the signal recovery and the FFT processing stages.

Figure 4.5: Receiver application workflow.

The tasks performed by the application require a lot of processing time, so, in order not to lose data nor to get the device stuck, most of the operations are performed in parallel. Thus, the device can start the acquisition and the processing of a frame without waiting for the completion of the previous ones. However, there are two minor drawbacks with this approach: first, the more parallel tasks are running, the slower the device becomes and the higher the power consumption is; second, there is no guarantee on the task completion order, so an indexing of the results is necessary in order not to lose the original message sequence.


Each receiver task performs the operations listed below and detailed in the next

section:

1. Image acquisition operated through the camera preview

2. Image processing to obtain a monodimensional signal

3. Signal preprocessing to reconstruct the original square wave

4. FFT and peak detection, where the signal is analyzed

5. Harmonic filtering and symbol decoding to obtain the original message

At the end of all the tasks and the reordering of the frames the decoded message

is presented to the user.

4.2.2 From the image to the message

Image acquisition

The first step, i.e. the image acquisition, should be carried out as fast as possible, in order to minimize the inter-frame interval ∆t_IFI. Thus, images are taken from the live preview: despite the reduced quality (the maximum preview resolution for the device under test is 1280x960), the acquisition process is faster because it is designed for live processing (e.g. augmented reality or face detection) and the risk of reducing the framerate is lower.

Live preview images are acquired in the YUV format presented in Section 2.2.1, thus, before proceeding to the next step, the color values are discarded by windowing the image array.

Reducing the impact of the environment

As described in Chapter 2, the CMOS sensor scans the environment row by row, so that the resulting image is a 2D matrix of pixels in which the signal is sampled only along one dimension.
The first thing to do is to obtain a 1D signal from the 2D matrix: while the easiest way is to pick only a column of the image, better results are achieved by averaging each row to extrapolate one sample, which reduces most of the noise present in the original image, thus giving the user more freedom when handling the smartphone.


The signal is now ready to be preprocessed and it is windowed to the closest power

of 2 to compute the FFT.
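A minimal sketch of these two steps — row averaging and power-of-two windowing — assuming "closest power of 2" means truncation to the largest power of two that fits:

```python
# Hypothetical sketch: collapse the 2D luminance matrix to a 1D signal by
# averaging each row, then window it to a power of two for the FFT.

def rows_to_signal(frame):
    """Average each row of the luminance matrix into one sample."""
    return [sum(row) / len(row) for row in frame]

def window_pow2(signal):
    """Truncate to the largest power of two that fits (assumption)."""
    n = 1
    while n * 2 <= len(signal):
        n *= 2
    return signal[:n]

# Toy 6x4 frame: rows alternate bright/dark, as left by the rolling shutter
frame = [[200] * 4, [50] * 4, [200] * 4, [50] * 4, [200] * 4, [50] * 4]
sig = rows_to_signal(frame)
assert sig == [200.0, 50.0, 200.0, 50.0, 200.0, 50.0]
assert len(window_pow2(sig)) == 4  # largest power of 2 not exceeding 6
```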

Signal pre-processing

The overlap in the signal acquisition caused by the rolling shutter (see Section 2.1.1) can be described by a low-pass FIR filter like the one presented in Figure 4.6:

Figure 4.6: FIR representation of the rolling shutter effect on the signal.

where δ = ∆t_RST, and the gain factor α_n,i < 1 is proportional to the sampling overlap duration t_ov,n,i = ∆t_E − (n − i) · ∆t_RST between rows n and i. As a consequence, the sampled signal is no longer a square wave, and some artifacts can be generated during the averaging process carried out in the previous step [38]. Figure 4.7 shows an example of the signal deformation observed during the experiments.
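As a toy illustration of this low-pass behavior, a short moving-average FIR (uniform taps α, an assumption made for simplicity) visibly smooths the edges of an ideal square wave:

```python
# Hypothetical sketch: the exposure overlap between adjacent rows acts like a
# short FIR filter on the ideal square wave; uniform taps are assumed here.

def fir(signal, taps):
    """Causal FIR: out[n] = sum_i taps[i] * signal[n - i]."""
    out = []
    for n in range(len(signal)):
        acc = sum(taps[i] * signal[n - i]
                  for i in range(len(taps)) if n - i >= 0)
        out.append(acc)
    return out

square = [1.0 if (n // 8) % 2 == 0 else 0.0 for n in range(64)]
blurred = fir(square, [0.25, 0.25, 0.25, 0.25])  # 4-row exposure overlap
# Edges are no longer sharp: intermediate values appear between 0 and 1
assert any(0.2 < v < 0.8 for v in blurred)
# Flat regions keep their level
assert abs(blurred[7] - 1.0) < 1e-9
```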

The preprocessing of the signal performs the square wave recovery and, in the meantime, removes the low-pass component (noticeable in Figure 4.7) due to the reflection from the surface, as investigated in the next chapter.
The recovery of the square wave is achieved by estimating the derivatives to detect the signal edges. To enhance the performance, the developed algorithm also relies on the concavity information provided by the second derivative, giving in output a +0.5 value where both derivatives are positive, and −0.5 where both are negative. The resulting signal is a normalized square wave that closely resembles the original transmitted square wave. Figure 4.8 shows a Matlab example of the signal restoration process.
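A hypothetical Python transcription of the restoration idea (the Matlab original is not reproduced here; the handling of samples where the two derivatives disagree, i.e. holding the last level, is an assumption):

```python
def restore(signal):
    """Reconstruct a normalized square wave (+0.5 / -0.5 levels) from a
    low-pass-distorted one, using first- and second-derivative signs."""
    d1 = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]
    out, level = [], -0.5            # assumed: the signal starts at the low level
    for i in range(len(d2)):
        if d1[i] > 0 and d2[i] > 0:
            level = 0.5              # rising edge with upward concavity
        elif d1[i] < 0 and d2[i] < 0:
            level = -0.5             # falling edge with downward concavity
        out.append(level)            # otherwise: hold the last level (assumption)
    return out

# A square wave whose edges have been smeared into ramps by the low-pass effect
distorted = [0, 0, 0.2, 0.8, 1, 1, 1, 0.8, 0.2, 0, 0, 0.2, 0.8, 1, 1]
out = restore(distorted)
assert out[1:6] == [0.5] * 5 and out[6:10] == [-0.5] * 4
```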


Figure 4.7: Example of square wave deformation captured in a frame.

Figure 4.8: Square wave restoration process: the signal is normalized, then derivatives are computed and, based on that, the original square wave is reconstructed.


Detecting the frequencies

The receiver must detect the two frequencies encoded in the recovered square wave. Thus, the FFT is performed, and an adaptive threshold is applied to the resulting power distribution function (PDF) in order to detect all the peaks.

Figure 4.9: FFT of a received symbol (‘a’). The symbol is described by two frequencies (bins 23 and 30) that must be recognized by the receiver.

All the frequency indexes corresponding to the peaks above the threshold are collected in an array f_peaks[], which must then be filtered from the harmonics produced by the square waves:

f_P,n = (2n + 1) f_0    (4.1)

where f_0 is the fundamental frequency and the f_P,n are the harmonics generating false positives when thresholding the PDF (n > 0).
To do that, f_peaks[] is processed so that, when the i-th element f_peaks[i] is very close to an odd harmonic of f_0, f_peaks[i] is removed from the array. The result is then composed of two elements, i.e. the two fundamental frequencies representing the symbol.
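A sketch of this harmonic filtering on the peak list, using the fundamental bins of Figure 4.9; the harmonic bin positions and the one-bin tolerance are illustrative assumptions:

```python
# Hypothetical sketch: drop FFT peaks sitting on odd harmonics (Eq. 4.1)
# of an already-accepted fundamental; a tolerance of one bin is assumed.

def filter_harmonics(f_peaks, tol=1):
    fundamentals = []
    for f in sorted(f_peaks):
        is_harmonic = any(
            abs(f - (2 * n + 1) * f0) <= tol
            for f0 in fundamentals for n in range(1, f // f0 + 1))
        if not is_harmonic:
            fundamentals.append(f)
    return fundamentals

# Symbol 'a' of Figure 4.9 has fundamentals at bins 23 and 30; the square
# waves would also raise peaks near bins 69 (3*23) and 90 (3*30)
assert filter_harmonics([23, 30, 69, 90]) == [23, 30]
```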


Message decoding

Each symbol is preceded by a preamble, constituted by a couple of frequencies that marks the beginning of a new symbol. Thus, the receiver first detects the preamble, then starts processing the next frames to find a possible character and, once a character is decoded, skips decoding the following frames until a new preamble is detected (preamble expiration).
Character decoding is the dual of the encoding process performed by the transmitter, and it is implemented by using a lookup table.

Preamble role The preamble and its “expiration” are very important to make the system less prone to errors due to the absence of synchronization: the transmitter must repeat the same transmission for a time longer than ∆t_IFI + ∆t_frame and, as a consequence, two consecutive frames can capture the same symbol without the receiver knowing whether the transmitter sent it twice or not. Moreover, a fixed preamble resets the exposure compensation before each symbol transmission, mitigating the frequency-shift problem due to different exposure times between frames.
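The preamble-gated decoding loop can be sketched as a small state machine; the preamble couple and the lookup entries below are invented for illustration:

```python
# Hypothetical sketch of the decoding loop: wait for the preamble couple,
# decode the next recognized symbol, then ignore repeated frames until a
# new preamble arrives. All frequency values are illustrative.

PREAMBLE = frozenset({10, 14})
LOOKUP = {frozenset({23, 30}): 'a', frozenset({25, 33}): 'b'}

def decode_stream(frames):
    message, armed = [], False
    for couple in frames:
        couple = frozenset(couple)
        if couple == PREAMBLE:
            armed = True              # a new symbol is announced
        elif armed and couple in LOOKUP:
            message.append(LOOKUP[couple])
            armed = False             # preamble "expired": skip repeats
    return ''.join(message)

# The transmitter repeats each couple, so duplicates must not double-decode
frames = [(10, 14), (23, 30), (23, 30), (10, 14), (25, 33), (25, 33)]
assert decode_stream(frames) == 'ab'
```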

4.3 Modulation

For this project a specific modulation has been developed, in order to overcome

the limitations introduced by the camera, the environment and the transmitter

technology.

The environment non-idealities, i.e. a non-flat, poorly reflective or glossy surface that degrades the diffuse reflection of the light, require a certain level of redundancy to help the signal recovery when some areas of the image cause signal deformation.

On the transmitter side, some limitations are introduced by the technology involved: the Luigino328 can output square waves but not analog signals, which in any case the modulator and the transducer would not be able to reproduce properly.

4.3.1 Simil-DTMF

The implemented solution addresses the issues of FSK presented in Chapter 3 by introducing a variation of the DTMF modulation.


Figure 4.10: Frequency representation of a DTMF. Credits to [39]

In order to optimize the spectrum usage, each symbol is described by two frequencies, following the same concept as DTMF but exploiting the whole spectrum. The conducted tests have shown that the useful portion of the spectrum consists of up to 80 bins (beyond the 80th bin the received signal is almost undetectable), so that, by keeping on average two bins as guard bands, it is possible to achieve up to 660 different symbols, as surveyed during this work (Figure 4.11).

Implementation

Each couple of frequencies in the spectrum has been analyzed, in order to select the best performing ones (referring to Figure 4.11, the green slots) and to define the number of achievable symbols.

Figure 4.11: Each frequency couple has been tested. Columns represent, for each couple, the lower frequency.

The implemented solution exploits the framing behavior of the camera, since the frames are processed individually: if two different square waves fit into the same frame, they are detected in the same FFT as if they were transmitted in parallel. Thus, the transmitter keeps alternating the two frequencies for the whole symbol time t_S ≥ t_frame. Moreover, the fast alternation of the two frequencies mitigates the light flickering effect for both the human eye and the receiver.
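This frame-level effect can be reproduced numerically: serializing two square waves within one frame still yields two distinct FFT peaks. The bin numbers and the naive DFT below are illustrative:

```python
import math

# Hypothetical sketch: two square waves sent one after the other within a
# single frame still produce two distinct peaks in that frame's spectrum,
# which is what the simil-DTMF receiver exploits.

def dft_mag(x):
    """Naive DFT magnitude over the positive-frequency bins."""
    N = len(x)
    mags = []
    for k in range(N // 2):
        re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        mags.append(math.hypot(re, im))
    return mags

N = 256
k1, k2 = 12, 20   # tone frequencies in cycles per frame (illustrative)
half = N // 2
frame = [0.5 if math.sin(2 * math.pi * k1 * n / N) >= 0 else -0.5
         for n in range(half)]
frame += [0.5 if math.sin(2 * math.pi * k2 * n / N) >= 0 else -0.5
          for n in range(half, N)]

mags = dft_mag(frame)
peaks = sorted(range(1, N // 2), key=lambda k: mags[k], reverse=True)[:2]
# Both fundamentals dominate the single-frame spectrum
assert sorted(peaks) == [k1, k2]
```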

4.4 Conclusion

The work resulted in a fully operational VLC system.

Starting from a commercial light bulb, a transmitter has been implemented by developing a modulator to connect the lamp to an Arduino-based information encoder able to receive information from the computer. To convert data into symbols and vice versa, a lookup table has been used. The modulator has been designed to cope with the light bulb protections and to protect the encoder from the high currents of the transducer. No changes have been made to the lamp except for the introduction of a connector for the modulator, an approach that can provide a solution for commercial implementations of VLC-ready LED bulbs.

The transmitted information is received by a mid-range smartphone running an application that captures the signal from the camera, estimates the transmitted wave, processes it with a Fast Fourier Transform, and identifies and decodes the message. The receiver application has been developed in C#, and all the code except the camera acquisition is cross-platform, so it can be easily adapted to non-Android devices (e.g. iOS-based) and also to computers equipped with a webcam.

The communication between the transmitter and the receiver has been achieved by developing a variation of the Dual Tone Multi-Frequency modulation. The exploitation of square waves, the number of frequencies used and the serialization of the signals in the same time slot have been necessary in order to increase the total number of symbols and to overcome the transmitter limitations.


Chapter 5

System testing and performance evaluation

Several tests have been conducted to estimate the system performance in terms of

error rate and to verify the system resiliency to distance, presence of interference

and different reflection conditions in the field of view (FOV) of the camera, i.e.

non homogeneous surfaces or presence of objects between the receiver and the

reflection surface.

First, the experimental set-up is presented. Then the conducted experiments are reviewed: the path loss characterization, the SER estimation against the light intensity, and the simulation of different environments to stress the system. Finally, the results are discussed.

5.1 Experimental set-up

The system has been developed for indoor applications, and all the tests have been designed to mimic real situations. Thus, most of the tests have been conducted in different light conditions, spanning from a completely dark environment to daylight.

Tests have been performed using the following equipment:

- Transmitter: composed of a 7W modified light bulb, an external modulator circuit, a Luigino328 board and a computer running the Arduino IDE and Matlab R2017a

- Receiver: a Motorola Moto Z2 Play running the application purposely developed during this study, with a preview resolution of 1280x720 pixels

- Surfaces and scene: a flat white wall and a white sheet of paper acting as different reflective surfaces, along with small objects, a ceramic floor and a parquet floor to simulate different types of environments

All the conducted experiments have the following set-up:

(a) Picture of the experimental environment. (b) Scheme of the experiment setup.

Figure 5.1: The experiment environment and setup.

The transmitter is connected to the information source, i.e. the computer running the Arduino serial monitor, via a USB cable between the computer and the Luigino328, while the lamp points towards the wall. The receiver is connected to the computer through the Wi-Fi network, and the device camera aims at the same wall as the transmitter.

5.2 Experiments

5.2.1 Signal propagation

Different tests have been conducted to characterize the system with respect to the

path loss and the reflection characteristics of the surfaces. Here, the light decay

with respect to the distance is presented for both LOS and NLOS conditions; then

the reflection coefficient derivation for the testing surface follows, along with the observation of the light spatial distribution over the surface.
For this set of experiments a white matte wall has been used as reflection surface in order to minimize the signal distortion, and the measurements have been taken in a dark environment to avoid any interference.

Light attenuation

Signal attenuation has been estimated in both LOS and NLOS conditions in free

space propagation. The LOS condition has been achieved by pointing the lux

meter towards the light source, while in NLOS condition the lux meter measured

the light reflected by the wall.

Starting from a distance of 10cm from the transmitter, the lux amount has been measured every 10cm up to a distance of 150cm in LOS condition. As can be observed in Figure 5.2, the light decays according to the inverse square of the distance (Inverse Square Law), as also reported in [40], where the characterization of a LOS VLC channel is presented.

Figure 5.2: Measured received luminance (crosses) and inverse square fitting function (red line) in LOS condition
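The inverse-square behaviour in Figure 5.2 corresponds to fitting the single coefficient A of L(d) = A/d^2 by least squares; the sketch below uses made-up illuminance values, not the measured data:

```python
import numpy as np

# Hypothetical LOS measurements: distance (m) -> illuminance (lux)
d = np.array([0.1, 0.2, 0.4, 0.8, 1.5])
lux = np.array([4000.0, 1010.0, 248.0, 63.0, 18.0])

# L(d) = A / d^2 is linear in A, so the least-squares estimate is closed form
x = 1.0 / d**2
A = np.dot(x, lux) / np.dot(x, x)
print(round(A, 2))  # fitted coefficient of the inverse-square decay
```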

The NLOS condition has been evaluated using the setup shown in Figure 5.1(b),

i.e. with the transmitter pointing towards the wall in a fixed position, 20cm

away from the surface, and the lux meter pointing towards the reflection fulcrum.

Measurements have been taken moving the meter away from the reflection surface from 20cm to 150cm in 10cm steps. As shown in Figure 5.3, the light attenuation follows the Inverse Square Law, as in the LOS case.

Figure 5.3: Measured received luminance (crosses) and inverse square fitting function (red line) in NLOS condition

Wall reflection coefficient

The developed system exploits the reflected light to receive information, thus the

channel characterization should include the reflection coefficient estimation for the

chosen reflection surface.

To perform the estimation, the transmitter has been positioned at different dis-

tances from the wall, from 10cm to 150cm, in 10cm steps. For each transmitter

position, the lux meter has been placed first against the wall and pointing toward the light bulb, to measure the incident light LIW, and then 2cm away from the wall but pointing toward the wall itself (i.e. the reflection surface), to collect the reflected light LRW. It has been verified that the distance between the wall and the lux meter did not affect the measurement, since the light variation over 2cm is smaller than the lux meter resolution (1 lux).

In Figure 5.4 it is possible to observe and visually compare the results of each

measurement.


Figure 5.4: Measures of incident (crosses) and reflected (circles) light at different distances

The reflection coefficient ρW of the wall has been evaluated for each transmitter

position i using the formula:

ρW,i = LRW,i / LIW,i    (5.1)

By averaging the results over all the cases the final estimation has been obtained:

ρW = 0.29 (5.2)

The measured reflection coefficient is very low for this type of white paint, which is usually estimated to have ρW = 0.8 [41][42] for a matte, plastered wall: the discrepancy can be explained by the age of the paint on the wall (about 20 years) and by the dust and pollution that the wall has collected over the years.
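Equation 5.1 and the averaging step can be sketched in a few lines; the incident/reflected pairs below are illustrative placeholders, chosen only to land near the measured order of magnitude:

```python
# Hypothetical paired lux-meter readings at each transmitter position i:
incident = [850.0, 420.0, 240.0, 150.0, 100.0]   # L_IW,i (lux)
reflected = [250.0, 120.0, 70.0, 44.0, 29.0]     # L_RW,i (lux)

# Per-position reflection coefficient rho_W,i = L_RW,i / L_IW,i ...
rho = [r / i for i, r in zip(incident, reflected)]
# ... averaged over all positions to obtain the final estimate
rho_w = sum(rho) / len(rho)
print(round(rho_w, 2))  # 0.29 with these placeholder values
```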

Light distribution over the surface

An observation has to be made about the NLOS path: the amount of lux measured contributes to the SNR, and thus to the signal reconstruction operated by the receiver, but, since the receiver is a camera, the distribution of the reflected light in the captured image also affects the final result. In fact, all the pixels in a row are averaged to obtain one single value: thus, when the light produces a very bright but small area on the surface (a spot), as in Figure 5.5.A, the receiver has only a few pixels carrying the useful signal, and they are averaged with the dark area of the image, reducing the SNR. On the other hand, when the light is diffused all over the surface, as in Figure 5.5.B, the whole image contributes positively to the SNR.

Figure 5.5: Signal projection on the wall for A) transmitter close to the surface and B) transmitter far from the surface
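The row-averaging step described above, in which every image row collapses into one sample of the received waveform, can be sketched as follows (the toy frame is an assumption, not real camera data):

```python
import numpy as np

def rows_to_signal(frame):
    """Collapse a grayscale frame (rows x cols) into a 1-D signal with one
    sample per row, obtained by averaging all the pixels in that row."""
    return frame.mean(axis=1)

# Toy frame: a small bright band (the light spot) on a dark background
frame = np.zeros((8, 6))
frame[3:5, :] = 200.0          # only rows 3-4 carry the useful signal
signal = rows_to_signal(frame)
print(signal)  # non-zero only at the spot: few useful samples, low SNR
```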

For a matte surface the spatial distribution of light is mainly affected by the aper-

ture angle of the transmitter, the incident angle of the light hitting the surface,

and the distance between the transmitter and the surface. In this work, the experiments have been carried out using a commercial LED bulb with an aperture of about 120°, positioned so as to resemble a room lighting system with the floor as reflection surface. Other configurations work as well.

To evaluate the spatial distribution of light on the reflection surface, a set of experiments has been carried out by positioning the transmitter at different distances from the wall. Measurements have been taken in LOS condition by placing the lux meter on the wall plane, first at the point of peak light, and then 20, 40 and 60 centimeters away from the fulcrum, as shown in Figure 5.6.

Collected data showed that placing the transmitter close to the wall (i.e., less than 60cm) produces a high signal power in the center of the image and, at the same time, a dark area around the spot, as in Figure 5.5.A. Despite the high light intensity, this condition makes the signal harder to decode because of the lack of samples for the FFT, as explained in 4.2.2, since the signal is windowed in the light spot by the dark region. Moreover, a very bright but small area in the image saturates the camera, which in turn reduces the exposure as explained in Chapter 2, making the dark area even darker.

Figure 5.6: Measurement points on the wall in order to estimate the light dispersion.

Figure 5.7: Measured LOS light intensity on the normal to the surface and at 20, 40, 60cm from the fulcrum for several transmitter distances

Brightness and SER

A set of experiments has been conducted to estimate if and how much the Symbol

Error Rate (SER) value is affected by the amount of lux received. Tests have

been performed in a dark room with the white, flat, matte wall described before

as reflection surface. The experiment set-up is the same as presented in Figure 5.1. To test several brightness values, the transmitter and the receiver have been

positioned at different distances from the wall, with the transmitter behind the receiver to avoid any object in the field of view of the camera, always aiming at the center of the reflection, where more light is reflected. All the tests have been performed in a static situation, i.e. with both the receiver and the transmitter not moving.

For each test case, the lux amount at the receiver has been measured first, and then a random series of 31 characters has been sent several times. After demodulation, the errors have been counted and the SER (Symbol Error Rate) has been estimated as follows:

SER = #SW / LM    (5.3)

where #SW is the number of wrongly demodulated characters and LM is the length of the message. For each test case, the SER has been estimated as the arithmetic average of all the SERs measured in the same light condition. In Figure 5.8, SER values are plotted against the received lux and the distance from the wall.

Figure 5.8: Measured SER against received light intensity and receiver distance from the wall.
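Equation 5.3 and the per-condition averaging can be sketched as below; the messages are invented examples, not the transmitted test strings:

```python
def ser(sent, received):
    """Symbol Error Rate (Eq. 5.3): wrong symbols over message length."""
    assert len(sent) == len(received)
    wrong = sum(1 for s, r in zip(sent, received) if s != r)
    return wrong / len(sent)

# Repeated transmissions of the same message in one light condition
sent = "HELLO VLC"
received_runs = ["HELLO VLC", "HELLO VLX", "HALLO VLC"]
avg_ser = sum(ser(sent, run) for run in received_runs) / len(received_runs)
print(round(avg_ser, 3))
```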


5.2.2 Different room conditions

The last set of experiments aims at verifying the performance of the system in

different environmental conditions that can heavily affect the signal decoding, such as the presence of light interference, different reflection surfaces, or small objects between the receiver and the reflection surface, as can happen in real use cases.

The transmitter has been positioned 80cm from the reflecting surface, and the receiver 50cm away from it.

Environmental light

The following situations have been investigated to analyze the effects of external light interference:

1. Dark room

The transmitter is the only light source in the room. This represents the

ideal working condition.

2. Natural daylight

The main light source of the room is the sunlight coming from the windows.

This situation resembles a more realistic use case.

Figure 5.9: Experiment setup with external light interference.

Non-homogeneous environments

As mentioned above, non-homogeneous reflections strongly affect the receiver capability to decode the signal. A set of experiments has been carried out to evaluate the resiliency of the system against different surfaces and against objects in the FOV that can produce a distorted signal.

1. Flat, white, matte surface

The ideal situation is when the reflection surface is perfectly white, to maximize the reflected signal; flat, to avoid shadows; and matte, to homogeneously diffuse the light all over the surface [43]. This condition can be easily achieved with a white wall, keeping the TX and the RX in front of the surface, as in Figure 5.1(b).

2. Curved, white surface

A non-flat surface tends to generate a shadow that distorts the received signal: the closer the surface area to the light source, the higher the reflected light, as can be observed in Figure 5.10. Furthermore, this condition produces a reflection similar to that of a glossy surface such as marble or ceramic.

Figure 5.10: Average luminance distribution on each row of the image when aiming at a curved surface (white paper) compared to a glossy surface (white ceramic tiles), with both the TX and the RX 40cm from the surface.

3. Non-homogeneous surface

A typical realistic situation occurs when the camera is aiming at an area of the room where small objects are present, or at a “mixed reflecting area” such as the joint between the floor and the wall. The


analysis of the results clarifies why it is important to distinguish these two

cases.

In order to evaluate the system performance, a fixed string has been sent tens of times for each of the different conditions described above, and the arithmetic average has been applied to the results when computing the Symbol Error Rate, to reduce the measurement uncertainty.

Table 5.1 summarizes the estimated SERs for the considered situations. Re-

sults have been rounded to the second decimal position.

Testing condition                                           SER           SER
                                                            (Dark Room)   (Natural daylight)

Flat, White, Opaque                                         0.11          0.09

Curved, White, Opaque
(warp parallel to camera rows)                              0.17          0.07

Curved, White, Opaque
(warp orthogonal to camera rows)                            0.11          0.07

Non homogeneous
(joint between surfaces parallel to camera rows;
dark surface covers 15% of the camera's FOV)                0.20          0.21

Non homogeneous
(joint between surfaces parallel to camera rows;
dark surface covers 50% of the camera's FOV)                0.24          0.33

Non homogeneous
(joint between surfaces orthogonal to camera rows;
dark surface covers 15% of the camera's FOV)                0.20          0.24

Non homogeneous
(joint between surfaces orthogonal to camera rows;
dark surface covers 50% of the camera's FOV)                0.18          0.22

Non homogeneous
(small object in the camera's FOV)                          0.22          0.22

Table 5.1: SER evaluation against different light and surface conditions.

While the SER appears to be high, it is not far from the results presented in similar works such as DynaLight [5], which runs in LOS condition and achieved almost the same accuracy in the presence of non-transmitting (passive) ambient light (95%), and an average accuracy of around 87% across all of its testing situations. These “poor” results should not be surprising, since the system is based on devices not purposely designed for VLC. For example, Luxapose [7] could exploit a more performing camera and a more accurate control over the exposure, the framerate and other camera parameters, achieving a SER in the order of 10^-3 in the worst case.

Non-homogeneous surfaces affect the system much more than the light interference. Here, two different situations have been highlighted: the case where the joint between the dark and the bright surfaces (i.e. between the floor and the wall) is parallel to the camera rows (Figure 5.11(b)), and the case where it is orthogonal (Figure 5.11(a)).

(a) Figure A (b) Figure B

Figure 5.11: Figure A shows an example of different environment reflectivities with the joint perpendicular to the camera rows. Figure B shows an example with the joint parallel to the camera rows.

When the joint between the surfaces is perpendicular to the rows of the camera (Figure 5.11(a)), the averaging process operated by the receiver reduces the signal energy considerably but, on the other hand, the dark region reduces the overall amount of light in the image, forcing the camera auto-compensation algorithm to increase the exposure. The consequence is that the signal oscillations are enhanced in the bright region of the image, becoming more robust against the averaging process. This is confirmed by the observations: the bigger the dark region, the lower the SER. However, the exposure compensation is limited, so if the dark region covers too large an area of the image, the signal is reduced too much by the averaging process and the SER increases.

Let us consider the second case (Figure 5.11(b)): the dark surface has a lower reflectivity, thus the signal samples received in that area of the image will be attenuated, if not missing altogether. The result is a windowing effect on the signal that produces a sinc distortion in the FFT; as a consequence, the thresholding process operated by the receiver is not able to detect the symbol peaks. Natural light does not help at all in this situation: in fact, it increases the overall light level of the brighter region of the image, enhancing the windowing effect.
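The windowing effect can be reproduced numerically: suppressing part of the rows of a frame shrinks the tone's FFT peak (here the suppression is total; the sampling numbers are illustrative assumptions, not the thesis parameters):

```python
import numpy as np

n, fs, f0 = 720, 21600, 600            # assumed rows/frame, row rate, tone
t = np.arange(n) / fs
tone = np.sign(np.sin(2 * np.pi * f0 * t))   # square-wave signal

windowed = tone.copy()
windowed[: n // 2] = 0.0               # dark region: half the rows carry no signal

bin600 = 20                            # 600 Hz with fs/n = 30 Hz resolution
full_peak = np.abs(np.fft.rfft(tone))[bin600] / n
win_peak = np.abs(np.fft.rfft(windowed))[bin600] / n
print(full_peak > win_peak)            # the dark region shrinks the symbol peak
```

With a softer attenuation the peak still shrinks, and the implicit rectangular window smears energy into neighbouring bins (the sinc distortion), which is what defeats the thresholding step.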

The system seems to behave in the opposite way with curved surfaces: the presence of daylight during the tests decreased the measured SER. However, it has been observed that different warps produce different outcomes, so this result is highly situation dependent.

5.3 Conclusions

The conducted experiments served to characterize the system and to test its behavior in realistic indoor conditions, such as sunlight coming from windows, different materials on which the signal is reflected, non-flat surfaces, and objects within the scene.

Coping with light propagation

From the analysis of the results it is possible to derive the limits and the conditions in which the system can be successfully deployed.
First, the NLOS transmission strongly relies on the reflection coefficient of the surface: the higher the light reflection coefficient ρW, the better the system performance. However, a glossy surface can have ρW tending to 1, yet it produces a spot of light instead of a diffuse reflection. As shown before, such a situation is a problem for the receiver. Thus, another constraint is set on the surface characteristics: the surface has to be as matte as possible, to give rise to a diffuse reflection.


The uniformity of the reflection has also been tested according to the relative position of the surface and the transmitter, showing how a too short distance can generate a light spot as well, which again deteriorates the performance, while the farther the transmitter, the more uniform the diffusion of the light over the surface. For the tested device, 60cm was a sufficient distance to eliminate the spot behavior.

The distance also matters for the signal energy: the light intensity, in both LOS and NLOS conditions, decreases according to the Inverse Square Law, i.e. with the inverse square of the distance. The system has been surveyed over a sufficient range of lux values, so it is possible to estimate the light intensity needed for a successful communication. Experiments showed that 100lux are enough to obtain a SER ≤ 10% with a bit rate of 22.5b/s without using any adaptive algorithm or error recovery. In [13] a NLOS approach has also been tried, but it needed a minimum of 530lux to get a BER ≤ 10% using a BFSK modulation1.

Given the achieved results and the values collected during this thesis, it is possible to derive the requirements needed to make this system work in a real environment. For example, a facility manager could keep track of the temperature, humidity and other details within an office by pointing the smartphone camera at a white tabletop. Suppose this requires a SER ≤ 5%: from the measurement plots it can be observed that the received light must have an intensity greater than 150lux; if the operator is standing, the smartphone is probably held about 20cm above the tabletop, tables being usually 80-90cm high. Thus, the light intensity on the tabletop should amount to about 500lux. Once the ρW of the tabletop is known, it is possible to compute the necessary incident light intensity, and thus the power needed at the transmitter on the ceiling to achieve the desired SER. Note that, with a proper transmitter design, multiple transducers (i.e. LED bulbs) can be connected to the same encoder to transmit the same signal in a synchronized way, enhancing the signal power without the need for a single, high power lamp.

1 It was not possible to derive the SER from the results presented in [13], but it is worth recalling that SER ≥ BER.
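The tabletop example can be turned into a back-of-the-envelope link budget; the 150 lux and 500 lux figures come from the text, while the lamp height is an illustrative assumption and the wall's measured ρW = 0.29 is reused only as a placeholder for the unknown tabletop coefficient:

```python
# The receiver needs >= 150 lux for SER <= 5%; held ~20 cm above the table,
# this translates (per the text) into ~500 lux of reflected light on the tabletop.
reflected_needed = 500.0     # lux of reflected light on the tabletop
rho_w = 0.29                 # placeholder: the wall coefficient measured above
incident_needed = reflected_needed / rho_w    # lux that must hit the tabletop

# Inverse Square Law: a source giving L lux at 1 m gives L / d^2 at d metres
lamp_height = 1.6            # assumed ceiling-to-tabletop distance (m)
source_lux_at_1m = incident_needed * lamp_height ** 2
print(round(incident_needed), round(source_lux_at_1m))
```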


Interferences, distortions and unclean FOV

Different tests have been performed during this work to provide a general idea of the performance that can be expected in real situations. The focus has been placed on the condition of the FOV, such as the presence of surfaces with different reflection coefficients in the same camera frame or the presence of small objects, on the interference from “passive” (i.e. non-transmitting) light, and on non-flat surfaces.

Objects in the FOV can be challenging for the system, resulting in a SER of 22%, even if this specific value cannot be taken as a strict reference, since it obviously depends on the object size. The worst results have been experienced in the case of different surfaces (SER = 33% when more than half of the FOV is occupied by the surface with the lower ρW).
Warped surfaces showed much better results in the presence of sunlight (SER = 7%), but this result cannot be generalized to all curved surfaces, since they can be very different from those used in the experiments.

It is interesting to observe that the daylight condition does not affect the system performance too much in the case of a flat homogeneous surface, even in the presence of small objects between the camera and the surface: in fact, the difference in terms of error rate amounts to a maximum of 2%. However, the results presented here have been measured in a situation where the daylight was not extremely strong and the main light source was the transmitter: otherwise, the zero-level of the signal increases while the high level stays the same, making the signal harder to demodulate.

5.3.1 System performance

As anticipated in Chapter 3, the spectrum survey showed the possibility to achieve up to 660 different symbols. Consequently, 512 symbols can be used to represent 9 bits each, while the remaining ones can be exploited for further improvements such as service communication or channel estimation.

For this work, the spectrum portion freed by the unused symbols has been exploited to optimize the guard bands, improving the performance, while the 9 bits per symbol are used to transmit the whole Extended-ASCII character set along with special characters, several prepositions, articles and a list of the most common words from both the English and Italian languages.


The theoretical data rate limit for the chosen approach is 15 symbols per second, assuming the system is able to catch a symbol (preamble and payload) in exactly two consecutive frames. Such a high data rate is unrealistic considering the absence of synchronization and the challenges faced by the system in practical situations; thus, redundancy has been introduced by increasing the symbol duration, to give the receiver time to decode it. The performed tests showed that transmitting 2.5 symbols per second, i.e. one sixth of the theoretical limit, was the highest data rate at which the system proved reliable in most situations without relying on error correction or adaptive algorithms.
Despite not being an impressive data rate, 2.5 symbols per second translates into a bit rate of 22.5 b/s, which is more than twice the capacity of a similar NLOS system [13], which provides a throughput of 10 b/s using a BFSK modulation scheme.
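The figures above can be cross-checked with a few lines of arithmetic (660 symbols, 9 bits per symbol and 2.5 symbols per second are the values stated in the text):

```python
import math

total_symbols = 660
bits_per_symbol = int(math.log2(total_symbols))   # floor(log2(660)) = 9
data_symbols = 2 ** bits_per_symbol               # 512 symbols carry data
spare_symbols = total_symbols - data_symbols      # left for service purposes

symbol_rate = 2.5                                 # symbols/s (tested rate)
bit_rate = symbol_rate * bits_per_symbol          # 22.5 b/s
print(bits_per_symbol, spare_symbols, bit_rate)
```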

For the sake of completeness, the performance of some LOS systems available in the literature is reported in the following, without attempting any definite comparison, which would be very difficult, if possible at all.
DynaLight [5] ranges from a minimum bit rate of 2 b/s to a maximum of 8 b/s with an OOK modulation combined with a Manchester encoding, adapting the bit rate to the distance. On the other hand, RollingLight [6] shows an outstanding data rate of 90.6 b/s: the much higher SNR achieved by this LOS system allowed several FSK symbols to be transmitted in a single Tframe and decoded using a YIN-based [44] detection algorithm. Such an approach is unfeasible for NLOS systems, since any noise present in the image can make symbols that occupy only a few rows of the image hardly recognizable, drastically deteriorating the performance.


Chapter 6

Conclusions and Future Work

Recent studies showed a great potential for smartphone-based VLC systems. In fact, smartphones are widespread today, and LED-based lights are rapidly replacing most of the installed luminaires, especially in indoor applications such as schools, offices, public buildings and now also homes. However, obstacles still have to be overcome for the implementation of a reliable smartphone-based VLC solution. In particular, the impossibility of synchronizing the transmitter and the receiver, due to the instability of the camera framerate and the unavailability of any control over it, produces the loss of many packets and forces the system to be highly redundant, which definitely limits the maximum achievable data rate.

Another issue is the recovery of the received signal, considering both the brightness (and thus the SNR) and the number of samples. Most related studies make use of a LOS configuration where receivers rely on the lamp projection on the image sensor to acquire the signal. While the SNR is very high, the projection size in terms of pixels decreases as the distance from the transmitter increases. As a consequence, the number of collectable samples in each frame can be reduced so much that it becomes impossible to decode the signal. Using an uplink channel, implemented through the camera flash, another VLC transmitter [5], an infrared-based channel [4] or even a radiofrequency-based solution, it is possible to develop an algorithm that adapts the transmitter throughput to the actual distance, as in [5].
Moreover, the use of the LOS configuration limits usability, since the user is forced to keep the transmitter in the camera FOV.

This work addresses the topic of smartphone-based VLC systems by proposing a NLOS configuration. This approach allows the camera to collect light from its whole FOV, which means having light over the entire frame: the receiver, and as a consequence the user, is not required to keep a fixed orientation with respect to the transmitter.

Moreover, this work shows how a VLC system can be designed and implemented safely with a few cheap off-the-shelf components: this approach can be an incentive for the market and also, of great importance in our view, for education, i.e. for helping students approach VLC topics experimentally.

During the thesis work a transmitter prototype has been designed and realized using an Arduino-based board, a commercial LED bulb and a few other components; also, a firmware has been implemented to receive and encode data coming from a computer acting as data source.

The transmitter design addressed the encoder-lamp interface following a general approach adaptable to most commercial LED luminaires by adjusting only the electrical characteristics of the modulation circuit, specifically the maximum allowed voltage and current. The interface has been designed so that the modulation circuit can be either enclosed in the lamp case or positioned at the luminaire installation site, requiring only a two-pin low voltage connection with the VLC-ready LED bulb.

As a receiver, a mid-range Android smartphone has been used, with a specifically developed application installed. In designing the software, the focus has been placed on realizing a solution as flexible as possible for developers: most of the code, i.e. everything not strictly related to the Android platform, has been written in C#. Developers can adapt it to any kind of device equipped with a camera, whether it runs Windows, OS X, iOS or Android. Moreover, the software can transmit the received samples over the network to a computer running Matlab for off-line processing, as could be the case in students’ experiments.

The channel has also been characterized in terms of path loss and reflection effects, and the system performance has been tested in different situations that mimic most real use cases. The conducted tests have shown that the developed system performs better than related works using a similar NLOS approach [13]: a received light intensity of just 150lux is enough to achieve a SER lower than 5%, with a bit rate of 22.5b/s, more than double that of [13], where the same SER can be achieved only if more than 525lux are provided. Experiments also highlighted the possible challenges in real applications: interferences, inhomogeneity of the surfaces or an unclean FOV are all conditions that the system can experience, and they have all been investigated in Chapter 5. While the system is still not ready for the market, the results are promising and some application fields can be hypothesized.

6.1 Applications and future works

Receiver side

The receiver can transmit data on an arbitrary UDP port to any IP address in the local area network to which the smartphone is connected via Wi-Fi; thus, the device can be used either as a standalone receiver or as a data collector whose data are then processed by a computer. By means of a simple Matlab script it is possible to mimic a basic VLC oscilloscope for experimentation purposes.
The algorithm that reconstructs the square wave can be further improved by using an adaptive approach, either with external hardware, i.e. a light meter, or by exploiting lightweight image processing.

Moreover, presenting the received data over the camera preview can create inter-

esting scenarios for augmented-reality applications.

In public offices the receiver can be used to track the queue in the background while the user keeps working or surfing the web. Hotels can exploit the system to provide customers with Wi-Fi passwords, useful phone numbers or other service information.

Again, for security applications, the receiver can be triggered by the luminaires to generate a One-Time Password (OTP) for the user to access secured services, as happens with banking operations.

A peculiar application could be the exploitation of the system to send timestamps to multiple smartphones for synchronization purposes: it would then be possible to use several devices to play music together, providing an immersive experience, to record multi-point-of-view videos (mPOVs), giving birth to a new social media experience, or again to reproduce videos on a videowall composed of multiple smartphones.

Transmitter side

On the transmitter side, the open topics are how to deliver data from the source to the encoder or to the lamp, according to the relative placement of all the elements, and how to deploy the system. A possibility is to build VLC-ready LED bulbs

by simply adding a connection for an external encoder. In this way transducers

can be cheap, easily replaceable and encoding-agnostic; a single encoder can then

be used for several luminaires, reducing the complexity of the infrastructure. In

other situations, where each lamp must transmit a different message, a Power

Line Communication (PLC) approach can be exploited by implementing a PLC

receiver and packing the encoder into the LED bulb.

With such a flexible system, the ideal configuration strictly depends on the specific

application. For automation purposes, VLC can be used to acquire useful infor-

mation about the building facilities, such as it happens with industrial Building

Management Systems: in this case in each area of the building a single encoder

can be used to broadcast the same information to several transmitters. On the

other hand, in highly automated offices room-specific messages can be transmit-

ted using an encoder for each room (external or internal to the luminaries), such

as to inform the users about the room scheduling or the available equipments,

as some systems allow to do today through ultrasounds-based systems. It is also

possible to exploit the system to offer positioning services in large buildings, such

as museums, shopping malls or airports: users can verify their own position on a

map by using a mobile application while framing the floor with the camera. Also,

visitors at exhibitions can be enabled to access information related to the stand
they are visiting or to the conference they are attending using local VLC-enabled
luminaires and, by informing the infrastructure about their location, obtain
useful matches with other attendees.

VLC Generation

Manufacturers should give due consideration to the research results about VLC
systems, because of the enormous possibilities that a generation of VLC-compatible
devices could offer to the users and to the future of this promising technology.

While maintaining the general structure of the smartphone camera as it is now,
it should be possible to obtain a VLC-compatible camera if the basic requirement
of fully managing all its parameters, first of all the framerate, whose instability
is the cause of many unpredictable lost packets, were fulfilled. The possibility
to trust and control the framerate allows synchronization and the reduction of
redundancy, boosting the data rate up to the theoretical limit of 15 symbols
per second, i.e. 135 b/s. High-speed cameras can further improve this result
with their higher framerate.

Also, cameras should expose through their APIs all the needed parameters
(framerate, exposure, exposure compensation, ISO settings), which at the moment
depend on the specific smartphone model. In this way a standard could be
developed and more people would be able to access the technology.

The introduction of a light sensor on the back of the smartphone can help the
system cope with different SNR conditions, by allowing a fast and lightweight
adaptive algorithm for the signal reconstruction, improving the SER in low-light
conditions.

With such specific features it would be possible to reduce the redundancy in the
signal: the synchronization between the transmission and the frame acquisition,
for example, would allow receiving precisely one symbol per frame, increasing
the data rate to 15 symbols per second, i.e. 135 b/s. Synchronized high-speed
cameras could more than double this value thanks to the higher number of frames
per second. Two further improvements can be introduced for the next smartphone
generation: a high-speed, VLC-dedicated CMOS sensor and a software modification
of the camera drivers, as follows. The introduction of a new sensor is easier
than adding a full camera: for VLC purposes the lens and the color filter are
not relevant since, by removing them, a blurred, grayscale image is obtained
whose only information content is the received signal, which depends only on
the rolling shutter. The CMOS sensor can also perform the conversion from a 2D
to a 1D array, reducing the throughput needed at the output. Combined with the
disabling of picture-related processing, the framerate can be fixed and
increased.
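The 2D-to-1D conversion can be sketched as follows (names and sizes are illustrative, not from the thesis): since the rolling shutter exposes the frame row by row, averaging each row of the grayscale image yields a 1D signal that preserves the received waveform while dividing the output throughput by the row width.

```cpp
#include <vector>
#include <cstdint>
#include <cassert>

// Collapse a W x H grayscale frame into H row means: the rolling shutter
// encodes the VLC signal along rows, so per-row averaging preserves the
// waveform while reducing the data volume by a factor of W.
std::vector<double> rowsToSignal(const std::vector<std::vector<std::uint8_t>>& frame) {
    std::vector<double> signal;
    signal.reserve(frame.size());
    for (const auto& row : frame) {
        double sum = 0.0;
        for (std::uint8_t px : row) sum += px;
        signal.push_back(row.empty() ? 0.0 : sum / row.size());
    }
    return signal;
}
```

Performed directly on the sensor, this reduction would shrink the per-frame output from W x H pixels to H samples.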

From the software point of view, a dedicated pipeline can be provided by the
Operating System for VLC processing, performed at a low level as happens with
other communication peripherals; this speeds up data access and provides the
decoded information to all the running applications, allowing developers to
take advantage of the technology without having to cope with it directly.

73

Appendices


Appendix A

Transmission circuit derivation

In this Appendix the derivation of the circuit values for the encoder is presented.

The encoder has to cope with the constraints anticipated in Chapter 4. Here, the

constraints are summarized to facilitate the reading:

- The Luigino328 output varies from 0 V to 5 V and from 0 mA to 40 mA.
Exceeding these values burns the board.
- The LED section of the lamp has been measured to work at VS = 48 V with a
current IL = 73 mA; however, it has been verified that the system still works
properly with VS = 30 V, and in this condition the fake load dissipates less
power and the components are less stressed.
- The lamp integrated circuit cannot see either a short or an open circuit at
the load section, otherwise the system goes into emergency mode, varying the
PWM period that drives the LEDs or shutting down. As a consequence, the encoder
must integrate seamlessly with the load section to work properly.

Starting from these requirements the circuit presented in Figure A.1 has been
designed, and the following values have been identified from the components'
datasheets.

Fake load: VS = VL + VDS = 30 V; IL = 73 mA
MOSFET: VDS = 0.7 V; VG = 6.6 V
Optocoupler: IA = 1 mA; VC = 1 V
LM317 voltage regulator: VAdj = 1.25 V; VOUT = 25 V
Arduino: VH = 5 V

Figure A.1: The schematics of the modulator developed for this thesis, with voltages (green arrows) and currents (red arrows)

The analysis of the circuit in Figure A.1 requires that the following equations
be satisfied:

Fake load: RL = VL / IL; VL = VS − VDS; PL = VS · IL
MOSFET: RC = VG / IC
Optocoupler: RA = VA / IA
LM317: VOUT = VAdj · (1 + R2 / R1)
Arduino: VC + VA = VH

Solving all these equations leads to the final circuit design; however, some
modifications have been made with respect to the analytical results, to reduce
the voltage drop on VS between the LED ON and OFF conditions and to adapt the
ideal values to the components available off the shelf. Also, the resistor RG
has been added between the MOSFET gate and the driving circuit in order to
avoid unexpected current flows.
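As a numerical cross-check, the equations can be evaluated with the measured values (a sketch; the helper names are illustrative and not part of the thesis; the load power is computed here on the resistor itself, VL · IL):

```cpp
#include <cmath>
#include <cassert>

// Ideal component values from the Appendix A equations.
// Helper names are illustrative, not part of the thesis.
double fakeLoadR(double VS, double VDS, double IL) { return (VS - VDS) / IL; }        // RL = VL / IL
double fakeLoadP(double VS, double VDS, double IL) { return (VS - VDS) * IL; }        // power on the load resistor, VL * IL
double optoR(double VH, double VC, double IA)      { return (VH - VC) / IA; }         // RA = VA / IA, with VA = VH - VC
double lm317Vout(double VAdj, double R1, double R2){ return VAdj * (1.0 + R2 / R1); } // LM317 output voltage
```

With VS = 37.2 V these give RL = 500 Ω and PL ≈ 2.67 W; the ideal RA = 4 kΩ is approximated by the off-the-shelf 3.8 kΩ, and R1 = 39 Ω with R2 = 751 Ω sets VOUT ≈ 25.3 V.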


The implemented circuit presents the following characteristics:

Fake load: VS = 37.2 V; VL = 36.5 V; VDS = 0.7 V; IL = 73 mA; RL = 500 Ω; PL = 2.67 W
MOSFET: RC = 5 kΩ; RG = 39 kΩ
Optocoupler: RA = 3.8 kΩ
LM317: R1 = 39 Ω; R2 = 751 Ω


Appendix B

Transmitter program source

In this Appendix the transmitter software is presented. For brevity, some
non-relevant parts have been omitted.

The analysis starts with the population and management of the program space,
i.e. where the lookup table is stored. The program space is needed to build
large and fast lookup tables that cannot be stored in RAM.
A struct is used to represent each symbol, so that it is easier to write an
algorithm that scans the program space and extracts data from it. Each Symbol
struct is composed of the value field, which is compared to the character coming
from the computer in order to extract the correct item; the delay1 and delay2
fields, which are the half-period durations of the two selected frequencies;
and the rep1 and rep2 fields, which determine the duration of the transmission
for each frequency.

Listing B.1: Program space section

#include <avr/pgmspace.h> // Program space library

/* In order to exploit the program space,
   a struct is used to represent each entry */
typedef struct {
    char value;
    const short unsigned delay1;
    const short unsigned rep1;
    const short unsigned delay2;
    const short unsigned rep2;
} Symbol;

// Lookup table:
const Symbol LookUpSym[] PROGMEM = {
    { 0x41, 482, 24, 360, 32 }, // A
    { 0x42, 482, 24, 330, 32 }, // B
    ...
    ...
    { 0x13, 210, 36, 119, 60 }, // \n
};
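Since delay1 and delay2 are half-period durations in microseconds, each lookup-table entry corresponds to a pair of tone frequencies f = 10^6 / (2 · delay). A minimal sketch (the helper name is illustrative):

```cpp
#include <cmath>
#include <cassert>

// Tone frequency in Hz for a half-period expressed in microseconds.
double toneFrequency(unsigned halfPeriodUs) {
    return 1e6 / (2.0 * halfPeriodUs);
}
```

For the 'A' entry, delay1 = 482 µs and delay2 = 360 µs correspond to roughly 1037 Hz and 1389 Hz; each tone lasts rep · delay microseconds, e.g. 24 · 482 µs ≈ 11.6 ms for the first tone.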

In the following section the encoding process is performed: first, data are
received from the serial bus; then, for each received character, a delimiter is
sent and the character is encoded.
The encoding process consists of finding the correct lookup-table row and
unpacking all of its information, i.e. the half-period durations and repetitions.

Listing B.2: Encoding section

void loop() {
    if (Serial.available() > 0) {
        sendDelimiter();
        while (Serial.available() > 0) { // Cycle until the buffer is empty
            char inSerial = Serial.read();
            sendDelimiter();
            encode(inSerial);
        }
    }
}

void encode(char c) {
    char P;
    short del1, del2, rep1, rep2;
    for (int i = 0; i < 128; i++) {
        P = (char)pgm_read_byte_near(&LookUpSym[i].value);
        if (P == c) {
            del1 = (short)pgm_read_word(&LookUpSym[i].delay1);
            rep1 = (short)pgm_read_word(&LookUpSym[i].rep1);
            del2 = (short)pgm_read_word(&LookUpSym[i].delay2);
            rep2 = (short)pgm_read_word(&LookUpSym[i].rep2);
            sender(del1, rep1, del2, rep2);
            break;
        }
    }
}

Transmitting the information is a matter of cycles that follow the parameters
collected by the encoding section:

Listing B.3: Signal output

void sender(int delay1, int rep1, int delay2, int rep2) {
    for (int j = 0; j < 12; j++) {
        for (int i = 0; i < rep1; i++) {
            if (i % 2 == 0) {
                digitalWrite(PinOut, HIGH);
                digitalWrite(LED_L1, HIGH);
            } else {
                digitalWrite(PinOut, LOW);
                digitalWrite(LED_L1, LOW);
            }
            delayMicroseconds(delay1);
        }
        for (int i = 0; i < rep2; i++) {
            if (i % 2 == 0) {
                digitalWrite(PinOut, HIGH);
                digitalWrite(LED_L1, HIGH);
            } else {
                digitalWrite(PinOut, LOW);
                digitalWrite(LED_L1, LOW);
            }
            delayMicroseconds(delay2);
        }
    }
    digitalWrite(PinOut, HIGH);
}

void sendDelimiter() { // The delimiter is fixed
    for (int i = 1; i < 300; i++) {
        if (i % 2 == 0) {
            digitalWrite(PinOut, HIGH);
            digitalWrite(LED_L1, HIGH);
        } else {
            digitalWrite(PinOut, LOW);
            digitalWrite(LED_L1, LOW);
        }
        delayMicroseconds(640);
    }
}
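The fixed delimiter amounts to a tone of its own: 299 toggles with a 640 µs half-period, i.e. a 781.25 Hz square wave lasting about 191 ms. This timing budget can be sketched as follows (helper names are illustrative):

```cpp
#include <cmath>
#include <cassert>

// Timing of the fixed delimiter: 299 output toggles, 640 us half-period each.
double delimiterFreqHz()   { return 1e6 / (2.0 * 640.0); }  // 781.25 Hz
double delimiterLengthMs() { return 299 * 640.0 / 1000.0; } // ~191.4 ms
```

Keeping the delimiter frequency well separated from the symbol frequencies in the lookup table is what lets the receiver detect symbol boundaries.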


Finally, the setup section is presented.

Listing B.4: System setup

int PinOut = 2;  // Transmission Pin
int LED_L1 = 13; // Feedback LED

void setup() {
    pinMode(PinOut, OUTPUT);
    pinMode(LED_L1, OUTPUT);
    digitalWrite(PinOut, HIGH);
    Serial.begin(9600);
    Serial.println("Ready");
    sendDelimiter();
}
