
Real-Time Video and Image Processing for Object Tracking using DaVinci Processor

A dissertation submitted in partial fulfillment of

the requirements for the degree of

Master of Technology

by

Badri Narayan Patro (Roll No. 09307903)

Under the guidance of

Prof. V. Rajbabu

DEPARTMENT OF ELECTRICAL ENGINEERING

INDIAN INSTITUTE OF TECHNOLOGY–BOMBAY

July 15, 2012


This work is dedicated to my family and friends.

I am thankful for their motivation and support.


Dissertation Approval

The dissertation entitled

Real-Time Video and Image Processing for Object Tracking using DaVinci Processor

by

Badri Narayan Patro (Roll No. 09307903)

is approved for the degree of

Master of Technology

Examiner Examiner

Guide Chairman

Date:

Place:


Abstract

A video surveillance system is primarily designed to track key objects or people exhibiting suspicious behavior as they move from one position to another, and to record this for possible future use. Object tracking is an essential part of surveillance systems. As part of this project, an algorithm for object tracking in video based on image segmentation and blob detection and identification was implemented on Texas Instruments' (TI's) TMS320DM6437 DaVinci multimedia processor. Using background subtraction, all objects present in the image can be detected irrespective of whether they are moving or not. With the help of image segmentation, the subtracted image is filtered and freed from salt-and-pepper noise. The segmented image is then processed to detect and identify the blobs that are to be tracked. Object tracking is carried out by feature extraction and center-of-mass calculation in the feature space of the image segmentation results of successive frames. Consequently, this algorithm can be applied to multiple moving and still objects in the case of a fixed camera.

In this project we develop and demonstrate a framework for real-time implementation of image and video processing algorithms, such as object tracking and image inversion, using the DM6437 processor. More specifically, we track one or two objects present in the scene captured by a CCD camera that acts as the video input device, with the output displayed on an LCD display. The tracking happens in real time, consuming 30 frames per second (fps), and is robust to background and illumination changes. The performance of single-object tracking using background subtraction and blob detection was very efficient in speed and accuracy compared to a PC (Matlab) implementation of a similar algorithm. Execution times for the different blocks of single-object tracking were estimated using the profiler, and the accuracy of the detection was verified using the debugger provided by TI Code Composer Studio (CCS). We demonstrate that the TMS320DM6437 processor provides at least a ten-times speed-up and is able to track a moving object in real time.


Contents

Abstract ii

List of Figures v

List of Tables vi

List of Abbreviations 1

1 Video and Image Processing Algorithms on TMS320DM6437 1

1.1 Video processing Demos on DM6437 DaVinci processor . . . . . . . . . . . 1

1.1.1 Demo 1 : Video Capture Application: . . . . . . . . . . . . . . . . . . 2

1.1.2 Demo 2 : Video Display Application: . . . . . . . . . . . . . . . . . . 5

1.1.3 Demo 3 : Video Encoder and Decoder Application: . . . . . . . . . . 5

1.1.4 Demo 4 : Video copy : . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.1.5 Demo 5: Video encdec . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.2 Video Preview: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

1.3 Image Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

1.4 JPEG Standard Image Compression on DM6437 Processor . . . . . . . . . . . 18

1.4.1 JPEG Implementation on TMS320DM6437 . . . . . . . . . . . . . . . 18

1.4.2 Profiling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

1.5 Code Composer Studio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

1.5.1 CCS installation and Support . . . . . . . . . . . . . . . . . . . . . . . 25

1.5.2 Useful Types of Files . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

1.5.3 Support Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

1.5.4 Power On Self Test . . . . . . . . . . . . . . . . . . . . . . . . . . . 27


2 Video and Image Processing Algorithms Code Work Flow Diagrams and Profiling 28

2.1 Code Work Flow Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

2.2 Procedure for Profiling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

3 Working code for multiple Object Tracking on DM6437 33


List of Figures

1.1 Video IO Demo file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.2 Video capture pseudo code . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.3 Video capture control flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.4 Video capture application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.5 Video control application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

1.6 Video encode pseudo code . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

1.7 Video Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1.8 Video Encoder and decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1.9 Step 1 video preview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1.10 Step 2 video preview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

1.11 Step 3 video preview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

1.12 Video preview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

1.13 Image inversion steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

1.14 Steps for Implementing JPEG Algorithm on TMS320DM6437 Platform . . . . 19

1.15 4:2:2 Subsample YUV Format . . . . . . . . . . . . . . . . . . . . . . . . . . 20

1.16 IDE for CCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.1 Video encdec file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2.2 Video enc_decoder function . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

2.3 Video encoder and decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31


List of Tables

1.1 Profiler data for Different function . . . . . . . . . . . . . . . . . . . . . . . . 23


Chapter 1

Video and Image Processing Algorithms

on TMS320DM6437

The software development tool Code Composer Studio (CCS) is an integrated development environment. The development system works in two modes, simulator and emulator. In the emulator case, CCS is connected to the target board through an embedded emulation interface. CCS not only allows generating, compiling, assembling, and linking the source files, but also gives access to debugging through profiling, software and hardware breakpoints, and direct access to memory and even control registers. CCS also defines the memory map and writes different sections of code into these memory blocks.

Texas Instruments (TI) also provides a collection of optimized image/video processing

functions (IMGLIB). These library functions include C-callable, assembly optimized image/video

processing routines. These functions are typically used in computationally intensive real-time

applications where optimal execution speed is critical [24].

Finally, for fast implementation one can use Matlab/Simulink. A target support library for TC6 is available for use with the TI DM6437. Although such an implementation is neither optimized nor efficient, it is good for algorithm validation and proof-of-concept scenarios.

1.1 Video processing Demos on DM6437 DaVinci processor

Figure 1.1 shows four different modules. The first module is the video recorder: input video is captured by a CCD camera, the FVID video driver is called, the video frame is read from the input buffer and encoded, and the encoded video data is written to a file. The second module is video playback: a video file is read from the hard drive, decoded, and placed into the output buffer for display through the video output. The third module is video loop-through with encoder and decoder: input video is captured by the CCD camera, the FVID video driver is called, the video frame is read from the input buffer and encoded, the encoded data is written to an intermediate file, and then the frame is decoded and the decoded data written to the output buffer for video display. The last module is video preview: input video is captured by the CCD camera, a call to the FVID video driver reads the video frame from the input buffer, and the input buffer is passed to the output buffer for video display, as shown in figure 1.1.

Figure 1.1: Video IO Demo file

1.1.1 Demo 1 : Video Capture Application:

The video capture application uses the EPSI APIs for the VPFE to capture raw video frames from the video input device (a CCD camera). The captured frames are stored in a file. The control flow of the video capture application is shown in figure 1.3. This application uses only the EPSI APIs for the video capture. The following is the sequence of operations:

1. First, initialize the video capture device using VPFE_open(). The device is configured with the help of a set of initialization parameters.

2. Configure the video capture device using VPFE_control(). The device was already initialized in Step 1; any other configuration required can be handled by VPFE_control().

3. Capture a video frame using VPFE_getBuffer(). This function dequeues a buffer, containing the captured video frame, from the VPFE buffer queue.

4. Write the video frame into a file using FILE_write(). The actual write time will depend on the data transfer rate of the target media.

5. Return the buffer back to the VPFE buffer queue using VPFE_returnBuffer().

6. Check if more frames need to be captured. If yes, go to Step 3.

7. Close the VPFE device using VPFE_close().

The video capture pseudo code, video capture control flow, and video capture application are shown in figures 1.2, 1.3, and 1.4, respectively.

Figure 1.2: Video capture pseudo code

Video Capture Code

#define FRAME_SIZE (720*480*2)

FILE_Handle fileHdl;
VPFE_Params vpfeParams;
VPFE_Handle vpfeHdl;    /* VPFE device handle (declaration added for completeness) */

void APP_videoCapture(int numFramesToCapture)
{
    int nframes = 0;
    char *frame = NULL;

    /* Initialize VPFE driver */
    vpfeHdl = VPFE_open(&vpfeParams);

    while (nframes++ < numFramesToCapture)
    {
        /* dequeue a captured frame, write it to the file, return the buffer */
        VPFE_getBuffer(vpfeHdl, frame);
        FILE_write(fileHdl, frame, FRAME_SIZE);
        VPFE_returnBuffer(vpfeHdl, frame);
    }

    VPFE_close(vpfeHdl);
}

Figure 1.3: Video capture control flow

Figure 1.4: Video capture application

1.1.2 Demo 2 : Video Display Application:

The video display application uses the EPSI APIs for the VPBE to display the raw video frames read from the video input file. The stored video frames are read via a file read and placed into the video display device using the EPSI APIs for the VPBE. This application is similar to Demo 1 with the capture device replaced by a display device: we perform a read from the file instead of a write to it, and use the VPBE driver instead of the VPFE driver. A minimal sketch is given below.
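The following sketch is an illustration, not a verified listing: it assumes the EPSI VPBE calls (VPBE_open, VPBE_getBuffer, VPBE_returnBuffer, VPBE_close) mirror the VPFE capture calls of Demo 1, that FILE_read is the counterpart of FILE_write, and it reuses fileHdl and FRAME_SIZE from the Demo 1 listing.

/* Sketch only: assumes EPSI VPBE calls symmetric to the VPFE ones above */
VPBE_Params vpbeParams;
VPBE_Handle vpbeHdl;

void APP_videoDisplay(int numFramesToDisplay)
{
    int nframes = 0;
    char *frame = NULL;

    /* Initialize VPBE driver */
    vpbeHdl = VPBE_open(&vpbeParams);

    while (nframes++ < numFramesToDisplay)
    {
        VPBE_getBuffer(vpbeHdl, frame);         /* get an empty display buffer  */
        FILE_read(fileHdl, frame, FRAME_SIZE);  /* fill it from the input file  */
        VPBE_returnBuffer(vpbeHdl, frame);      /* queue the buffer for display */
    }

    VPBE_close(vpbeHdl);
}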

1.1.3 Demo 3 : Video Encoder and Decoder Application:

The video encoder application needs to capture video frames from the Video Processing Front End (VPFE), pass the frames to a video encoder as input, and store the encoded frames to a file. Depending on the use case, the application might transmit the encoded frames over the network, or the encoded file is passed to a video decoder as input and the decoded output is sent to the VPBE for displaying the video data. This is explained in the video control application and the video encode pseudo code, shown in figures 1.5 and 1.6, respectively.

void APP_videoEncode(int numFramesToCapture)
{
    int nframes = 0;
    char *frame = NULL;

    /******************************
     * Creation Phase
     ******************************/
    /* Initialize VPFE driver */
    vpfeHdl = VPFE_open(&vpfeParams);

    /* Initialize Codec Engine */
    engineHdl = Engine_open("encode", NULL, NULL);

    /* Initialize Video Encoder */
    videncHdl = VIDENC_create(engineHdl, "h264enc", &videncParams);

    /* Configure Video Encoder */
    VIDENC_control(videncHdl, XDM_SETPARAMS, &videncDynParams, &videncStatus);

    /* Initialize file */
    fileHdl = FILE_open("test.enc", "w");

    /******************************
     * Execution Phase
     ******************************/
    while (nframes++ < numFramesToCapture)
    {
        VPFE_getBuffer(vpfeHdl, frame);
        VIDENC_process(videncHdl, &inbufDesc, &outbufDesc, &inArgs, &outArgs);
        VPFE_returnBuffer(vpfeHdl, frame);
        FILE_write(fileHdl, outbufDesc.bufs[0], outArgs.bytesGenerated);
    }

    /******************************
     * Deletion Phase
     ******************************/
    VPFE_close(vpfeHdl);
    VIDENC_delete(videncHdl);
    Engine_close(engineHdl);
    FILE_close(fileHdl);
}

Figure 1.5: Video control application

1.1.4 Demo 4 : Video copy :

Read complete frames from the input video file, encode, decode, and write to the output video file. This app expects the encoder to take one buffer in and produce one buffer out, and the sizes of the in and out buffers must be able to handle NSAMPLES bytes of data. The video copy encoder uses DMA to copy from the input buffer, and it uses DMA to copy to the encoded buffer. Therefore, we must write back the cache before calling the process function, and we must invalidate the cache after calling the process function. A sketch of this cache handling follows. The complete flow is shown in figure 1.7.
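The cache maintenance around the codec call can be sketched as follows; the buffer names and sizes (src, encoded, IFRAMESIZE, EFRAMESIZE) are borrowed from the Demo 5 walk-through below, so this is an illustration rather than the complete video-copy listing.

/* CPU filled 'src': write back the cache before the encoder's DMA reads it */
Memory_cacheWbInv(src, IFRAMESIZE);

/* the codec's DMA writes its result into 'encoded' behind the cache */
status = VIDENC_process(enc, &inBufDesc, &encodedBufDesc, &encInArgs, &encOutArgs);

/* invalidate before the CPU reads 'encoded', discarding stale cache lines */
Memory_cacheInv(encoded, EFRAMESIZE);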

1.1.5 Demo 5: Video encdec

Read from the input file fh_in, encode it and store the result in fh_enc, then decode the fh_enc file and store the result in fh_out.

Read complete frames from fh_in and put them into the input buffer, then prepare the "per loop" buffer descriptor settings for the input buffer, then encode the frame. Since the input buffer was filled by the CPU (via fread), we must write back the cache for the subsequent DMA read by the video encoder. Then write the encoded data to the intermediate file, then decode the frame and write the decoded data to the file fh_out. The complete flow is shown in figure 1.8.

The working principle of the video encoder/decoder demo module is given below.

Step 0: The job of this demo is to read from a file, encode it, decode it, and write the result back to another file.

Step 1: Start CCS v3.3 using the desktop shortcut icon. Before starting CCS, make sure that the DM6437 kit is connected to the PC (via the USB emulation cable) or to an external emulator, that the input from the CCD camera is connected to the video input port of the DM6437 board via a composite-to-BNC connector, that the output display device is connected to one of the three output ports of the kit via a composite cable (the input/output connection can also be made using the S-Video interface), and that all the peripherals and the board are powered on.

Step 2: Load the video_encdec.pjt project from the DVSDK example directory: "Project", then "right click", then "open", and select video_encdec.pjt from the C:/dvsdk_1_01_00_15/examples folder.

Step 3: The work flow of the project is as follows: the main function calls CERuntime_init() to initialize the CE environment and then calls TSK_create to spawn a thread running the smain() function. This function takes command line input and works according to that input. "Fxn" is an alias name for an XDC function type.

CERuntime_init();
TSK_create((Fxn)smain, &attrs, argc, argv);

CERuntime_init(): This is a Codec Engine function, which provides engine-wide initialization of the Codec Engine runtime. The module consists of CERuntime_exit(Void), which finalizes the CE modules used in the current configuration, and CERuntime_init(Void), which must be called prior to using any Codec Engine APIs and initializes all Codec Engine modules used in the current configuration.

Step 4: In the smain(Int argc, String argv[]) function, we first declare an engine handle and VISA handles for the video encoder and decoder, i.e.

Engine_Handle ce = NULL;
VIDDEC_Handle dec = NULL;
VIDENC_Handle enc = NULL;

where VIDDEC_Handle and VIDENC_Handle are the VISA handles for the decoder and encoder. Then declare FILE pointers for the input, encoded, and output files, and check argc to determine whether the input comes from the command line or from a file.

FILE *fh_in = NULL;
FILE *fh_enc = NULL;
FILE *fh_out = NULL;

Step 6: If argc is one or zero, there is no command line input, so the input, encoded, and output files are read from the specified file locations.

if (argc <= 1) {

/* Input file is 10 raw YUV CIF frames */

inFile = "..\\data\\akiyo_10frames_qcif_in.yuv";

encodedFile = "..\\data\\akiyo_10frames_qcif.h264";

outFile = "..\\data\\akiyo_10frames_qcif_out.yuv";

}

If argc is not equal to 4, we report an error; otherwise the corresponding assignments take place.

else if (argc != 4) {

fprintf(stderr, usage, argv[0]);

exit(1);

}

else {

progName = argv[0];

inFile = argv[1];

encodedFile = argv[2];

outFile = argv[3];

}

Step 7: Now we start the DSP Engine. Engine_open: this is a CE API used to open an engine instance. An engine may be opened more than once; each open returns a unique handle that can be used to create codec instances or get the status of the server.

if ((ce = Engine_open(engineName, NULL, NULL)) == NULL) { /* handle error */ }

Step 8: Now we create an instance of the encoder using the VISA API VIDENC_create, which passes the request to the Engine; the Engine determines via the codec table whether the requested codec is local. VIDENC_control/VIDDEC_control are used to configure different parameters; for example, the frame rate can be changed this way.

/* allocate and initialize video encoder and decoder on the engine */
enc = VIDENC_create(ce, encoderName, &vencParams);
dec = VIDDEC_create(ce, decoderName, &vdecParams);

/* Set dynamic params for encoder and decoder */
status = VIDENC_control(enc, XDM_SETPARAMS, &encDynParams, &encStatus);
status = VIDDEC_control(dec, XDM_SETPARAMS, &decDynParams, &decStatus);

Step 9: Prepare the buffers for the DMA read by the video encoder.

/* When the input buffer is full, we must write back the cache
   for the subsequent DMA read by the video encoder. */
Memory_cacheWbInv(src, IFRAMESIZE);
Memory_cacheInv(encoded, EFRAMESIZE);

Step 10: Encode the frame, write the encoded data to the intermediate file, then decode the frame.

/* encode the frame */
status = VIDENC_process(enc, &inBufDesc, &encodedBufDesc, &encInArgs,
                        &encOutArgs);

/* write encoded data to intermediate file */
fwrite(encoded, encOutArgs.bytesGenerated, 1, fh_enc);

/* decode the frame */
status = VIDDEC_process(dec, &encodedBufDesc, &outBufDesc, &decInArgs,
                        &decOutArgs);


Step 11: Write the decoded data into a file with fwrite.

/* write decoded data to file */

printf("Writing decoded frame %d to file...\n\n", n);

fwrite(dst, obufSizes, 1, fh_out);

Step 12: Delete the codecs, close the CE, and free the buffer and FVID resources.

/* teardown the codecs */
VIDENC_delete(enc);
VIDDEC_delete(dec);

/* close the engine */
Engine_close(ce);

/* free buffers */
Memory_contigFree(encodedBuf, framesize);

/* Free video driver resources: */
for (i=0; i<FRAME_BUFF_CNT && status == 0; i++) {
    FVID_freeBuffer(hGioVpfeCcdc, frameBuffTable[i]);
}

Step 13: Delete the channels for the VPBE and VPFE.

/* Delete Channels */

FVID_delete(hGioVpfeCcdc);

FVID_delete(hGioVpbeVid0);

FVID_delete(hGioVpbeVenc);

In this project we develop and demonstrate a framework for real-time object tracking using Texas Instruments' (TI's) TMS320DM6437 DaVinci multimedia processor. More specifically, we track one or two objects present in the scene captured by a camera that acts as the video input device. The tracking happens in real time, consuming 30 frames per second (fps), and is robust to background and illumination changes. The approach involves an off-line detection algorithm as a pre-processing step. As part of the real-time tracking, the proposed approach performs background subtraction, image segmentation, blob detection and identification, feature extraction, and calculation of the center of mass for target tracking. Real-time implementations of basic image processing algorithms such as image inversion, edge detection using the Sobel operator, and image compression were also carried out on the DaVinci digital media processor DM6437. The input was captured using a CCD camera and the output displayed on an LCD display. Blob detection and identification was very slow due to the high computational complexity, the limiting speed of the processor, the coding style, and the algorithmic cost of multiple-blob detection, identification, and centroid calculation. Execution times for the different blocks of single-object tracking were estimated using the profiler, and the accuracy of the detection was tested with the debugger provided by TI Code Composer Studio (CCS).

There are two approaches for running video processing algorithms on the TMS320DM6437: the real-time approach using a CCD camera and display, and a file-based approach in which a file is read from the hard drive, processed (for example encoded/decoded or otherwise transformed), and the resulting file stored back to the hard drive. In the second case there is no need to configure the capture and display channels or to check for PAL or NTSC. The first case is used for real-time video preview and the video encoder/decoder, whereas the second one is used for the video recorder, video copy, and video encoder/decoder to/from a file.

The working principle of video preview is explained in 3 steps.

• Step 1: Read video data through the VPFE and use EDMA to route it through the SCR into external DDR memory, as shown in figure 1.9. FVID_exchange(vpfeChan, bufp);

• Step 2: Transfer raw video data from DDR memory into L1 data memory using EDMA. The CPU processes the L1 video data using the programs in L1 program memory, stores the processed video data to L1 memory, and stores the result back to DDR using EDMA, as shown in figure 1.10. VIDENC_process(videncChan, inbuf, outbuf, inarg, outarg);

• Step 3: Read video data from external DDR memory, use EDMA to route it through the SCR, and put the video data onto the display through the VPBE, as shown in figure 1.11. FVID_exchange(hGioVpbeVid0, &frameBuffPtr);


Figure 1.6: Video encode pseudo code


Figure 1.7: Video Copy

Figure 1.8: Video Encoder and decoder

Figure 1.9: Step 1 video preview


Figure 1.10: Step 2 video preview

Figure 1.11: Step 3 video preview


1.2 Video Preview:

Read video data from the CCD camera through the VPFE and use EDMA to route it through the SCR into external DDR memory. Transfer the raw video data from DDR memory into L1 data memory using EDMA. The CPU processes the L1 video data. Store the processed video data to L1 memory and store the result back to DDR using EDMA. Read the video data from external DDR memory, use EDMA to route it through the SCR, and put the video data onto the display through the VPBE. The implementation on the DM6437 is shown in figure 1.12.

Figure 1.12: Video preview

1.3 Image Inversion

Read video data through the VPFE and use EDMA to route it through the SCR into external DDR memory. Transfer the raw video data from DDR memory into L1 data memory using EDMA. The CPU processes the L1 video data using the image inversion program, which inverts the gray level of the video frame; this is performed by subtracting each gray level value from the maximum gray value (255). Store the processed video data to L1 memory and store the result back to DDR using EDMA. Read the video data from external DDR memory, use EDMA to route it through the SCR, and put the video data onto the display through the VPBE. The implementation on the DM6437 is shown in figure 1.13.

Read complete frames from the input buffer pointer, run the image inversion code, encode, decode, and write to the output buffer pointer.

Figure 1.13: Image inversion steps

/* grab a fresh video input frame */
FVID_exchange(hGioVpfeCcdc, &frameBuffPtr);

/* Set as input buffer to Encoder: */
src = frameBuffPtr->frame.frameBufferPtr;

/* user code for image inversion, operating on
   frameBuffPtr->frame.frameBufferPtr */
image_inverse(&src[0]);

/* encode the frame */
status = VIDENC_process(enc, &inBufDesc, &encodedBufDesc,
                        &encInArgs, &encOutArgs);

/* decode the frame */
status = VIDDEC_process(dec, &encodedBufDesc, &outBufDesc,
                        &decInArgs, &decOutArgs);

/* display the video frame */
FVID_exchange(hGioVpbeVid0, &frameBuffPtr);

Alternatively, we can use the following code: read complete frames from the input buffer pointer, run the image inversion code, and write to the output buffer pointer.

/* grab a fresh video input frame */
FVID_exchange(hGioVpfeCcdc, &frameBuffPtr);

/* Set as input buffer: */
src = frameBuffPtr->frame.frameBufferPtr;

/* user code for image inversion, operating on
   frameBuffPtr->frame.frameBufferPtr */
image_inverse(&src[0]);

/* display the video frame */
FVID_exchange(hGioVpbeVid0, &frameBuffPtr);
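The image_inverse() routine itself is not listed in this chapter. A minimal sketch consistent with the description above (inversion by subtracting each gray level from 255) and with the 720x480 packed 4:2:2 frame layout used by the tracking code in Chapter 3 could look as follows; inverting only the luma (Y) bytes is an assumption.

/* Sketch: invert the luma of a 720x480 packed 4:2:2 (UYVY) frame in place.
   Bytes 1 and 3 of every 4-byte group are Y samples; bytes 0 and 2 are U, V. */
void image_inverse(unsigned char *frame)
{
    int r, c;
    for (r = 0; r < 480; r++) {
        for (c = 0; c < 360; c++) {            /* 360 four-byte groups per row */
            unsigned char *p = frame + r*720*2 + 4*c;
            p[1] = 255 - p[1];                 /* invert first Y sample  */
            p[3] = 255 - p[3];                 /* invert second Y sample */
        }
    }
}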

1.4 JPEG Standard Image Compression on the DM6437 Processor

The Joint Photographic Experts Group (JPEG) [10] standard is a worldwide standard for the compression of still images. The JPEG standard includes two basic compression methods, each with various modes of operation. A DCT-based method is specified for "lossy" compression, and a predictive method for "lossless" compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. This section provides a detailed study of the various steps of the JPEG standard, such as the Discrete Cosine Transform (DCT), quantization, zigzag scanning, and run length encoding. The standard includes specifications for both lossless and lossy compression algorithms. Here we mainly focus on lossy encoding [11], as the JPEG lossy standard is designed to reduce the high frequency components of the image frame that the human eye cannot detect easily: human eyes cannot detect a slight change in the color space, but they can easily perceive a slight change in intensity.

(Reference: http://msdn.microsoft.com/en-us/library/windows/desktop/bb530104)

1.4.1 JPEG Implementation on TMS320DM6437

The block diagram of the different steps for implementing the JPEG image compression algorithm on the TMS320DM6437 platform is shown in figure 1.14.


Figure 1.14: Steps for Implementing JPEG Algorithm on TMS320DM6437 Platform

1. Capturing an Image on the TMS320DM6437: An image frame can be captured on the TMS320DM6437 platform from a standard video input device, such as a digital camera, by connecting the Video Out port of the camera to the Standard Video In port of the board. The VPFE standard library function FVID_exchange is used in the C code to capture the image. This function grabs an image frame, stores it in the internal board memory buffer, and assigns a buffer pointer to the start location of the memory. This facilitates further processing of the image.

2. Converting the Image from YUV Format to Gray Scale: An image in YUV format can be converted to gray scale format by selectively discarding the blue- and red-difference chrominance components, since human eyes are more sensitive to the luminance component than to the chrominance. The image captured via the FVID_exchange function is in the 4:2:2 YUV format, in which the U and V components are subsampled so that they appear in an interleaved fashion between the Y components. This is known as the packed YUV color format: every other byte is Y and every fourth byte is either U or V, in the order Y U Y V Y U Y V, as shown in figure 1.15. Here, only the Y components were selected for further processing; the U and V components were discarded, effectively converting the YUV image to gray scale. A minimal sketch of this extraction is given after the figure.

Figure 1.15: 4:2:2 Subsample YUV Format

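For illustration, a minimal sketch of the gray scale extraction; the function name is hypothetical, and it assumes the U Y V Y byte order actually used by the frame-handling code in Chapter 3 (note this differs from the Y U Y V ordering quoted above).

/* Keep only the luma bytes of a packed 4:2:2 frame: with the U Y V Y byte
   order, the Y samples sit at odd byte offsets. */
void yuv422_to_gray(const unsigned char *frame, unsigned char gray[480][720])
{
    int r, c;
    for (r = 0; r < 480; r++)
        for (c = 0; c < 720; c++)
            gray[r][c] = frame[r*720*2 + 2*c + 1];   /* copy Y, skip U/V */
}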

3. Applying DCT and IDCT on 8x8 Gray Scale Blocks: After converting the entire image from YUV format to gray scale format, the DCT is applied sequentially on each individual 8x8 gray scale block in order to convert the image from the spatial domain into the frequency domain. The DCT equation explained previously can be used to apply the DCT on 8x8 blocks, but it requires a lot of mathematical computation and hence a high execution time, which in turn degrades the performance of the system. Therefore, we implemented another popular approach to the DCT in this project.

The DCT for a block of size N x N is given by

$$(X_b)_{u,v} = \frac{C(u)}{\sqrt{N/2}}\,\frac{C(v)}{\sqrt{N/2}} \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} (x_b)_{i,j}\, \cos\frac{(2i+1)u\pi}{2N}\, \cos\frac{(2j+1)v\pi}{2N},$$

where $0 \le u, v < 8$ and

$$C(u) = \begin{cases} \dfrac{1}{\sqrt{2}}, & u = 0 \\ 1, & u > 0, \end{cases}$$

and $(X_b)_{u,v}$ is the output frequency-domain coefficient of the input image block $(x_b)_{i,j}$.


In this approach, the 8x8 image matrix is first multiplied by the cosine transform matrix and then by its transpose of the same size in order to obtain the 8x8 DCT matrix, converting the entire image from the spatial domain to the frequency domain. The transpose matrix is generated by transposing the rows and columns of the cosine transform matrix.

In order to verify the correctness of the DCT implementation on the TMS320DM6437 platform, the Inverse Discrete Cosine Transform was applied to all 8x8 blocks of the image. The same approach was used to implement the IDCT, but with the order of the multiplications reversed: each 8x8 DCT matrix was first multiplied by the transposed cosine matrix and then by the original cosine transform matrix of the same size. The image was converted back to the spatial domain from the frequency domain after the application of the IDCT. The original image was regenerated and successfully displayed on the LCD display. A sketch of this matrix formulation is given below.
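A minimal sketch of the matrix formulation for one 8x8 block, X = C x C^T (and x = C^T X C for the IDCT); the cosine matrix is built from the DCT definition above, and the function names are illustrative:

#include <math.h>

#define PI 3.14159265358979323846

/* Build the 8x8 cosine transform matrix from the DCT definition:
   C[u][i] = (C(u)/sqrt(N/2)) * cos((2i+1)u*pi/(2N)) with N = 8. */
static void build_cosine_matrix(double C[8][8])
{
    int u, i;
    for (u = 0; u < 8; u++)
        for (i = 0; i < 8; i++)
            C[u][i] = (u == 0 ? 1.0/sqrt(2.0) : 1.0) * 0.5
                      * cos((2*i + 1) * u * PI / 16.0);
}

/* DCT of one block as two 8x8 matrix multiplies: X = C * x * C^T.
   Reversing the order, x = C^T * X * C, gives the IDCT. */
static void dct_8x8(const double x[8][8], const double C[8][8], double X[8][8])
{
    double t[8][8];
    int i, j, k;
    for (i = 0; i < 8; i++)                    /* t = C * x   */
        for (j = 0; j < 8; j++) {
            t[i][j] = 0.0;
            for (k = 0; k < 8; k++)
                t[i][j] += C[i][k] * x[k][j];
        }
    for (i = 0; i < 8; i++)                    /* X = t * C^T */
        for (j = 0; j < 8; j++) {
            X[i][j] = 0.0;
            for (k = 0; k < 8; k++)
                X[i][j] += t[i][k] * C[j][k];
        }
}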

4. Implementation of Quantization and Dequantization on the TMS320DM6437: The image matrix after the DCT occupies more space than the original image matrix. The main goal of quantization is to discard the excessive amount of information and reduce the number of bits needed to store the output values. This is done by dividing each of the output DCT coefficients by a quantization value and then rounding the result to an integer. The high frequency coefficients are quantized more heavily than the DC and some of the high-energy AC coefficients. Quantization takes advantage of the fact that the human eye is not sensitive to changes in the high frequency coefficients, and hence the minor change in image quality goes unnoticed.

The quantization and dequantization steps are given by

Quantized(u, v) = round( DCT(u, v) / Quantum(u, v) )

DCT(u, v) = Quantized(u, v) * Quantum(u, v)

Every 8x8 DCT matrix was divided elementwise by the quantization matrix of the same size in order to quantize the high frequency DCT coefficients to zero. By dividing every 8x8 matrix by the quantization matrix, most of the high frequency DCT coefficients with values smaller than the corresponding quantization matrix values were reduced to zero. This step contributes most to the lossiness of the JPEG algorithm, as the finer detail of the image is lost due to the truncation of the high frequency DCT coefficients to zero. The original values cannot be restored in decompression. A short sketch of both steps follows.

5. Zigzag and Inverse Zigzag Sequencing of the Image Pixels: After the quantization step, each 8x8 matrix is scanned in a zigzag manner (in order of frequency), one after another, in a way that produces an output data stream containing many consecutive zeros. These zeros are discarded by the Run Length Encoding (RLE) in order to store the entire image in as little storage space as possible.

6. Run Length Encoding and Decoding Implementation on the TMS320DM6437: The Run Length Encoding step follows the zigzag scan step in the JPEG compression algorithm. Any of the standard data compression algorithms, such as RLE (Run Length Encoding) or Huffman encoding, can be applied to the resulting zigzag sequences. We opted for Run Length Encoding in this project due to the simplicity of its computation. A sketch of RLE over the zigzag output is given below.
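For illustration, a minimal sketch of run length encoding a zigzag-scanned block into (zero run, value) pairs; the symbol format is an assumption made for clarity, not the JPEG bitstream format.

/* Encode a 64-element zigzag sequence as (run of preceding zeros, nonzero
   value) pairs; returns the number of pairs written. */
int rle_encode(const int zz[64], int runs[64], int values[64])
{
    int i, n = 0, zeros = 0;
    for (i = 0; i < 64; i++) {
        if (zz[i] == 0) {
            zeros++;                  /* count consecutive zeros */
        } else {
            runs[n]   = zeros;        /* zero run before this coefficient */
            values[n] = zz[i];        /* the coefficient itself */
            zeros = 0;
            n++;
        }
    }
    return n;                         /* trailing zeros are simply dropped */
}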

1.4.2 Profiling

Profiling is used to measure code performance and to ensure efficient use of the DSP target's resources during debug and development sessions. Profiling is carried out by the profiler built into Code Composer Studio. Using the profiler, developers can easily profile all C/C++ functions in their application for instruction cycles or other events such as cache misses or hits, pipeline stalls, and branches. Profile ranges are used to concentrate effort on high-usage areas of code during optimization, so that developers can generate well-tuned code. Profiling is available for ranges of assembly, C++, or C code in any combination. To increase productivity, all profiling facilities are available throughout the development cycle.

Profiling was applied to the different functions of the JPEG image compression algorithm, and the time taken to execute each function was measured by considering its inclusive and exclusive cycle counts, access count, and the processor clock frequency. The whole profiling process is carried out by the steps described in Section 2.2; the results are summarized in Table 1.1.


Table 1.1: Profiler data for different functions

Function name                | Access count | Incl cycles | Excl cycles | Incl time (ms)
-----------------------------|--------------|-------------|-------------|---------------
multiply1                    | 18           | 101009506   | 34331926    | 8.016
multiply2                    | 18           | 199555672   | 68702923    | 15.837
multiply3                    | 6            | 31110626    | 10630951    | 7.407
multiply4                    | 5            | 58176765    | 20066440    | 16.621
multiply_coefficient         | 12           | 12933984    | 5291010     | 1.539
multiply_coefficient_reverse | 11           | 68746       | 68746       | 0.009
zig_zag_pattern              | 11           | 49957       | 49957       | 0.006
invzig_zag_pattern           | 11           | 48280       | 48280       | 0.006
rle_invrle                   | 108          | 1719464     | 1719464     | 0.023
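The reported inclusive times appear to be per-call times derived from the cycle counts: assuming a 700 MHz CPU clock (an assumption; the DM6437 is available at clock rates up to 700 MHz), the entries are consistent with

$$t_{\text{incl}} \approx \frac{\text{Incl cycles}}{\text{Access count} \times f_{\text{CPU}}},$$

for example, for multiply1: 101009506 / (18 x 700 MHz) ≈ 8.016 ms.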

1.5 Code Composer Studio

Code Composer Studio (CCS) provides an Integrated Development Environment (IDE) to incorporate the software tools used to develop applications targeted at Texas Instruments Digital Signal Processors. CCS includes tools for code generation, such as a C compiler, an assembler, and a linker. It has graphical capabilities and supports real-time debugging. It provides an easy-to-use software tool to build and debug programs.

The C compiler compiles a C source program with extension .c to produce an assembly source file with extension .asm. The assembler assembles an .asm source file to produce a machine language object file with extension .obj. The linker combines object files and object libraries as input to produce an executable file with extension .out. This executable file represents a linked common object file format (COFF), popular in Unix-based systems and adopted by several makers of digital signal processors.

To create an application project, one can “add” the appropriate files to the project. Com-

piler/linker options can readily be specified. A number of debugging features are available,

including setting breakpoints and watching variables; viewing memory, registers, and mixed C

and assembly code; graphing results; and monitoring execution time. One can step through a

program in different ways (step into, over, or out).


Real-time analysis can be performed using real-time data exchange (RTDX). RTDX allows

for data exchange between the host PC and the target DVDP, as well as analysis in real time

without stopping the target. Key statistics and performance can be monitored in real time.

Through the Joint Test Action Group (JTAG) interface, communication with on-chip emulation support occurs to control and monitor program execution. The DM6437 EVM board includes a JTAG interface through the USB port.

CCS provides a single IDE to develop an application by offering following features:

• Programming DSP using C/C++

• Ready-to-use built-in functions for video and image processing

• Run-time debugging on the hardware

• Debugging an application using software breakpoints

Some of the steps involved in developing a successful application include creation of a

project environment, development of code using C or C++, linking appropriate library functions,

compiling the code, converting it into assembly language code, and downloading it onto the DM6437 platform using the JTAG interface. The image of the IDE for CCS is shown below:

Figure 1.16: IDE for CCS

Combining all the features such as the advanced DSP core, interfaces, and on-board memory, along with the advantages of CCS, the TMS320DM6437 was considered an obvious choice for this project. The DVDP stood out as an excellent platform for this project.


1.5.1 CCS installation and Support

Use the USB cable to connect the DVDSK board to the USB port on the PC. Use the 5-V power supply included with the DVDSK package to connect to the +5-V power connector on the DVDSK to turn it on. Install CCS from the CD-ROM included with the DM6437 target support file, preferably using the c:\CCSv3.3 structure. The CCS icon should be on the desktop as "CCS" and is used to launch CCS. The code generation tools (C compiler, assembler, linker) are used with CCS version 3.x. CCS provides useful documentation included with the DVDSK package on the following (see the Help icon):

1. Code generation tools (compiler, assembler, linker, etc.).

2. Tutorials on CCS, compiler, RTDX.

3. DSP instructions and registers.

4. Tools on RTDX, DSP/basic input/output system (DSP/BIOS), and so on.

There are also examples included with CCS within the folder c:\C6416\examples. They illustrate the board and chip support library files, DSP/BIOS, and so on. CCS version 3.x was used to build and test the examples included in this book. A number of files included in the following subfolders/directories within c:\C6416 (suggested structure during CCS installation) can be very useful:

1. myprojects: a folder supplied only for your projects.

2. bin: contains many utilities.

3. docs: contains documentation and manuals.

4. c6000\cgtools: contains code generation tools.

5. c6000\RTDX: contains support files for real-time data transfer.

6. c6000\bios: contains support files for DSP/BIOS.

7. examples: contains examples included with CCS.

8. tutorial: contains additional examples supplied with CCS.


1.5.2 Useful Types of Files

Your working project folder will contain a number of files with different extensions. They include:

1. file.pjt: to create and build a project named file

2. file.c: C source program

3. file.asm: assembly source program created by the user, by the C compiler, or by the linear

optimizer

4. file.sa: linear assembly source program. The linear optimizer uses file.sa as input to produce an assembly program file.asm.

5. file.h: header support file.

6. file.lib: library file, such as the run-time support library file rts6700.lib

7. file.cmd: linker command file that maps sections to memory

8. file.obj: object file created by the assembler

9. file.out: executable file created by the linker to be loaded and run on the target processor

10. file.cdb: configuration file when using DSP/BIOS

11. file.sbl, .tcl, .tco, .h62, .s62, .paf2, .map

1.5.3 Support Files

The following support files located in the folder support (except the library files) are used for

most of the examples and projects discussed in this book:

1. evmdm6437.gel: a General Extension Language (GEL) file that CCS runs to initialize and configure the target board when the target is connected.

2. evmdm6437.c: contains functions to initialize the DSK, the codec, the serial ports, and

for I/O. It is not included with CCS.

3. evmdm6437.h: header file with function prototypes. Features such as those used to select

the mic input in lieu of line input (by default), input gain, and so on are obtained from

this header file (modified from a similar file included with CCS).


4. dm6437.cmd: sample linker command file. This generic file can be changed when using

external memory in lieu of internal memory.

5. Vectors_intr.asm: a modified version of a vector file included with CCS to handle interrupts. Twelve interrupts, INT4 through INT15, are available, and INT11 is selected within this vector file. They are used for interrupt-driven programs.

6. Vectors_poll.asm: vector file for programs using polling.

7. rtsdm6437.lib, evmdm6437bsl.lib, evmdm6437csl.lib: run-time, board, and chip support library files, respectively. These files are included with CCS. (Check whether chip support is present for the DM6437 or not.)

1.5.4 Power On Self Test

• Power up the DSK and watch the LEDs.

• The Power On Self Test (POST) program stored in FLASH memory automatically executes.

• Connect the camera and display to the corresponding video input and output ports of the EVM board.

• Connect the power cord to the EVM board.

• Check that the video display shows the video taken from the input camera, without any processing.

• In order to program the processor, we need a PC with the Code Composer Studio IDE installed.

• Connect the on-board USB emulator to the board, or connect an XDS510 external emulator to the board.

• Write the program in CCS, compile it, build it, then connect to the board and load the program onto the processor.

• During this, the LED near the USB emulator will blink and then remain off.


Chapter 2

Video and Image Processing Algorithms

Code Work Flow Diagrams and Profiling

2.1 Code Work Flow Diagrams

The video encoder flow diagrams are shown in figures 2.1, 2.2, and 2.3.


Figure 2.1: Video encdec file


Figure 2.2: Video enc_decoder function


Figure 2.3: Video encoder and decoder


2.2 Procedure for Profiling

Steps for profiling:

• Load the compiled program into the target processor.

• Select the Profile tab in the CCS window and select the Setup icon; a profile window then pops up on the CCS screen.

• Click on the profile clock in the profile window to enable the profiler.

• Move to the Custom tab at the bottom of the profiling window and click on the CPU cycle option.

• Go to the Ranges tab at the bottom of the profiling window. Here all the functions and loops that can be profiled are visible.

• We can select a function or loop from the visible list using the space-bar.

• We can also add program lines to the profile by selecting the lines, right-clicking, and going to Profile, then Range.

• The corresponding line will appear in the profile window. Add a starting break-point before the profiling area and one break-point after it, to gather the profile count. To add a break-point, select the line and go to Debug, then put the break-point.

• After putting the break-points, run the code.

• We can observe the profiling results by selecting the Profile menu in the main window, then selecting Viewer.


Chapter 3

Working code for multiple Object

Tracking on DM6437

/*

* ======== video_preview.c ========

*

*/

/* runtime include files */

#include <stdio.h>

#include <stdlib.h>

#include <string.h>

#include <stdarg.h>

/* BIOS include files */

#include <std.h>

#include <gio.h>

#include <tsk.h>

#include <trc.h>

/* PSP include files */

#include <psp_i2c.h>


#include <psp_vpfe.h>

#include <psp_vpbe.h>

#include <fvid.h>

#include <psp_tvp5146_extVidDecoder.h>

/* CSL include files */

#include <soc.h>

#include <cslr_sysctl.h>

/* BSL include files */

#include <evmdm6437.h>

#include <evmdm6437_dip.h>

/* Video Params Defaults */

#include <vid_params_default.h>

/* This example supports either PAL or NTSC depending on position of JP1 */

#define STANDARD_PAL 0

#define STANDARD_NTSC 1

#define FRAME_BUFF_CNT 6

#define LAST_ROW 480

#define LAST_COL 720

static int read_JP1(void);

static CSL_SysctlRegsOvly sysModuleRegs = (CSL_SysctlRegsOvly )CSL_SYS_0_REGS;

//*******************************************************

// USER DEFINED FUNCTIONS


//*******************************************************

void extract_uyvy (void * currentFrame);

void write_uyvy (void * currentFrame);

//void uyvy2rgb();

void tracking();

void copy_frame();

void frame_substract();

//void rgb2uyvy();

//void noise_removal();

/////////////////////////////////////////////////

//*******************************************************

// VARIABLE ARRAYS

//*******************************************************

unsigned char I_y[480][720];

unsigned char I_u[480][360];

unsigned char I_v[480][360];

unsigned char I_y1[480][720];

unsigned char I_u1[480][360];

unsigned char I_v1[480][360];

unsigned char I_y2[480][720];

unsigned char I_u2[480][360];

unsigned char I_v2[480][360];

unsigned char I_y3[480][720];

unsigned char I_u3[480][360];

unsigned char I_v3[480][360];


unsigned char I_y4[480][720];

unsigned char I_u4[480][360];

unsigned char I_v4[480][360];

int MAX_NUM_BLOBS =200;

// = zeros(1, 4*MAX_NUM_BLOBS);

//unsigned char I_r[480][720];

//unsigned char I_g[480][720];

//unsigned char I_b[480][720];

//unsigned char I_temp[480][720];

////////////////////////

/*

* ======== main ========

*/

void main() {

printf("Video Preview Application\n");

fflush(stdout);

/* Initialize BSL library to read jumper switches: */


EVMDM6437_DIP_init();

/* VPSS PinMuxing */

/* CI10SEL - No CI[1:0] */

/* CI32SEL - No CI[3:2] */

/* CI54SEL - No CI[5:4] */

/* CI76SEL - No CI[7:6] */

/* CFLDSEL - No C_FIELD */

/* CWENSEL - No C_WEN */

/* HDVSEL - CCDC HD and VD enabled */

/* CCDCSEL - CCDC PCLK, YI[7:0] enabled */

/* AEAW - EMIFA full address mode */

/* VPBECKEN - VPBECLK enabled */

/* RGBSEL - No digital outputs */

/* CS3SEL - LCD_OE/EM_CS3 disabled */

/* CS4SEL - CS4/VSYNC enabled */

/* CS5SEL - CS5/HSYNC enabled */

/* VENCSEL - VCLK,YOUT[7:0],COUT[7:0] enabled */

/* AEM - 8bEMIF + 8bCCDC + 8 to 16bVENC */

sysModuleRegs -> PINMUX0 &= (0x005482A3u);

sysModuleRegs -> PINMUX0 |= (0x005482A3u);

/* PCIEN = 0: PINMUX1 - Bit 0 */

sysModuleRegs -> PINMUX1 &= (0xFFFFFFFEu);

sysModuleRegs -> VPSSCLKCTL = (0x18u);

return;

}

/*

* ======== video_preview ========

*/


void video_preview(void) {

FVID_Frame *frameBuffTable[FRAME_BUFF_CNT];

FVID_Frame *frameBuffPtr;

GIO_Handle hGioVpfeCcdc;

GIO_Handle hGioVpbeVid0;

GIO_Handle hGioVpbeVenc;

int status = 0;

int result;

int i;

int standard;

int width;

int height;

/* Set video display/capture driver params to defaults */

PSP_VPFE_TVP5146_ConfigParams tvp5146Params =

VID_PARAMS_TVP5146_DEFAULT;

PSP_VPFECcdcConfigParams vpfeCcdcConfigParams =

VID_PARAMS_CCDC_DEFAULT_D1;

PSP_VPBEOsdConfigParams vpbeOsdConfigParams =

VID_PARAMS_OSD_DEFAULT_D1;

PSP_VPBEVencConfigParams vpbeVencConfigParams;

standard = read_JP1();

/* Update display/capture params based on video standard (PAL/NTSC) */

if (standard == STANDARD_PAL) {

width = 720;

height = 576;

vpbeVencConfigParams.displayStandard = PSP_VPBE_DISPLAY_PAL_INTERLACED_COMPOSITE;

}

else {


width = 720;

height = 480;

vpbeVencConfigParams.displayStandard = PSP_VPBE_DISPLAY_NTSC_INTERLACED_COMPOSITE;

}

vpfeCcdcConfigParams.height = vpbeOsdConfigParams.height = height;

vpfeCcdcConfigParams.width = vpbeOsdConfigParams.width = width;

vpfeCcdcConfigParams.pitch = vpbeOsdConfigParams.pitch = width * 2;

/* init the frame buffer table */

for (i=0; i<FRAME_BUFF_CNT; i++) {

frameBuffTable[i] = NULL;

}

/* create video input channel */

if (status == 0) {

PSP_VPFEChannelParams vpfeChannelParams;

vpfeChannelParams.id = PSP_VPFE_CCDC;

vpfeChannelParams.params = (PSP_VPFECcdcConfigParams*)&vpfeCcdcConfigParams;

hGioVpfeCcdc = FVID_create("/VPFE0",IOM_INOUT,NULL,&vpfeChannelParams,NULL);

status = (hGioVpfeCcdc == NULL ? -1 : 0);

}

/* create video output channel, plane 0 */

if (status == 0) {

PSP_VPBEChannelParams vpbeChannelParams;

vpbeChannelParams.id = PSP_VPBE_VIDEO_0;

vpbeChannelParams.params = (PSP_VPBEOsdConfigParams*)&vpbeOsdConfigParams;

hGioVpbeVid0 = FVID_create("/VPBE0",IOM_INOUT,NULL,&vpbeChannelParams,NULL);

status = (hGioVpbeVid0 == NULL ? -1 : 0);

}

/* create video output channel, venc */


if (status == 0) {

PSP_VPBEChannelParams vpbeChannelParams;

vpbeChannelParams.id = PSP_VPBE_VENC;

vpbeChannelParams.params = (PSP_VPBEVencConfigParams *)&vpbeVencConfigParams;

hGioVpbeVenc = FVID_create("/VPBE0",IOM_INOUT,NULL,&vpbeChannelParams,NULL);

status = (hGioVpbeVenc == NULL ? -1 : 0);

}

/* configure the TVP5146 video decoder */

if (status == 0) {

result = FVID_control(hGioVpfeCcdc, VPFE_ExtVD_BASE+PSP_VPSS_EXT_VIDEO_DECODER_CONFIG, &tvp5146Params);

status = (result == IOM_COMPLETED ? 0 : -1);

}

/* allocate some frame buffers */

if (status == 0) {

for (i=0; i<FRAME_BUFF_CNT && status == 0; i++) {

result = FVID_allocBuffer(hGioVpfeCcdc, &frameBuffTable[i]);

status = (result == IOM_COMPLETED && frameBuffTable[i] != NULL ? 0 : -1);

}

}

/* prime up the video capture channel */

if (status == 0) {

FVID_queue(hGioVpfeCcdc, frameBuffTable[0]);

FVID_queue(hGioVpfeCcdc, frameBuffTable[1]);

FVID_queue(hGioVpfeCcdc, frameBuffTable[2]);

}

/* prime up the video display channel */

if (status == 0) {

FVID_queue(hGioVpbeVid0, frameBuffTable[3]);


FVID_queue(hGioVpbeVid0, frameBuffTable[4]);

FVID_queue(hGioVpbeVid0, frameBuffTable[5]);

}

/* grab first buffer from input queue */

if (status == 0) {

FVID_dequeue(hGioVpfeCcdc, &frameBuffPtr);

}

/* loop forever performing video capture and display */

while ( status == 0 ) {

/* grab a fresh video input frame */

FVID_exchange(hGioVpfeCcdc, &frameBuffPtr);

extract_uyvy ((frameBuffPtr->frame.frameBufferPtr));

copy_frame();

frame_substract();

//tracking();

write_uyvy ((frameBuffPtr->frame.frameBufferPtr));

/* display the video frame */

FVID_exchange(hGioVpbeVid0, &frameBuffPtr);

}


}

/*

* ======== read_JP1 ========

* Read the PAL/NTSC jumper.

*

* Retry, as I2C sometimes fails:

*/

static int read_JP1(void)

{

int jp1 = -1;

while (jp1 == -1) {

jp1 = EVMDM6437_DIP_get(JP1_JUMPER);

TSK_sleep(1);

}

return(jp1);

}

void extract_uyvy(void * currentFrame)

{

int r, c;

for(r = 0; r < 480; r++)

{

for(c = 0; c < 360; c++)


{

I_u1[r][c] = * (((unsigned char * )currentFrame) + r*720*2+4*c+ 0);

I_y1[r][2*c] = * (((unsigned char * )currentFrame) + r*720*2+4*c+ 1);

I_v1[r][c] = * (((unsigned char * )currentFrame) + r*720*2+4*c+ 2);

I_y1[r][2*c+1] = * (((unsigned char * )currentFrame) + r*720*2+4*c+ 3);

}

}

}

void write_uyvy (void * currentFrame)

{

int r, c;

int offset;

offset = 1;

for(r = 0; r < 480; r++)

{

for(c = 0; c < 360; c++)

{

// * (((unsigned char * )currentFrame) + offset)=I_temp[r][c] ;

//offset = offset + 2;

* (((unsigned char * )currentFrame) + r*720*2+4*c+ 0)= I_u[r][c] ;

* (((unsigned char * )currentFrame) + r*720*2+4*c+ 1)= I_y[r][2*c] ;

* (((unsigned char * )currentFrame) + r*720*2+4*c+ 2)= I_v[r][c] ;

* (((unsigned char * )currentFrame) + r*720*2+4*c+ 3)= I_y[r][2*c+1];


/*

if(r > row1 && r < row2 && c > col1 && c < col2)

* (((unsigned char * )currentFrame)+ offset) =I_temp[r][c];

else

* (((unsigned char * )currentFrame)+ offset) = 0;

offset++;

* (((unsigned char * )currentFrame)+ offset) = 0x80;

offset++;

*/

}

}

}

void copy_frame()

{

int r, c;

for(r = 0; r < 480; r++)

{


for(c = 0; c < 360; c++)

{

I_u[r][c] = I_u1[r][c];

I_y[r][2*c] = I_y1[r][2*c] ;

I_v[r][c] = I_v1[r][c] ;

I_y[r][2*c+1] = I_y1[r][2*c+1] ;

}

}

}

void frame_substract()

{

int arr[1000];

int r, c,m,n,p,q,l,i,j,ix,jx,a,t;

int cent_x,cent_y,cent_z;

int centroid_x[10000],centroid_y[10000];

int clow,chigh,rlow,rhigh;

int rtemp,ctemp;

int count,LL,LH,t_area;

int flag,ind;

int iblob=0;

int k=1;

for(t=0;t<1000;t++)


arr[t]=0;

for(r = 0; r < 480; r++)

{

for(c = 0; c < 360; c++)

{

I_u3[r][c]= I_u1[r][c] - I_u2[r][c] ;

I_y3[r][2*c] = I_y1[r][2*c] - I_y2[r][2*c];

I_v3[r][c] =I_v1[r][c] -I_v2[r][c] ;

I_y3[r][2*c+1] = I_y1[r][2*c+1]- I_y2[r][2*c+1];

/*//I_u[r][c]= I_u1[r][c] - I_u[r][c] ;

I_y[r][2*c] = I_y1[r][2*c] - I_y[r][2*c];

//I_v[r][c] =I_v1[r][c] -I_v[r][c] ;

I_y[r][2*c+1] = I_y1[r][2*c+1]- I_y[r][2*c+1];

*/

}

}

for(r = 0; r < 480; r++)

{

for(c = 0; c < 360; c++)

{

I_u2[r][c] = I_u1[r][c];

I_y2[r][2*c] = I_y1[r][2*c] ;

I_v2[r][c] = I_v1[r][c] ;

I_y2[r][2*c+1] = I_y1[r][2*c+1] ;

}

}


for(m = 0; m < 480; m++)

{

for(n = 0; n < 360; n++)

{

if((I_u3[m][n]<45 || I_u3[m][n]>200) && (I_y3[m][2*n]<45 || I_y3[m][2*n]>200) && (I_v3[m][n]<45 || I_v3[m][n]>200) && (I_y3[m][2*n+1]<45 || I_y3[m][2*n+1]>200))

{

I_u3[m][n] = 128 ;

I_y3[m][2*n] = 16 ;

I_v3[m][n] = 128 ;

I_y3[m][2*n+1] = 16;

}

}

}

/*

for (i=0;i<LAST_ROW;i++){

for (j=0;j<LAST_COL;j++){

I_y4[i][j]=0;}}

for (i=0;i<LAST_ROW;i++){

for (j=0;j<LAST_COL;j++){

if(I_y3[i][j]!=0 && I_y4[i][j]==0 ){

clow=j;

rlow=i;

flag=1;


chigh=clow+1;

rhigh=rlow+1;

while(flag==1){

flag=0;

if(chigh!=LAST_COL && rlow<LAST_ROW ){//along horz

for (rtemp =rlow;rtemp<=rhigh;rtemp++){

if (I_y3[rtemp][chigh+1]!=0){

chigh=chigh+1;

flag=1;

break;

}//end of the if I_y3

}//end for loop

}//end of if loop chigh

if(rhigh!=LAST_ROW && clow<LAST_COL ){//along vert

for (ctemp =clow;ctemp<=chigh;ctemp++){

if(I_y3[rhigh+1][ctemp]!=0){

rhigh=rhigh+1;

flag=1;

break;

}//end of the if I_y3

}////end for loop

}//end of if loop rhigh

}; //end of while

count=0;


LL=rhigh-rlow+1;

LH=chigh-clow+1;

t_area=LL*LH;

for (ix = rlow;ix<=rhigh;ix++)

for (jx = clow;jx<=chigh;jx++)

if(I_y3[ix][jx]!=0)

count=count +1;

if(count>(t_area/2)&& (t_area>1000) ){//% for selecting blob

iblob=iblob+1;

arr[k]=rlow;

arr[k+1]=clow;

arr[k+2]=rhigh;

arr[k+3]=chigh;

k=k+4;

for (ix = rlow;ix<=rhigh;ix++)

for (jx = clow;jx<=chigh;jx++)

I_y4[ix][jx]=iblob;

//end %for jx

//end % for ix

}//end%t_area

}//end of if(I_y3[i][j]!=0 && I_y4[i][j]

}//end of for j;

}//end of for iend;


//tracking

cent_x=0;

cent_y=0;

cent_z=0;

ind=1;

for( a=1;a<=4*iblob;a=a+4)

{

for (m=arr[a];m<=arr[a+2];m++)

{

for (n=arr[a+1];n<=arr[a+3];n++)

{

//for(m = 0; m < 480; m++)

// {

//for(n = 0; n < 360; n++)

//{

// if((I_u3[m][n]<4 || I_u3[m][n]>200) & (I_y3[m][2*n]<4 || I_y3[m][2*n]>200) & (I_v3[m][n] <4 || I_v3[m][n]>200) & (I_y3[m][2*n+1] <4 || I_y3[m][2*n+1]>200))

if((I_y3[m][n] <4 || I_y3[m][n]>200))

{

I_u3[m][n] = 128 ;

I_y3[m][2*n] = 16 ;

I_v3[m][n] = 128 ;

I_y3[m][2*n+1] = 16;

}

50

Page 59: Badripatro Report Part2 09307903

else

{

cent_x= cent_x + m ;

cent_y= cent_y + n ;

cent_z= cent_z + 1 ;

}

//centroid_x= (cent_x/cent_z);

//centroid_y=(cent_y/cent_z);

} //end of n

} //end of m

centroid_x[ind]= (cent_x/cent_z);

centroid_y[ind]=(cent_y/cent_z);

ind=ind+1;

}//end of k

//drawing rectangle

for(l=1;l<=ind;l++)

{

for(p =centroid_x[l]-10 ; p < centroid_x[l]+10; p++)

{

for(q = centroid_y[l]-10; q < centroid_y[l]+10; q++)

{

if(p== centroid_x[l]-10 || p==centroid_x[l]+9 || q ==centroid_y[l]-10 || q==centroid_y[l]+9)

{

I_u[p][q] = 255;

51

Page 60: Badripatro Report Part2 09307903

I_y[p][2*q] = 255;

I_v[p][q] = 255;

I_y[p][2*q+1] = 255;

}

else

{

I_u[p][q] = I_u[p][q];

I_y[p][2*q] = I_y[p][2*q];

I_v[p][q] = I_v[p][q];

I_y[p][2*q+1] = I_y[p][2*q+1];

}

}

}

}//end of l loop

*/

}//end of function
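The disabled block above detects blobs by bounding-box growth: a box seeded at an unlabelled foreground pixel expands rightward and downward while its border touches foreground, and is accepted as a blob when it is mostly filled and large enough. For reference, this is a compact, host-testable restatement of the same logic against a generic binary mask; the 50% fill-ratio test and the 1000-pixel minimum area mirror the listing, the boundary tests are tightened, and the function and parameter names are illustrative.

#define ROWS     480
#define COLS     720
#define MIN_AREA 1000

/* mask:  nonzero = foreground.   label: output blob labels (0 = none).
   boxes: {rlow, clow, rhigh, chigh} per blob.  Returns the blob count. */
int detect_blobs(const unsigned char mask[ROWS][COLS],
                 unsigned char label[ROWS][COLS],
                 int boxes[][4], int max_blobs)
{
    int i, j, r, c, nblobs = 0;

    for (i = 0; i < ROWS; i++)
        for (j = 0; j < COLS; j++)
            label[i][j] = 0;

    for (i = 0; i < ROWS; i++) {
        for (j = 0; j < COLS; j++) {
            int rlow, clow, rhigh, chigh, grown, count, area;
            if (mask[i][j] == 0 || label[i][j] != 0)
                continue;                      /* not a fresh seed */
            rlow = rhigh = i;
            clow = chigh = j;
            /* Grow while the right or bottom border touches foreground. */
            do {
                grown = 0;
                if (chigh + 1 < COLS)
                    for (r = rlow; r <= rhigh; r++)
                        if (mask[r][chigh + 1]) { chigh++; grown = 1; break; }
                if (rhigh + 1 < ROWS)
                    for (c = clow; c <= chigh; c++)
                        if (mask[rhigh + 1][c]) { rhigh++; grown = 1; break; }
            } while (grown);

            /* Keep the box only if it is mostly foreground and large. */
            count = 0;
            for (r = rlow; r <= rhigh; r++)
                for (c = clow; c <= chigh; c++)
                    if (mask[r][c]) count++;
            area = (rhigh - rlow + 1) * (chigh - clow + 1);
            if (count > area / 2 && area > MIN_AREA && nblobs < max_blobs) {
                boxes[nblobs][0] = rlow;  boxes[nblobs][1] = clow;
                boxes[nblobs][2] = rhigh; boxes[nblobs][3] = chigh;
                nblobs++;
                for (r = rlow; r <= rhigh; r++)
                    for (c = clow; c <= chigh; c++)
                        label[r][c] = (unsigned char)nblobs;
            }
        }
    }
    return nblobs;
}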

void tracking()
{
  int r, c, m, n, p, q;
  int cent_x, cent_y, cent_z;
  int centroid_x, centroid_y;
  int dim_x, dim_y;

  cent_x = 0;
  cent_y = 0;
  cent_z = 0;

  /* Classify each pixel of the difference image: background pixels
     are set to black (Y = 16, U = V = 128); foreground pixels
     contribute to the centroid accumulators. */
  for(m = 0; m < 480; m++)
  {
    for(n = 0; n < 360; n++)
    {
      if((I_u3[m][n] < 45 || I_u3[m][n] > 200) &&
         (I_y3[m][2*n] < 45 || I_y3[m][2*n] > 200) &&
         (I_v3[m][n] < 45 || I_v3[m][n] > 200) &&
         (I_y3[m][2*n+1] < 45 || I_y3[m][2*n+1] > 200))
      {
        I_u3[m][n]     = 128;
        I_y3[m][2*n]   = 16;
        I_v3[m][n]     = 128;
        I_y3[m][2*n+1] = 16;
      }
      else
      {
        cent_x = cent_x + m;
        cent_y = cent_y + n;
        cent_z = cent_z + 1;
      }
    }
  }

  /* Centre of mass of the foreground pixels, computed once after the
     scan.  (The draft divided inside the loop, which divides by zero
     whenever the first pixels scanned are background.) */
  if(cent_z == 0)
    return;                        /* no moving object in this frame */
  centroid_x = cent_x / cent_z;
  centroid_y = cent_y / cent_z;

  /* Draw a 20x20 white box centred on the centroid; interior pixels
     are left unchanged.  Assumes the centroid lies at least 10 pixels
     inside the frame border. */
  for(p = centroid_x-10; p < centroid_x+10; p++)
  {
    for(q = centroid_y-10; q < centroid_y+10; q++)
    {
      if(p == centroid_x-10 || p == centroid_x+9 ||
         q == centroid_y-10 || q == centroid_y+9)
      {
        I_u[p][q]     = 255;
        I_y[p][2*q]   = 255;
        I_v[p][q]     = 255;
        I_y[p][2*q+1] = 255;
      }
    }
  }
}
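In moment terms, tracking() computes the centre of mass (the first raw moment) of the foreground set F of the thresholded difference image:

\bar{m} = \frac{1}{N} \sum_{(m,n) \in F} m, \qquad
\bar{n} = \frac{1}{N} \sum_{(m,n) \in F} n, \qquad N = |F|,

where N is the count accumulated in cent_z. The integer division in the code truncates, so the drawn box may be offset from the true centroid by at most one pixel.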


References

[1] Morimoto, T.; Kiriyama, O.; Harada, Y., "Object tracking in video images based on image segmentation and pattern matching", IEEE conference proceedings, vol. 5, pp. 3215-3218, 2005.

[2] Yamaoka, K.; Morimoto, T.; Adachi, H.; Koide, T.; Mattausch, H.J., "Image segmentation and pattern matching based FPGA/ASIC implementation architecture of real-time object tracking", Design Automation, 2006, Asia and South Pacific Conference on, 6 pp., 24-27 Jan. 2006, doi: 10.1109/ASPDAC.2006.1594678.

[3] Li, Q.; Yang, S.; Zhu, S., "Image segmentation and major approaches", Computer Science and Automation Engineering (CSAE), 2011 IEEE International Conference on, vol. 2, pp. 465-468, 10-12 June 2011.

[4] Patra, D.; Santosh Kumar, K.; Chakraborty, D., "Object Tracking in Video Images Using Hybrid Segmentation Method and Pattern Matching", India Conference (INDICON), 2009 Annual IEEE, pp. 1-4, 18-20 Dec. 2009, doi: 10.1109/INDCON.2009.5409361.

[5] Watve, A.K., "Object tracking in video scenes", M.Tech. seminar, IIT Kharagpur, India, 2010.

[6] Uy, D.L., "An algorithm for image clusters detection and identification based on color for an autonomous mobile robot", Science and Education, Oak Ridge Inst., TN, DOE/OR/00033–T670, 1996.

[7] Bochem, A.; Herpers, R.; Kent, K.B., "Acceleration of Blob Detection within Images in Hardware", University of New Brunswick, Dec. 15, 2009, pp. 1-37.

[8] Kaspers, A., "Blob Detection", Biomedical Image Sciences, Image Sciences Institute, UMC Utrecht, May 5, 2011.


[9] Gupta, M., "Cell Identification by Blob Detection", UACEE International Journal of Advances in Electronics Engineering, vol. 2, issue 1, 2012.

[10] Hinz, S., "Fast and subpixel precise blob detection and attribution", Image Processing, 2005 (ICIP 2005), IEEE International Conference on, vol. 3, pp. III-457-460, 11-14 Sept. 2005, doi: 10.1109/ICIP.2005.1530427.

[11] Francois, A.R., "Real-time multi-resolution blob tracking", Technical Report IRIS-04-423, Institute for Robotics and Intelligent Systems, University of Southern California, July 2004.

[12] Mancas, M., et al., "Augmented Virtual Studio", Tech. Rep. 4, 2008, pp. 1, 3.

[13] Dharamadhat, T.; Thanasoontornlerk, K.; Kanongchaiyos, P., "Tracking object in video pictures based on background subtraction and image matching", Robotics and Biomimetics, 2008 (ROBIO 2008), IEEE International Conference on, pp. 1255-1260, 22-25 Feb. 2009, doi: 10.1109/ROBIO.2009.4913180.

[14] Piccardi, M., "Background subtraction techniques: a review", Systems, Man and Cybernetics, 2004 IEEE International Conference on, vol. 4, pp. 3099-3104, 10-13 Oct. 2004.

[15] Andrews, A., "Targeting multiple objects in real time", B.E. thesis, University of Calgary, Canada, October 1999.

[16] Saravanakumar, S.; Vadivel, A.; Saneem Ahmed, C.G., "Multiple human object tracking using background subtraction and shadow removal techniques", Signal and Image Processing (ICSIP), 2010 International Conference on, pp. 79-84, 15-17 Dec. 2010.

[17] ZuWhan, K., "Real time object tracking based on dynamic feature grouping with background subtraction", Computer Vision and Pattern Recognition, 2008 (CVPR 2008), IEEE Conference on, pp. 1-8, 23-28 June 2008, doi: 10.1109/CVPR.2008.4587551.

[18] Isard, M.; MacCormick, J., "BraMBLe: a Bayesian multiple-blob tracker", Computer Vision, 2001 (ICCV 2001), Proceedings of the Eighth IEEE International Conference on, vol. 2, pp. 34-41, 2001, doi: 10.1109/ICCV.2001.937594.


[19] Gonzalez, R.C., Woods, R.E., "Digital Image Processing", Second Edition, Prentice Hall, 2002.

[20] Haralick, R.M., Shapiro, L.G., "Computer and Robot Vision", Volume I, Addison-Wesley, 1992, pp. 28-48.

[21] Castagno, R., Ebrahimi, T., Kunt, M., "Video Segmentation Based on Multiple Features for Interactive Multimedia Applications", IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, no. 5, pp. 562-571, September 1998.

[22] Kaneko, T., Hori, O., "Feature selection for reliable tracking using template matching", in Proc. IEEE Intl. Conference on Computer Vision and Pattern Recognition (CVPR'03), vol. 1, June 2003, pp. 796-802.

[23] Bochem, A.; Herpers, R.; Kent, K.B., "Hardware Acceleration of BLOB Detection for Image Processing", Advances in Circuits, Electronics and Micro-Electronics (CENICS), 2010 Third International Conference on, pp. 28-33, 18-25 July 2010, doi: 10.1109/CENICS.2010.12.

[24] Mostafa, A., Mehdi, A., Mohammad, H., Ahmad, A., "Object Tracking in Video Sequence Using Background Modeling", Australian Journal of Basic and Applied Sciences, 5(5): 967-974, 2011.

[25] Babu, R.V.; Makur, A., "Object-based Surveillance Video Compression using Foreground Motion Compensation", Control, Automation, Robotics and Vision, 2006 (ICARCV '06), 9th International Conference on, pp. 1-6, 5-8 Dec. 2006.

[26] Comaniciu, D.; Ramesh, V.; Meer, P., "Real-time tracking of non-rigid objects using mean shift", Computer Vision and Pattern Recognition, 2000, Proceedings IEEE Conference on, vol. 2, pp. 142-149, 2000.

[27] Foresti, G.L., "A real-time system for video surveillance of unattended outdoor environments", IEEE Trans. Circuits and Systems for Video Technology, vol. 8, no. 6, pp. 697-704, 1998.

[28] Foresti, G.L., "Moving Target Detection and Classification from Real-Time Video", in Proceedings of IEEE Workshop on Application of Computer Vision, 1998.


[29] Elbadri, M.; Peterkin, R.; Groza, V.; Ionescu, D.; El Saddik, A., "Hardware support of JPEG", Electrical and Computer Engineering, 2005 Canadian Conference on, pp. 812-815, 1-4 May 2005.

[30] Deng, M.; Guan, Q.; Xu, S., "Intelligent video target tracking system based on DSP", Computational Problem-Solving (ICCP), 2010 International Conference on, pp. 366-369, 3-5 Dec. 2010.

[31] Liping, K.; Zhefeng, Z.; Gang, X., "The Hardware Design of Dual-Mode Wireless Video Surveillance System Based on DM6437", Networks Security, Wireless Communications and Trusted Computing (NSWCTC), 2010 Second International Conference on, vol. 1, pp. 546-549, 24-25 April 2010.

[32] Pescador, F.; Maturana, G.; Garrido, M.J.; Juarez, E.; Sanz, C., "An H.264 video decoder based on a DM6437 DSP", Consumer Electronics, 2009 (ICCE '09), Digest of Technical Papers, International Conference on, pp. 1-2, 10-14 Jan. 2009.

[33] Wang, Q.; Guan, Q.; Xu, S.; Tan, F., "A network intelligent video analysis system based on multimedia DSP", Communications, Circuits and Systems (ICCCAS), 2010 International Conference on, pp. 363-367, 28-30 July 2010.

[34] Kim, C., Hwang, J.N., "Object-based video abstraction for video surveillance system", IEEE Trans. Circuits and Systems for Video Technology, vol. 12, no. 12, pp. 1128-1138, Dec. 2002.

[35] Nishi, T., Fujiyoshi, H., "Object-based video coding using pixel state analysis", in IEEE Intl. Conference on Pattern Recognition, 2004.

[36] Pratt, W.K., "Digital Image Processing", Second Edition, John Wiley & Sons, New York, 1991, ISBN 0-471-85766-1.

[37] Wallace, G.K., "The JPEG still picture compression standard", Consumer Electronics, IEEE Transactions on, vol. 38, no. 1, pp. xviii-xxxiv, Feb. 1992.

[38] Seol, S.W., et al., "An automatic detection and tracking system of moving objects using double differential based motion estimation", Proc. of Int. Tech. Conf. Circ./Syst., Comput. and Comms. (ITC-CSCC 2003), pp. 260-263, 2003.


[39] Dwivedi, V., "JPEG Image Compression and Decompression with Modeling of DCT Coefficients on the Texas Instruments Video Processing Board TMS320DM6437", M.S. thesis, California State University, Sacramento, Summer 2010.

[40] Kapadia, P., "Car License Plate Recognition Using Template Matching Algorithm", Master's Project Report, California State University, Sacramento, Fall 2010.

[41] Gohil, N., "Car License Plate Detection", Master's Project Report, California State University, Sacramento, Fall 2010.

[42] Texas Instruments Inc., "TMS320DM6437 DVDP Getting Started Guide", Texas, July 2007.

[43] Texas Instruments Inc., "TMS320DM6437 Digital Media Processor", Texas, pp. 1-5, 211-234, June 2008.

[44] Texas Instruments Inc., "TMS320C64x+ DSP Cache User's Guide", Literature Number SPRU862A, Table 1-6, p. 23, October 2006.

[45] Texas Instruments Inc., "TMS320DM643x DMP Peripherals Overview Reference Guide", pp. 15-17, June 2007.

[46] Texas Instruments Inc., "TMS320C6000 Programmer's Guide", Texas, pp. 37-84, March 2000.

[47] Xilinx Inc., "The Xilinx LogiCORE IP RGB to YCrCb Color-Space Converter", pp. 1-5, July 2010.

[48] Texas Instruments Inc., "How to Use the VPBE and VPFE Driver on TMS320DM643x", Dallas, Texas, November 2007.

[49] Texas Instruments Inc., "TMS320C64x+ DSP Cache", User Guide, pp. 14-26, February 2009.

[50] Texas Instruments Technical Reference, "TMS320DM6437 Evaluation Module", Spectrum Digital, 2006.

[51] Keith Jack, "Video Demystified: A Handbook for the Digital Engineer", 4th Edition.


[52] Pawate, B.I. (Raj), "Developing Embedded Software using DaVinci & OMAP Technology".

[53] Al Bovik, Department of Electrical and Computer Engineering, The University of Texas, "Handbook of Image & Video Processing", Academic Press Series, 1999.

[54] Stephens, B.L., "Image Compression Algorithms", student thesis, California State University, Sacramento, August 1996.

[55] Berkeley Design Technology, Inc., "The Evolution of DSP Processors", http://www.bdti.com/articles/evolution.pdf, Nov. 2006.

[56] Berkeley Design Technology, Inc., "Choosing a Processor: Benchmark and Beyond", http://www.bdti.com/articles/20060301_TIDC_Choosing.pdf, Nov. 2006.

[57] University of Rochester, "DSP Architectures: Past, Present and Future", http://www.ece.rochester.edu/research/wcng/papers/CAN_r1.pdf, Nov. 2006.

[58] Steven W. Smith, "The Scientist and Engineer's Guide to Digital Signal Processing", Second Edition, California Technical Publishing, 1999.

[59] Texas Instruments Inc., "TMS320DM642 Technical Overview", Dallas, Texas, September 2002.


Acknowledgments

I express my sincere thanks and deep sense of gratitude to my supervisor, Prof. V. Rajbabu, for his invaluable guidance, inspiration, unremitting support, encouragement, and stimulating suggestions during the preparation of this report. His persistence and inspiration during the "ups and downs" of research, and his clarity and focus during the uncertainties, have been very helpful to me. Without his continuous encouragement and motivation, the present work would not have seen the light of day.

I thank all EI lab and TI-DSP lab members at IIT Bombay who have directly or indirectly helped me throughout my stay at IIT. I would also like to thank the department staff, the central library staff, and the computer facility staff for their assistance.

I would like to express my sincere thanks to Mr. Ajay Nandoriya and Mr. K. S. Nataraj for their help and support during the project work.

My family has, of course, been a source of faith and moral strength. I acknowledge the shower of blessings and love of my parents, Mr. Rajiba Lochana Patro and Mrs. Uma Rani Patro, and also of Godaborish Patro and Madhu Sundan Patro, for their unrelenting moral support in difficult times. I wish to express my deep gratitude to all of my friends and colleagues for their constant moral support, which made my stay at the institute pleasant. I have enjoyed every moment that I spent with all of you.

And finally, I am thankful to God, in whom I trust.

Date: Badri Narayan Patro