
Page 1: Keynote 2011

1

PROCEEDINGS

11,12 March 2011 www.care.ac.in

Page 2: Keynote 2011

2

ORGANISING COMMITTEE

CHIEF PATRON

Shri K. N. Ramajayam

Chairman G Narayanan Educational Trust

PATRON

Shri B. Prative Chend

Chief Executive Officer G Narayanan Educational Trust Group of Institutions

CHAIR PERSON

Dr. S. Padmanabha

Director G Narayanan Educational Trust Group of Institutions

CONVENOR

Dr. K. Murugu Mohan Kumar

Dean CARE School of Engineering

CO – CONVENOR

Dr. A. K. Santra

Dean CARE School of Computer Applications

Dr. S. Ramanathan

Dean CARE School of Business Management

SECRETARY

Dr. K. Rangnathan,

Professor, CARE School of Business Management

Page 3: Keynote 2011

3

JOINT SECRETARY

Dr. K. Revathy, HOD, Mathematics
Dr. S. Parthasarathy, HOD, Chemistry
Mr. P. Ravindran, HOD, Civil
Mrs. D. Vimala, HOD, MCA
Mr. A. Velsamy, HOD, MBA
Mr. K. Karunamurthy, HOD, Mechanical
Mrs. J. Maya, HOD, Material Science

MEMBERS

STAFF

Mrs. S. Santhi, Mr. B. MuthuKrishnan, Mr. M. Murali, Mr. M. Anish Alfred Vaz, Mr. C. Senthil Nathan, Mrs. J. R. Lekha, Ms. K. A. Sarah, Mr. D. Vasudevan, Mrs. S. Bhuvaneshwari, Mrs. M. Bhuvaneshwari, Mrs. K. Sathya Prabha, Ms. P. R. T Malathi, Ms. S. Akilandeswari

STUDENTS

M. Clarita (II MBA), C. Raja Guru (II MBA), M. Ranganathan (II MBA), M. Sneka (II MCA), M. Sooriyaa (II MCA), R. Sujitha (I MBA), P. Yazhina (I MBA), S. Balaji (I MBA), T. Dhivya (I MBA), J. Hariharan (I MBA), G. Maunndeeswari (I MCA), S. Manjul (I MCA), D. Sagesthwaran (I MCA), K. Guhanathan (II CIVIL), A. Balamurugan (II CIVIL), M. Alexzander (II MECH), P. Balaguru (II MECH), S. Manikandan (II MSE), R. Kurubakaran (II MSE), P. Vikram (II EEE), N.S. Vignesh (II EEE), T. M. Praveen Kumar (I MECH), P. Balasubramanian (I MECH), C. Santhosh Kumar Roa (I MSE), G. Madhura (I EEE), K. Vijay (I CIVIL)

Page 4: Keynote 2011

4

MESSAGE

Shri K. N. Ramajayam

Chairman, GNET

It is a great pleasure for the CARE group of Institutions to conduct its 1st Integrated National Conference, titled 'Keynote'. "Education should bring to light the ideal of the individual", said J. P. Richter, and I hope this endeavour will spread the light of knowledge far and wide. I also hope that this conference will be a good platform for professionals, researchers and students of various disciplines to gain and exhibit knowledge. I wish the organizers and participants a grand success.

Date: 07.03.2011 K. N. Ramajayam

Page 5: Keynote 2011

5

MESSAGE

Shri B. Prative Chend

Chief Executive Officer, GNET

The social focus of education has been just on graduating and getting a high-paying job. In the process, other countries hold most of the intellectual property rights, and we pay a premium to use their ideas in our products. Is this the way forward? We believe our country has an abundance of brilliant minds. KEYNOTE is our little effort to change these dynamics by bringing out great ideas from our professionals, research scholars, faculty and students and using them to build cost-effective, useful products and processes. Together let us set the tone for the future.

Date: 07.03.2011 B. Prative Chend

Page 6: Keynote 2011

6

MESSAGE

Dr. S. Padmanabha

Director, GNET-CARE

This institute is set in a green, beautiful location in the Cauvery basin, where you can be away from the world yet still be a part of it, and where new ideas and creativity can flourish in the capable hands of a multicultural faculty. We have much to offer to fuel your mind and to keep pace with today's most up-to-date and refined knowledge of basic and advanced engineering, technology and application-oriented concepts at our esteemed institute. "KEY NOTE-2011" is a National Conference being organized on 11-12 March 2011 on the themes "Technology for Sustainable Tomorrow" by the School of Engineering, "Business Strategies for Sustainable Economy" by the School of Business Management, and "Computer Intelligence & Applications" organized jointly by the School of Computer Applications and the Mathematics Department. I wish all the delegates, the authors presenting papers, and the participants at large good luck and a pleasant stay with value addition.

Date: 07.03.2011 Dr. S. Padmanabha

Page 7: Keynote 2011

7

CONTENTS

S. No | Description
1 | Technical Session – I (11.03.2011 : 11.15 am – 01.00 pm)
2 | Technical Session – II (11.03.2011 : 02.15 pm – 04.00 pm)
3 | Technical Session – III (12.03.2011 : 09.30 am – 11.30 am)
4 | Technical Session – IV (12.03.2011 : 11.45 am – 01.45 pm)

School of Computer Applications – Computational Intelligence and Applications
5 | Technical Session – I (11.03.2011 : 11.15 am – 01.00 pm)
6 | Technical Session – II (11.03.2011 : 02.15 pm – 04.00 pm)
7 | Technical Session – III (12.03.2011 : 09.30 am – 11.30 am)
8 | Technical Session – IV (12.03.2011 : 11.45 am – 01.45 pm)

School of Engineering – Technology for Sustainable Tomorrow
9 | Technical Session – I (11.03.2011 : 11.15 am – 01.00 pm)
10 | Technical Session – II (11.03.2011 : 02.15 pm – 04.00 pm)
11 | Technical Session – III (12.03.2011 : 09.30 am – 11.30 am)
12 | Technical Session – IV (12.03.2011 : 11.45 am – 01.45 pm)

School of Business Management
S. No | Paper ID | Paper Title
13 | MBA-MAR-03 | GREEN MARKETING INITIATIVES BY INDIAN CORPORATE- PROSPECTS AND CONFRONTS IN FACING GLOBAL COMPETITION
14 | MBA-MAR-06 | GREEN MARKETING
16 | MBA-FIN-02 | RISK ANALYSIS IN INVESTMENT PORTFOLIOS
17 | MBA-FIN-05 | RISK ANALYSIS IN INVESTMENT PORTFOLIO
18 | MBA-FIN-06 | RISK ANALYSIS IN INVESTMENT PORTFOLIO
19 | MBA-FIN-07 | RISK ANALYSIS IN INVESTMENT PORTFOLIO
20 | MBA-FIN-03 | RISK AND RETURN ANALYSIS SELECTED MUTUAL FUND COMPANIES IN INDIA
21 | MBA-HR-05 | EMPLOYEE PERCEPTION SURVEY- A EFFECTIVE TOOL FOR AN ENTREPRENEUR IN A CURRENT SCENARIO
22 | MBA-HR-06 | RECRUITMENT AND RETENTION IN A GARMENT INDUSTRY
23 | MBA-HR-07 | EMPLOYEE SATISFACTION
24 | MBA-HR-02 | IMPACT OF ORGANISATION CULTURE ON EMPLOYEE PERFORMANCE
25 | MBA-HR-03 | ANALYSIS ON THE EFFECTIVENESS OF PERFORMANCE APPRAISAL
26 | MBA-HR-04 | TALENT HUNT IN INDIAN CORPORATE THROUGH TALENT MANAGEMENT PRACTICES: RETAINING EMPLOYEES FOR GAINING COMPETITIVE ADVANTAGE
27 | MBA-ED-01 | A STUDY ON THE PERCEPTION OF ENTREPRENEURS' TOWARDS ENTREPRENEURSHIP DEVELOPMENT PROGRAMME ORGANISED BY SISI

Page 8: Keynote 2011

28 | MBA-ED-04 | PROFILE OF A GLOBAL ENTREPRENEUR IN THE PRESENT MARKET SCENARIO
29 | MBA-GEN-01 | QUALITY MANAGEMENT SYSTEM IN RESEARCH AND DEVELOPMENT
30 | MBA-FIN-04 | MICROFINANCE IS BUT ONE STRATEGY BATTLING AND IMMENSE PROBLEM
31 | MBA-ED-03 | TO STUDY THE ROLE GOVERNMENT SUPPORTIVE THAT AFFECTS THE GROWTH OF WOMEN ENTREPRENEUR IN SMALL SCALE SECTOR

School of Computer Applications
S. No | Paper ID | Paper Title
32 | SCA47 | COMPONENT ANALYSIS BASED FACIAL EXPRESSION RECOGNITION
33 | SCA48 | A HYBRID COOPERATIVE QUANTUM-BEHAVED PSO ALGORITHM FOR IMAGE SEGMENTATION USING MULTILEVEL THRESHOLDING
34 | SCA61 | BIOMETRIC AUTHENTICATION IN PERSONAL IDENTIFICATION
35 | SCA65 | THE RESEARCH DETECTION ALGORITHMS USING MULTIMODAL SIGNAL AND IMAGE ANALYSIS
36 | SCA71 | A FAST IMAGE COMPRESSION ALGORITHM BASED ON SPIHT
37 | SCA58 | AN ADAPTIVE MOBILE LIVE VIDEO STREAMING
38 | SCA64 | A COMPREHENSIVE STUDY OF CREDIT CARD FRAUD DETECTION TECHNIQUES USING STATISTICS TOOLS
39 | SCA68 | A COMPARATIVE STUDY OF ALGORITHMS FOR FOREST FIRE DETECTION
40 | SCA82 | IMAGE STEGNOGRAPHY USING PIXEL INDICATOR TECHNIQUE
41 | SCA22 | CLOUD COMPUTING SYSTEM ARCHITECTURE
42 | SCA46 | ISSUES IN CLOUD COMPUTING SOLVED USING MIDDLE SERVER TECHNOLOGY
43 | SCA63 | ANALYSIS AND IMPLEMENTATION OF A SECURE MOBILE INSTANT MESSENGER (JARGON) USING JABBER SERVER OVER EXISTING PROTOCOLS
44 | SCA75 | CLOUD COMPUTING – THE VIRTUAL WORLD
45 | SCA83 | CONJUGATE GRADIENT ALGORITHMS IN SOLVING INTEGER LINEAR PROGRAMMING OPTIMISATION PROBLEMS
46 | SCA85 | A SURVEY ON META-HEURISTICS ALGORITHMS
47 | SCA01 | DISTRIBUTED VIDEO ENCODING FOR WIRELESS LOW-POWER SURVEILLANCE NETWORK
48 | SCA02 | J2ME-BASED WIRELESS INTELLIGENT VIDEO SURVEILLANCE SYSTEM USING MOVING OBJECT RECOGNITION TECHNOLOGY
49 | SCA40 | THE NS-2 SIMULATOR BASED IMPLEMENTATION AND PERFORMANCE ANALYSES OF MANYCAST QOS ROUTING ALGORITHM
50 | SCA51 | STUDY OF PSO IN CRYPTOGRAPHIC SECURITY ISSUES
51 | SCA55 | ENHANCING IRIS BIOMETRIC RECOGNITION SYSTEM WITH CRYPTOGRAPHY AND ERROR CORRECTION CODES
52 | SCA80 | THE NEW PERSPECTIVES IN MOBILE BANKING
53 | SCA84 | PERFORMANCE EVALUATION OF DESTINATION SEQUENCED DISTANCE VECTOR AND DYNAMIC SOURCE ROUTING PROTOCOL IN AD-HOC NETWORKS

School of Engineering
S. No | Paper ID | Paper Title
54 | SCE01 | ON LINE VEHICLE TRACKING USE GIS & GPS
55 | SCE02 | USAGE OF DRIVE TRAINS AS AUXILLARY ENGINES IN TRUCKS
56 | SCE03 | SENSOR WEB DOMESTIC LPG GAS LEAKAGE DETECTION IN DOMESTIC SYSTEM
57 | SCE04 | IMPLEMENTATION OF ZERO VOLTAGE SWITCHING RESONANT
58 | SCE05 | THREE PORT SERIES RESONANT DC-DC CONVERTER TO INTERFACE RENEWABLE ENERGY SOURCES WITH BI DIRECTIONAL LOAD AND ENERGY STORAGE PORTS

Page 9: Keynote 2011

59 | SCE06 | IMPLEMENTATION OF ZERO VOLTAGE TRANSITION CURRENT FED FULL BRIDGE PWM CONVERTER WITH BOOST CONVERTER
60 | SCE07 | DETECTING ABANDONED OBJECTS WITH A MOVING CAMERA
61 | SCE08 | REPAIR AND REHABILITATION
62 | SCE09 | LOW COST HOUSING
63 | SCE10 | COST EFFECTIVE CONSTRUCTION TECHNOLOGY TO MITIGATE THE CLIMATE CHANGE
64 | SCE11 | EVALUATION OF STRESS CRACKING RESISTANCE OF VARIOUS HDPE DRAINAGE GEO NET
65 | SCE12 | RECENT CONSTRUCTION MATERIAL – ULTRA HIGH STRENGTH CONCRETE
66 | SCE13 | AUTOMATION IN WASTE WATER TREATMENT
67 | SCE14 | SEISMIC BEHAVIOR OF REINFORCED CONCRETE BEAM
68 | SCE15 | WATERSHED MANAGEMENT
69 | SCE16 | FOOT OF NANOTECHNOLOGY
70 | SCE17 | INTELLIGENT TRANSPORT SYSTEM
71 | SCE18 | OPTIMISATION OF MACHINING PARAMETERS FOR FORM TOLERANCE IN EDM FOR ALUMINIUM METAL MATRIX COMPOSITES
72 | SCE19 | OPTIMISATION OF SQUEEZE CASTING PROCESS PARAMETER FOR WEAR USING SIMULATED ANNEALING
73 | SCE20 | OPTIMISATION OF NDT TESTING WITH RESPECT TO DIAMETER AND DENSITY CHANGES
74 | SCE21 | OPTIMAL MANIPULATOR DESIGN USING INTELLIGENT TECHNIQUES (GA AND DE)
75 | SCE22 | STUDY OF PROJECT MANAGEMENT IN CAR INFOTAINMENT
76 | SCE23 | HEAT TRANSFER AUGMENTATION IN A PILOT SIMULATED SOLAR POND
77 | SCE24 | PETRI NET METHODS: A SURVEY IN MODELING AND SIMULATION OF PROJECTS
78 | SCE25 | QUALITY SUSTENANCE ON VENDOR AND INVENTORY PRODUCTS
79 | SCE26 | EFFECT ON SURFACE ROUGHNESS OF TURNING PROCESS IN CNC LATHE USING NANO FLUID AS A COOLANT
80 | SCE27 | VARIOUS CO-GENERATION SYSTEMS – A CASE STUDY
81 | SCE28 | THERMAL CONDUCTIVITY IMPROVEMENT OF LOW TEMPERATURE ENERGY STORAGE SYSTEM USING NANO-PARTICLE
82 | SCE29 | DESIGN OF JIGS, FIXTURES, GAUGING AND THE PROCESS SHEET PLAN FOR THE PRODUCTION OF AUXILIARY GEAR BOX-MARK IV GEARS AND SHAFTS

Page 10: Keynote 2011

10

CARE School of Business Management

National Conference On

Business Strategies on Sustainable Economy

Technical Session-I    Venue: H3 (Drawing Hall, 1st Floor)
Date: 11.3.2011    Time: 11.15 am – 01.00 pm    Major Area: Marketing

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | MBA-MAR-03 | GREEN MARKETING INITIATIVES BY INDIAN CORPORATE- PROSPECTS AND CONFRONTS IN FACING GLOBAL COMPETITION | Miss. Sarunya.T & Miss. Saranya.D | M.I.E.T. Engineering College | Trichy / Tamilnadu
2 | MBA-MAR-06 | GREEN MARKETING | S. Paul Raj (Student) | Sudarsan Engineering College | Pudukkottai / Tamilnadu
3 | MBA-MAR-08 | GREEN MARKETING | Nathania Savielle, Hameetha | Holy Cross College | Trichy / Tamilnadu

Page 11: Keynote 2011

11

CARE School of Business Management

National Conference On

Business Strategies on Sustainable Economy

Technical Session-II    Venue: H3 (Drawing Hall, 1st Floor)
Date: 11.3.2011    Time: 02.15 pm – 04.00 pm    Major Area: Finance

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | MBA-FIN-02 | RISK ANALYSIS IN INVESTMENT PORTFOLIOS | S. Jeroke Sharlin | Sudharsan Engineering College | Pudukkottai / Tamilnadu
2 | MBA-FIN-05 | RISK ANALYSIS IN INVESTMENT PORTFOLIO | Suman | Holy Cross College | Trichy / Tamilnadu
3 | MBA-FIN-06 | RISK ANALYSIS IN INVESTMENT PORTFOLIO | Mrs. Reshma Sibichan | Garden City College | Bangalore / Karnataka
4 | MBA-FIN-07 | RISK ANALYSIS IN INVESTMENT PORTFOLIO | C. Hemalatha | P.A. College Of Engineering Technology | Pollachi / Tamilnadu
5 | MBA-FIN-03 | RISK AND RETURN ANALYSIS SELECTED MUTUAL FUND COMPANIES IN INDIA | Gopinath S.T., Shenbagavallir., Anu | Anna University | Trichy / Tamilnadu

Page 12: Keynote 2011

12

CARE School of Business Management

National Conference On

Business Strategies on Sustainable Economy

Technical Session-III    Venue: H1 (New Library Building)
Date: 12.3.2011    Time: 09.30 am – 11.30 am    Major Area: HR Management

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | MBA-HR-05 | EMPLOYEE PERCEPTION SURVEY- A EFFECTIVE TOOL FOR AN ENTREPRENEUR IN A CURRENT SCENARIO | Ms. Shubha N, Prof. S.A. Vasantha Kumara, Ms. Reena Y.A | Dayananda Sagar College Of Engineering | Bangalore / Karnataka
2 | MBA-HR-06 | RECRUITMENT AND RETENTION IN A GARMENT INDUSTRY | Mohammed Ziaulla, S A Vasantha Kumara, G. Mahendra | DSCE | Bangalore / Karnataka
3 | MBA-HR-07 | EMPLOYEE SATISFACTION | Anju. S.B, Dr. H. Ramakrishna, Kaushik. M.K | DSCE | Bangalore / Karnataka
4 | MBA-HR-02 | IMPACT OF ORGANISATION CULTURE ON EMPLOYEE PERFORMANCE | Naveen Kumar P, S A Vasanthakumara, Darshan Godbole | DSCE | Bangalore / Karnataka
5 | MBA-HR-03 | ANALYSIS ON THE EFFECTIVENESS OF PERFORMANCE APPRAISAL | Asha D.B, S. A. Vasantha Kumara | Dayananda Sagar College Of Engineering | Bangalore / Karnataka
6 | MBA-HR-04 | TALENT HUNT IN INDIAN CORPORATE THROUGH TALENT MANAGEMENT PRACTICES: RETAINING EMPLOYEES FOR GAINING COMPETITIVE ADVANTAGE | Miss V.Sujo & Miss. M.Mubeena Begum | M.I.E.T. Engineering College | Trichy / Tamilnadu

Page 13: Keynote 2011

13

CARE School of Business Management

National Conference On

Business Strategies on Sustainable Economy

Technical Session-IV    Venue: H3 (Drawing Hall, 1st Floor)
Date: 12.3.2011    Time: 11.45 am – 01.45 pm    Major Area: General Management / ED

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | MBA-ED-01 | A STUDY ON THE PERCEPTION OF ENTREPRENEURS' TOWARDS ENTREPRENEURSHIP DEVELOPMENT PROGRAMME ORGANISED BY SISI | Dr. B.Balamurugan | Hallmark Business School | Trichy / Tamilnadu
2 | MBA-ED-04 | PROFILE OF A GLOBAL ENTREPRENEUR IN THE PRESENT MARKET SCENARIO | Dr. Malaramannan | Srinivasa Engineering College | Perambalur / Tamilnadu
3 | MBA-GEN-01 | QUALITY MANAGEMENT SYSTEM IN RESEARCH AND DEVELOPMENT | Krishna S Gudi, Dr N S Kumar, R.V.Praveena Gowda | DSCE | Bangalore / Karnataka
4 | MBA-FIN-04 | MICROFINANCE IS BUT ONE STRATEGY BATTLING AND IMMENSE PROBLEM | M. Samtha | Holy Cross College | Trichy / Tamilnadu
5 | MBA-ED-03 | TO STUDY THE ROLE GOVERNMENT SUPPORTIVE THAT AFFECTS THE GROWTH OF WOMEN ENTREPRENEUR IN SMALL SCALE SECTOR | Dr.Chitra Devi | Vel Sri Ranga Sanku College | Avadi / Tamilnadu

Page 14: Keynote 2011

14

CARE School of Computer Applications

National Conference On

Computer Intelligence and Applications

Technical Session-I    Venue: H2 (Drawing Hall, 1st Floor)
Date: 11.3.2011    Time: 11.15 am – 01.00 pm    Major Area: Image Processing

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | SCA47 | COMPONENT ANALYSIS BASED FACIAL EXPRESSION RECOGNITION | Naresh Allam, Punithavathy Mohan, B. V. Santhosh Krishna | Hindustan Institute Of Technology And Science | Chennai / Tamilnadu
2 | SCA48 | A HYBRID COOPERATIVE QUANTUM-BEHAVED PSO ALGORITHM FOR IMAGE SEGMENTATION USING MULTILEVEL THRESHOLDING | J. Manokaran, K. Usha Kingsly Devi | Anna University Of Technology | Tirunelveli / Tamilnadu
3 | SCA61 | BIOMETRIC AUTHENTICATION IN PERSONAL IDENTIFICATION | Rajendiran Manojkanth, B. Mari Kesava Venkatesh | PSNA College Of Engineering And Technology | Dindigul / Tamilnadu
4 | SCA65 | THE RESEARCH DETECTION ALGORITHMS USING MULTIMODAL SIGNAL AND IMAGE ANALYSIS | K. Baskar, K. Gnanathangavelu, S. Pradeep | Bharathidasan University Technology Park, Periyar E.V.R College | Trichy / Tamilnadu
5 | SCA71 | A FAST IMAGE COMPRESSION ALGORITHM BASED ON SPIHT | S.Balamurugan, G.Saravanan | Prist University | Thanjavur / Tamilnadu

Page 15: Keynote 2011

15

CARE School of Computer Applications

National Conference On

Computer Intelligence and Applications

Technical Session-II    Venue: H1 (New Library Building)
Date: 11.3.2011    Time: 02.15 pm – 04.00 pm    Major Area: Real Time Applications

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | SCA58 | AN ADAPTIVE MOBILE LIVE VIDEO STREAMING | K. Senthil | Pbcet College Of Engineering And Technology | Trichy / Tamilnadu
2 | SCA64 | A COMPREHENSIVE STUDY OF CREDIT CARD FRAUD DETECTION TECHNIQUES USING STATISTICS TOOLS | V. Mahalakshmi, J. Maria Shilpa, K. Baskar | Bharathidasan University Technology Park, Periyar E.V.R College | Trichy / Tamilnadu
3 | SCA68 | A COMPARATIVE STUDY OF ALGORITHMS FOR FOREST FIRE DETECTION | K.Baskar, Prof. Geetha, Prof. Shantha Robinson | Periyar E.V.R College | Trichy / Tamilnadu
4 | SCA82 | IMAGE STEGNOGRAPHY USING PIXEL INDICATOR TECHNIQUE | S. Sangeetha | J.J. College Of Arts And Science | Puddukottai / Tamilnadu

Page 16: Keynote 2011

16

CARE School of Computer Applications

National Conference On

Computer Intelligence and Applications

Technical Session-III    Venue: H2 (Drawing Hall, 1st Floor)
Date: 12.3.2011    Time: 09.30 am – 11.30 am    Major Area: Cloud Computing

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | SCA22 | CLOUD COMPUTING SYSTEM ARCHITECTURE | A.Jainulabudeen, B. Mohamed Faize Basha | Jamal Mohammed College | Trichy / Tamilnadu
2 | SCA46 | ISSUES IN CLOUD COMPUTING SOLVED USING MIDDLE SERVER TECHNOLOGY | Palivela Hemant, Nitin.P.Chawande, Hemantwani | Annasaheb Chudaman Patil College Of Engineering | Mumbai / Maharashtra
3 | SCA63 | ANALYSIS AND IMPLEMENTATION OF A SECURE MOBILE INSTANT MESSENGER (JARGON) USING JABBER SERVER OVER EXISTING PROTOCOLS | Padmini Priya, S. Chandra | VIT University | Vellore / Tamilnadu
4 | SCA75 | CLOUD COMPUTING – THE VIRTUAL WORLD | S. Sabarirajan | JJ College Of Engineering & Technology | Trichy / Tamilnadu
5 | SCA83 | CONJUGATE GRADIENT ALGORITHMS IN SOLVING INTEGER LINEAR PROGRAMMING OPTIMIZATION PROBLEMS | Dr. G. M. Nasira, S. Ashok Kumar, M. Siva Kumar | Government Arts College, Salem; Sasurie College Of Engineering, Vijayamangalam, Tirupur | Salem / Tamil Nadu
6 | SCA85 | A SURVEY ON META-HEURISTICS ALGORITHMS | S. Jayasankari, Dr. G.M. Nasira, P. Dhakshinamoorthy, M. Sivakumar | Government Arts College, Salem; Sasurie College Of Engineering, Vijayamangalam, Tirupur | Salem / Tamil Nadu

Page 17: Keynote 2011

17

CARE School of Computer Applications

National Conference On

Computer Intelligence and Applications

Technical Session-IV    Venue: H2 (Drawing Hall, 1st Floor)
Date: 12.3.2011    Time: 11.45 am – 01.45 pm    Major Area: Networking & Security Issues

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | SCA01 | DISTRIBUTED VIDEO ENCODING FOR WIRELESS LOW-POWER SURVEILLANCE NETWORK | G. Bhargav, R.V. Praveena Gowda | Dayanand Sagar College Of Engineering | Bangalore / Karnataka
2 | SCA02 | J2ME-BASED WIRELESS INTELLIGENT VIDEO SURVEILLANCE SYSTEM USING MOVING OBJECT RECOGNITION TECHNOLOGY | G. Bhargav, R.V. Praveena Gowda | Dayanand Sagar College Of Engineering | Bangalore / Karnataka
3 | SCA40 | THE NS-2 SIMULATOR BASED IMPLEMENTATION AND PERFORMANCE ANALYSES OF MANYCAST QOS ROUTING ALGORITHM | Dr. K. Sri Rama Krishna, S. Chittibabu, J. Rajeswara Rao | V.R Siddhartha Engineering | Vijayawada / Andhra Pradesh
4 | SCA51 | STUDY OF PSO IN CRYPTOGRAPHIC SECURITY ISSUES | R. Ahalya | Bharathidasan University | Trichy / Tamilnadu
5 | SCA55 | ENHANCING IRIS BIOMETRIC RECOGNITION SYSTEM WITH CRYPTOGRAPHY AND ERROR CORRECTION CODES | J. Martin Sahayaraj | Prist University | Thanjavur / Tamilnadu
6 | SCA80 | THE NEW PERSPECTIVES IN MOBILE BANKING | P.Sathya, R.Gobi, Dr.E.Kirubakaran | Jayaram College Of Engg. & Tech, Bharathidasan University & BHEL | Trichy / Tamilnadu
7 | SCA84 | PERFORMANCE EVALUATION OF DESTINATION SEQUENCED DISTANCE VECTOR AND DYNAMIC SOURCE ROUTING PROTOCOL IN AD-HOC NETWORKS | Dr. G. M. Nasira, S. Vijayakumar, P. Dhakshinamoorthy | Government Arts College, Salem; Sasurie College Of Engineering, Vijayamangalam, Tirupur | Salem / Tamil Nadu

Page 18: Keynote 2011

18

CARE School of Engineering

National Conference On

Technology for Sustainable Tomorrow

Technical Session-I    Venue: H1 (New Library Building)
Date: 11.3.2011    Time: 11.15 am – 01.00 pm    Major Area: EEE

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | SCE01 | ON LINE VEHICLE TRACKING USE GIS & GPS | V.Naga Sushimitha, S.Meenakshi | MKCE | Karur / TamilNadu
2 | SCE02 | USAGE OF DRIVE TRAINS AS AUXILLARY ENGINES IN TRUCKS | R.Venkatraman, B.Vasanth | Rajalakshmi Engineering College | Chennai / TamilNadu
3 | SCE03 | SENSOR WEB DOMESTIC LPG GAS LEAKAGE DETECTION IN DOMESTIC SYSTEM | N.Chitra Devi, V.Renganathan, P.Sathish | Kumaraguru College Of Technology | Coimbatore / TamilNadu
4 | SCE04 | IMPLEMENTATION OF ZERO VOLTAGE SWITCHING RESONANT | N. Dhivya, M.Murugan | KSR College Of Engineering | Namakkal / TamilNadu
5 | SCE05 | THREE PORT SERIES RESONANT DC-DC CONVERTER TO INTERFACE RENEWABLE ENERGY SOURCES WITH BI DIRECTIONAL LOAD AND ENERGY STORAGE PORTS | P.Karthikeyan | KSR College Of Engineering | Namakkal / TamilNadu
6 | SCE06 | IMPLEMENTATION OF ZERO VOLTAGE TRANSITION CURRENT FED FULL BRIDGE PWM CONVERTER WITH BOOST CONVERTER | C. Devigarani, Dr. I. Gnanambal, S. Mahendran | KSR College Of Engineering | Namakkal / TamilNadu
7 | SCE07 | DETECTING ABANDONED OBJECTS WITH A MOVING CAMERA | J.Vikram Prohit | SMK Fomra Institute Of Technology | Chennai / TamilNadu

Page 19: Keynote 2011

19

CARE School of Engineering

National Conference On

Technology for Sustainable Tomorrow

Technical Session-II    Venue: H2 (Drawing Hall, 1st Floor)
Date: 11.3.2011    Time: 02.15 pm – 04.00 pm    Major Area: Civil

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | SCE08 | REPAIR AND REHABILITATION | P.M. Mohammed Haneef, C. Periathirumal | AUT-Thirukuvalai | Nagapattinam / TamilNadu
2 | SCE09 | LOW COST HOUSING | Sandeep Kumar & M. Vinodh Kumar | Jayam College of Engineering | Dharmapuri / TamilNadu
3 | SCE10 | COST EFFECTIVE CONSTRUCTION TECHNOLOGY TO MITIGATE THE CLIMATE CHANGE | G. Parthipan, R. Lakshmikanth | Periyar Maniammai University | Thanjavur / TamilNadu
4 | SCE11 | EVALUATION OF STRESS CRACKING RESISTANCE OF VARIOUS HDPE DRAINAGE GEO NET | Sheik Imam S, Karthick C | PSNA College Engineering | Coimbatore / TamilNadu
5 | SCE12 | RECENT CONSTRUCTION MATERIAL – ULTRA HIGH STRENGTH CONCRETE | M. Sokkalingam, P. Gowtham | JJ College of Engineering | Trichy / TamilNadu
6 | SCE13 | AUTOMATION IN WASTE WATER TREATMENT | D. Subashri, A. Sangeetha | Periyar Maniammai University | Thanjavur / TamilNadu
7 | SCE14 | SEISMIC BEHAVIOR OF REINFORCED CONCRETE BEAM | P. C. Karthick | Institute of Road Transport and Technology | Erode / TamilNadu

Page 20: Keynote 2011

20

CARE School of Engineering

National Conference On

Technology for Sustainable Tomorrow

Technical Session-III    Venue: H3 (Drawing Hall, 1st Floor)
Date: 12.3.2011    Time: 09.30 am – 11.30 am    Major Area: Civil / Mechanical

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | SCE15 | WATERSHED MANAGEMENT | P.Satishkumar, M.Munesh | AUT-Thirukuvalai | Nagapattinam / TamilNadu
2 | SCE16 | FOOT OF NANOTECHNOLOGY | C.Justin Raj, R. Vinoth Kumar | AUT-Thirukuvalai | Nagapattinam / TamilNadu
3 | SCE17 | INTELLIGENT TRANSPORT SYSTEM | B Vijayakumar, M Ranjith | CARE School of Engineering | Trichy / TamilNadu
4 | SCE18 | OPTIMISATION OF MACHINING PARAMETERS FOR FORM TOLERANCE IN EDM FOR ALUMINIUM METAL MATRIX COMPOSITES | Raja T | Prist University | Thanjavur / TamilNadu
5 | SCE19 | OPTIMISATION OF SQUEEZE CASTING PROCESS PARAMETER FOR WEAR USING SIMULATED ANNEALING | Vinayaka Murthy V | Prist University | Thanjavur / TamilNadu
6 | SCE20 | OPTIMISATION OF NDT TESTING WITH RESPECT TO DIAMETER AND DENSITY CHANGES | Baskaran R | Prist University | Thanjavur / TamilNadu
7 | SCE21 | OPTIMAL MANIPULATOR DESIGN USING INTELLIGENT TECHNIQUES (GA AND DE) | P Rajesh | JJ College of Engineering | Trichy / TamilNadu
8 | SCE22 | STUDY OF PROJECT MANAGEMENT IN CAR INFOTAINMENT | R Mukesh, R V Praveena Gowda | Dayanand Sagar | Bengaluru / Karnataka

Page 21: Keynote 2011

21

CARE School of Engineering

National Conference On

Technology for Sustainable Tomorrow

Technical Session-IV    Venue: H1 (New Library Building)
Date: 12.3.2011    Time: 11.45 am – 01.45 pm    Major Area: Mechanical

S.No | Paper ID | Paper Title | Author(s) | Institution | Place/State
1 | SCE23 | HEAT TRANSFER AUGMENTATION IN A PILOT SIMULATED SOLAR POND | Mugesh Babu K, Prakash K | JJ College of Engineering | Trichy / TamilNadu
2 | SCE24 | PETRI NET METHODS: A SURVEY IN MODELING AND SIMULATION OF PROJECTS | Rashmi L. Malghan, M.B. Kiran | Dayanand Sagar College of Engineering | Bengaluru / Karnataka
3 | SCE25 | QUALITY SUSTENANCE ON VENDOR AND INVENTORY PRODUCTS | Vigneshwar Shankar, R.V. Praveen Gowda, Dr. H.K. Ramakrishna | Dayanand Sagar College of Engineering | Bengaluru / Karnataka
4 | SCE26 | EFFECT ON SURFACE ROUGHNESS OF TURNING PROCESS IN CNC LATHE USING NANO FLUID AS A COOLANT | G. Mahesh, Dr. Murugu Mohan Kumar | Indira Ganeshesan College of Engineering | Trichy / TamilNadu
5 | SCE27 | VARIOUS CO-GENERATION SYSTEMS – A CASE STUDY | S. Vignesh, S. Vignesh Kumar | JJ College of Engineering | Trichy / TamilNadu
6 | SCE28 | THERMAL CONDUCTIVITY IMPROVEMENT OF LOW TEMPERATURE ENERGY STORAGE SYSTEM USING NANO-PARTICLE | K Karunamurthy, K Murugu Mohan Kumar, R Suresh Isravel, A L Vignesh | JJ College of Engineering | Trichy / TamilNadu
7 | SCE29 | DESIGN OF JIGS, FIXTURES, GAUGING AND THE PROCESS SHEET PLAN FOR THE PRODUCTION OF AUXILIARY GEAR BOX-MARK IV GEARS AND SHAFTS | S Harini Reddy, R. Ravindran, D. Shakthivel, P. Shivakumar | Dr Mahalingam College of Engineering | Pollachi / TamilNadu

Page 22: Keynote 2011

22

1. GREEN MARKETING INITIATIVES BY INDIAN CORPORATE- PROSPECTS

AND CONFRONTS IN FACING GLOBAL COMPETITION

Miss.Sarunya.T & Miss. Saranya.D

Final Year MBA,

M.I.E.T. Engineering College, Trichy

Abstract

Currently, there is mounting awareness among the

consumers all over the world concerning protection

of environment. The growing awareness among the

consumers regarding their environmental protection

had inculcated the interest among people to bestow

a clean earth to their progeny. Various studies by

environmentalists indicate that people are

concerned about the environment and are changing

their behavior pattern so as to be less hostile

towards it. Now we see that most of the consumers,

both individual and industrial, are becoming more

concerned about environment-friendly products.

Most of them believe that environment-friendly

products are safe to use. Now is the era of

recyclable, non-toxic and environment-friendly

goods. As a result, green marketing has emerged,

which aims at marketing sustainable and socially-

responsible products and services in the society.

This has become the new mantra for marketers to

satisfy the needs of consumers and earn better

profits. In this paper the authors highlight the essence of green marketing and its relevance in improving the competence levels of companies facing global competition.

INTRODUCTION

Green marketing refers to the process of selling products and/or services based on their environmental benefits. Such a product or service may be environmentally friendly in itself, or produced and/or packaged in an environmentally friendly way.

Page 23: Keynote 2011

One of the major assumptions of green marketing is that potential consumers will significantly base their decision for a product or service on its 'greenness'. It is also assumed that consumers will be willing to pay more for green products than they would for a less-green comparable alternative product.

According to the American Marketing Association, green marketing is the marketing of products that are presumed to be environmentally safe. Thus green marketing incorporates a broad range of activities, including product modification, changes to the production process, packaging changes, as well as modifying advertising.

According to Mr. J. Polonsky, green marketing can be defined as, "All activities designed to generate and facilitate any exchange intended to satisfy human needs or wants, such that the satisfying of these needs and wants occurs with minimal detrimental impact on the natural environment."

WHY GREEN MARKETING?

Companies that develop new and improved products

and services with environment inputs in mind give

themselves access to new markets, increase their profit

sustainability, and enjoy a competitive advantage over

the companies which are not concerned for the

environment. There are basically five reasons for which

a marketer should go for the adoption of green

marketing. They are:

a) Opportunities or competitive advantage

b) Corporate social responsibilities (CSR)

c) Government pressure

d) Competitive pressure

e) Cost or profit issues

Most of the companies are venturing into green

marketing due to the following reasons:

a. Opportunity or Competitive Advantage:

In India, around 25% of consumers prefer environment-friendly products, and around 28% may be considered health conscious. Therefore, green marketers have diverse and fairly sizeable segments to

cater to. The Surf Excel detergent which saves water

(advertised with the message—"do bucket paani roz

bachana") and the energy-saving LG consumers

durables are examples of green marketing. We also

have green buildings which are efficient in their use of

energy, water and construction materials, and which

reduce the impact on human health and the

environment through better design, construction,

operation, maintenance and waste disposal. In India,

the green building movement, spearheaded by the

Confederation of Indian Industry (CII) – Godrej Green Business Centre, has gained tremendous impetus over

the last few years. From 20,000 sq ft in 2003, India's

green building footprint is now over 25 million sq ft.

b. Corporate Social Responsibilities (CSR)

Many companies have started realizing that

they must behave in an environment-friendly fashion.

They believe both in achieving environmental objectives

as well as profit-related objectives. HSBC became

the world's first bank to go carbon-neutral last year.

Other examples include Coca-Cola, which has invested

in various recycling activities. Walt Disney World in

Florida, US, has an extensive waste management

program and infrastructure in place.

c. Governmental Pressure

Page 24: Keynote 2011

24

Various regulations are framed by the

government to protect consumers and the society at

large. The Indian government too has developed a

framework of legislations to reduce the production of

harmful goods and by products. These reduce the

industry's production and consumers' consumption of

harmful goods, including those detrimental to the

environment; for example, the ban of plastic bags in

Mumbai, prohibition of smoking in public areas, etc.

d. Competitive Pressure

Many companies take up green marketing to

maintain their competitive edge. The green marketing

initiatives by niche companies such as Body Shop and

Green & Black have prompted many mainline

competitors to follow suit.

e. Cost Reduction/ Cost of Profit Issues

Reduction of harmful waste may lead to

substantial cost savings. Sometimes, many firms

develop symbiotic relationship whereby the waste

generated by one company is used by another as a cost-

effective raw material. For example, the fly ash

generated by thermal power plants, which would

otherwise contribute to a gigantic quantum of solid

waste, is used to manufacture fly ash bricks for

construction purposes.

BENEFITS OF GREEN MARKETING

Today's consumers are becoming more and

more conscious about the environment and are also

becoming socially responsible. Therefore, more

companies are responsive to consumers' aspirations

for environmentally less damaging or neutral products.

Many companies want to have an early-mover

advantage as they have to eventually move towards

becoming green. Some of the advantages of green

marketing are as given below:

• It ensures sustained long-term growth along

with profitability.

• It saves money in the long run, though initially the cost is higher.

• It helps companies market their products and

services keeping the environment aspects in

mind. It helps in accessing the new markets

and enjoying competitive advantage.

• Most of the employees also feel proud and

responsible to be working for an

environmentally responsible company.

PROBLEMS OF GREEN MARKETING

Many organizations want to turn green, as an increasing number of consumers want to associate themselves with environment-friendly products.

Alongside, one also witnesses confusion among the

consumers regarding the products. In particular, one

often finds distrust regarding the credibility of green

products. Therefore, to ensure consumer confidence,

marketers of green products need to be much more

transparent, and refrain from breaching any law or

standards relating to products or business practices.

Many marketers tried and failed with green

sales pitches over the last decade. Some of their wrong

approaches include:

a. Overexposure and lack of credibility:

So many companies made environmental claims that

the public became skeptical of their validity.

Page 25: Keynote 2011

25

Government investigations into some green claims (e.g.

the degradability of trash bags) and media reports of

the spotty environmental track records behind others

only increased consumers' doubts. This backlash

resulted in many consumers thinking environmental

claims were just marketing gimmicks.

b. Consumer behavior:

Research studies have shown that consumers as a

whole may not be willing to pay a premium for

environmental benefit, although certain market

segments will be. Most consumers appear unwilling to

give up the benefits of other alternatives to choose

green products. For example, some consumers dislike

the performance, appearance, or texture of recycled

paper and household products. And some consumers

are unwilling to give up the convenience of disposable

products such as diapers.

c. Poor Implementation:

In jumping on the green marketing bandwagon

many firms did a poor job implementing their marketing

program. Products were poorly designed in terms of

environmental worthiness, overpriced, and

inappropriately promoted. Some ads failed to make the

connection between what the company was doing for

the environment and how it affected individual

consumers.

PATHS TO GREENNESS

Green marketing involves focusing on

promoting the consumption of green products.

Therefore, it becomes the responsibility of the

companies to adopt creativity and insight, and be

committed to the development of environment-friendly

products. This will help the society in the long run.

Companies which embark on green marketing should

adopt the following principles in their path towards

"greenness."

• Adopt new technology/process or modify

existing technology/process so as to reduce

environmental impact.

• Establish a management and control system

that will lead to the adherence of stringent

environmental safety norms.

• Using more environment-friendly raw

materials at the production stage itself.

• Explore possibilities of recycling used products so that they can be reused to offer similar or other benefits with less wastage.

MARKETING STRATEGIES

The marketing strategies for green marketing include:

• Marketing Audit (including internal and

external situation analysis)

• Develop a marketing plan outlining strategies

with regard to 4 P's

• Implement marketing strategies

• Plan results evaluation

For green marketing to be effective, you have to

do three things: be genuine, educate your customers,

and give them the opportunity to participate.

1) Being genuine means

a) that you are actually doing what you claim to be doing in your green marketing campaign and

Page 26: Keynote 2011

26

b) that the rest of your business policies are

consistent with whatever you are doing that's

environmentally friendly. Both these conditions have to

be met for your business to establish the kind of

environmental credentials that will allow a green

marketing campaign to succeed.

2) Educating your customers isn't just a matter of letting

people know you're doing whatever you're doing to

protect the environment, but also a matter of letting

them know why it matters. Otherwise, for a significant

portion of your target market, it's a case of "So what?"

and your green marketing campaign goes nowhere.

3) Giving your customers an opportunity to participate

means personalizing the benefits of your

environmentally friendly actions, normally through

letting the customer take part in positive environmental

action.

INITIATIVES TAKEN UP BY BUSINESS ORGANISATIONS

TOWARDS GREEN MARKETING

Many companies have started realizing that

they must behave in an environment friendly fashion.

They believe both in achieving social & environmental

objectives as well as financial objectives. India, which is growing at 9% annually and is expected to double its energy consumption between 2005 and 2030, is under pressure to take action to provide a clean environment for all future generations to come. Many Indian companies have come forward for the cause of environmental concerns and issues requiring immediate attention, such as global warming, water and air pollution, and e-waste.

NTPC Limited has decided to allocate 0.5% of

distributable profit annually for its "Research and

Development Fund for Sustainable Energy," for

undertaking research activities in development of green

and pollution free technologies.

In India, around 25% of the consumers prefer

environmental-friendly products, and around 28% may

be considered healthy conscious. Therefore, there is a

lot of diverse and fairly sizeable untapped segment in

India which green marketers can serve through offering

eco-friendly products for profitability and survival in the

era of globalization.

For example, Mahindra Group has formally

announced the launch of project Mahindra Hariyali in

which 1 million trees will be planted nation-wide by

Mahindra employees and other stakeholders including

customers, vendors, dealers, etc. by October 2008. Of

these, 1,50,000 trees have already been planted by

Mahindra employees since September 2007.

Nokia's environmental work is based on life-cycle thinking: the company aims to minimize the environmental impact of its products throughout its operations, beginning with the extraction of raw materials and ending with recycling, treatment of waste, and recovery of used materials.

India is a world leader in green IT potential,

according to a recently released global enterprise

survey. Indian respondents scored over respondents

from 10 other countries in expecting to pay 5% or more

for green technology if its benefits for the environment

and return on investment (ROI) are proven in a survey

conducted by GreenFactor, which researches and

Page 27: Keynote 2011

27

highlights green marketing opportunities. Among the

companies that have succeeded thus far in their green

marketing strategies are Apple, HP, Microsoft, IBM,

Intel, Sony and Dell. HCL has a comprehensive policy

designed to drive its environment management

program ensuring sustainable development. HCL is duty

bound to manufacture environmentally responsible

products and comply with environment management

processes right from the time products are sourced,

manufactured, bought by customers, recovered at their

end-of-life and recycled.

Potato-starch trays made by the Dutch company PaperFoam protect the new iPhone launched by Apple, which amounts to a 90 percent reduction in carbon footprint compared to the plastic trays used in the past. Indian Oil also aims at developing techno-economically viable and environment-friendly products and services for the benefit of millions of its consumers, while at the same time ensuring the highest standards of safety and environment protection in its operations.

CONCLUSION

A clever marketer is one who not only

convinces the consumer, but also involves the

consumer in marketing his product. Green marketing

should not be considered as just one more approach to

marketing, but has to be pursued with much greater

vitality, as it has an environmental and social dimension

to it. With the threat of global warming looming large, it is extremely important that green marketing becomes the norm rather than an exception or just a fad. Marketers also have the responsibility to make consumers understand the need for and benefits of green products as compared to non-green ones. In green marketing, consumers are willing to pay more to maintain a cleaner and greener environment. Finally, consumers, industrial buyers and suppliers need to press for minimizing the negative effects on the environment.

REFERENCES

1. http://www.e-articles.info/e/a/title/Green-Marketing/

2. http://www.greenmarketing.net/stratergic.html

3. http://www.epa.qld.gov.au/sustainable_industries

4. http://www.articlesbase.com

5. http://www.wmin.ac.uk/marketingresearch/marketing/green mix

Page 28: Keynote 2011

28

2. GREEN MARKETING

S. Paul Raj, Sudharsan Engineering College, Pudukkottai

Abstract:

Green marketing is a phenomenon which has developed particular import in the modern market. This concept has enabled the re-marketing and packaging of existing products which already adhere to such guidelines.

Green consumerism is on the rise in America, but

its environmental effects are contested. Does green

marketing contribute to the greening of American

consciousness, or does it encourage corporate green

washing? This tenuous ethical position means that

eco-marketers must carefully frame their

environmental products in a way that appeals to

consumers with environmental ethics and buyers

who consider natural products as well as

conventional items. Thus, eco-marketing constructs

a complicated ethical identity for the green

consumer. Environmentally aware individuals are

already guided by their personal ethics. In trying to

attract new consumers, environmentally minded

businesses attach an aesthetic quality to

environmental goods. In an era where

environmentalism is increasingly hip, what are the

implications for an environmental ethics infused

with a sense of aesthetics?

This article analyzes the promotional materials of

three companies that advertise their environmental

consciousness: Burt's Bees Inc., Tom's of Maine,

Inc., and The Body Shop Inc. Responding to an

increasing online shopping market, these companies

make their promotional and marketing materials

available online, and these web-based materials

replicate their printed catalogs and indoor

advertisements. As part of selling products to

consumers based on a set of ideological values,

these companies employ two specific discursive

strategies to sell their products: they create

enhanced notions of beauty by emphasizing the

performance of their natural products, and thus

infuse green consumerism with a unique

environmental aesthetic. They also convey ideas of

health through community values, which in turn

enhances notions of personal health to include

ecological well-being. This article explicates the

ethical implications of a personal natural care

discourse for eco-marketing strategies, and the

significance of a green consumer aesthetic for

environmental consciousness in general.

AUTHOR: S.PAUL RAJ

COAUTHOR: D. CHANDRU Asst Prof

Mail: [email protected]

Page 29: Keynote 2011

29

3. GREEN MARKETING

Purpose & Scope:

It seems like every day another familiar brand claims to have

gone green. Companies marketing their green achievements were

once a small segment of forward thinking organizations, but have

since grown into a group of unlikely advocates that includes an

oil company and the world’s largest retailer. Just what is green

marketing, and to what extent are companies integrating its

principles into their communications, and green thinking into

their operations? The objective of this paper is to provide marketing executives

with a high level understanding of green marketing and some of

the recent developments in this space.

Torque has addressed 4 key questions from the perspective of a marketing executive:

1. What is green marketing?
2. Does it pay to go green?
3. Who else is going green?
4. What should companies consider when going green?

Green Marketing – What is it?

Green marketing has been around for longer than

one might suspect – in fact, a workshop on

‘ecological marketing’ was held by the American

Marketing Association in 1975. Since then, the

definition has been refined and segmented into 3

main buckets:

Retailing Definition: The marketing of products that are presumed to be environmentally safe. 1

Social Marketing Definition: The development and marketing of products designed to minimize negative effects on the physical environment or to improve its quality. 2

Environmental Definition: The efforts by organizations to produce, promote, package, and reclaim products in a manner that is sensitive or responsive to ecological concerns. 3

All three of these definitions speak to the fact that

green marketing involves informing consumers

about initiatives you’ve undertaken that will benefit

the environment, with the overall goal of improving

sales or reducing costs.

The term green marketing is often used loosely and

in the wrong context. It’s worth noting that ‘green

marketing’ is actually related to another topic that is

gaining visibility in corporate circles – corporate

social responsibility (CSR). There are four general

buckets encompassed in corporate social

responsibility, one of which specifically can be tied

to green marketing:

• Social responsibility (e.g. avoid child labor)
• Ethical responsibility (e.g. protect whistle blowers, have respectful workplace policies)
• Environmental responsibility (e.g. adopt business practices that are environmentally friendly)
• Governance (e.g. fulfill requirements for a neutral board of directors)

Page 30: Keynote 2011

30

This context is important to consider, because it

clarifies that every time a company does something

that feels nice, it cannot be parlayed into a green

marketing campaign – only activities which benefit

the environment can.

Can going green actually provide financial returns

and create shareholder value? Let’s consider this

question from a number of different perspectives:

1. Do CEOs consider the environment as

part of their strategy?

CEOs are ultimately responsible for determining

which strategies will create value – their current

thinking on the impact of going green is one

valuable data point in assessing the value of green

marketing. A 2007 survey of 2,000 executives

indicates that 60% view climate change as

important to consider with regard to their

company’s overall strategy. However, 44% of

CEOs indicate that climate change is not a

significant item on their agenda.4 Thus, while there

is significant awareness among CEOs of the extent to which consumers view them through a

green lens, it would appear that a minority are

currently acting on this knowledge.

2. Do Green Companies outperform the

market?

A 2007 study by Goldman Sachs reveals that

companies on an ethical list (compiled by Goldman

Sachs) outperformed the MSCI World Index by an

average of 25%, with 72% of companies

outperforming their sector peers. While this study

considers a company’s ‘green’ attitude along with

other elements of corporate social responsibility, the

findings still suggest that companies can in fact ‘do

good by being good’.

There are other signs that investors are starting to

pay more attention to green companies – the San

Francisco Chronicle reports that there are at least 25

green funds that invest in companies based, at least

in part, on their approach to dealing with

environmental issues.

3. Doesn’t going green really just mean

spending money?

Going green often means reducing waste – this can

actually have a positive impact on the bottom line in

addition to providing valuable stories to fuel

marketing messaging. Giving out fewer napkins,

using less ink and paper, encouraging customers to

use online billing – these are all green initiatives

that can quickly reduce company costs.

4. McKinsey survey + Marketing Magazine

Who Else Is Going Green?

(And have they really gone green?)

The normal business operations of every business

inevitably have an impact on the environment. With

consumers becoming increasingly conscious and

critical of any negative environmental impacts, you

Page 31: Keynote 2011

31

don’t have to look hard to find examples of

companies that have gone green in an effort to

persuade consumers that they’re doing all they can

for Mother Earth.

The challenge consumers take on (with varying

levels of diligence) is to identify which companies

are authentically green, and which are imposters

guilty of ‘green washing’. Consider the following

examples and place them in the context of the green

washing continuum:

Green washing continuum: green washing at one end, authentic green at the other.

1. WAL-MART

Wal-Mart launched its highly visible

environmental strategy in 2004. Since then, their

goals have encompassed everything from:

• Cutting energy usage at stores by 30% to reduce

greenhouse gas emissions

• Spending $500 million per year to improve truck

fleet fuel economy

• Working towards monitoring overseas suppliers to

ensure they meet environmental standards

2. BRITISH PETROLEUM

In 2000, British Petroleum rebranded to “BP” and

adopted the slogan “beyond petroleum.” According

to BP, this positioning captures their goals of

finding cleaner ways to produce and supply oil, as

well as researching and developing new types of

fuels.

The origins of green washing stem from the popular

political term, whitewashing – “a coordinated

attempt to hide unpleasant facts, especially in a

political context”.

Companies guilty of green washing are the ones that

have publicly made vague, misleading,

unrepresentative, unsubstantiated or irrelevant

claims about their green initiatives.

Authentic Green – to achieve this status, a company

would have launched legitimate green initiatives

that generally stand up to scrutiny from consumers

or 3rd party accreditation bodies.

3. FAIRMONT

Fairmont has attempted to position their brand as an

industry leader in green practices through several

approaches, including:

• Hiring a full time director of environmental affairs

• Launching a 200-ton per year recycling program

• Offering free parking for guests with hybrid cars

• Retrofitting lighting with energy efficient bulbs

4. GENERAL ELECTRIC

In 2005, GE launched their Ecomagination

campaign. Clarifying how the green positioning is

integrated with their overall business strategy, CEO

Jeffrey Immelt notes that the campaign is primarily

“a way to sell more products and services”. To this

end, GE has pledged to:

• Double investments in clean R&D by 2010

• Increase revenues from Ecomagination products to

$20 billion by 2010

Page 32: Keynote 2011

32

• Reduce their own greenhouse gas emissions by

1% over 7 years

(When they would have increased by 40%)

5. LOBLAWS

In 2007, CEO Galen Weston appeared in company

advertisements touting their promise to reduce

waste through the introduction of reusable shopping

bags.

CONSIDERATIONS BEFORE GOING GREEN

If you don’t completely buy into all of the green

initiatives listed above, you’re not alone. In fact, a

recent study suggests that 72% of consumers are

skeptical about most companies’ claims of being

environmentally conscious.5 Unfortunately, this

makes things more difficult for companies with

legitimate green agendas to capture the trust of

consumers.

To ensure your green marketing campaign is

well designed and captures the interest and

admiration of consumers, consider the following

rules:

Don’t ask marketing to paint you green

Skeptical consumers can quickly discern between

companies with legitimate green strategies, and

those who are imposters. It’s usually immediately

evident when the only green thing about a company

is their logo and marketing materials. Consumers

are looking for green thinking to be ingrained across

the value chain – from product development to

distribution.

GO GREEN WHERE IT ACTUALLY

MATTERS

Some green strategies are harder for consumers to

buy than others. To legitimately market itself as 'green', a company should select an area of its business where the benefits from going green will actually be sizeable. For example, British Petroleum

would have a difficult time convincing consumers

that they’re green on the grounds that their sales

team no longer prints copies of their PowerPoint

presentation before the Monday meeting. Marketing

these types of trivial (and unrepresentative)

commitments to ‘going green’ destroys credibility

with consumers.

DON’T FIRE EVERYONE BUT YOUR CHIEF

GREEN OFFICER

Merely positioning a company as ‘being green’ will

seldom deliver success – in isolation this

positioning does not provide a sustainable

competitive advantage. Instead, green strategies

must be aligned with a solid business strategy and

complemented by sound leadership. For example,

the marketing team behind GE’s creative

Page 33: Keynote 2011

33

Ecomagination campaign would not look quite as

intelligent if there wasn’t also a team of engineers

and product development analysts that identified the

clean tech and clean energy sectors as growth

opportunities for GE. Without their insight,

Ecomagination would just be another hollow

buzzword….

CONCLUSION

Green marketing is gaining significant attention

from both CEOs and consumers. Given that a

carefully crafted green marketing strategy can earn

credibility with customers and provide a platform

for revenue growth, it’s an area worthy of additional

consideration.

4. RISK ANALYSIS IN INVESTMENT PORTFOLIO

S. Jeroke Sharlin, Sudharsan Engineering College, Pudukkottai

ABSTRACT

Most investors' objective in investing is to earn a fair return on their money with minimum risk. But risk and return are inseparable. Risk in holding securities is the possibility of realizing lower returns than were originally expected by way of dividends or interest.

possibility of loss or injury and the degree of

probability of such a loss. Portfolio is a combination

of securities such as stocks, bonds and money market

instruments like commercial papers, certificate of

deposits etc., These assets are blended together so as

to obtain optimum return with minimum risk. The

main objective of portfolio management is to

maximize yield and minimize risk.

This paper explains in plain words i) risk analysis by calculating standard deviation, variance and beta, and ii) reducing risk by portfolio diversification.

The risks can be eliminated by diversifying the

portfolio of securities. i.e., holding securities of two

companies in a particular industry is more risky than

holding securities of two companies belonging to two

different industries. Diversification relates to the

principle of “putting all eggs in different baskets”.

Page 34: Keynote 2011

34

So, proper diversification, holding a number of security types across industry lines such as electronics, steel, chemicals, automobiles, utilities, mining, etc., minimizes risk.

*S. Jeroke Sharlin, Lecturer, Department of MBA, Sudharsan Engineering College, Pudukkottai. Mail id: [email protected]. Mobile: 9944372411

Risks are of two types: 1) market risk or systematic risk, and 2) company risk or unsystematic risk. Systematic risk arises due to general factors in the market such as money supply, inflation, economic recession, industrial policy, interest rate policy of the government, credit policy, tax policy, etc. These are factors which affect the whole market. These risks cannot be eliminated, and so they are called non-diversifiable risk.

Unsystematic risk represents fluctuations in returns

of a security due to facts associated with a particular

company and not the market as a whole. The factors

causing unsystematic risk include labour unrest, strikes, change in market demand, change in the competitive environment, change in fashion and consumer preference, etc. The unsystematic risks can be eliminated by diversifying the portfolio of securities.

Standard deviation and variance

Standard deviation is a good measure for

calculating the risk of individual stocks. The concepts

of standard deviation and variance, beta, correlation

co-efficient, co-variance etc., are also used to explain

the measure of risk. Standard deviation and variance

are the most useful methods for calculating

variability. In the following example, stocks of X Ltd

and Y Ltd. are compared.

1. Risk of stocks of X Ltd and Y Ltd (returns over five periods)

        X Ltd (Stock)   Y Ltd (Stock)
        10              13
        18              10
        2               12
        21              14
        9               11
Total   60              60

Arithmetic mean = Total return / n = 60 / 5 = 12 for each stock.

The expected returns for both companies are the same, but the spread differs: X Ltd is riskier than Y Ltd because its returns at different points of time are more uncertain. The average return for both X Ltd and Y Ltd is 12, but X Ltd seems to be the riskier of the two considering future outcomes.

Another example of probability distribution is given

below, to specify expected return as well as risk. The

expected return is the weighted average of returns.

Weighted average returns (expected average returns)

are arrived at by multiplying each return by the

associated probability and adding them together.

2. Stocks of X Ltd and Y Ltd

X Ltd
S.No   Return (Ri)   Probability (Pi)   Weighted average return (PiRi)
1      8             0.15               1.20
2      9             0.20               1.80
3      10            0.30               3.00
4      11            0.20               2.20
5      12            0.15               1.80
Total                1.00               10.00%

Y Ltd
S.No   Return (Ri)   Probability (Pi)   Weighted average return (PiRi)
1      9             0.30               2.70
2      10            0.40               4.00
3      11            0.30               3.30
Total                1.00               10.00%

Expected return E(R) = ∑ (PiRi)

Though both companies, X Ltd and Y Ltd, have identical expected average returns, their spread differs. The range or spread for X Ltd is from 8 to 12, while for Y Ltd it ranges between 9 and 11 only. The spread, or dispersion, can be measured by standard deviation.
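For readers who want to verify these figures, the following is a minimal Python sketch (standard library only, values taken from Tables 1 and 2 above); the variable names are illustrative.

from statistics import mean, pstdev

# Table 1: observed returns of X Ltd and Y Ltd over five periods
x_returns = [10, 18, 2, 21, 9]
y_returns = [13, 10, 12, 14, 11]
print(mean(x_returns), round(pstdev(x_returns), 2))   # 12, 6.78 -> X Ltd is more dispersed
print(mean(y_returns), round(pstdev(y_returns), 2))   # 12, 1.41 -> Y Ltd is less risky

# Table 2: expected return E(R) = sum(Pi * Ri) and risk from a probability distribution
x_dist = [(8, 0.15), (9, 0.20), (10, 0.30), (11, 0.20), (12, 0.15)]
y_dist = [(9, 0.30), (10, 0.40), (11, 0.30)]

def expected_return(dist):
    return sum(r * p for r, p in dist)

def std_dev(dist):
    er = expected_return(dist)
    return sum(p * (r - er) ** 2 for r, p in dist) ** 0.5

print(expected_return(x_dist), round(std_dev(x_dist), 2))   # 10.0, 1.26
print(expected_return(y_dist), round(std_dev(y_dist), 2))   # 10.0, 0.77

Both companies again have the same expected return, but X Ltd shows the larger dispersion, which is exactly what the standard deviation captures.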

Beta measures non-diversifiable risk and so is referred to as the systematic risk of a security relative to the market. Beta shows how the price of a security responds to market forces: the more responsive the price of a security is to changes in the market, the higher its beta. Beta is found by relating the returns of the security to the returns of the market, so it shows the relationship between the stock return and the market return.

Betas are useful to investors in assessing risk and understanding the impact of market movements on the return expected from a share of stock. For example, if the market is expected to provide a 10% rate of return over the next year, a stock having a beta of 1.70 would be expected to give a return of approximately 17% (1.70 x 10%) over the same period. Beta is a better measure of risk used in the analysis of portfolios.
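As a rough illustration of how beta can be estimated in practice, here is a short Python sketch; the return series are hypothetical and not taken from the paper.

from statistics import mean

stock_returns  = [0.05, -0.02, 0.08, 0.01, 0.04]   # assumed periodic returns of the stock
market_returns = [0.03, -0.01, 0.05, 0.00, 0.03]   # assumed returns of the market index

def beta(stock, market):
    ms, mm = mean(stock), mean(market)
    cov = sum((s - ms) * (m - mm) for s, m in zip(stock, market)) / len(stock)
    var = sum((m - mm) ** 2 for m in market) / len(market)
    return cov / var   # beta = covariance(stock, market) / variance(market)

b = beta(stock_returns, market_returns)
print(round(b, 2))                    # sensitivity of the stock to market movements
print(round(b * 0.10 * 100, 1), "%")  # expected stock return if the market returns 10%, as in the text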

Portfolio Construction

Portfolio construction refers to the allocation of funds among a variety of financial assets open for investment. Portfolio construction is undertaken in the light of the basic objective of the portfolio


management. The basic objective of portfolio

management is to maximize yield and minimize risk.

The other objectives of the portfolio management

include i) regular income or stable return ii)

Appreciation of capital iii) Marketability and

Liquidity iv) Safety of investment and v) minimizing

tax liability.

Approaches in portfolio construction

There are two approaches in the construction

of the portfolio of securities

(1) Traditional approach

The traditional approach is basically a financial plan applicable to individuals. It takes care of an individual's needs such as housing, life insurance, and pension plans.

The steps involved in the traditional approach are: 1. Analysis of constraints, 2. Determination of objectives, 3. Selection of portfolio (bonds and/or common stock), 4. Assessment of risk and return, and 5. Diversification. These steps are usually shown as a flowchart running from the analysis of constraints, through the determination of objectives and the selection of the portfolio, to the assessment of risk and return and finally diversification.

(2) Markowitz efficient frontier approach

The Markowitz model, which is described as the modern approach, lays emphasis on the process of selecting the portfolio. Under the Markowitz model, the selection of a common stock portfolio receives more attention than that of a bond portfolio. The selection of stock is not based on the need for income or appreciation; rather, risk and return analysis plays the major role in the selection of the stock portfolio. Investors are keen on getting returns, which may be in the form of dividend or interest.

Harry Markowitz's model explains how an investor can build an efficient portfolio which results in maximization of returns with minimum risk.

1. The traditional theory evaluates the

performance of portfolio only in terms of the


rate of return, particularly in comparison with

other assets of the same risk class.

2. The Markowitz theory and the modern theory of portfolio evaluate a portfolio on the basis of risk-adjusted return. Modern portfolio theory holds that portfolio evaluation should be based on both risk and return, and that the objective should be to optimize the return for a given level of risk.

Reducing risk by Diversification

Holding securities of a single company is more risky than holding those of two or more companies. It is further observed that holding securities of two companies in a particular industry is more risky than holding securities of two companies belonging to two different industries. To put it in other words, two companies, say one in the electronics industry and the other in the automobile industry, are less risky than two companies in either the electronics or the automobile industry alone. So, proper diversification, holding a number of security types across industry lines such as electronics, steel, chemicals, automobiles, utilities, mining, etc., minimizes risk.

An investor can reduce portfolio risk simply by holding combinations of instruments which are not perfectly positively correlated (correlation coefficient ρ < 1). In other words, investors can reduce their exposure to individual asset risk by holding a diversified portfolio of assets. Diversification may allow for the same portfolio expected return with reduced risk.
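The effect can be seen with a simple two-asset calculation. The sketch below uses the standard two-asset portfolio variance formula with hypothetical weights and volatilities; only the correlation is varied.

def portfolio_std(w1, s1, s2, rho):
    # two-asset portfolio standard deviation for weights w1 and (1 - w1)
    w2 = 1 - w1
    variance = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
    return variance ** 0.5

s1, s2 = 0.20, 0.10                    # assumed standard deviations of the two assets
for rho in (1.0, 0.5, 0.0, -0.5):
    print(rho, round(portfolio_std(0.5, s1, s2, rho), 4))
# With rho = 1 the portfolio risk is just the weighted average (0.15);
# any correlation below 1 gives lower risk for the same expected return.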

References

1. Markowitz, H. (1952). Portfolio Selection. Journal of Finance, Vol. 7, No. 1, pp. 77-91.
2. Halliwell, Leigh H. Mean-Variance Analysis and the Diversification of Risk.
3. Pandiyan, Punithavathy. Security Analysis and Portfolio Management.
4. Markowitz, H.M. (1959). Portfolio Selection: Efficient Diversification of Investments. New York: John Wiley & Sons.


5. Risk Analysis in Investment Portfolio

Suman K. Asrani, Holy Cross, Trichy-620017

The purpose of RISK ANALYSIS IN

INVESTMENT is to assess the economic prospects

of a proposed investment project. It is a process of

calculating the expected return based on cash flow

forecasts of many, often inter-related, project

variables

Introduction :-

• Your investment portfolio is not only

about pursuit of returns; it is also about risk

management.

• An investment portfolio generally

consists of various investment instruments

such as Mutual funds, Bonds, Securities

(Primary and secondary market), Real

Estate, Precious metals, commodities

(gold/silver) etc….

Risk can be classified under two main heads: Systematic Risk and Unsystematic Risk.

• Systematic Risk:
– Market risk: risk inherent to the market as a whole.
– Risk that cannot be eliminated.

• Unsystematic Risk (Company-Specific Risk):
– Risk inherent to the specific security. E.g. Microsoft stock has risks beyond investing in the stock market, such as anti-trust actions, competitors, management succession, etc.
– Diversifiable: can be eliminated through diversification.

Various Risk Measures :-

• Standard Deviation

• Alpha

• Beta

• Treynor’s ratio

• Sharpe’s ratio

Standard Deviation :-

• Standard Deviation is the square root of

the variance

• Standard Deviation is a measure of

dispersion around the mean. The higher the

standard deviation, the greater the dispersion

of returns around the mean and the greater

the risk.

When an asset is held in isolation, the appropriate

measure of risk is standard deviation

Beta :-

• When an asset is held as part of a well-

diversified portfolio, the appropriate

measure of risk is its co-movement with the

market portfolio, as measured by Beta.

• Beta (ß)


– A measure of market risk: the extent to which the returns on a given stock move with the market

– If beta = 1.0, average stock

– If beta > 1.0, stock riskier than

average

– If beta < 1.0, stock less risky

than average

Most stocks have betas in the range of 0.5 to 1.5

Sharpe and Treynor Ratios :-

• Sharpe Ratio – calculates the average

return over and above the risk-free rate of

return per unit of portfolio risk.

– Sharpe Ratio = (Ri – Rf) / si

Ri = Average return of the portfolio during period i

Rf = Average return of the risk-free rate during

period i

si = standard deviation (risk) of the portfolio during

period i

Treynor Ratio – calculates the average return over and above the risk-free rate of return per unit of the portfolio's systematic (market) risk.

– Treynor Ratio = (Ri – Rf) / bi

Ri = Average return of the portfolio during period i

Rf = Average return of the risk-free rate during

period i

bi = the systematic (market) risk (beta) of the portfolio during period i

Sharpe and Treynor Ratios :-

• Risk and performance analysts use

Sharpe and Treynor ratios to analyze the

effectiveness of the portfolio manager

– If the Sharpe and Treynor ratios

are below 1.0, then the portfolio

manager is taking too much risk for

the return the portfolio is generating.
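A minimal sketch of both ratios, using hypothetical inputs for the portfolio return, risk-free rate, standard deviation and beta:

def sharpe_ratio(r_p, r_f, sigma_p):
    return (r_p - r_f) / sigma_p    # excess return per unit of total risk

def treynor_ratio(r_p, r_f, beta_p):
    return (r_p - r_f) / beta_p     # excess return per unit of systematic (market) risk

r_p, r_f = 0.12, 0.06               # assumed average portfolio and risk-free returns for the period
sigma_p, beta_p = 0.18, 1.2         # assumed portfolio standard deviation and beta

print(round(sharpe_ratio(r_p, r_f, sigma_p), 2))   # 0.33
print(round(treynor_ratio(r_p, r_f, beta_p), 2))   # 0.05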

Tips for wise investments in Real estate :

• Speculating versus Investing

Buying a chunk of land and hoping it goes

up in value is SPECULATING. Buying a

property to collect high income in the form

of rent is INVESTING. Investing is a much

safer (and smarter) way to go.

• “Property Will Always Go up in Value”

Don’t believe this dangerous myth! Japanese real

estate at the close of the century and American real

estate today indicate that buying because “it has to

go up” is one of the worst reasons to do so. Hoping

for – or worse, expecting – a price rise is

speculation. Make sure the investment makes great

sense from a positive-cash-flow perspective first.

Then if the property falls in value, you’re still “right

side up” on your cash flows. Consider any

appreciation to be simply icing on the cake when it

comes to speculative real estate investing.

• Getting Started in Real Estate Investing

with Residential Property


It’s easier to understand, purchase and manage than

other types of property. If you’re a homeowner,

you’ve already got experience here. And you’re the

boss. Start close to home, so you can stay on top of

things.

• Truthful Real Estate Investment Advice:

Don’t Believe Everything You Hear or Read

Sellers and real estate agents ultimately want you to

buy that property. So what they’re telling you is

most likely the rosy scenario, not the actual

scenario. If the property has been a rental, ask the

seller for his Schedule E form from his taxes. It’ll

show his ACTUAL revenue and expenses, or at

least the ones he reported to the government. What

you can expect to earn is somewhere between what

he reported to the IRS and what he’s promising you.

• Where To Buy

There is – as you probably know – a widely

held belief that the three most important

factors involved in real estate success are

“Location, Location, Location.”

Clear view of risks involved in mutual

funds :

Mutual funds are the ultimate solution for a small

investor. A small investor used to face various

problems before the introduction of mutual funds.

These were:

1. LIMITED FUNDS - Small investors faced problems investing in big companies. They had limited funds with them, which prevented them from investing in blue-chip companies.

2. LACK OF EXPERT ADVICE - Expert advice

used to be out of their reach due to affordability

problem.

3. HIGH BROKERAGE RATES - Small

investors had to purchase shares through brokers

who used to charge high brokerage and there were

lot more fraudulent activities to misguide small

investors.

4. TIME LAG - This was another problem in the receipt of shares; there used to be a delay in getting the shares.

5. LIMITED ACCESS TO INFORMATION - They used to face problems in receiving information regarding shares and companies.

6. PROBLEM REGARDING ALLOTMENT OF

SHARES - They used to face this problem in case

of big companies because big companies normally

used to allot on pro rata basis.

7. COMMISSION AND BROKERAGES - High

rate of commission and brokerages had to be borne

by them.

Simple tips for investment in securities:-


• Get Long-Term Tax Breaks

on Short-Term Investments

• Be Sure to Lock in the 15%

Rate on Qualified Dividends

• Think Twice Before Borrowing

to Buy Dividend-Paying Shares

• Watch Out for Tax Rate on

Foreign Dividends

6. Risk Analysis in Investment Portfolios

Reshma Sibichan, Garden City College, Bangalore, Karnataka

I. INTRODUCTION

Risk in a common man’s language is uncertainty.

Hence, Risk Analysis means the analysis of

uncertainty. Risk analysis may also be defined as a

technique which identifies and evaluates reasons

that may put at risk the success of a project or

achieving a set target. Risk Analysis helps to reduce

the level of uncertainty and to identify measures to

profitably deal with these uncertainties. Investment

portfolio is a basket of various investment options

that an investor chooses to maximize his/her returns

on the investment. While making an investment

portfolio, the first question that comes to the mind

of any investor is the appetite for risk he/she has

and the second thing is the purpose and expectation

of the investment. When we discuss what can be

added into our basket of portfolio, it is also

important to understand what should not be

included in the basket and why?

An investment portfolio is vital to the financial growth of an investor.

nurtured plant the portfolio must be controlled and

pruned and should be well maintained to see it grow

and bear fruit which can be enjoyed at leisure or at

the time of need. To ensure that our portfolio will

yield results that we can smile at, the portfolio

should be analyzed systematically over the period

of time.

A portfolio analysis will look at the life and

the health condition of an investment to make sure

that the money grows even when we are asleep. By

the analysis we get to understand the different

investments that need to be added into the portfolio

and the investments that need to be removed from

the portfolio so that the bearing of risk on the

investor is evened out. The analysis will determine

that at no point in time the investor is exposed to


risks that he cannot bear or that an investment is not

giving the desired return because it is not

completely exposed to the risk which can be borne

by the investor.

An investor will be in charge of his investments by analyzing his portfolio periodically. It gives him the direction he needs to follow to get the maximum return on his investment and also cautions him against taking decisions which may prove to be very expensive. Proper and systematic analysis of one's portfolio will give the investor the comfort on which he can lean and relax while his investments work for him.

II. TYPES OF RISKS IN AN INVESTMENT PORTFOLIO

Understanding the different risks associated in an

investment portfolio is very crucial to reducing if

not eliminating risk in a portfolio. Each

investment is associated with a different type of

risk and in some cases few investments may be

associated with all types of risk. So understanding the different types of risk will help in striking a balanced portfolio and will also make an investment perform better.

The risks are mainly classified as 1.

Unsystematic risk or Specific Risk and 2.

Systematic Risk.

Unsystematic risk, or specific risk, comes from known sources and can be controlled. Almost all portfolios are exposed to elements of specific risk. However, investors compensate for it with other options because of the character of this risk. It is generally associated with investment in a certain security, and this risk can be minimized or eliminated by adding more securities from the same class of investments.

To make it more specific, let me take an example of investments made in the banking sector. If I choose to own only shares of SBI, I expose my investment to specific risk. The performance of my investment is based on the performance of that particular share and my risk is high. If I need to cope with specific risk, all

I need to do is spread my investments to 15

different shares of the banking industry and my

risk is diversified. When I make my investment in

15 different shares, I would have an asset class

and as such Asset Classes are not associated with

Unsystematic Risks because they are diversified.

Unsystematic risks can be caused due to one

or more of these factors.

1. Business Risks are risks which relate to the performance of the business: turnover, returns, profits, market share, etc., which are directly linked to market conditions, suppliers, the product and the position of competitors. This risk has a direct bearing on the performance of the company. It can be set right by making an analysis of the company's policies, after which corrective measures can be implemented.


2. Financial Risk relates to the financial policies

followed by a company. It may relate to debt

equity ratio, turnover ratios with regards to

stock, sales, creditors and debtors, liquidity

ratio, bad debts, etc. Shortfalls in these factors will lead to lower profits and lower earnings for the investors. This can be corrected by proper financial planning and analysis of the situation.

3. Insolvency Risk relates to a situation wherein the borrower of securities may default in making payments of interest and principal. The credit rating of the borrower may have been tarnished, or due to certain governmental policies the company may become insolvent, and the investor does not get anything in return for his investment. This risk again can be reduced by proper management and by making necessary changes in company policies so that the company adapts itself to current scenarios.

The second type of risk which is mainly faced by

an investor is the Systematic Risk. This risk is

associated with a certain asset class. When an asset

class becomes free of all specific risks, it is exposed

to systematic risk which cannot be diversified. Each

and every asset class has to face this risk and

overcome it to achieve the desired returns. Every

asset class has a cyclical relationship with other asset classes. This means that a certain sector of investment performs in a different manner at different points of time in a fiscal phase. To make it clearer, let me highlight it with an example.

During the economic recession almost all the

sectors were hit very badly. Some of the industries

which did very badly were the construction field,

real estate. The prices of the yellow metal varied

during 2008. The stock markets struggled during the

first and second quarter and completely went down

in the third quarter. The investment that did well

was government securities where the returns were

linked to the interest rates.

Even though the performance of almost all the

asset classes dipped in 2008 during the economic

recession, the degree to which each asset class

plunged varied. Hence, an investor who had

diversified his portfolio into various asset classes

performed better than those who had their portfolio

only in equity. The most classic example of

systematic risk is the movement of the stock market

and their impact on the earnings. If our portfolio

basket has stocks from various sectors, it does not

spread the systematic risk. However the risk can be

lessened by prevarication the investments in un

related assets which is easy said than done. The risk

can also be managed to a certain extent by optimum

stop management options to ensure that the capital

invested is secure. This is should be used in the

portfolio analysis strategy. The various examples of

systematic risk are:

1. Market Risk: This risk is the outcome of

changes in the demand and supply in the market

based on the information being available and the

expectations being set up in the market. The market

will be influenced by the perceptions of the


investors based on the information and over which

we do not have any control.

2. Interest Rate Risk: The return on investment is always linked to the interest rate indicated on it and to periodic differences in the market interest rate. Since the cost of borrowed funds depends on the interest rate, the perception of investors changes according to the changes noticed in interest rates. Since investors do not have control over credit policies and changes in interest rates, the return on investments is risky and in some cases may warrant early refund of the borrowed fund.

3. Purchasing Power Risk: The increase in

inflation would give rise to increased prices which

in turn increases the production costs and decreases

the profit margins. The expected ROI will change

due to the same and also there may be a demand

which may be created by having shortage of supply

which will further increase the prices. This demand

may be due to the expectation of changes in interest

rates in the future or due to the inflation. This will

have its bearing on the purchasing power and this is

something that cannot be controlled.

These are some of the systematic risks that

we come across when we analyze our investment

portfolios.

Apart from these risks we also come across

risks which are controllable and some which are

uncontrollable. They are, Political Risks which may

arise due to change in Government or their policies,

Management Risks which may arise due to

mismanagement and wrong management decisions

which may lead to financial losses, and

Marketability Risks involving liquidity problem of

assets or conversion of assets from one form to

another.

III Factors Determining Risk Appetite

There are several factors that determine the

Risk appetite or the Risk bearing capacity of an

investor. The risk appetite varies from investor to

investor. Some of the factors that is discussed here

are Age, Income, Sex, Education, Investment

Period, Purpose of Investment and Amount

available for investment.

Age is a very vital determinant of the risk an

investor would want to take for his investments.

The relation between Age and Risk is inversely

proportionate. As the age increases the risk an

investor will bear comes down. The younger the investor, the higher the risk he will take, as he has his whole life to rebuild his capital in case of any downturn he faces. As and when an investor

reaches his age of retirement he would want to

secure his investment and it is acceptable for him to

earn fewer returns rather than to bear more risk. In

the current market scenarios a portfolio manager

always keeps in mind the age of an investor when

formulating his client’s portfolio.


Income is also an important component in

determining the risk aversion a customer has.

The higher the disposable income a customer has, the higher the risk he agrees to take. If a customer sets

apart major part of his income towards meeting

basic necessities, then he falls under a low income

bracket and any investment he makes would be too

close to his heart and it will directly be linked to

very secure investments and this type of customer

would be very conservative.

Sex and education also have a direct bearing on the investment decision that will be made. If the customer is a female and her income is the primary source of income for the family, then the risk-bearing appetite will be less; vice versa, if her income is the secondary income of the family, then she will be ready to bear a certain amount of risk. Likewise, an educated person will be able to understand the dynamics of the market and is ready to take risk and experiment.

Investment Period is another factor that

should be considered when analyzing risk. Risky

portfolios require the investor to be invested for a

longer period of time to get the maximum of the

portfolio. Any temporary setbacks should not

trigger off any hasty decisions which results in the

portfolio taking a nose dive into loss. An

appropriate example of this is the investors who waited when the market went down and there were huge losses on paper. The customers who had faith in the market and waited for it to regain got back what they had lost; however, the customers who did not follow the game of wait and watch incurred huge setbacks because they allowed the loss on paper to be realized.

The last aspect that needs to be looked into is the purpose of the investment. The customer's financial goals are also a major determinant of the risk an investor will take. Customers in their middle age would not want to risk the funds set aside for the higher studies of their children or funds set aside for the marriage of their daughters. If the customer is looking at capital gains over a period of time, then he will surely be able to take on more risk to get maximum returns.

IV Techniques to Analyze Risk

There are several methods by which the risk of a

portfolio can be analyzed. They are broadly

classified as traditional and modern. We shall look

at the traditional techniques first. They are:

1. Standard Deviation of Returns of an Investment

Portfolio: What we would focus here is the variance

in return of each investment and the expected return

on the investment portfolio.

2. Tracking the Error: Here the standard deviation of the difference in returns between the investment portfolio and a particular benchmark is tracked to identify the tracking error of the portfolio.

3. Relative Risk Measure: Here we analyze the covariance of a security with the market index divided by the variance of the market index (this is the security's beta), and

4. Prospects of loss: Here we analyze the chances

that the earnings from an investment will be below

the point of reference.
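Two of the traditional measures above (tracking error and the prospect of loss) can be computed directly from a series of periodic returns; the sketch below uses hypothetical figures.

from statistics import pstdev

portfolio_returns = [0.04, -0.01, 0.03, 0.02, -0.02, 0.05]   # assumed periodic returns
benchmark_returns = [0.03,  0.00, 0.02, 0.03, -0.01, 0.04]   # assumed benchmark returns

# Tracking error: standard deviation of the return differences against the benchmark
active = [p - b for p, b in zip(portfolio_returns, benchmark_returns)]
tracking_error = pstdev(active)

# Prospect of loss: share of periods in which the return falls below a reference point (here 0%)
reference = 0.0
prospect_of_loss = sum(1 for p in portfolio_returns if p < reference) / len(portfolio_returns)

print(round(tracking_error, 4), round(prospect_of_loss, 2))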

The modern techniques of risk analysis are:

1. Value at Risk: Here we analyze the probability of

the maximum extent to which loss can be incurred

during a specific period.

2. Restricted Value at Risk: Here we analyze the

losses we expect which will exceed the Value at

Risk.

3. Anticipated Deficit: Here we analyze the variance

between the returns we have actually received with

the specified returns of the portfolio when we incur

loss.

These are the techniques by which risk can

be analyzed both qualitatively and quantitatively.
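To make the modern measures concrete, here is a minimal historical-simulation sketch of Value at Risk and the anticipated deficit (expected shortfall); the return history and the 90% confidence level are assumptions for illustration only.

hist_returns = [0.04, -0.06, 0.02, -0.03, 0.05, -0.08, 0.01, 0.03, -0.02, 0.06]
confidence = 0.90                              # assumed confidence level

losses = sorted(-r for r in hist_returns)      # losses expressed as positive numbers, ascending
index = int(confidence * len(losses)) - 1
value_at_risk = losses[index]                  # loss not exceeded with 90% confidence
tail = [l for l in losses if l >= value_at_risk]
expected_shortfall = sum(tail) / len(tail)     # average loss in the tail beyond the VaR level

print(round(value_at_risk, 3), round(expected_shortfall, 3))   # 0.06 and 0.07 for this sample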

Apart from these techniques, risk can also be

analyzed based on the prices of an asset, the

financial reports, and also ratios associated with

risk. Price of an asset speaks a lot about the risk the

asset carries. The costlier an investment the less

risky it will be. So also the financial reports can be

used to analyze risk. The better the earnings of a company over a period of time, the less risky the investment will be. Also, by analyzing the financial health

of a company using ratios we can determine the risk

we will be bearing by investing in the company.

V Methods to Minimize Risk

Whatever may be the quantum of

investment, the risk element would be present in all

investments. There is a certain amount of risk which

is present in all portfolios. The only thing we can do

is to minimize it to the least extent possible. The

basic factor that needs to be done by an investor is

to first find out the risk level he is willing to bear.

Based on that he can design his portfolio with risky

but, performing asset. Investors can also minimize

their risk by diversifying their portfolios in different

asset classes and also by diversifying into various

options in a specified asset class. The asset classes

which require an investor to be invested for a longer

period of time can be chosen to meet the long term

financial goals of the investor. To meet the

immediate or short term financial goals an investor

can invest in the asset class which is highly liquid.

Sometimes investors can be influenced by

the testimonials of few individuals who have made

big bucks over a short span of time. However, this

may not be a safe bet as the risk involved is very

high and in some cases may wipe out the entire

portfolio.


Investors who do proper homework, analyze the financial reports of various options and understand their trends can reduce their risk to a great extent.

The risk can also be minimized by using a certain asset class to offset the risk of another asset class. A risky asset class can be combined with a less risky asset class. For example, while an investment in land can be used to earn capital gains, another investment can be made in bonds that are safe and secure.

Another way to reduce risk is by making

investments in liquid markets where there is always

market to buy and sell the assets. Assets that cannot

be marketed readily are subject to huge losses

which may be sudden in nature.

Investing in asset classes that are non-

correlative is also an ideal way to reduce risk. An

investor can own assets that do not have any

correlation with each other to earn high return with

reduced risk. So when a certain asset class fails to

perform well, there is another asset class which will make up for it in the portfolio. By understanding the risk of each asset class, an ideal portfolio can be created that can perform well under any market conditions. When diversifying a portfolio, all specific risks that are associated with it need to be addressed and eliminated, and efforts should be put

in to minimize the systematic risk which is

inevitable in all portfolios however good they may

be.

VI Conclusions

By the details provided in this paper it has

become evident that even the most conservative

portfolios have a certain amount of risk factor.

Portfolio management is not about eliminating risk

but rather in formulating strategies to earn

maximum return while exposing oneself to the risk

he is capable of bearing. Risk analysis is not a one-time study that we can forget about once the portfolio is designed; it is an ongoing exercise that needs to be done religiously and systematically over a period of time for as long as the investment is held. The investor who has the patience to watch and wait and allow the portfolio to perform will earn more from the investment than those investors who change portfolios at the slightest change in market conditions and are always unhappy about their portfolio.

Before any investment is designed, it is very important to consider all the possible risks the portfolio will be open to. It is not worth taking on every risk; it is better to be a moderate risk taker and have a financial future which is more relaxing and fruitful.


7. RISK ANALYSIS IN INVESTMENT PORTFOLIO

Hemalatha C., P.A. College of Engineering and Technology, Pollachi.

Abstract

The main reason why people have professionals

manage their portfolio is to leverage their expertise

in order to generate maximum wealth from their

investments. A well-managed portfolio will not

only take care of diversification, but also allocate

resources per the investor's financial objectives and

appetite for risks. Besides, portfolio management

makes sure there is constant monitoring of your

investments. Many companies have forayed into

this area of financial service. One such company is

GEPL. It offers investment portfolio management

services to help its clients realize their financial

goals.

The main aim of risk analysis in portfolio is to give

a caution direction to the risk and return of an

investor on portfolio. Individual securities have risk

return characteristics of their own. Therefore,

portfolio analysis indicates the future risk and return

in holding of different individual instruments. The

portfolio analysis has been highly successful in

tracing the efficient portfolio. Portfolio analysis

considers the determination of future risk and return

in holding various blends of individual securities.

An investor can sometime reduce portfolio risk by

adding another security with greater individual risk

than any other security in the portfolio. Portfolio

analysis mainly depends on the risk and return of the portfolio. The expected return of a portfolio

should depend on the expected return of each of the

security contained in the portfolio. The amount

invested in each security is most important. The

portfolio’s expected holding period value relative is

simply a weighted average of the expected value

relative of its component securities. Using current

market value as weights, the expected return of a

portfolio is simply a weighted average of the

expected return of the securities comprising that

portfolio. The weights are equal to the proportion of

total funds invested in each security.
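A short sketch of that weighted-average calculation, with hypothetical holdings and expected returns:

holdings = {                    # security: (amount invested, expected return) -- illustrative figures
    "Stock A": (40000, 0.12),
    "Stock B": (35000, 0.09),
    "Bond C":  (25000, 0.07),
}

total = sum(amount for amount, _ in holdings.values())
portfolio_expected_return = sum((amount / total) * r for amount, r in holdings.values())
print(round(portfolio_expected_return, 4))   # 0.097, i.e. 9.7%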

Traditional security analysis recognizes the key

importance of risk and return to the investor.

However, direct recognition of risk and return in

portfolio analysis seems very much a “seat-of-the-

pants” process in the traditional approaches, which

rely heavily upon intuition and insight. The result of

these rather subjective approaches to portfolio

analysis has, no doubt, been highly successful in


many instances. The problem is that the methods

employed do not readily lend themselves to analysis

by others.

Most investors thus tend to invest in a group of

securities rather than a single security. Such a group

of securities held together as an investment is what

is known as a portfolio. The process of creating

such a portfolio is called diversification. It is an

attempt to spread and minimize the risk in

investment. This is sought to be achieved by

holding different types of securities across different

industry groups.

INVESTMENT PORTFOLIO IS AN

IMPORTANT ASSET:

An investment portfolio is an important asset for

your financial well-being. Controlling your

portfolio, keeping it healthy and up-to-date will help

your nest egg to grow so you can afford that great

retirement, vacation home, or college tuition for

your children. A good way to make sure your

portfolio is heading the best direction is with a

professional portfolio analysis.

So what is a portfolio anyway? Your financial

portfolio is all of your investments together. A good

portfolio will usually consist of a mix of stocks,

bonds, precious metals like gold, real estate, and/or

bank accounts. It is the total value of all that you are

worth, financially.

An investment portfolio analysis will take a look at

the health of your investments to make sure your

money is working for you. Most financial portfolio

analysis reports consist of two parts; a risk aversion

assessment and a return analysis. For very simple

portfolios without a whole lot of money or

investments, there are several computer programs

that can help to analyze your portfolio's return rates;

however, these programs are not always as accurate

or complete as a human analysis. Also, most programs do not have an accurate risk aversion assessment.

The risk aversion takes a look at how "risky" your

portfolio is. The younger a person is when they start

their portfolio, the more risk should be applied to

that portfolio. The closer to retirement the investor

gets, however, more and more of their investments

should be applied to "safe" investments, such as

bonds. The safe investments do not have the

potential to grow as much, but will grow more

steadily than the high-risk investments could.

The risk aversion will also take into account the

amount of risk a client is willing to have to their

portfolio. If your risk aversion is high, for example,

you will choose to move most of your money into

safer bonds. Risk aversion is different for everyone.

The second part of the assessment is to analyze your

returns. This report will give the investor a good

idea of how their portfolio is doing, where it is

going, and how long it will take to mature. Some

specialists will also study your portfolio to find out

the recovery period from a bad market of the


current portfolio, which also gives insight into the

quality of the portfolio.

It is possible for a lot to change in the market on a

daily basis, so it is important to keep track of your

investments and your portfolio. Do not let your

portfolio fall into disrepair; make sure to run an

analysis on your investment periodically to make

sure all of your stocks, bonds, and other investments

are still sound.

Once the analysis is complete, an investor will have

a good idea of which of your investments are

sound, which are good, and which need to be

changed. This will help the investor to make smart

choices now, and avoid costly mistakes in the

future. Although your portfolio may appear sound,

it may need a few tweaks to make you the most money it can.

Portfolio Risk:

Portfolio risk is the possibility that an investment

portfolio may not achieve its objectives. There are a

number of factors that contribute to portfolio risk,

and while you are able to minimize them, you will

never be able to fully eliminate them.

1. Systemic Risk

Systemic risk is one factor which contributes to

portfolio risk. It is also a risk factor you can never

eliminate. Systemic risk includes the risk associated

with interest rates, recession, war and political

instability. All of these factors can have significant

repercussions for companies and their share prices.

By their nature, these risk factors are somewhat

unpredictable. While you may be able to determine

the long-term trend of interest rates, you cannot

predict the amount they will rise or fall. And in

most cases, the stock market will "price in" the

anticipated change, long before the individual

investor will even consider it.

2. Unsystematic Risk

This is the risk you can control, or at least

minimize. Also known as "specific risk," it

relates to the risk associated with owning the

shares of a specific company in your portfolio.

As you increase the number of different

companies within your portfolio, you essentially

spread the risk across your portfolio, thus

reducing the overall impact of an

underperforming stock.


The main aim of portfolio analysis is to give a

caution direction to the risk and return of an

investor on portfolio. Individual securities have risk

return characteristics of their own. Therefore,

portfolio analysis indicates the future risk and return

in holding of different individual instruments. The

portfolio analysis has been highly successful in

tracing the efficient portfolio. Portfolio analysis

considers the determination of future risk and return

in holding various blends of individual securities.

An investor can sometime reduce portfolio risk by

adding another security with greater individual risk

than any other security in the portfolio. Portfolio

analysis mainly depends on the risk and return of the portfolio. The portfolio’s expected holding

period value relative is simply a weighted average

of the expected value relative of its component

securities. Using current market value as weights,

the expected return of a portfolio is simply a

weighted average of the expected return of the

securities comprising that portfolio. The weights are

equal to the proportion of total funds invested in

each security.


Traditional security analysis recognizes the key

importance of risk and return to the investor.

However, direct recognition of risk and return in

portfolio analysis seems very much a “seat-of-the-

pants” process in the traditional approaches, which

rely heavily upon intuition and insight. The result of

these rather subjective approaches to portfolio

analysis has, no doubt, been highly successful in

many instances. The problem is that the methods

employed do not readily lend themselves to analysis

by others.

Most traditional methods recognize return as dividend receipts and price appreciation over a forward period. But the return for individual securities is not always over the same common holding period, nor are the rates of return necessarily time-adjusted. An analyst may well estimate future

earnings and P/E to derive future price. He will

surely estimate the dividend. But he may not

discount the value to determine the acceptability of

the return in relation to the investor’s requirements.

A portfolio is a group of securities held together as

an investment. Investors invest their funds in a

portfolio of securities rather than in a single security

because they are risk averse. By constructing a

portfolio, investors attempt to spread risk by not

putting all their eggs into one basket. Thus

diversification of one’s holding is intended to

reduce risk in investment.

Most investors thus tend to invest in a group of

securities rather than a single security. Such a group

of securities held together as an investment is what

is known as a portfolio. The process of creating

such a portfolio is called diversification. It is an

attempt to spread and minimize the risk in

investment. This is sought to be achieved by

holding different types of securities across different

industry groups.

Portfolio Selection:

Portfolio analysis provides the input for the next

phase in portfolio management, which is portfolio

selection. The proper goal of portfolio construction

is to generate a portfolio that provides the highest

returns at a given level of risk. A portfolio having

this characteristic is known as an efficient portfolio.

The inputs from portfolio analysis can be used to

identify the set of efficient portfolios. From this set

of efficient portfolios the optimum portfolio has to

be selected for investment. Harry Markowitz

portfolio theory provides both the conceptual

framework and analytical tools for determining the

optimal portfolio in a disciplined and objective way. Portfolio revision, however, is subject to certain constraints, such as transaction costs (commission and brokerage), as well as the following:

1. Intrinsic difficulty: Portfolio revision is a

difficult and time-consuming exercise. The

methodology to be followed for portfolio revision is

also not clearly established. Different approaches

may be adopted for the purpose. The difficulty of

carrying out portfolio revision itself may act as a

restriction to portfolio revision.

2. Taxes: Tax is payable on the capital gains arising

from sale of securities. Usually, long term capital


gains are taxed at a lower rate than short-term capital

gains. To qualify as long-term capital gain, a

security must be held by an investor for a period not

less than 12 months before sale. Frequent sales of

securities in the course of periodic portfolio revision

of adjustment will result in short-term capital gains

which would be taxed at a higher rate compared to

long-term capital gains. The higher tax on short-

term capital gains may act as a constraint to

frequent portfolio revision.

ADVANTAGES OF PORTFOLIO

MANAGEMENT:

1. Identify Lagging Assets

One of the main benefits of analysis is showing an

investor how his assets stack up. By dividing his

assets into certain categories, such as asset class,

and subdividing these categories further into groups

such as industry, the investor can identify which of

his holdings are performing better and which are

lagging behind. This allows him to prune his

portfolio appropriately.

2. Provide Historical Context

A portfolio analysis will generally break down the

investor's holdings over time, showing how the

assets have performed over the course of various

years. This is most often demonstrated graphically.

The advantage of this method is to allow the

investor to match his portfolio against changes in

the broader economy. For example, if his assets

performed well during a boom, but fell during a

recession, he may want to purchase assets less

affected by economic ups and downs.

3. Benchmark Performance

It's difficult to measure the performance of a

portfolio without knowing the performance of other

comparable groups of assets, with which the

portfolio can be compared. For this reason, a

portfolio analysis will generally include a

comparison of the performance of the portfolio with

a number of different indexes over time, such as the

Dow Jones Industrial Average, the S & P 500, or a

well-regarded mutual fund.

4. Calculate Risk

A good portfolio analysis will also attempt to

identify the relative risk of any investor's portfolio.

For example, if a large amount of an investor's

holdings are tied up in low-rated bonds, a portfolio

analysis may identify him as susceptible to a severe

depreciation of his assets. By turns, if the investor

has concentrated most of his holdings in

conservative mutual funds, the analysis may show

that he could stand to take on some more risk, so as

to improve his potential upside.

Conclusion:

Finally, a risk analysis will generally identify areas

in which he has concentrated his holdings, giving

him an idea of where he is financially exposed. For

example, if a large percentage of the portfolio is

composed of assets that hinge upon a high interest

rate for performance, such as certain types of

interest rate derivatives or variable-rate bonds, he


could be exposed if the interest rate were to experience a severe drop.

8. Employee Perception Survey - An Effective Tool for an Entrepreneur in a Current Scenario


*Shubha N, ** Prof S.A Vasantha Kumara., ***Ms Reena Y.A

*PG Student, Department of Industrial Engineering and Management, DSCE, Bangalore-78 **Associate Professor, Department of Industrial Engineering and Management, DSCE, Bangalore-78

***Lecturer, Industrial Engineering and Management, DSCE, Bangalore-78 *ID:[email protected], **ID:[email protected], ***ID:[email protected]

Abstract— Entrepreneurs need to work hard amid the present challenging facets to be successful, which requires an understanding of how employees perceive their employer. The Employee Perception Survey (EPS) is therefore an employee attitude survey that provides an important view of the organization through the attitudes of the organization's employees. This survey allows employees to give honest, confidential input about their job and their organization. This discreet feedback acts as a powerful tool for understanding and meeting employee needs. Employees that are satisfied and motivated perform better, promoting customer loyalty and thereby leading to the betterment of the entrepreneur. The EPS defines for management, in detail, employee needs and concerns. The EPS gives employees the opportunity to evaluate the organization for which they work through numerical ratings, as well as open-ended free-response questions.

Key words:

Employee Perception Survey (EPS), hypothesis

test, interpretation.

I. INTRODUCTION

Entrepreneurship is the act of being an

entrepreneur, which can be defined as "one who undertakes innovations, finance and business acumen in an effort to transform innovations into economic goods". This may result in new organizations or may be part of revitalizing mature organizations in response to a perceived opportunity. The most obvious form of entrepreneurship is that of starting new businesses (referred as Startup Company); however, in recent years, the term has been extended to include social and political forms of entrepreneurial activity. When entrepreneurship is describing activities within a firm or


large organization it is referred to as intra-preneurship and may include corporate venturing, when large entities spin off organizations. Employee surveys are an important and popular tool that organizations use to solicit employee feedback. An employee attitude survey, together with a graphical interpretation of the survey, provides a way to improve levels of productivity and commitment by identifying the root causes of workplace attitudes, and further provides information about the extent to which employees are passionate about their work and committed to their organization personally. Listening to employees' insight and suggestions for improvement provides the entrepreneur with valuable

information that can be acted upon to increase employees' satisfaction in the workplace. Further, by conducting hypothesis tests, the results will allow entrepreneurs to scrutinize and further boost their organizational productivity and to motivate employees of various cadres in the organization, thereby providing an effective tool for measuring and enhancing the relationship of employee and employer in the organization.

Perception is defined as the process by which people organize and interpret their sensory impressions in order to give meaning to the world around them [1]. Perception is basically how each individual views the world around them. What one perceives can be very different from actual reality [1]. The perception of one person will vary greatly from that of another person. Perception can have a huge impact on an organization's behavior as a whole.

Employee perception surveys can boost the morale of the employees of those who may not have many other opportunities to confidentially express their views. Employee attitude surveys provide a way to improve levels of productivity and commitment by identifying the root causes of workplace attitudes. Employee satisfaction surveys allow for increased productivity, job satisfaction, and loyalty by identifying the root causes of employee satisfaction and targeting these areas. Employee perception surveys measure the extent to which employees are passionate about their work and emotionally committed to their company and to their coworkers. Organizations may also benefit by conducting a more comprehensive organizational assessment survey. Listening to employees' insights and suggestions for improvement provides the organization with valuable information that can be acted upon to increase satisfaction in the workplace.

The information from these surveys will allow you to boost organizational productivity and positively affect



your organization's top and bottom lines. This is a very effective tool for measuring and ultimately improving various relationships within organizations.


An employee perception survey reveals to the entrepreneur the various facets of employees' attitudes towards the organization. Perception is reality: because employees at every organization act on the basis of their perceptions, management must be keenly aware of employees' views. Carrying out an employee perception survey delivers a successful means of measuring and acting upon employees' current beliefs on many job-related subjects. Understanding and working with employees will show improvements in motivation and retention across the organization. Surveys can be carried out online or as paper-based documents and tailored to the needs of the organization. Employees that are satisfied and motivated perform better, leading to improved company loyalty. The EPS defines for management, in detail, employee needs and concerns. Employee satisfaction is improved when needs are met and concerns are shared openly. The EPS gives employees the opportunity to evaluate the organization for which they work through numerical ratings, as well as open-ended free-response questions. Costs are dependent on the size of the organization and the extent of the survey.

Employee perception surveys are widely used for

gathering and assimilating HR-related data in companies and agencies of all sizes across the world [2]. They have the potential to improve dramatically workplace environments and can be used to identify emerging hotspots and mitigate the downside of organizational change initiatives. They can

also alleviate absenteeism and stress, address issues of bullying and harassment and accurately identify workplace psychosocial risk factors [3][4]. EPS have been in existence since the early 1900s [5]. Developed initially by industrial psychologists, EPS was traditionally underpinned by extensive statistical analyses. This enabled organizations to be confident that the instruments they were using were accurate and assessed stable employee opinions and perceptions, rather than transient feelings and thoughts [4]. In other words, the results are the same irrespective of what time period employees are surveyed and are not distorted by any external factors.

Though many researchers have proposed many tools

for employee perception measures they could not address all the aspects of employee perception. In this context the proposed method assumes special significance.

II. BENEFITS OF EPS TO THE ENTREPRENEUR IN THE CURRENT SCENARIO

The following are some of the benefits of EPS:

Identifying cost-saving opportunities.
Improving productivity.
Reducing turnover.
Curbing absenteeism.
Strengthening supervision.
Evaluating customer-service issues.
Assessing training needs.


Streamlining communication.
Benchmarking the organization’s progress in relation to the industry.
Gauging employees' understanding of, and agreement with, the company mission.
Assisting to understand your position with Investors in People reaccreditation.

III. FACTORS LEADING TO EMPLOYEE PERCEPTION

There are some critical factors which lead to employee perception. Some of those identified are shown in Figure 1.

FIGURE 1: FACTORS LEADING TO EMPLOYEE PERCEPTION [6].

Career Development – Opportunities for Personal Development: Organizations with a high level of commitment provide employees with opportunities to develop their abilities, learn new skills, acquire new knowledge and realize their potential.

Leadership – Clarity of Company Values: Employees need to feel that the core values for which their companies stand are unambiguous and clear.

Leadership – Respectful Treatment of Employees: Successful organizations show respect for each employee's qualities and contribution, regardless of their job level.

Leadership – Company's Standards of Ethical Behavior: A company's ethical standards also lead to the engagement of an individual.

IV. DESIGN AND DEVELOPMENT OF EPS QUESTIONNAIRES

TABLE I: SURVEY QUESTIONNAIRES

NOTE: SD – Strongly Disagree; D – Disagree; N – Neutral; A – Agree; SA – Strongly Agree.

V. HOW TO MEASURE EMPLOYEE PERCEPTION?

Step I: Listen. The employer must listen to his employees and remember that this is a continuous process. The information employees supply will provide direction; it is the only way to identify their specific concerns. When leaders listen, employees respond by becoming more perceived. This results in increased productivity and employee retention. Perceived employees


are much more likely to be satisfied in their positions, remain with the company, be promoted, and strive for higher levels of performance.

Step II: Measure the current level of employee perception. Employee perception needs to be measured at regular intervals in order to track its contribution to the success of the organization. But measuring perception (feedback through surveys) without planning how to handle the result can lead employees to disengage. It is therefore not enough to feel the pulse; the action plan is just as essential.

Step III: Identify the problematic areas. Identify the exact problem areas that lead to employee turnover.

Step IV: Take action to improve employee perception by acting upon the problematic areas. Nothing is more discouraging to employees than to be asked for their feedback and then see no movement toward resolution of their issues. Even the smallest actions taken to address concerns will let the staff know that their input is valued. Feeling valued will boost morale, motivate and encourage future input. Taking action starts with listening to employee feedback, and a definitive action plan then needs to be put in place.

VI. ANALYSIS AND RESULT

The questionnaires designed for the EPS are divided into three categories:

1. Perception of the employee before joining.
2. Perception in the current working situation.
3. Perception of the expectations from the organization.

Reliability tests were carried out on the EPS questionnaires:

• Construct level reliability measures.
• Correlation of the latent variables.
• Reliability measures for the whole instrument.


Figure III: Snapshot of the survey questionnaires.

Figure III shows a sample snapshot of the survey. After collecting feedback on the EPS questionnaires from 25 employees of a concern, the reliability measures of the data were computed using the software "visualPLS" and are as follows.

Table I: Construct level reliability measures
Construct   Composite Reliability   AVE        Cronbach Alpha
Before      0.94227                 0.529685   0.928998
Current     0.901247                0.28163    0.863066
Future      0.936817                0.551676   0.923944

Table II: Correlation of the latent variables
Correlations   Before   Current   Future
Before         -        0.599     0.528
Current        -        -         0.284

Table III: Reliability measures for the whole instrument
Construct    Composite Reliability   AVE        Cronbach Alpha
Perception   0.954775                0.312036   0.949381
Note: AVE - Average Variance Extracted.

According to these reliability measures, composite reliability for the different levels of employee perception (Table I) is high at all levels; Average Variance Extracted (AVE) for the "before" and "future" constructs is high compared with the current situation; and Cronbach's alpha is also high at all three levels. Table II shows the correlations of the latent variables: the correlation between the "current" and "before" constructs is 0.599 and between the "future" and "before" constructs is 0.528. The reliability measure for the whole instrument (Table III) is also high, and hence the reliability of the perception measurement tool is high.
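As an illustration of how such reliability figures can be cross-checked outside visualPLS, the following minimal Python sketch computes Cronbach's alpha for one construct from raw Likert-coded responses. The before_items matrix below is hypothetical sample data, not the study's actual dataset; composite reliability and AVE are not reproduced here because they additionally require the factor loadings estimated by a PLS tool.

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: respondents x items matrix of Likert scores (1 = SD ... 5 = SA)
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                              # number of items in the construct
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical responses of six employees to a four-item "before joining" construct
before_items = [
    [4, 5, 4, 5],
    [3, 4, 4, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 4, 3],
]
print("Cronbach's alpha:", round(cronbach_alpha(before_items), 3))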

VII. CONCLUSION

Using the EPS tool consistently helps the entrepreneur to determine employees' attitudes towards their organization, which in turn helps in detecting the factors affecting productivity, employee motivation, turnover rate, work satisfaction, etc. Employee perception lies in the hands of the organization and requires a perfect blend of time, effort, commitment and investment to craft a successful endeavor.

REFERENCES

[1] Research Paper #93706, "Perception and Decision Making".
[2] Kraut, A. I. (1996). Organizational surveys: Tools for assessment and change. Jossey-Bass: San Francisco.
[3] Carr, J. Z., Schmidt, A. M., Ford, K. & DeShon, R. P. (2003). Climate perceptions matter: A meta-analytic path analysis relating molar climate, cognitive and affective states and individual level work outcomes. Journal of Applied Psychology, 88:4, pp. 605-619.
[4] Cotton, P. (2003). Using employee opinion to improve people outcomes. Australian Government Comcare, March 2005.
[5] Reichers, A. E. & Schneider, B. (1990). Climate and culture: An evolution of constructs. In B. Schneider (Ed.), Organizational climate and culture.
[6] Nitin Vazirani, Dean in OB and HR, SIES College of Management Studies, Nerul. Employee Engagement, WPS05.
[7] Simon Kettleborough, Performance Through Inclusion, "Capturing employee perception".
[8] Steve Crabtree (2004). Getting personnel in the work place - Are negative relationships squelching productivity in your company? Gallup Management Journal, June 10, 2004.
[9] Glenda Goodin, Terry Goodin, "Thinking Like an Entrepreneur - The Secret Ingredients of Successful Problem Based Learning Modules", Middle Tennessee State University, USASBE_2011_Proceedings, Page 0403.
[10] David B. Audretsch and Doga Kayalar-Erdem, "Determinants of Scientist Entrepreneurship: An Integrative Research Agenda".
[11] Dan Remenyi, Proceedings of the 2nd European Conference on "Entrepreneurship and Innovation", p. 94.

9. RECRUITMENT AND RETENTION IN A GARMENT INDUSTRY

*Mohammed Ziaulla, ** S A Vasantha kumara, *** G. Mahendra

*M.Tech (MEM) student, Dept of Industrial Engineering & Management,DSCE, Bangalore – 78

Email: [email protected], Contact no: 9535089229

**Associate Professor, Dept of Industrial Engineering & Management,DSCE, Bangalore – 78,

E-Mail: [email protected]

*** HR Manager, Bombay Rayon Fashions Limited, Bangalore


Abstract - The unpredictable economic environment has forced organizations to include and facilitate best management practices as one of the important tools to win over their employees, so as to recruit and retain skilled and talented resources and maintain a competitive edge.

One of the important problems faced by the organization is selecting the right candidate from the pool of applicants, motivating employees, and retaining the right kind of star employees in the organization. Companies are forced to function in a world of change and complexity, and hence it is all the more important to have the right employees in order to survive the surrounding competition. Recruitment plays a very crucial role and can be used as a proactive tool in retaining employees by selecting the most stable applicant with a relevant job profile. High turnover affects a company in a negative way, and retention strategies should be high on the agenda. One strategy does not fit all organizations for recruiting and retaining employees; many of the techniques and practices have to be tailored to the organization's culture. Work profile is another major constituent of job satisfaction and motivation. The best practices applied to cultivate a better, fostering environment have to be an integral part of the organization so as to meet employees' expectations.

Keywords – Motivation, Recruitment, Retention, Skills, Talent.

I. INTRODUCTION OF THE COMPANY

Bombay Rayon Fashions Limited (BRFL) is an India-based company. It is an integrated textile company engaged in the manufacture, export and retailing of a high-end designer range of fabrics and garments. The Company sells its products to other textile manufacturers, exporters as well as retailers. The Company has 32 manufacturing facilities across India with a total capacity to manufacture 220 million meters of fabric per annum, while the garment manufacturing facilities stand at 88.80 million pieces per annum. The fabric manufactured by the Company is used for captive consumption and sold in the domestic market, as well as exported to Middle East and European countries. During the fiscal year ended March 31, 2010 (fiscal 2010), the Company commenced commercial production at its manufacturing facilities for yarn dyeing, weaving and processing at Tarapur and garmenting at Ichalkaranji, Islampur, Latur and Osmanabad.

The Bombay Rayon group has fashion fabric manufacturing units in New Mumbai, Silvassa, Sonale and Bangalore. The company has ultra-modern facilities for product development, a design studio and efficient sampling infrastructure to provide quality service to its customers in India and abroad.

Bombay Rayon Fashions Limited, together with its subsidiaries, engages in the manufacture and sale of fabrics and apparel. It offers cotton fabrics and cotton fabrics mixed with manmade fibers under the Bombay Rayon brand name to garment manufacturers and retailers. The company's apparel collections include casual and formal wear for men, women, boys, and girls. Bombay Rayon Fashions also provides various accessories, including buttons. In addition, the company is involved in the wholesale marketing and distribution of clothing products. Further, it operates approximately 15 retail stores/outlets under the brand name Guru and 9 franchises in Spain, Italy, the Netherlands, and Dubai. It operates in India, the Middle East, Europe, and


the United States. The company was founded in 1986 and is headquartered in Mumbai, India.

II. COMPANY HISTORY

Our Company was incorporated as Mudra Fabrics Private Limited (MFL) on May 21, 1992. On October 13, 1992 the Company was converted into a public limited company. Subsequently, on September 30, 2004, the name of the Company was changed to Bombay Rayon Fashions Limited (BRFL). In March 2005, with a view to consolidating the business of our Group, Bombay Rayon Private Limited (BRPL) was amalgamated with MFL, and two partnership firms of the Group, i.e., B R Exports and Garden City Clothing, were taken over by BRPL and MFL respectively.

The Bombay Rayon Group

The Bombay Rayon Group started in 1986 with the incorporation of BRPL. The Group is in the textile industry having facilities for manufacture of fabrics, garments, design development, sampling etc. Presently, the Group is engaged in carrying out the activities of manufacture of fabrics for domestic sale and export and manufacture of garments for export. The Bombay Rayon Group was promoted by Mr. Janardan Agrawal. Subsequently Mr. Aman Agrawal and Mr. Prashant Agarwal, sons of Mr. Janardan Agrawal joined the Group and are now in charge of the management of the company as well as our Group companies. Key Events in the history of the Bombay Rayon Group / BRFL.

1986: Bombay Rayon Private Limited, the first company of the Bombay Rayon Group, incorporated for undertaking the business of manufacture of woven fabric.
1990: Start of manufacturing facilities for woven fabric at Navi Mumbai by BRPL.
1992: BRFL incorporated for undertaking the business of manufacture of woven fabric.
1998: Start of manufacturing facilities for woven fabric at Sonale in Thane district by BRFL.
1998: Fabric export started in the Group; B R Exports was formed for undertaking this business segment.
2003: Group turnover reaches Rs. 100 crores.
  * Start of the Group's Silvassa unit for manufacture of woven fabric under Reynold Shirting Private Limited.
  * Garment manufacturing for exports started in the Group; Garden City Clothing was formed for undertaking this business.
2004: Decided to set up an integrated facility of yarn dyeing, weaving, process house and garment manufacturing at the apparel park in Doddaballapura near Bangalore.
  * Awarded contract to Gherzi Eastern Limited for technical appraisal of the Expansion Project.
  * Land allotment by KIADB at the apparel park being developed in Doddaballapura near Bangalore.
2005: Consolidation of business of companies and firms of the Promoters.
  * Group turnover crosses Rs. 140 crores.
  * Company ventures into home furnishing and made-ups.
  * Incorporates a wholly owned subsidiary at Almere, the Netherlands.
  * EXIM Bank acquired equity in the company; this was the first time that EXIM Bank had acquired an equity stake in an Indian company on a preferential basis.
2007: Bombay Rayon Fashions Ltd bought LNJ Apparels, a unit of Rajasthan Spinning & Weaving Mills Ltd, for a sum of Rs 25.50 crore.

III RECRUITMENT IN GARMENT INDUSTRY

The garment industry stands on the substratum of the fashion industry. Fashion apparel is nothing but clothing that is in style at a particular time. The concept of fashion implies a process of style change, because fashions in garments have taken very different forms at different times in history.

This has led to an increase in the demand for candidates with higher productivity and efficiency and enhanced design capabilities, with expertise in garment manufacturing to help increase productivity. They should not only have a sense of creativity, but also a passion to excel in this industry. It is a demanding sector that requires a combination of diligence as well as skill. India has a mature garment industry, which in turn ensures that India has a vast body of trained manpower. We have an exclusive database on key personnel in each area, from merchandising, design and manufacturing to import-export government regulations, freight and shipments.

Recruitment is an important part of an organization's human resource planning and its competitive strength. Competent human resources at the right positions in the organisation are a vital resource and can be a core competency or a strategic advantage for it. The objective of the recruitment process is to obtain the number and quality of employees that can be selected in order to help the organization achieve its goals and objectives. With the same objective, recruitment helps to create a pool of prospective employees for the organization so that the management can select the right candidate.

The main challenge is that, in this competitive global world with increasing flexibility in the labour market, recruitment is becoming more and more important in every business. Therefore, recruitment serves as the first step in fulfilling the needs of organisations for a competitive, motivated and flexible human resource that can help achieve their objectives.

IV. EMPLOYEE RETENTION IN GARMENT INDUSTRY

Employee retention involves taking measures to encourage employees to remain in the organization for the maximum period of time. Companies are facing a lot of problems in employee retention these days. Hiring knowledgeable people for the job is essential for an employer, but retention is even more important than hiring. There is no dearth of opportunities for a talented person; there are many organizations looking for such employees. If a person is not satisfied with the job he is doing, he may switch over to some other, more suitable job. In today's environment it becomes very important for organizations to retain their employees.

The top organizations are on the top because they value their employees and they know how to keep them glued to the organization.


Employees stay with or leave organizations for various reasons. The reasons may be personal or professional. These reasons should be understood by the employer and taken care of. Organizations are becoming aware of these reasons and are adopting many strategies for employee retention. In this section we study various topics related to employee retention: why it is needed, basic practices, myths, etc., in detail.

V METHODOLOGY ADOPTED


The study was conducted at BRFL. It was found that there was noticeable attrition, and hence a continuous recruitment process was inevitable. An instrument was designed for the workers to query the recruitment process and the working conditions in the factory; further, the supervisors were also questioned about their relationship with the workers and their opinion on improving that relationship. This paper gives a proposal of the study. The outcome of the study should throw light on the major issues and hence recommend corrective actions to the management.

• During the survey separate questions were asked for workers and the staff of BRFL.

• Two rounds of survey have been done to know the basic criteria of retention in the industry.

VI CONCLUSION

Finally, we can conclude that there should be a proper communication channel between the workers and the staff of the garment industry, and that there is a need for a bonding system between the management, the workers and the staff. By means of such a bonding system the attrition rate in the garment industry can be reduced.

VII REFERENCES


[1] Sources for the data quoted: (a) World Investment Report, 2002, U.N.C.T.A.D. (b) http://www.heureka.clara.net/gaia/global02.
[2] Sources for data used in this section: (a) Oxfam Hong Kong Briefing Paper, April 2004: Turning The Garment Industry Inside Out. (b) Behind The Brand Names, I.C.F.T.U.
[3] References for the history of the garment industry discussed in this section: (a) 'Women In The Global Factory' by Annette Fuentes & Barbara Ehrenreich, 1983, South End Press, U.S.A.; Cornerstone Publication, India.
[4] Birnbaum (2000): 'Birnbaum's Global Guide To Winning The Great Garment War', Hong Kong, Third Horizon Press. Descriptions of the activities of Lee & Fung and other trading companies can be found in the World Investment Report 2002.
[5] Quoted in the report to the Indian Parliament by central minister Sankar Singh Baghela on December 17, 2004, from 'Cost Benchmarking Study: India vis-a-vis Bangladesh, Indonesia, Sri Lanka, China, & Pakistan' by the Cotton Textile Export Promotion Council, India.
[6] Ballesteros Rodriguez, Sara, "Talents: The Key for Successful Organizations" (2010).

10. EMPLOYEE SATISFACTION

Anju. S. B*, Dr. H. Ramakrishna**, Kaushik. M. K ***

* PG Student, Department Of Industrial Engineering and Management, DSCE, Bangalore-78, India

**Professor & Head, Department of Industrial Engineering & Management, DSCE, Bangalore-78, India.

*** Manager, HDFC SLIC, Bangalore-68, India.


Abstract: Employees are the basis of every organization. A study of employee satisfaction at a company is carried out to assess the employee satisfaction level in the present competitive environment of industry, and to help in knowing and reading the minds of current-generation professionals regarding their company culture, compensation, work atmosphere, management support, job satisfaction, performance appraisal and career growth opportunities.

Keywords: Job satisfaction, Working conditions, Company culture, Performance appraisal, Career growth.

III. INTRODUCTION

A. Introduction:

"Employee satisfaction" is the term used to describe whether employees are happy and contented and fulfilling their desires and needs at work.

An organization focuses much of its energy on customer service and customer satisfaction. At the same time it is important not to lose sight of the people who deliver the end product or service to the customers - the employees. The organization will only thrive and survive when its employees are satisfied.

Keeping morale high among workers can be of tremendous benefit to any company, as happy workers will be more likely to produce more, take fewer days off, and stay loyal to the company.

Employee satisfaction is paramount, as it is what will determine the success or failure of a company. When employees are satisfied and happy about working in an organization, the customer is the first person to notice it.

Raises and bonuses can seriously affect employee satisfaction and should be given when possible. Money cannot solve all morale issues, though, and if a company with widespread problems for its workers cannot improve their overall environment, a bonus may be quickly forgotten as the daily stress of an unpleasant job continues to mount.

Employee satisfaction has been defined as a function of performance delivered and expectations met: it is a person's feeling of pleasure or disappointment resulting from comparing an outcome to his or her expectations. If the performance falls short of expectations, the employee is dissatisfied; if it matches the expectations, the employee is satisfied. High satisfaction implies improvement in efficiency and performance in doing work or service. It is all the more important for any organization to offer high satisfaction, as it reflects high loyalty and will not lead to switching over once a better offer comes in.


If possible, provide amenities to your workers to improve morale. Make certain they have a comfortable, clean break room with basic necessities such as running water. Keep facilities such as bathrooms clean and stocked with supplies.

One of the best ways to maintain employee satisfaction is to make workers feel like part of a family or team. Holding office events, such as parties or group outings, can help build close bonds among workers.

Basic considerations like these can improve employee satisfaction, as workers will feel well cared for by their employers.

The backbone of employee satisfaction is respect for workers and the job they perform. In every interaction with management, employees should be treated with courtesy and interest. An easy avenue for employees to discuss problems with upper management should be maintained and carefully monitored. Even if management cannot meet all the demands of employees, showing workers that they are being heard and putting honest dedication into compromising will often help to improve morale.

Satisfaction, being a continuous process, starts from day one and gets reinforced with time, depending on the various factors considered important by the individual employee. Loyalty towards the organization starts to develop when the employee continues to get positive reinforcement on various important aspects for the duration of the employment.

It becomes important to be aware of and understand the signals that are given out by the employees. The management should do well to catch them before it is too late and the employee makes the decision to quit. This understanding gives the employers an edge and gives them the time to take corrective measures, if necessary, in order to prevent talent loss. It could be that the employee is not happy with the environment or is suffering from a relationship issue with a colleague or a superior. These issues need to be handled before they get out of hand.

Employee satisfaction mapping can be the key to a better motivated and loyal workforce, which leads to better organizational output in the form of better products and services and results in overall improvement of an organization.

If a person is not satisfied with the job he is doing, he may switch over to some other, more suitable job. In today's environment it becomes very important for organizations to retain their employees. Organizations are becoming aware of this and are adopting many strategies for employee retention.

B. Need for study:

The study of employee satisfaction helps the company to maintain standards and increase productivity by motivating the employees. This study tells us how capable the employees are, how interested they are in the workplace, and what still needs to be done to satisfy them. Since human resources are the most important resource for any organization, a study of employee satisfaction helps to understand the working conditions and the things that prevent employees from working properly. The majority of the work may be carried out by machines and equipment, but without manual effort nothing can be done. Hence a study of employee satisfaction is necessary.

C. Problem statement:

To assess the satisfaction level of Employees on various work related aspects and provide suggestion for improving the same as a way of tangible expression of concern of employers towards their employee’s welfare and well being.

D. Objectives of study:

To identify the various factors influencing Employee satisfaction

To study the level of satisfaction from various job related aspects

To study the inter-relationship between job aspects and level of satisfaction

Findings can also be considered to assess the training needs of the employees

E. Methodology:

The methodology followed for conducting the study includes the specification of the questionnaire design, the data collection and the statistical tools used for analyzing the collected data.

A Likert scaling technique has been used for each question:

1. Strongly agree

2. Agree

3. Neutral

4. Disagree

5. Strongly disagree
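To make the link between the Likert-coded responses and the tables analysed below concrete, the following minimal Python sketch shows one way the coded answers could be tabulated into a demographic-by-satisfaction contingency table. The column names and records are hypothetical illustrations, not the survey's actual data.

import pandas as pd

# Hypothetical coded responses: one row per respondent
responses = pd.DataFrame({
    "gender": ["Male", "Female", "Male", "Female", "Male"],
    "satisfaction": ["Very satisfied", "Somewhat satisfied", "Neutral",
                     "Somewhat satisfied", "Very satisfied"],
})

# Contingency table of gender against level of satisfaction,
# in the same layout as Tables 1 to 5 of this study
table = pd.crosstab(responses["gender"], responses["satisfaction"], margins=True)
print(table)

The same crosstab call, with "education", "age", "experience" or "position" in place of "gender", would yield the other tables used in the chi-square tests that follow.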

F. Sample constituents:

1. Officer/Director/Manager/Supervisor-29

2. Professional-4

3. Sales Representative-10

4. Administrative Support- 2

5. Group Leader-1

6. Customer Service-4

G. Tools of Analysis:

TABLE 1: GENDER AND LEVEL OF SATISFACTION
Gender   Very satisfied   Somewhat satisfied   Neutral   Somewhat dissatisfied   Very dissatisfied   Total
Male     8                25                   6         -                       1                   40


Chi-Square Test: The chi-square test is applied to test goodness of fit, that is, to verify the distribution of the observed data against an assumed theoretical distribution. It is therefore a measure of the divergence between actual and expected frequencies; Karl Pearson developed the method to test the difference between the theoretical (hypothesised) and the observed values.

Chi-square test: X² = Σ [(O - E)² / E]
Degrees of freedom: v = (R - 1)(C - 1)

where
O = Observed Frequency
E = Expected Frequency
R = Number of Rows
C = Number of Columns

For all the chi-square tests the table value has been taken at the 5% level of significance.
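The calculation described above can be reproduced in a few lines of Python. The sketch below assumes a complete two-row contingency table: the first row mirrors the Male counts reported in Table 1, while the second row is purely hypothetical since the remaining rows are not shown here. scipy's chi2_contingency is included only as a cross-check of the hand formula.

import numpy as np
from scipy.stats import chi2, chi2_contingency

# Observed frequencies (rows = gender, columns = level of satisfaction)
observed = np.array([
    [8, 25, 6, 0, 1],   # Male row as reported in Table 1
    [5, 10, 4, 2, 1],   # hypothetical second row, for illustration only
])

# Expected frequencies under independence: E = row total * column total / grand total
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals * col_totals / observed.sum()

chi_square = ((observed - expected) ** 2 / expected).sum()  # X^2 = sum((O - E)^2 / E)
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)     # v = (R - 1)(C - 1)
critical = chi2.ppf(0.95, dof)                              # table value at the 5% level

print("chi-square:", round(chi_square, 4), "df:", dof, "table value:", round(critical, 3))
print("reject null hypothesis of no association?", bool(chi_square > critical))

# Cross-check with scipy (no Yates correction, to match the hand formula)
stat, p_value, dof_check, _ = chi2_contingency(observed, correction=False)
print("scipy chi-square:", round(stat, 4), "p-value:", round(p_value, 4))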

IV. ANALYSIS AND INTERPRETATION OF DATA

The results of the analysis of the collected data are presented under different heads.

Gender of the respondents and the level of satisfaction towards job

The gender-wise classification of the sample respondents and their level of satisfaction towards their job are given in Table 1. In order to find out the association between the gender of the respondents and their level of satisfaction towards the job, the chi-square test is applied.

Null hypothesis: There is no significant association between the gender of the respondents and their level of satisfaction towards the job.

As the calculated chi-square value (14.2654) is greater than the table value (9.488) at the 5% level of significance for 4 degrees of freedom, the null hypothesis is rejected, and it can be concluded that the association between the gender of the respondents and their level of satisfaction towards the job is significant.

Education of the respondents and the level of satisfaction towards job

The education-wise classification of the sample respondents and their level of satisfaction towards their job are given in Table 2. In order to find out the association between the education of the respondents and their level of satisfaction towards the job, the chi-square test is applied.

Null hypothesis: The association between the education of the respondents and their level of satisfaction towards job is not significant.

As the calculated chi-square value (4.089492) is less than the table value (9.488) at the 5% level of significance for 4 degrees of freedom, the null hypothesis is accepted and it can be concluded that the association between the education of the respondents and their level of satisfaction towards job is not significant.

TABLE 2: EDUCATION AND LEVEL OF SATISFACTION
Education    Very satisfied   Somewhat satisfied   Neutral   Somewhat dissatisfied   Very dissatisfied   Total
Graduation   5                16                   6         0                       3                   30

Age group of the respondents and the level of satisfaction towards job

The age-wise classification of the sample respondents and their level of satisfaction towards their job are given in Table 3. In order to find out the association between the age of the respondents and their level of satisfaction towards the job, the chi-square test is applied.

Null hypothesis: The association between the age of the respondents and their level of satisfaction towards job is not significant.

As the calculated chi-square value (6.2845) is less than the table value (15.507) at the 5% level of significance for 8 degrees of freedom, the null hypothesis is accepted and it can be concluded that the association between the age of the respondents and their level of satisfaction towards job is not significant.

Experience of the respondents and the level of satisfaction towards job

The experience-wise classification of the sample respondents and their level of satisfaction towards their job are given in Table 4. In order to find out the association between the experience of the respondents and their level of satisfaction towards the job, the chi-square test is applied.

Null hypothesis: The association between the experience of the respondents and their level of satisfaction towards job is not significant.

As the calculated chi-square value (12.2094) is less than the table value (26.296) at the 5% level of significance for 16 degrees of freedom, the null hypothesis is accepted and it can be concluded that the association between the experience of the respondents and their level of satisfaction towards job is not significant.

Position of the respondents and the level of satisfaction towards job

The position-wise classification of the sample respondents and their level of satisfaction towards their job are given in Table 5. In order to find out the association between the position of the respondents and their level of satisfaction towards the job, the chi-square test is applied.

TABLE 3: AGE AND LEVEL OF SATISFACTION
Age        Very satisfied   Somewhat satisfied   Neutral   Somewhat dissatisfied   Very dissatisfied   Total
Below 25   1                5                    1         -                       0                   7
25-30      6                16                   2         1                       0                   25

TABLE 4: EXPERIENCE AND LEVEL OF SATISFACTION
Experience                     Very satisfied   Somewhat satisfied   Neutral   Somewhat dissatisfied   Very dissatisfied   Total
Less than 3 months             2                4                    -         -                       -                   6
3 months to less than 1 year   3                2                    2         -                       -                   7
1 year to less than 4 years    5                8                    4         6                       1                   24


Null hypothesis: There is no significant association between the position of the respondents and their level of satisfaction towards the job.

As the calculated chi-square value (24.5568) is less than the table value (31.410) at the 5% level of significance for 20 degrees of freedom, the null hypothesis is accepted, and it can be concluded that the association between position and level of satisfaction is not significant.

V. RECOMMENDATION

In order to contain and keep a check on the attrition rate in an organization, employee satisfaction surveys should be carried out at regular intervals by employers, both as a tangible expression of care and concern towards their employees and to assess the training needs of employees. Though it may not be possible to meet all the expectations of an employee, sincere efforts should be made by an employer in this regard in order to maintain high morale among the employees.

Maintaining good working conditions, availability of the latest tools and technologies, providing proper training, arranging tours and trips, recognition from supervisors, frequency of incentives and bonuses, opportunity to work on interesting projects and career growth are some of the factors that need to be taken care of by employers in order to maintain a high level of satisfaction among employees.

VI. CONCLUSION

From the above study we can identify and measure the various work-related elements and their association with the level of satisfaction of employees. Accordingly, employers should strive to work on the factors that directly affect the level of satisfaction amongst employees, thereby increasing their level of satisfaction to get higher productivity from employees as well as to contain the increasing attrition rate in the organization.

VII. REFERENCES

K. Ashwathappa, "Human Resource Management", 5th Edition, Tata McGraw-Hill, New Delhi.
S. P. Gupta, "Statistical Methods", 36th Edition, Sultan Chand & Sons, New Delhi.
Stephen Robbins, "Essentials of Organizational Behaviour", Prentice Hall International Edition.

TABLE 5: POSITION AND LEVEL OF SATISFACTION
Position                               Very satisfied   Somewhat satisfied   Neutral   Somewhat dissatisfied   Very dissatisfied   Total
Officers/Director/Manager/Supervisor   6                18                   4         -                       -                   28
Professional                           0                0                    1         1                       -                   2
Sales Representative                   4                5                    2         1                       -                   12


11. IMPACT OF ORGANISATION'S CULTURE ON EMPLOYEE PERFORMANCE

*Naveen Kumar P, **S A Vasantha kumara, ***Darshan Godbole
*PG Student, Department of Industrial Engineering and Management, DSCE, Bangalore-78, India

**Associate Professor, Department of Industrial Engineering & Management, DSCE, Bangalore-78, India.

***Branch Manager, Wenger & Watson Inc., Bangalore-94.

*ID:[email protected],**ID:[email protected],***ID:[email protected]

Abstract—Culture is concerned with the beliefs and values on the basis of which people interpret experiences and behave, individually and in groups. The economy is always a roller-coaster ride, and the organization is prone to the changes of its marketing and economic environment. The trends in the organization have to be continuously monitored and balanced so that it can leverage itself to survive. Culture determines the make or break of the company. The unnecessary flavours in the organization have to be removed and newer ingredients introduced to achieve the desired change within the boundary of the culture. The performance, satisfaction level and incremental transitional changes do not happen within a day; the process requires time and monitoring. Motivation and training have to be included so as to foster a comfortable organizational environment. The orchestration among the various departments, teams and individuals has to be precisely managed, at least so that the resistance to the change in the culture can be minimized.

A survey was conducted in an organization to determine the impact of the organization's culture on employee performance and productivity level, and also the role played by organization culture in retaining employees. Demographic parameters like gender, age and work experience were considered. The analysis of the survey suggests that the organization's culture plays a very crucial role in determining employee performance and productivity level, and also indicates that it has a role to play in retaining the employees.

Keywords - Culture, demography, performance, retention

I. INTRODUCTION

A. Culture - The foundation:
Culture plays a very important role in establishing a base for the smooth functioning of the organization. Culture determines the make or break of the company. No two companies have the same culture. The most successful companies too have to change and adjust their culture with time so as to manage and sustain themselves in the ongoing competition around them. The culture of an organization reflects itself in the employees' performance and productivity.

B. The environment to live:
Company culture is the one that promotes and fosters a better working environment where every employee can feel a sense of life. Promoting transparency in companies and their procedures, to a certain extent, can really make a difference. Every employee has to be considered an asset who can perform and deliver a better result. Company culture is the distinctive personality of the organization. It determines how members act and how energetically they contribute to teamwork, problem solving, innovation, customer service, productivity, and quality. It is a company's culture that makes it safe (or not safe) for a person, a division or the whole company to raise issues and solve problems, to act on new opportunities, or to move in new, creative directions.

C. Emphasizing the values:
The values are the core beliefs of the employees about their job and how they do it. Job performance is greatly affected by, and reflects, the company's culture. Culture and personality are similar: when people describe a national, regional, or organizational culture they use words that can apply to a person, like "tough or friendly", "supporting or offensive", etc.

II. DEFINING THE STRUCTURE

Any company can be classified or categorized into different functional levels. The culture is applicable to all the levels of the organization but may be slightly tuned to match the appropriate level. For example, the culture may provide liberty on the management side, whereas it can act a bit tough at the working level. This may not be the same across all companies; it is tailored accordingly.

A company may undergo several cultural changes over a period of time with varying market trends, economic changes, technological advancements, etc. The culture needs to be restructured periodically and appropriately so as to make a close fit for a stable working environment. Defining a new structure or modifying the existing structure is not an easy task; there are several constraints that really hold back the restructuring ambition. The culture can be structured periodically:

1. One predefined structure exists in a company, and the employees have to adjust to the existing culture.
2. Based on changes in work management and leadership styles, the culture can be fine-tuned to fit the chord.

No matter what changes are made or what styles are followed in the organization culture the bottom line is to keep employees happy and result oriented.


The problem – Defining the new culture:
A paradigm shift in the working style and in the culture is not an easy game to play. Resistance is the one opposing force that really acts as a barricade to any cultural change, and this resistance comes from every organization's greatest asset - its employees. Keeping the employees satisfied is not an easy task to handle. Motivation is another factor that drives employees' performance and productivity. The organization culture has to stress more on motivating employees and should also retain its employees. If the culture can make an employee committed to their respective organization, then the culture has had a very positive influence on the employee's mindset. What do employees want from their organization in order to feel committed? The primary reason for working is to obtain money (Jackson and Carter, 2007), but could employers and their culture do more in order to retain their employees? Or, even more important, what do employees want from their employer in order to feel committed and prefer to stay? There may be a difference between how employers try to retain employees and how employees actually would prefer to be retained by their employers, and this causes a gap in developing strategies to retain employees.

Job satisfaction:
High levels of absenteeism and staff turnover can affect your organization. Recruitment and retraining take their toll. Few practices (in fact, few organizations) have made job satisfaction a top priority, perhaps because they have failed to understand the significant opportunity that lies in front of them. Satisfied employees tend to be more productive, creative and committed to their employers. The job of a manager in the workplace is to get things done through employees, and to do this the culture should be able to motivate employees. Motivation practice and theory are difficult subjects, touching on several disciplines. Human nature can be very simple, yet very complex too. An understanding and appreciation of this is a prerequisite to effective employee motivation in the workplace, and therefore to an effective management and leadership style, and is a part of the organization's culture [3]. Job satisfaction is the one key factor that drives an employee to work and work.

Leadership style:
Is leadership style, or the style of managing people, such an important thing? Does it influence employees in any way? Is it the culture of the organization that sprouts the leadership style? We come across many organizations and the styles they have adopted to manage people, and the answer to the above questions is definitely yes. Organization culture plays a very important role in developing and implementing a leadership style so as to manage people effectively and efficiently, avoid micromanagement and emphasize employees' welfare. Motivating employees is one of the important yet uneasy tasks to manage along with leadership. Leadership style is a part of the organization's culture and accounts for either a positive or a negative impact on employees' performance level.


Figure 1: Organization culture & elements

III. MOTIVATION THEORIES

Different researchers have carried out different studies on motivation, applicable in general as well as to employees. In spite of enormous research, basic as well as applied, the subject of motivation is not clearly understood and more often than not poorly practiced. To understand motivation one must understand human nature itself. There are various motivation theories and practices that concentrate on human nature in general and motivation in particular. Included are articles on the practical aspects of motivation in the workplace and the research that has been undertaken in this field, notably by Frederick Herzberg (two-factor motivation hygiene theory), Abraham Maslow (theory z, hierarchy of needs) and David McClelland (achievement motivation) [4].

A. Herzberg two factor theory
In the late 1950s, Frederick Herzberg, considered by many to be a pioneer in motivation theory, interviewed a group of employees to find out what made them satisfied and dissatisfied on the job. He asked the employees essentially two sets of questions:

1. Think of a time when you felt especially good about your job. Why did you feel that way?
2. Think of a time when you felt especially bad about your job. Why did you feel that way?

From these interviews Herzberg went on to develop his theory that there are two dimensions to job satisfaction: motivation and "hygiene". Hygiene issues, according to Herzberg, cannot motivate employees but can minimize dissatisfaction, if handled properly. In other words, they can only dissatisfy if they are absent or mishandled. Hygiene topics include company policies, supervision, salary, interpersonal relations and working conditions. They are issues related to the employee's environment. Motivators, on the other hand, create satisfaction by fulfilling individuals' needs for meaning and personal growth. They are issues such as achievement, recognition, the work itself, responsibility and advancement. Once the hygiene areas are addressed, said Herzberg, the motivators will promote job satisfaction and encourage production.

B. Abraham Maslow theory of needs
Maslow set up a hierarchic theory of needs. All of his basic needs are instinctual, the equivalent of instincts in animals. Humans start with a very weak disposition that is then fashioned fully as the person grows. If the environment is right, people will grow straight and beautiful, actualizing the potentials they have inherited. If the environment is not "right" (and mostly it is not), they will not grow tall and straight and beautiful. Maslow set up a hierarchy of five levels of basic needs. Each stage of needs has to be understood, and then the motivating techniques have to be used accordingly [5].

Figure 2: Maslow hierarchy of needs

Physiological Needs
These are biological needs. They consist of needs for oxygen, food, water, and a relatively constant body temperature. They are the strongest needs because if a person were deprived of all needs, the physiological ones would come first in the person's search for satisfaction.

Safety Needs
When all physiological needs are satisfied and are no longer controlling thoughts and behaviors, the needs for security can become active. Adults have little awareness of their security needs except in times of emergency or periods of disorganization in the social structure (such as widespread rioting). Children often display the signs of insecurity and the need to be safe.

Needs of Love, Affection and Belongingness
When the needs for safety and for physiological well-being are satisfied, the next class of needs, for love, affection and belongingness, can emerge. Maslow states that people seek to overcome feelings of loneliness and alienation. This involves both giving and receiving love, affection and the sense of belonging.

Needs for Esteem
When the first three classes of needs are satisfied, the needs for esteem can become dominant. These involve needs for both self-esteem and for the esteem a person gets from others. Humans have a need for a stable, firmly based, high level of self-respect, and respect from others. When these needs are satisfied, the person feels self-confident and valuable as a person in the world. When these needs are frustrated, the person feels inferior, weak, helpless and worthless.

Needs for Self-Actualization
When all of the foregoing needs are satisfied, then and only then are the needs for self-actualization activated. Maslow describes self-actualization as a person's need to be and do that which the person was "born to do": "A musician must make music, an artist must paint, and a poet must write." These needs make themselves felt in signs of restlessness. The person feels on edge, tense, lacking something, in short, restless. If a person is hungry, unsafe, not loved or accepted, or lacking self-esteem, it is very easy to know what the person is restless about. It is not always clear what a person wants when there is a need for self-actualization.

C. David McClelland's motivation theory [http://www.netmba.com/mgmt/ob/motivation/mcclelland/]
David McClelland proposed that an individual's specific needs are acquired over time and are shaped by one's life experiences. Most of these needs can be classed as achievement, affiliation, or power. A person's motivation and effectiveness in certain job functions are influenced by these three needs. McClelland's theory is sometimes referred to as the three-need theory or the learned needs theory.

Achievement

People with a high need for achievement (nAch) seek to excel and thus tend to avoid both low-risk and high-risk situations. Achievers avoid low-risk situations because the easily attained success is not a genuine achievement. In high-risk projects, achievers see the outcome as one of chance rather than one's own effort. High nAch individuals prefer work that has a moderate probability of success, ideally a 50% chance. Achievers need regular feedback in order to monitor the progress of their achievements. They prefer either to work alone or with other high achievers.

Affiliation
Those with a high need for affiliation (nAff) need harmonious relationships with other people and need to feel accepted by other people. They tend to conform to the norms of their work group. High nAff individuals prefer work that provides significant personal interaction. They perform well in customer service and client interaction situations.

Power
A person's need for power (nPow) can be one of two types - personal and institutional. Those who need personal power want to direct others, and this need often is perceived as undesirable. Persons who need institutional power (also known as social power) want to organize the efforts of others to further the goals of the organization. Managers with a high need for institutional power tend to be more effective than those with a high need for personal power.


Motivation theory has to be practically implemented and used to understand employees' mindsets and to provide job satisfaction in their domain. Job satisfaction has a direct impact on employee retention in an organization. Too often employee retention is viewed as a process or function of the human resources department. Somehow there is an expectation that the recruiting staff should not only identify and hire employees, but that they should also ensure their retention through some sort of strategy or program. The reality is that employee retention is everyone's responsibility. According to experts, while most managers believe employees leave due to money issues, in actuality it is an employee's relationship with their supervisor that has the greatest impact on whether they stay or go. This means that not only do organizations need a performance management system that recognizes and rewards supervisors for meeting objectives that reduce employee turnover, supervisors also need to understand what steps they can take to meet their responsibility in employee retention and job satisfaction [6].

IV. WHAT THE STUDY WAS ABOUT?

A research survey was conducted in an organization to know how the organization's culture has an influence on employees' performance and productivity level, and also its influence on retaining employees. Demography was based on gender, years of work experience and age.

Questionnaires:
1. Does organization culture have an influence on job performance?
2. Can the productivity level of the employee be determined by the company's culture?
3. Do an employee's age and experience play a crucial role in adapting to the company's culture?
4. Does the organization's culture play a crucial role in retaining its employees?

Methodology:
1. Conducting a survey
2. Collecting data
3. Applying hypothesis and analysis

Conducting a survey:
As said before, a survey was conducted in an organization to assess the culture and its influence on employees' performance. Responses were collected using a five-point grading scale (Likert scale).

Collecting data:
Sources used in research can be classified into two types: primary data and secondary data. Primary data are the data collected by the researcher with the intention of using them directly in the study. Secondary data are the material collected and cultivated by others, often with another purpose than that of the actual study. The practical way is to combine both primary data and secondary data; secondary data can also be used to analyse the gathered primary data.

The primary data were collected by conducting a survey with a set of questionnaires given to employees and also to students who had just joined the company. The questionnaires were common to both of them, to get the varying results from experienced employees to fresh employees.

The secondary data were the references made to other research papers and journals confined to organization culture and retention strategies. Questionnaires were also adopted from a few previous research papers, and the methodology was followed appropriately. Questionnaires were developed, attached to emails and mailed to respondents. The participants were kept anonymous so that frank opinions could be obtained about their organization.

Analysis:
The gathered data were analysed to get at the actual and common problems of employees with respect to their organization. Demography was based on age, gender, and work experience in the organization. The hypothesis test used was the chi-square method, to decide whether the questionnaires asked were valid or not. Each of the questions was hypothesised, and analysis was made to know the impact of the organization culture and its importance in retaining employees.

The results showed that all the questions asked had the same outcome, proving that the organization culture has an impact on employee performance and productivity level.

Corrective actions:
Corrective action greatly varies from one organization to another and also from one employee to another, in the same way that one motivating technique cannot be applied to all employees. Every individual is different: his needs are different, and his emotions and his problems are different. The corrective actions applied, or the strategy taken into account, have to be appropriate to that particular organization. Culture has to be developed and structured so that the result makes a positive impact on the employee as well as on the overall organization. Again, the leaders and their leadership qualities make a huge difference to the psychology of an employee, along with culture.

V. CONCLUSION

Take a look at your organization: are you doing your best to retain your top talent? Make sure the workplace is a happy one in which every employee would love to spend time. The human resources department, along with senior management, must take steps to make sure of this. It is not about the organization's big infrastructure and campus; it is all about understanding employees and their purpose. Before a manager thinks he can make a lot of money, let him learn to earn respect from many. People feel good about an organization only when the organization feels good to them. Money is part of the strategy to retain an employee, but it is not everything to an employee.


VI FIGURES

[1] Organisation Culture & Elements
[2] Maslow hierarchy of needs

VII REFERENCES
[1] Linda Lindgren, Sanna Paulson, "Retention" (2008).
[2] Ballesteros Rodriguez, Sara, "Talents: The Key for Successful Organizations" (2010).
[3] http://www.aafp.org/fpm/991000fm/26.html
[4] http://www.accel-team.com/motivation/index.html
[5] http://honolulu.hawaii.edu/intranet/committees/FacDevCom/guidebk/teachtip/maslow.htm
[6] http://www.suite101.com/content/retention-and-job-satisfaction-a29812
[7] http://www.kickstarthr.com/thekit/retention/retention-strategy-checklist.php
[8] http://www.ventureblog.com/2010/09/advice-on-building-company-culture.html
[9] http://management.about.com/cs/generalmanagement/a/companyculture.htm
[10] http://www.companyculture.com/basics/fivelevels.htm
[11] Piyali Ghosh, Geetika, "Retention strategy in Indian IT industry", Indian Journal of Economics and Business (Dec 2006). http://findarticles.com/p/articles/mi_m1TSD/is_2_5/ai_n25012647/

12. Analysis on the Effectiveness of Performance Appraisal

Asha D.B1 and Associate Prof. S. A. Vasantha Kumara2

1 Research Scholar, Department of Industrial Engineering and Management, Dayananda Sagar College of Engineering, Bangalore, India.
2 Associate Professor, Department of Industrial Engineering and Management, Dayananda Sagar College of Engineering, Bangalore, India.

Abstract

Performance appraisal is a systematic evaluation of the performance of employees. It is the process of estimating or judging the value, excellence, qualities and status of an employee. Performance appraisal of employees is important in managing the human resources of an organization. With the shift towards knowledge-based capitalism, retaining talented knowledge workers is critical. This paper describes a comparative study on the effectiveness of performance appraisal. The objectives of the study, the methodology and the broad framework of analysis, along with the tools and techniques to be used, are highlighted.

Introduction

People differ in their abilities and aptitudes. There is always some difference between the quality and quantity of the same work on the same job done by two different people. Performance appraisals of employees are necessary to understand each employee's abilities, competencies and relative merit and worth for the organization. Performance appraisal rates employees in terms of their performance. It is a systematic evaluation of the performance of employees: the process of estimating or judging the value, excellence, qualities and status of an employee, and of checking the organization's progress towards its desired goals and aims.

According to Moon, C. et al., performance appraisal of candidates in relation to a particular position is a key task in managing the human resources of an organization. Supervisors are concerned with the performance appraisal judgments and evaluations that they have to make about their subordinates. On the other hand, subordinates are increasingly realizing the importance of performance appraisal, since it very much affects their rewards and future career path. As the world shifts towards knowledge-based capitalism, organizations are reminded of the importance of retaining their talented knowledge workers. Discovering and promoting the most qualified candidates is therefore essential, because valuable human expertise is the main source of competitive advantage for organizations. Thus, the creation of performance criteria is an important requirement for performance appraisal. Performance appraisal is usually conducted periodically within an organization to examine and discuss the work performance of subordinates so as to identify strengths and weaknesses as well as opportunities for improvement among employees. Most employers then use the performance appraisal result to determine whether a particular member of staff should be terminated or reinforced, as an employee development and coaching tool, to give a practical evaluation of an employee's readiness for promotion,


and to serve as the foundation for giving merit bonuses.

Literature review

The following literature review was compiled from journals and published books. Its purpose is to establish a theoretical foundation for this study; the literature examined different aspects of performance appraisals. Performance appraisal is increasingly considered one of the most important human resource practices (Boswell and Boudreau, 2002). It is crucial for employees to know what is expected of them and the yardsticks by which their performance will be assessed. Borman and Motowidlo (1993) identified two types of employee behavior related to performance appraisal: "task performance" and "contextual performance". Task performance includes the technical aspects of performance, while contextual performance includes volunteering for extra activities, completing job requirements with enthusiasm and contributing to the organization-building process. Boice and Kleiner (1997) suggested that the overall purpose of performance appraisal is to let employees know how their performance compares with the manager's expectations.

Performance appraisal is the system by which a formal review of an employee's work performance is conducted. It is the process of evaluating the contribution of each employee in a factory or organization.

“Performance appraisals are management tools that may be used to direct and control employee behavior, distribute organizational rewards, improve employee work performance, or develop employee capabilities” (Tompkins, 1995, p.250).

In his book Essentials of Organizational Behavior, Stephen P. Robbins (1994) states, "Performance appraisals serve a number of purposes in organizations. First, management uses appraisals for general personnel decisions such as promotions, rewards, transfers, and terminations. Second, appraisals identify training and development needs, not only for individual employees, but also the organization as a whole. Third, performance appraisals can be used to validate selection and development programs. Fourth, appraisals provide feedback to the employees on how the organization views their performance" (p.228).

Goals and objectives are methods by which job expectations can be measured. “Managers must be able to clearly explain the differences between goals and standards to their employees so that both parties know how they will be used during the appraisal process” (Maddux, 1987, p.169). A goal is a statement of expected results in the performance appraisal process.

"Goals can describe: (1) conditions that will exist at the end of a period, (2) the time frame required for the desired results and (3) the resources required to achieve the results. Goals should be established with employee participation and designed to reflect their abilities and training" (Maddux, 1987, p.170). This setting of goals and objectives is important because employees may not understand that their current behavior is not producing desired results.

In his article Succession Planning, Coleman (1988) states, "The performance evaluation system utilized by an agency should include an assessment of an employee's potential for promotion" (p.24). A performance appraisal system gives employees feedback in relation to their work performance. Without feedback, employees are left to their own opinions


regarding performance. There are two potential concerns when employees are not provided with timely feedback. They may feel they are performing at the department standards, when in fact they are not. Or, they may feel their performance is substandard, when in fact their performance is exceptional. Most employees want to do a respectable job and if their performance is unknown to them it may add unneeded stress.

One of the biggest potential benefits of performance appraisals lies in the belief that the lines of communication are further opened between executives and employees. "Effective communication during the performance review process can only enhance the executive-worker relation." Even if both parties do not agree on the outcome of the performance review, an opportunity exists for dialogue.

Another positive aspect of performance appraisal is that training issues can be explored. "Training is an essential component of quality assurance; it will help reduce some of the inconsistencies that may occur in ratings, and will provide a framework for executives to use in conducting performance appraisals." If executives are not given the proper training to conduct the appraisal process, then they are left to their own opinions with regard to the content of the program.

The problem revealed with several performance appraisal systems in existence today is that it is extremely difficult to develop a program which is completely objective.

However, there are some long-standing concerns with regard to performance appraisals. The fact is that no one is free from bias and prejudice; no matter how objectively performance appraisals are conducted, there is always the possibility of subjective reviews. Gomez-Mejia, Page and Tornow (1985) propounded four basic components which should be included while designing and evaluating the effectiveness of a performance appraisal system (PAS). These components are: (i) a job-related appraisal form, (ii) an appraisal model, (iii) a support system, and (iv) a monitoring and tracking network. Their analysis found that if these features were incorporated as a total package, the PAS would be most effective.

Objectives of the study

1. To study and analyze the effectiveness of the performance appraisal system.

2. To know whether the employees are satisfied with the present performance appraisal system.

3. To know whether the performance appraisal identifies the training needs of the employees.

4. To study the impact of performance appraisal on employees' motivation, promotion and productivity.

5. To know the degree of fairness/impartiality in performance appraisal for promotions.

6. To study whether the incentives are directly proportionate to the performance of the employees in the organization.

Scope of the study: This study focuses on employee feedback, training, motivation and productivity, and also suggests new methods of performance appraisal.

Procedures

Tools for data collection

The research was carried out through the survey method. A well-structured, close-ended and well-designed questionnaire was utilized to get a clear idea of the respondents' perceptions. The respondents were asked to respond on a Likert scale (five-point scale) ranging from "Strongly Disagree" to "Strongly Agree".

Feedback form

Two feedback forms, A and B, were developed to determine the effectiveness of performance appraisal: one for Executives (A) and the other for Non-Executives (B).

The purpose of this research was to determine the effectiveness of the performance appraisal system; to know whether the employees are satisfied with the present performance appraisal system; to know whether the performance appraisal identifies the training needs of the employees; to study the impact of performance appraisal on employees' motivation, promotion and productivity; to know the degree of fairness or impartiality in performance appraisal; to know what employees think about the performance appraisal system; and to determine whether the performance appraisal system could be improved.

There were two reasons for having two forms. The first was to determine the different views of executives and non-executives concerning performance appraisals. The second was to determine whether there was a disparity between the two groups of respondents concerning the amount of feedback given: potentially, the executives may feel that they are providing feedback on a consistent basis while the workers may feel otherwise.

Sample size

The sample is made up of 15% of the Executives and 10% of the Non-Executives.

Data Analysis

After gathering the questionnaires, the necessary data were extracted from them. The statistical tools used to analyze the data include:

1. Descriptive statistics: simple percentage analysis, frequency tables, and the calculation of central indices.

2. Inferential statistics: the Z-test is used to test the hypotheses. A brief sketch of both steps is given below.
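As a minimal sketch of these two steps (assuming five-point Likert scores; the data below are hypothetical, not the survey's), the percentage analysis and a one-sample Z-test against the neutral midpoint could look like this:

# Illustrative only: hypothetical Likert responses to one questionnaire item.
import numpy as np
from scipy.stats import norm

scores = np.array([4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 4, 3, 4, 4, 5, 2, 4, 4, 3])
n = scores.size

# 1. Descriptive statistics: frequency table with simple percentages and the mean.
values, counts = np.unique(scores, return_counts=True)
for v, c in zip(values, counts):
    print(f"score {v}: {c} responses ({100 * c / n:.1f}%)")
print(f"mean score = {scores.mean():.2f}")

# 2. Inferential statistics: one-sample Z-test against the neutral midpoint (3).
mu_0 = 3.0
z = (scores.mean() - mu_0) / (scores.std(ddof=1) / np.sqrt(n))
p_value = 2 * norm.sf(abs(z))                     # two-sided p-value
print(f"Z = {z:.2f}, p = {p_value:.4f}")
# p < 0.05 would indicate the average response differs significantly from neutral.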

Limitations and assumptions

While not every employee was surveyed, it was assumed that the random selection would be reasonably representative of the department overall. It is also assumed that all members answered honestly and openly.

Conclusion

Performance appraisal is one of the important functions of HRM. It is conducted by top management in order to assess the performance of employees and motivate them to work better by providing training. Performance appraisal is directly linked to monetary benefits. The organization has a peaceful working environment in which employees are motivated to work. Employees are the biggest asset of any organization, and this asset has to be taken care of in order to accomplish the organizational goals. The findings of the study need to be implemented in a stepwise approach within a particular timeframe for greater success and efficiency of the performance appraisal system in the organization.


References

[1] Boswell, W.R. & Boudreau, J. W. (2002). Separating the development and evaluative performance appraisal uses. Journal of Business and Psychology, Vol. 16, pp. 391-412.
[2] Borman, W. & Motowidlo, S. (1993). Expanding the criterion domain to include elements of contextual performance. In N. Schmitt & W. Borman (Eds.), Personnel Selection in Organizations, 71-98. New York: Jossey-Bass.
[3] Moon, C., Lee, J., Jeong, C., Lee, J., Park, S. and Lim, S. (2007). "An Implementation Case for the Performance Appraisal and Promotion Ranking", IEEE International Conference on Systems, Man and Cybernetics, 2007.
[4] Coleman, R.J. (1988, February). Succession Planning. Fire Chief, 23-26.
[5] Maddux, R.B. (1987). Effective Performance Appraisals. Los Altos, CA: Crisp Publications.
[6] Gomez-Mejia, L. R., Page, R. C. and Tornow, W. (1985). Improving the effectiveness of performance appraisals. Personnel Journal, Vol. 360(1), pp. 74-78.
[7] Tompkins, T. (1995). Human Resource Management in Government. New York, NY: Harper Collins.

Websites

http://WWW.Performance_appraisal.com

http://en.wikipedia.org/wiki/performance_appraisal

http://www.business-marketing.com/store/appraisals.html

http://www.businessballs.com/performanceappraisals.html


13. A Study on the Perception of Entrepreneurs towards the Entrepreneurship Development Programme organised by SISI

Dr. B.BALAMURUGAN, PhD

Director - Hallmark Business School,

(Affiliated to Anna University)

Somarasempettai post - Trichy 620 102

Introduction:

The concept of entrepreneurship is a complex phenomenon. Broadly, it relates to the entrepreneur, his vision and its implementation. The key player is the entrepreneur. Entrepreneurship refers to the process of action an entrepreneur (person) undertakes to establish his/her enterprise. It is a creative and innovative response to the environment. Entrepreneurship is thus a cycle of actions to further the interests of the entrepreneur. In this paper, the concept of entrepreneurship and the related issues are analyzed, discussed and deliberated.

One of the qualities of entrepreneurship is the ability to discover an investment opportunity and to organize an enterprise, thereby contributing to real economic growth. It involves taking risks and making the necessary investments under conditions of uncertainty, and innovating, planning and taking decisions so as to increase production in agriculture, business, industry, etc. Entrepreneurship is a composite skill, the resultant of a mix of many qualities and traits. These include imagination, readiness to take risks, and the ability to bring together and put to use the other factors of production such as capital, labor and land, as well as intangible factors such as the ability to mobilize scientific and technological advances.

Above all, entrepreneurship today is the product of teamwork and the ability to create, build and work as a team: the entrepreneur is the maestro of the business orchestra, to whose baton the band plays.

Definition of an Entrepreneur:


The term "entrepreneur" is defined in a variety of ways. Yet no consensus has been arrived at on the precise skills and abilities that make a person a successful entrepreneur. The term "entrepreneur" was applied to business initially by the French economist, Cotillion, in the 18th century, to designate a dealer who purchases the means of production for combining them into marketable products.

The New Encyclopedia Britannica considers an entrepreneur to be "an individual who bears the risk of operating a business in the face of uncertainty about the future conditions". Peter Drucker has aptly observed that "Innovation is the specific tool of entrepreneurs, the means by which they exploit changes as an opportunity for a different business or a different service. It is capable of being presented as a discipline, capable of being learned and practiced. Entrepreneurs need to search purposefully for the sources of innovation, the changes and their symptoms that indicate opportunities for successful innovation. And they need to know and to apply the principles of successful innovation".

Importance of Entrepreneurship in Economic Development

The role of entrepreneurship in economic development varies from economy to economy depending upon its material resources, industrial climate and the responsiveness of the political system to the entrepreneurial function.

Further, India, which is itself an under-developed country, aims at a decentralized industrial structure to mitigate regional imbalances in levels of economic development, and small-scale entrepreneurship plays an important role in such an industrial structure in achieving balanced regional development. It is widely believed that small-scale industries provide immediate large-scale employment, ensure a more equitable distribution of national income and facilitate an effective mobilization of capital and skill which might otherwise remain unutilized. Lastly, the establishment of Entrepreneurship Development Institutes and the like by the Indian Government during the last few decades is good testimony to its strong realization of the primum mobile role entrepreneurship plays in economic development. Thus, it is clear that entrepreneurship serves as a catalyst of economic development. On the whole, the role of entrepreneurship in the economic development of a country can best be put as "an economy is the effect for which entrepreneurship is the cause".

Objectives of the Study

1. To know the entrepreneurs' perception towards the Entrepreneurship Development Programme.

2. To find out the usefulness of Entrepreneurship Development Programme (EDP)

3. To suggest suitable policy measures for improvement.


RESEARCH METHODOLOGY

Statement of the Problem:

The task before the national leadership today is to industrialize a predominantly agricultural society, where capital is scarce and labor is plentiful.

The gap between the indigenous arts and even the improved crafts, on the one hand, and the imported and also the indigenous technology, on the other, is wide enough and is widening further day after day with the explosion of innovative research. Furthermore, the economy has been historically and unavoidably polarized between a few mighty industrial centers and the far-flung rural areas. With all the progress of the last two and a half decades, the national economy is beset with all kinds of shortages and scarcities of the inputs most vital to new entrepreneurs when they launch their small or tiny units. Besides, they have, at times, to contend and compete with well-established small, medium or large industrial units in the same lines of production. Newly emerging industries, particularly those of first-generation entrepreneurs, even in underdeveloped economies, are slowly but certainly tending to become high-investment industries, and the entrepreneurs have to battle against a formidable force of hostile elements, such as (i) erratic shortages of raw materials, (ii) a flourishing black market, (iii) prices rigged up by monopolists, (iv) administrative misdistribution, (v) gaps between official promise and performance, (vi) inefficiency of industrial management, (vii) an irresponsible attitude to work, (viii) the rising cost of capital and credit, (ix) deficient and arrogant institutional banking, (x) inadequacy of common services, and (xi) primitive wholesale and retail trade outlets, among others. New entrepreneurship in India has to be cradled in this environment.

The present stage of entrepreneurship development programmes, as a factor contributing to the industrialization of backward and other areas, needs proper direction and organization to make it more effective and purposeful. The contribution of entrepreneurship development programmes is very uneven among different regions, and definite programmes need to be chalked out to bring about some degree of uniformity and upgradation.

Area of the study

The participants of the Entrepreneurship Development Programme organized by SISI Chennai at the ITCOT centre in Thanjavur District belonged to Thanjavur, Thiruvarur and Nagapattinam Districts. This area was purposively selected due to proximity and the availability of factual data.

Period of the study


Cross-section data were obtained from the participants. The primary data collection was done during Dec 2011.

Scope of the study:

This study would provide guidelines to the organizers of the Entrepreneurship Development Programme (EDP). It could help in formulating suitable EDP course content and methodology to improve the efficiency of entrepreneurs.

Method of sampling:

In order to study the entrepreneurs' perception towards the EDP, all 38 participants were selected. Hence, the census method was followed to collect the data. The sample size was 38.

Data collection

Descriptive research was conducted. A specially designed interview schedule was used to collect information from the respondents.

Limitations

1. The study is restricted to the participants of the Entrepreneurship Development Programme conducted by the Small Industries Service Institute, Chennai, at the ITCOT Centre, Thanjavur.

2. The study is not exhaustive but only practical enough to serve the purpose.

3. Recall bias of the respondents.

4. The conclusions arrived at in this study need modification for application elsewhere.

ENTREPRENEURSHIP DEVELOPMENT PROGRAMME (EDP)

An Entrepreneurial Development Programme is defined as a programme designed to help an individual in strengthening his entrepreneurial motive and in acquiring the skills and capabilities necessary for playing his entrepreneurial role effectively. For this purpose it is necessary to promote his understanding of motives, motivation patterns, and their impact on entrepreneurial values and behavior.

Analysis of data & interpretation

This section discusses the analysis of the data and the interpretation of the information collected through primary data collection.

Table No. 1
Gender-wise Classification

Gender    No. of respondents    Percent
Male      3                     7.9
Female    35                    92.1
Total     38                    100

From the above table it can be inferred that the majority of the respondents are female: out of 38 participants, 35 are women. In the present scenario women are not only part of the family but also part of the economic development of society. Women want to contribute more to the growth of the family income by showing great interest in starting small-scale units, and to gain more knowledge of entrepreneurship they show high interest in attending the Entrepreneurship Development Programme.

Table No. 2
Community-wise Classification

Community    No. of respondents    Percent
FC           1                     2.6
BC           4                     10.5
MBC          10                    26.3
SC/ST        23                    60.5
Total        38                    100

The table clearly shows the classification of the participants on the basis of community. Community plays a major role in one's development and interest in a particular field. In this study the majority of the participants belong to the scheduled castes and scheduled tribes. It is a very good sign for a developing country like India that people of the scheduled caste and scheduled tribe communities are showing great interest in starting small-scale units.

Table No. 3
Age-wise Classification

Age              No. of respondents    Percent
Up to 35 years   13                    34.2
35 to 50         25                    65.8
Above 50         0                     0
Total            38                    100

Age is an important factor in taking good decisions in one's life. Generally, older persons are expected to be better judges in selecting appropriate decisions during the course of business. The analysis shows that the majority of the respondents (65.8 percent) belong to the middle-aged group.

Table No. 4
Literacy-wise Classification

Literacy           No. of respondents    Percent
Illiterate         0                     0
Secondary          5                     13.2
Higher Secondary   23                    60.5
Collegiate         9                     23.7
Professional       1                     2.6
Total              38                    100

Education plays an important role in improving knowledge, attitude and practices in doing business. Education fine-tunes one's ability to become successful in life. Among the total sample respondents, the majority (60.5 percent) had studied up to higher-secondary level.

Table No. 5
Opinion about Entrepreneurship

Opinion                   No. of respondents    Percentage
Profit oriented           0                     0
Risk taking               26                    68.4
Independence              0                     0
Self-esteem & prestige    12                    31.6
Any other (specify)       0                     0
Total                     38                    100

Opinions always differ from person to person; it is very rare to see the same opinion. In our analysis the majority of the respondents agree that entrepreneurship is about risk taking. Without risk we cannot achieve anything: every kind of work carries risk, and only those who are ready to face risk can succeed. Though there are many characteristics required to become a successful entrepreneur, the first and foremost is risk-taking ability. The participants of the Entrepreneurship Programme clearly understand this.

Table No. 6
Factors that Influenced the Decision to Take Up Entrepreneurship

Factor                     No. of respondents    Percent
Self                       6                     15.8
Family members             16                    42.1
Friends                    11                    28.9
Govt schemes/incentives    5                     13.2
Any other (specify)        0                     0
Total                      38                    100

Starting a small-scale unit involves risk taking; it is a practical law of entrepreneurship. In spite of this, when a person enters a business he or she is really taking a high risk, and the fact that family members motivate someone to start a business clearly shows the attitude of the family. In this study the majority (42.1%) attended the Entrepreneurship Development Programme because of their family members' motivation, 28.9% were motivated by their friends and 15.8 percent were self-motivated. It should be noted that these 15.8 percent of the participants are the ones likely to stay in business for a long period of time, because they are self-motivated and will remain in the race without expecting anything from anybody else.

There should be some basic reason for any activity, and attending an Entrepreneurship Development Programme is based on many factors or reasons. In the present study the majority of the respondents agreed that they attended the EDP to learn about new projects and products. In the globalization era one cannot do business with the same set of projects and products; to meet customers' needs and desires the entrepreneur has to change the design, style and quality of the products regularly. Attending an Entrepreneurship Development Programme helps entrepreneurs to know about new projects and products and the recent trends in business, which is why the majority of the respondents gave the first and second ranks to these factors.

FINDINGS AND SUGGESTIONS

1. Socio-Economic Background of the respondents:

• The majority of the respondents (92.1%) are female.
• The majority (60.5%) of the respondents belong to SC/ST.
• 65.8% of the respondents belong to the age group of 35 to 50 years.
• 60.5% of the respondents have a higher-secondary level of education.
• 68.4% of the respondents earn an income of up to 1 lakh per annum.
• Of the 38 participants who attended the Entrepreneurship Development Programme, the majority (92.1%) are running a business.

2. General Information about Entrepreneurship:

• 68.4% of the respondents' opinion about entrepreneurship is that it is about risk taking.
• 42.1% of the respondents were influenced by their family members.
• 78.9% of the respondents came to know about the Entrepreneurship Development Programme (EDP) through their friends.
• 97.4% of the respondents attended the Entrepreneurship Development Programme (EDP) for the first time.
• The respondents were asked to rank the factors regarding their reasons for attending the EDP. It is clear from the analysis that the first rank goes to the factor 'to select new projects/products', the second rank was assigned to the factor 'to know the recent trends/opportunities', and the third rank was secured by the factor 'to increase contacts with Government agencies'.

3. Level of Perception of the Entrepreneurship Development Programme as revealed by the respondents (a sketch of the underlying test is given after this list):

1. From the analysis it is concluded that there is no significant difference between gender and the level of perception of the EDP.

2. The analysis discloses that there is no significant difference between community and the level of perception of the EDP.

3. The study reveals that there is no significant difference between age and the level of perception of the EDP.

4. The study brings out that there is no significant difference between literacy and the level of perception of the EDP.

5. The study sheds light on the fact that there is no significant difference between family income and the level of perception of the EDP.
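A minimal sketch of the kind of test behind these findings is given below, assuming a chi-square test of independence between gender and perception level. The gender totals follow Table 1 (3 male, 35 female); the split across perception levels is hypothetical, not the study's data.

# Hypothetical contingency table: gender (rows) vs. level of perception (columns).
from scipy.stats import chi2_contingency

table = [
    [1, 1, 1],     # Male:   low / medium / high perception (3 respondents)
    [9, 15, 11],   # Female: low / medium / high perception (35 respondents)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
# A p-value above 0.05 is consistent with the reported finding of no
# significant difference between gender and the level of perception of the EDP.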

Suggestions

On the basis of the study the following suggestions can be made to improve the quality of the Entrepreneurship Development Programme.

1. The organizers of the Entrepreneurship Development Programme have to concentrate more on giving a complete outline of the EDP at the beginning of the programme. Giving a complete outline or full information about the EDP at the beginning enables the participants to attend the programme confidently.

2. Since the EDP is conducted for one month, the course design should be more effective.

3. The study materials / Handouts can be given not at the end of the session, but at the beginning of the session.

4. The resource persons selected for the programme should have expert knowledge in the area of entrepreneurship.

5. Since Government agencies like DIC, SIDO, TIIC and commercial banks play a major role in promoting entrepreneurship, more interaction with officials from these agencies should be arranged.

6. The certificate of the programme should be given at the end of the programme.

Conclusion

The study titled "Entrepreneurs' Perception towards the Entrepreneurship Development Programme" provides vital information regarding the EDP. The objectives of the study were framed and the various findings were evaluated. The study throws light on several factors. From the study the researcher learnt that organizing an Entrepreneurship Development Programme is highly useful to entrepreneurs in improving their entrepreneurial quality.

The conduct of the EDP is good. The organizers have succeeded in improving the entrepreneurship skills of the participants. However, there are some areas the organizers should look into in order to improve the quality of the programme. The future of the Entrepreneurship Development Programme will be very good if the programme caters to the needs and wants of the entrepreneurs. In order to do so, the organizers have to adopt radical steps in designing the course materials for the programme, inviting the resource persons, and the overall conduct of the programme.


In this study the entrepreneurs' perception of the Entrepreneurship Development Programme (EDP) and suggestions for further improvement are given. If the suggestions are implemented, they will help to improve the quality of the EDP further in future.


14. Profile of a Global Entrepreneur in the present business scenario

Prof. V C Malarmannan Srinivasan Engineering College

Abstract

The developmental reality of the 21st century is plagued with complex socio-cultural, economic, political and technological milestones and challenges. During the last 100 years the globe has witnessed extraordinary entrepreneurial novelty and a higher frequency of ground-breaking technological innovations and discoveries. These phenomena have changed the way we live, think, plan, act and function.

Due to rapid globalisation, the world business community is no longer ignorant; it is indeed the epoch of increased global integration of national economies, at least at continental and regional levels. The current global scenario has heightened competition in areas such as manufacturing, trade, finance, construction and technology, among other business activities. Seven out of the ten richest men globally are first-generation entrepreneurs, and the smartest enterprises are now challenging the biggest. The study analyses various entrepreneurial qualities and assigns weights to them so that a global entrepreneurial level score can be given to anyone, providing scope for improvement and further study. The study is exploratory: it analyses various entrepreneurial qualities, and the qualities of 27 globally reputed entrepreneurs were examined. The study analysed the profiles of 10 top entrepreneurs in India, 17 top entrepreneurs in the global market, and the qualities of 10 of the greatest entrepreneurs of the century. The study considers the qualities and factors required to empower modern entrepreneurs, such as personal dedication and tenacity in taking advantage of emerging business opportunities and being well versed in the nitty-gritty of global trade regimes, regional and bilateral trade and economic partnerships, national socio-economic settings, the industrial and economic productivity ambience, and corporate business practices. Embracing an "international attitude" is important in shaping the degree of success in the vital hunt for international business opportunities. The study provides a model for a Global Entrepreneurship Prism and a scale for measuring entrepreneurship capabilities, orientations and qualities, and it could be extended to the measurement of entrepreneurship qualities specific to various industries and markets.

Introduction

For over a century, entrepreneurs began their activities by focusing on their home markets. More and more, however, are now being born global: chasing opportunities created by distance, learning to manage faraway operations, and hunting for the planet's best manufacturing locations, brightest talent, most willing investors and most profitable customers wherever they may be, from day one. First come the logistical problems and psychic barriers created by distance and by differences in culture, language, education systems, religion and levels of economic development. Even something as basic as accommodating the world's various workweek schedules can put a strain on a small start-up's staff. Second is managing the challenges (and opportunities) of context, that is, the different nations' political, regulatory, judicial, tax and labor environments. Third, like all new ventures, global start-ups must find a way to compete with bigger incumbents while using far fewer resources. To succeed, Isenberg has found, global entrepreneurs must cultivate four competencies: they must clearly articulate their reasons for going global, learn to build alliances with more powerful partners, excel at international supply chain management, and create a multinational culture within their organization. "Think global, act local" is rudimentary and easy, but a rule we most often forget to implement. Other such qualities include 'Be specific, be succinct and be direct', 'Deliver on your commitments', 'Learn to do business on voice mail' and 'Talk the local talk'. The study has analysed 27 entrepreneurs (global and local) and also 10 entrepreneurs from history to establish the profile of a global entrepreneur. The dimensions of the entrepreneurial profile were grouped into 9 facets, and each facet consists of 7 parameters for evaluation (a sketch of how a score could be built from these facets is given below). The 9 facets are: 1. Individual Personality, 2. Social Personality, 3. Functional Expertise, 4. Managerial Qualities, 5. Vision & Goal Setting, 6. Professional Culture, 7. Organisational Building, 8. Environment Monitoring & Exploitation, and 9. Support Building.
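The paper does not give the scoring formula, but one plausible sketch of how a Global Entrepreneurial Level score could be built from the nine facets is shown below. The facet names come from the paper; the equal weights, the 1-5 ratings and the function names are hypothetical placeholders, not the study's actual instrument.

# Hypothetical sketch: weighted facet scores rolled up into a 0-100 score.
FACETS = [
    "Individual Personality", "Social Personality", "Functional Expertise",
    "Managerial Qualities", "Vision & Goal Setting", "Professional Culture",
    "Organisational Building", "Environment Monitoring & Exploitation",
    "Support Building",
]

def facet_score(parameter_ratings):
    """Average of the seven parameter ratings (each 1-5) that make up one facet."""
    assert len(parameter_ratings) == 7
    return sum(parameter_ratings) / len(parameter_ratings)

def global_score(ratings_by_facet, weights):
    """Weighted sum of facet scores, normalised to a 0-100 scale."""
    total_weight = sum(weights.values())
    weighted = sum(weights[f] * facet_score(ratings_by_facet[f]) for f in FACETS)
    return 100 * weighted / (5 * total_weight)    # 5 is the maximum rating

weights = {f: 1.0 for f in FACETS}                  # equal weights, illustrative only
ratings = {f: [4, 3, 5, 4, 4, 3, 4] for f in FACETS}
print(f"Global entrepreneurial level score: {global_score(ratings, weights):.1f}/100")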

Individual personality characteristics of a Global Entrepreneur

'Personality' can be defined as a dynamic and organized set of characteristics possessed by a person that uniquely influences his or her cognitions, motivations and behaviors in various situations. The word "personality" originates from the Latin persona, which means mask. Significantly, in the theatre of the ancient Latin-speaking world, the mask was not used as a plot device to disguise the identity of a character, but rather was a convention employed to represent or typify that character. Jung postulated that people differ from one another depending on the degree to which they have developed the conscious use of four functions: thinking, feeling, sensation and intuition. Thinking enables us to recognize meaning, feeling helps us to evaluate, sensation provides us with perception, and intuition points to the possibilities available to us. This underlies Jung's typology of the Extrovert/Introvert, Intuitive/Sensing and Feeling/Thinking model. The Zamora Personality Test provides a characterization of ten individual attributes and ten social attributes that incorporate the five central factors of personality: 1) extroversion versus introversion, 2) neuroticism, 3) agreeableness, 4) conscientiousness, and 5) openness to experience. The factors identified as contributing to individual personality characteristics are: 1. global achievement perspective, 2. emotional and sensational drive and control, 3. energy and enthusiasm, 4. intellectual capability, 5. risk-taking ability, 6. endurance, and 7. creativity and innovativeness. These personality factors could establish the individual personality of a global entrepreneur.

Social personality characteristics of a Global Entrepreneur

Social and interpersonal skills are the skills that a person uses to interact with other people; they are also referred to as people skills or communication skills. Interpersonal skills involve abilities such as active listening, and refer to the mental and communicative processes applied during social communication and interaction to achieve certain effects or results. A few such social qualities are: 'Think positive and maintain good relationships', 'Do not criticise others or yourself', 'Learn to listen', 'Have a sense of humor', 'Treat others and their experience with respect', 'Praise and compliment people when they deserve it', 'Look for solutions', 'Don't complain', 'Learn to appreciate', and 'Treat your team members and colleagues as friends and not as strangers or subordinates'.

Abstract— R&D is one of essential factorsof technological innovation system. Quality in

research and development (R&D) work has become increasingly important as companies commit

themselves to quality improvement programmes in all areas of their activity. Quality

improvement forms an important part of their competitive strategy. Quality management systems

15. Quality Management System in R&D Environment

*Krishna S Gudi, **Dr N S Kumar, ***R.V.Praveena Gowda

Page 100: Keynote 2011

have been successfully designed and implemented for manufacturing and service sectors; but so

far the quality principles and systems have been difficult to translate to the R&D sector and

implementation of quality assurance in R&D laboratories is still the domain of a few pioneers.

Look at the challenge of effective implementation of quality management and total quality

principles in R&D and with the help of questionnaire to highlight weakness in quality system, by

suggesting the problems and the benefits, and how to overcome the shortcomings through a more

comprehensive view of TQM that employs innovation as much as improvement and

maintenance. This study finally suggests how to evaluate the capabilities of an R&D

organization in terms of total quality management.

Keywords: TQM, R&D, Quality

INTRODUCTION

Quality is a perceptual, conditional and somewhat subjective attribute and may be understood

differently by different people. Consumers may focus on the specification quality of a

product/service, or how it compares to competitors in the marketplace. Producers might measure

the conformance quality, or degree to which the product/service was produced correctly.

Numerous definitions and methodologies have been created to assist in managing the quality-

affecting aspects of business operations. Many different techniques and concepts have evolved to

improve product or service quality.

A. QUALITY MANAGEMENT SYSTEM

A fully documented QMS will ensure that two important requirements are met:

• The customers’ requirements – confidence in the ability of the organization to deliver the

desired product and service consistently meeting their needs and expectations.


• The organization’s requirements – both internally and externally, and at an optimum cost

with efficient use of the available resources – materials, human, technology and

information

These requirements can only be truly met if objective evidence is provided, in the form of

information and

data, to support the system activities, from the ultimate supplier to the ultimate customer.

A QMS enables an organization to achieve the goals and objectives set out in its policy

and strategy. It provides consistency and satisfaction in terms of methods, materials,

equipment, etc., and interacts with all activities of the organization, beginning with the

identification of customer requirements and ending with their satisfaction, at every

transaction interface. It can be envisaged as a “wedge” that both holds the gains achieved

along the quality journey, and prevents good practices from slipping:

Fig. 1 Quality Journey


B. RESEARCH AND DEVELOPMENT

The phrase research and development (also R and D or, more often, R&D), according to

the Organization for Economic Co-operation and Development, refers to "creative work

undertaken on a systematic basis in order to increase the stock of knowledge, including

knowledge of man, culture and society, and the use of this stock of knowledge to devise new

applications".

Fig. 2 Cycle of Research and Development

New product design and development is more often than not a crucial factor in the survival of a

company. In an industry that is changing fast, firms must continually revise their design and

range of products. This is necessary due to continuous technology change and development as

well as competition from other firms and the changing preferences of customers.


A system driven by marketing is one that puts the customer needs first, and only produces goods

that are known to sell. Market research is carried out, which establishes what is needed. If the

development is technology driven then it is a matter of selling what it is possible to make. The

product range is developed so that production processes are as efficient as possible and the

products are technically superior, hence possessing a natural advantage in the market place.

R&D has a special economic significance apart from its conventional association with scientific

and technological development. R&D investment generally reflects a government's or

organization's willingness to forgo current operations or profit to improve future performance or

returns, and its abilities to conduct research and development.

In general, R&D activities are conducted by specialized units or centers belonging to companies,

universities and state agencies. In the context of commerce, "research and development"

normally refers to future-oriented, longer-term activities in science or technology, using similar

techniques to scientific research without predetermined outcomes and with broad forecasts of

commercial yield.

On a technical level, high tech organizations explore ways to re-purpose and repackage advanced

technologies as a way of amortizing the high overhead. They often reuse advanced

manufacturing processes, expensive safety certifications, specialized embedded software,

computer-aided design software, electronic designs and mechanical subsystems.

The workflow of the research and development department is defined depending on the functions

the department is associated with. The main functions are as follows:


• Research for and development of new products.

• Product maintenance and enhancement.

• Quality and regulatory compliance.

Research for and development of new products

Usually, the primary function of the R&D department is to conduct research into new products and to develop new solutions. Each product has a finite commercial life. In order to stay competitive, the company continuously needs to find ways of technologically developing its product range. When researching and developing new products, both the R&D managers and their staff take responsibility for performing the following key tasks:

• Ensuring the new product meets the product specification.

• Researching the product according to the allocated budget.

• Checking that the product meets production cost targets.

• Delivering products on time and in full range.

• Developing the product to comply with regulatory requirements and specified quality levels.

Product maintenance and enhancement

Probably, this is the most important secondary function of R&D department. It helps to keep the

company product range ahead of the competition and enhance the life of products. Existing

products should be maintained ensuring that they can be manufactured according to


specification. For instance, an element required for an existing product may become obsolete.

When this situation happens, the department is expected to discover an alternative quickly so that

the product manufacturing will not be postponed. At the same time, the commercial life of a

product may be extended through enhancing it in some way like giving it extra features,

improving its performance, or making it cheaper to manufacture, etc. Many companies maintain

and enhance their product range, especially those engaged in the microelectronics

sector.

Quality and regulatory compliance

Quality is a major issue, and the R&D department is deeply involved in ensuring the quality of new

products and attaining the required levels of regulatory requirements. In cooperation with the

quality assurance department, R&D department develops a quality plan for new products.

C. LITERATURE SURVEY

Extensive literature survey has been done as follows:

1. General R&D Environment

2. Defence R&D Environment

3. Requirements

i. Feasibility Study

ii. Conceptualization

iii. Preliminary Design Review (PDR)


iv. Quality Assurance (QA)

v. Verification and Validation

vi. Critical Design Review (CDR)

vii. Trial

viii. Transfer of Technology (ToT)

i. Feasibility Study

Feasibility studies aim to objectively and rationally uncover the strengths and weaknesses of the

existing business or proposed venture, opportunities and threats as presented by the environment,

the resources required to carry through, and ultimately the prospects for success. In its simplest

terms, the two criteria to judge feasibility are the cost required and the value to be attained.

As such, a well-designed feasibility study should provide a historical background of the business

or project, description of the product or service, accounting statements, details of

the operations and management, marketing research and policies, financial data, legal

requirements and tax obligations. Generally, feasibility studies precede technical development

and project implementation.

ii. Conceptualization

A conceptualization can be defined as an intentional semantic structure that encodes implicit

knowledge constraining the structure of a piece of a domain.


It is an explicit representation of the objects, and the relations among them, existing in the world of interest.

One mark of efficient organizations is their ability to look beyond areas of expertise to see the larger picture: to understand the problems that may arise, to suggest a foolproof alternative that addresses the core of the issue, and to make the project work without hindrance. We lay a lot of emphasis on the different stages of conceptualization and, as far as possible, we make sure to involve the clients and simulate real-time situations to go deep into the subject.

iii. Preliminary Design Review (PDR)

The preliminary design phase may also be known as conceptual design or architectural design.

During this phase, the high-level design concept is created, which will implement the complex

electronics requirements. This design concept may be expressed as functional block diagrams,

design and architecture descriptions, sketches, and/or behavioral HDL (hardware description

language).

The PDR (Preliminary Design Review) shall be a formal technical review of the basic design

approach for the item. It shall be held after the technical specification, the software design

document, the software test plan, hardware test plan are available, but prior to the start of

detailed design. The PDR may be collectively done for an item as a system or individually for

different sub-systems and assemblies, spread over several events. The over-all technical

programme risks associated with each sub-system or assembly shall also be reviewed on a

technical, cost and schedule basis. For software, a technical understanding shall be reached on

the validity and degree of completeness of the software design document.


iv. Quality Assurance (QA)

Quality is the ongoing process of building and sustaining relationships by assessing, anticipating,

and fulfilling stated and implied needs.

Quality assurance is the process of verifying or determining whether products or services meet or

exceed customer expectations. Quality assurance is a process-driven approach with specific steps

to help define and attain goals. This process considers design, development, production, and

service.

The most popular tool used to determine quality assurance is the Shewhart Cycle, developed by

Dr. W. Edwards Deming. This cycle for quality assurance consists of four

steps: Plan, Do, Check, and Act. These steps are commonly abbreviated as PDCA.


Fig. 3 PDCA Cycle

v. Verification and Validation

Verification and validation is the process of checking that a product, service, or system meets

specifications and that it fulfills its intended purpose. These are critical components of a quality

management system such as ISO 9000. Sometimes preceded with "Independent" (or IV&V) to

ensure the validation is performed by a disinterested third party.

Fig. 4 Verification and Validation Process


Verification is a Quality control process that is used to evaluate whether or not a product, service,

or system complies with regulations, specifications, or conditions imposed at the start of a

development phase. Verification can be in development, scale-up, or production. This is often an

internal process.

Validation is a Quality assurance process of establishing evidence that provides a high degree of

assurance that a product, service, or system accomplishes its intended requirements. This often

involves acceptance of fitness for purpose with end users and other product stakeholders.

vi. Critical Design Review (CDR)

This review is held when a major product deliverable—or the entire integrated project

deliverable—has reached a point in design and prototyping work where "viability" of the design

can be judged, and by extension, the project can be considered to have reached a state of

significantly reduced risk.

The "design review" terminology (as opposed to project review) arose from standard use of this

design checkpoint on technology-oriented projects. However, the concept is applicable to any

project. Based on what the project is charged with creating—whether a technical system, a new

process, marketing literature, written documents, etc.—we want to reach a point where we know

our solution is truly viable. We want to reach a point where we can confidently say, yes, as

designed, we know this is going to work, meet customer needs, and meet project goals, because

we have built something real, reviewed it against the goals, and conquered all the major related

risks.


CDR shall be conducted for each item when detailed design is essentially complete. For

complex/large systems CDR may be conducted on an incremental basis, that is, progressive

reviews are conducted for different sub systems and software, instead of a single CDR. The

purpose of this review will be to:

1. Determine that the detailed design of each item under review satisfies the performance

and engineering requirements of the design specification.

2. Establish the detailed design compatibility among the item (sub system, assembly,

equipment) under review and other items of the system, facilities, computer software and

personnel.

3. Assess risk areas (on a technical, cost and schedule basis) for each item of the system.

4. Assess productivity of each item of system hardware.

vii. Trial

Any particular performance of a random experiment is called a trial. By Experiment or Trial in

the subject of probability, we mean a Random experiment unless otherwise specified. Each trial

results in one or more outcomes.

A user trial focuses on two specific areas:

1. Operational Effectiveness.


2. Operational Suitability.

Both operational effectiveness and operational suitability go hand in hand and are virtually

inseparable, as elements within one are readily inclusive in either test area.

Operational effectiveness is determined by how well a system performs its mission when used by

service personnel in the planned operational environment relative to organization, doctrine,

tactics, survivability, vulnerability, and electronic, nuclear or non-nuclear threats.

Operational suitability is determined by the degree to which a system can be placed satisfactorily

in the field, with consideration given to availability, compatibility, transportability,

interoperability, reliability, wartime usage rates, maintainability, safety, human factors,

manpower and logistics supportability, documentation and training requirements.

viii. Transfer of Technology:

Technology transfer is the process of sharing of skills, knowledge, technologies, methods of

manufacturing, samples of manufacturing and facilities among governments and other

institutions to ensure that scientific and technological developments are accessible to a wider

range of users who can then further develop and exploit the technology into new products,

processes, applications, materials or services. It is closely related to (and may arguably be

considered a subset of) knowledge transfer.

Technology Transfer Activities include:


Processing and evaluating invention disclosures; filing for patents; technology marketing;

licensing; protecting intellectual property arising from research activity; and assisting in creating

new businesses and promoting the success of existing firms. The result of these activities will be

new products, more high-quality jobs, and an expanded economy.

D. OBJECTIVES OF STUDY:

1. The current status of quality in R&D.

2. The perceived knowledge of the participants (employees, scientists).

3. Quality programs that have been used.

E. METHODOLOGY

1. Interaction with all members in R&D department.

2. Questionnaire

A questionnaire is a research instrument consisting of a series of questions and other prompts for the purpose of gathering information from respondents. Questionnaires are often designed for statistical analysis of the responses.

F. ANALYSIS


1. Delphi Method

The Delphi method is a structured communication technique, originally developed as a

systematic, interactive forecasting method which relies on a panel of experts.

In the standard version, the experts answer questionnaires in two or more rounds. After each

round, a facilitator provides an anonymous summary of the experts’ forecasts from the previous

round as well as the reasons they provided for their judgments. Thus, experts are encouraged to

revise their earlier answers in light of the replies of other members of their panel. It is believed

that during this process the range of the answers will decrease and the group will converge

towards the "correct" answer. Finally, the process is stopped after a pre-defined stop criterion

(e.g. number of rounds, achievement of consensus, and stability of results) and the mean or

median scores of the final rounds determine the results.
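As a toy illustration only (not part of this study's methodology), the Python sketch below simulates Delphi-style rounds: each expert revises an estimate toward the anonymous median of the previous round, and the process stops after a pre-defined number of rounds or once the spread of answers stabilises. All numbers and parameter names are invented for illustration.

```python
import random
import statistics

def delphi_rounds(initial_estimates, max_rounds=5, pull=0.5, tol=1.0):
    """Toy Delphi simulation: each expert revises an estimate toward the anonymous
    median of the previous round until the spread stabilises or rounds run out."""
    estimates = list(initial_estimates)
    for round_no in range(1, max_rounds + 1):
        summary = statistics.median(estimates)                      # facilitator's anonymous summary
        estimates = [e + pull * (summary - e) for e in estimates]   # experts revise their answers
        spread = max(estimates) - min(estimates)
        print(f"Round {round_no}: median = {summary:.1f}, spread = {spread:.1f}")
        if spread < tol:                                            # pre-defined stop criterion
            break
    return statistics.median(estimates)                             # median of the final round

random.seed(1)
panel = [random.uniform(20, 80) for _ in range(7)]                  # initial expert forecasts
print("Consensus estimate:", round(delphi_rounds(panel), 1))
```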

2. Statistical Methods

In displaying the resulting data, the following statistical tools are to be used:

    To Show                          Use                      Data Needed
    Trend of a category over time    Line graph, run chart    Measurements taken in chronological order
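As a minimal illustration of the run chart / line graph suggested above, the following matplotlib sketch plots hypothetical measurements in chronological order with their median as a reference line; the data values are invented, not drawn from the study.

```python
import statistics
import matplotlib.pyplot as plt

# Hypothetical measurements taken in chronological order (illustrative only)
measurements = [12.1, 12.4, 11.9, 12.6, 12.8, 12.3, 13.0, 12.7, 12.9, 13.2]

plt.plot(range(1, len(measurements) + 1), measurements, marker="o", label="measurement")
plt.axhline(statistics.median(measurements), linestyle="--", label="median")
plt.xlabel("Observation (chronological order)")
plt.ylabel("Measured value")
plt.title("Run chart of measurements over time")
plt.legend()
plt.show()
```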

CONCLUSION:

Quality in research and development (R&D) work has become increasingly important as companies commit themselves to quality improvement programmes, but so far the quality principles have proved difficult to translate to India-based R&D sectors. The quality system can be enhanced by R&D professionals becoming actively involved in total quality improvement.

ACKNOWLEDGEMENT

I would like to express my deep and sincere gratitude to my external guide, Dr. N. S. Kumar, Scientist 'F', DEBEL, Bangalore, and my internal guide, R. V. Praveena Gowda, Assistant Professor, Dayananda Sagar Engineering College, Bangalore. Their wide knowledge and logical way of thinking have been of great value to me. Their understanding, encouragement and personal guidance have provided a good basis for the present paper.

REFERENCES:


[1] Pande, K.B., In-house R&D and Innovation in Indian Industry: Under Market Economy, in Technology Transfer and In-house R&D in Indian Industry in the late 1990s, Volume II, Pattnaik, B.K. (ed.), Allied Publishers, New Delhi, 1999: 422-426.

[2] Takahashi, T., (1997) Management for enhanced R&D Productivity, International

Journal of Technology Management, 14: 789-803.

[3] Rose, Kenneth H. (July, 2005). Project Quality Management: Why, What and How. Fort

Lauderdale, Florida: J. Ross Publishing. p. 41. ISBN 1-932159-48-7.

16. Microfinance is but one strategy battling an immense problem

M. Samtha, Holy Cross College, Trichy

Abstract:

Microfinance is the provision of financial services to low-income clients or solidarity

lending groups including consumers and the self-employed, who traditionally lack access

to banking and related services.

More broadly, it is a movement whose object is "a world in which as many poor and

near-poor households as possible have permanent access to an appropriate range of high

quality financial services, including not just credit but also savings, insurance, and fund

transfers." Those who promote microfinance generally believe that such access will help

poor people out of poverty.


As these financial services usually involve small amounts of money - small loans, small

savings, etc. - the term "microfinance" helps to differentiate these services from those

which formal banks provide.

Why are they small? Someone who doesn't have a lot of money isn't likely to want or be

able to take out a $50,000 loan, or be able to

open a savings account with an opening balance of $1,000.

It's easy to imagine poor people don't need financial services, but when you think about it

they are using these services already, although they might look a little different.


Micro – Finance

Microfinance is the provision of financial services to low-income clients or solidarity

lending groups including consumers and the self-employed, who traditionally lack access

to banking and related services.

More broadly, it is a movement whose object is "a world in which as many poor and

near-poor households as possible have permanent access to an appropriate range of high

quality financial services, including not just credit but also savings, insurance, and fund

transfers."[1] Those who promote microfinance generally believe that such access will

help poor people out of poverty

Microfinance is a broad category of services, which include microcredit. Microcredit can

be defined as the provision of credit services to poor clients. Although microcredit by

definition can achieve only a small portion of the goals of microfinance, conflation of the

two terms is epidemic in public discourse. In other words, critics attack microcredit while

referring to it indiscriminately as either 'microcredit' or 'microfinance'. Due to the broad

range of microfinance services, it is difficult to assess impact, and no studies to date have

done so.

Meaning of Empowerment:

Empowerment implies the expansion of the assets and capabilities of people to influence, control and hold accountable the institutions that affect their lives (World Bank Resource Book). Empowerment is the process of enabling or authorizing an individual to think, behave, take action and control work in an autonomous way. It is the state of feeling self-empowered to take control of one's own destiny. It includes control both over resources (physical, human, intellectual and financial) and over ideology (beliefs, values and attitudes) (Batliwala, 1994).

Empowerment can be viewed as a means of creating a social environment in which one can take decisions and make choices, either individually or collectively, for social transformation. It strengthens innate ability by way of acquiring knowledge, power and experience.

Empowerment is a multi-dimensional social process that helps people gain control over their own lives, communities and society by acting on issues that they define as important. Empowerment occurs within sociological, psychological and economic spheres and at various levels, such as the individual, group and community, and challenges our assumptions about the status quo, asymmetrical power relationships and social dynamics. Empowering women puts the spotlight on education and employment, which are essential elements of sustainable development.

1) EMPOWERMENT: FOCUS ON POOR WOMEN:

In India, the trickle-down effects of macroeconomic policies have failed to resolve the problem of gender inequality. Women have been the vulnerable section of society and constitute a sizeable segment of the poverty-struck population. Women face gender-specific barriers to access to education, health, employment, etc. Micro finance deals with women below the poverty line, and micro loans are available solely and entirely to this target group of women. There are several reasons for this. Among the poor, poor women are the most disadvantaged: they are characterized by a lack of education and of access to resources, both of which are required to help them work their way out of poverty and achieve upward economic and social mobility. The problem is more acute for women in developing countries like India, despite the fact that women's labour makes a critical contribution to the economy; this is due to their low social status and lack of access to key resources. Evidence also shows that groups of women are better customers than men and better managers of resources, and that when loans are routed through women the benefits are spread wider among the household.

Since women's empowerment is the key to the socio-economic development of the community, bringing women into the mainstream of national development has been a major concern of government. The Ministry of Rural Development has special components for women in its programmes, and funds are earmarked as a "Women's Component" to ensure the flow of adequate resources for the same. Besides the Swarnajayanti Gram Swarozgar Yojana (SGSY), the Ministry of Rural Development is implementing other schemes having a women's component. They are the Indira Awas Yojana (IAY), the National Social Assistance Programme (NSAP), the Restructured Rural Sanitation Programme, the Accelerated Rural Water Supply Programme (ARWSP), the (erstwhile) Integrated Rural Development Programme (IRDP), the (erstwhile) Development of Women and Children in Rural Areas (DWCRA) and the Jawahar Rozgar Yojana (JRY).

(a) WOMEN’S EMPOWERMENT AND MICRO FINANCE: DIFFERENT

PARADIGMS

Concern with women’s access to credit and assumptions about contributions to women’s

empowerment are not new. From the early 1970s women’s movements in a number of

countries became increasingly interested in the degree to which women were able to

access poverty-focused credit programmes and credit cooperatives. In India organizations

like the Self-Employed Women's Association (SEWA), among others, with origins and

affiliations in the Indian labour and women’s movements identified credit as a major

constraint in their work with informal sector women workers.

The problem of women’s access to credit was given particular emphasis at the first

International Women’s Conference in Mexico in 1975 as part of the emerging awareness

of the importance of women’s productive role both for national economies, and for

women’s rights. This led to the setting up of the Women’s World Banking network and

production of manuals for women's credit provision. Other women’s organizations world-

wide set up credit and savings components both as a way of increasing women’s incomes

and bringing women together to address wider gender issues. From the mid-1980s there

was a mushrooming of donor, government and NGO-sponsored credit programmes in the

wake of the 1985 Nairobi women’s conference (Mayoux, 1995a).

The 1980s and 1990s also saw development and rapid expansion of large minimalist

poverty-targeted micro-finance institutions and networks like Grameen Bank, ACCION

and Finca among others. In these organizations and others evidence of significantly

higher female repayment rates led to increasing emphasis on targeting women as an

efficiency strategy to increase credit recovery. A number of donors also saw female-

targeted financially-sustainable micro-finance as a means of marrying internal demands


for increased efficiency because of declining budgets with demands of the increasingly

vocal gender lobbies.

The trend was further reinforced by the Micro Credit Summit Campaign starting in 1997, which had 'reaching and empowering women' as its second key goal after poverty reduction (RESULTS 1997). Micro-finance for women has recently been seen as a key strategy in meeting not only Millennium Goal 3 on gender equality, but also goals on poverty reduction, health, HIV/AIDS and others.

(i) FEMINIST EMPOWERMENT PARADIGM:

The feminist empowerment paradigm did not originate as a Northern imposition, but is

firmly rooted in the development of some of the earliest micro-finance programmes in the

South, including SEWA in India. It currently underlies the gender policies of many

NGOs and the perspectives of some of the consultants and researchers looking at gender

impact of micro-finance programmes (e.g. Chen 1996, Johnson, 1997).

Here the underlying concerns are gender equality and women's human rights. Women's

empowerment is seen as an integral and inseparable part of a wider process of social

transformation. The main target group is poor women and women capable of providing

alternative female role models for change. Increasing attention has also been paid to

men's role in challenging gender inequality.

Micro-finance is promoted as an entry point in the context of a wider strategy for

women’s economic and socio-political empowerment which focuses on gender awareness

and feminist organization. As developed by Chen in her proposals for a sub sector

approach to micro credit, based partly on SEWA's strategy and promoted by UNIFEM,

microfinance must be:

Part of a sectoral strategy for change which identifies opportunities, constraints and

bottlenecks within industries which if addressed can raise returns and prospects for large

numbers of women. Possible strategies include linking women to existing services and

infrastructure, developing new technology such as labour-saving food processing,


building information networks, shifting to new markets, policy level changes to

overcome legislative barriers and unionization.

Based on participatory principles to build up incremental knowledge of industries and

enable women to develop their strategies for change (Chen, 1996). Economic

empowerment is, however, defined in more than individualist terms to include issues such as property rights, changes in intra-household relations and transformation of the macro-

economic context. Many organisations go further than interventions at the industry level

to include gender-specific strategies for social and political empowerment. Some

programmes have developed very effective means for integrating gender awareness into

programmes and for organizing women and men to challenge and change gender

discrimination. Some also have legal rights support for women and engage in gender

advocacy. These interventions to increase social and political empowerment are seen as

essential prerequisites for economic empowerment.

MICRO FINANCE INSTRUMENT FOR WOMEN’S EMPOWERMENT:

Micro Finance is emerging as a powerful instrument for poverty alleviation in the new

economy. In India, the micro finance scene is dominated by the Self Help Group (SHG)-Bank Linkage Programme, aimed at providing a cost-effective mechanism for delivering financial services to the "unreached poor". Based on the philosophy of peer pressure and group savings as a collateral substitute, the SHG programme has been successful not only in meeting the peculiar needs of the rural poor, but also in strengthening the collective self-help capacities of the poor at the local level, leading to their empowerment.

Micro Finance for the poor and women has received extensive recognition as a strategy

for poverty reduction and for economic empowerment. Increasingly in the last five years, there has been questioning of whether micro credit is the most effective approach to the economic empowerment of the poorest and, among them, women in particular. Development practitioners in India and other developing countries often argue that the exaggerated focus on micro finance as a solution for the poor has led to neglect by the state and public institutions in addressing the employment and livelihood needs of the poor.


Credit for empowerment is about organizing people, particularly around credit and

building capacities to manage money. The focus is on getting the poor to mobilize their

own funds, building their capacities and empowering them to leverage external credit.

The perception is that learning to manage money and rotate funds builds women's

capacities and confidence to intervene in local governance beyond the limited goals of

ensuring access to credit. Further, it combines the goals of financial sustainability with

that of creating community owned institutions.

Before the 1990s, credit schemes for rural women were almost negligible. The concept of women's credit was born out of the insistence of women-oriented studies that highlighted the discrimination against and struggle of women in getting access to credit. However, there is a perceptible gap in financing the genuine credit needs of the poor, especially women, in the rural sector.

There are certain misconceptions about poor people: that they need loans at subsidized rates of interest on soft terms, and that they lack education, skill, the capacity to save and creditworthiness and are therefore not bankable. Nevertheless, the experience of several SHGs reveals that the rural poor are actually efficient managers of credit and finance. Availability of timely and adequate credit, rather than a credit subsidy, is essential for them to undertake any economic activity.

Government measures have attempted to help the poor by implementing different poverty alleviation programmes, but with little success, since most of them are target based and involve lengthy procedures for loan disbursement, high transaction costs, and a lack of supervision and monitoring. Since the credit requirements of the rural poor cannot be met on a project-lending approach, as in the case of the organized sector, there emerged the need for an informal credit supply through SHGs. The rural poor, with assistance from NGOs, have demonstrated their potential for self-help to secure economic and financial strength. Various case studies show that there is a positive correlation between credit availability and women's empowerment.

CONCLUSIONS AND SUGGESTIONS:


Numerous traditional and informal systems of credit were already in existence before micro finance came into vogue. The viability of micro finance needs to be understood from a far broader dimension, looking at its long-term aspects too. Very little attention has been given to empowerment questions or to ways in which both empowerment and sustainability aims may be accommodated. Failure to take into account the impact on income also has potentially adverse implications for both repayment and outreach, and hence also for financial sustainability. An effort is made here to present some of these aspects to complete the picture.

A conclusion that emerges from this account is that micro finance can contribute to

solving the problems of inadequate housing and urban services as an integral part of

poverty alleviation programmes. The challenge lies in finding the level of flexibility in

the credit instrument that could make it match the multiple credit requirements of the low

income borrower without imposing unbearably high cost of monitoring its end use upon

the lenders. A promising solution is to provide multipurpose loans or composite credit for income generation, housing improvement and consumption support. A consumption loan is

found to be especially important during the gestation period between commencing a new

economic activity and deriving positive income. Careful research on demand for

financing and savings behavior of the potential borrowers and their participation in

determining the mix of multi-purpose loans are essential in making the concept work.

The organizations involved in micro credit initiatives should take account of the fact

that:

• Credit is important for development but cannot by itself enable very poor

women to overcome their poverty.

• Making credit available to women does not automatically mean they have control

over its use and over any income they might generate from micro enterprises.

• In situations of chronic poverty it is more important to provide saving services

than to offer credit.

• A useful indicator of the tangible impact of micro credit schemes is the number of

additional proposals and demands presented by local villagers to public authorities.


Nevertheless ensuring that the micro-finance sector continues to move forward in

relation to gender equality and women’s empowerment will require a long-term strategic

process of the same order as the one in relation to poverty if gender is not to continue to

‘evaporate’ in a combination of complacency and resistance within donor agencies and

the micro-finance sector. This will involve:

• Ongoing exchange of experience and innovation between practitioners

• Constant awareness and questioning of ‘bad practice’

• Lobbying donors for sufficient funding for empowerment strategies

• Bringing together the different players in the sector to develop coherent policies and for gender advocacy.

India is the country where a collaborative model between banks, NGOs, MFIs and

Women’s organizations is furthest advanced. It therefore serves as a good starting point

to look at what we know so far about ‘Best Practice’ in relation to micro-finance for

women’s empowerment and how different institutions can work together.

It is clear that gender strategies in micro finance need to look beyond just increasing

women’s access to savings and credit and organizing self help groups to look

strategically at how programmes can actively promote gender equality and women’s

empowerment. Moreover the focus should be on developing a diversified micro finance

sector where different type of organizations, NGO, MFIs and formal sector banks all

should have gender policies adapted to the needs of their particular target

groups/institutional roles and capacities and collaborate and work together to make a

significant contribution to gender equality and pro-poor development.

REFERENCES:


Fisher, Thomas and M.S. Sriram ed., 2002, Beyond Micro-credit: Putting Development

Back into Microfinance,

New Delhi: Vistaar Publications; Oxford: Oxfam.

Harper, Malcolm, 2002, “Promotion of Self Help Groups under the SHG Bank Linkage

Program in India”, Paper presented at the Seminar on SHG-bank Linkage Programme at

New Delhi, November 25-26, 2002.

Kabeer N (2001),“Conflicts Over Credit: Re-evaluation the Empowerment Potential of

Loans to Women in Rural Bangladesh”: World Development,Vol.29,No.1.

Mayoux, L. (1998a). Women's Empowerment and Micro-finance Programmes: Approaches, Evidence and Ways Forward. The Open University Working Paper No. 41.

Ackerley, B. (1995). Testing the Tools of Development: Credit Programmes, Loan Involvement and Women's Empowerment. World Development, 26(3), 56-68.

17. To study the role of government support that affects the growth of women entrepreneurs in the small scale sector

Dr. Chitra Devi, MBA, M.Phil., Ph.D., Assistant Professor, MBA Department, Vel Sri Ranga Sanku College, Avadi.

Introduction:

The Government of India announced a new policy for the small and tiny sector and raised the investment limit to Rs. 5 lakhs, irrespective of the location of the unit. The government accepted the recommendation of the Abid Husain Committee regarding enhancement of the investment ceiling and raised the same for the small scale sector.


The government raised the investment limit in plant and machinery of the small scale

sector to Rs. 3 crores, from the prevailing Rs.60 lakhs with effect from February 7, 1997. The

investment limit for the ancillary sector and for export-oriented units had been raised from Rs. 75 lakhs to Rs. 3 crores. The investment ceiling for the tiny sector had also been increased five-fold to Rs. 25 lakhs from the prevailing Rs. 5 lakhs.

The scope of small scale industries, as recommended by the Small Scale Industries Board in 1996, is defined thus: small scale industries will include all industrial units with a capital investment of not more than Rs. 7.5 lakhs, irrespective of the number of persons employed.

Capital investment for this purpose will mean investment in plant and machines only.

Women in several industries:

Women entrepreneurs have been making a significant impact in all segments of business like retail trade, restaurants, hotels, education, culture, cleaning, insurance and manufacturing. Women and their business skills are not properly utilized across the world. Even though a large number of women set up their own businesses, they still face a lot of problems from society, and they have to put in more effort than men to prove their efficiency. Women should identify their hidden opportunities and adopt the best entrepreneurial style to run their business. Different styles of entrepreneurial and leadership skills are adopted by successful women entrepreneurs of India. Ekta Kapoor of Balaji Telefilms, Kiran Mazumdar Shaw of Biocon, Shahnaz Husain with her beauty products business, and Lijjat Papad are some shining examples of women's business power.

Today we find women in different types of industries, traditional as well as non-traditional, such as engineering, electronics, readymade garments, fabrics, eatables, handicrafts, doll making, poultry, plastics, soap, ceramics, printing, toy making, nurseries, crèches, drugs, textile designing, dairy, canning, knitting, jewellery design, solar cookers, etc. According to McClelland and Winter, motivation is a critical factor that leads one towards entrepreneurship. Apart from this, the challenge and adventure of doing something new, a liking for business, the wish for an independent occupation, and a deeper commitment to entrepreneurial ventures propel women into business.

Defining of Small Scale Units:

In order to boost the development of small scale industries and to ensure their rapid growth, the Government has decided:

• To increase the limit of investment in the case of tiny units from

Rs.1 lakh to Rs.2 lakhs;

• To increase the limit of investment in the case of small scale units from Rs. 10 lakhs to

Rs. 20 lakhs; and

• To increase the limit of investment in case of ancillaries from Rs.15

lakhs to Rs. 25 lakhs.

Objective:

1. To analyse the supportive factors of government for women entrepreneurs in small scale sectors.

2. To study the views of women entrepreneurs about the government's role.

Review of literature:

A study by Madhuri Modekurti (2007)¹ analyzes the participation of women in entrepreneurship and examines the impact of specific norms of different countries that support women entrepreneurship. The study reveals that the push factors contributing to female entrepreneurship are the lack of opportunities, economic and institutional deficiencies, dissatisfaction with one's wage, unemployment, etc. The lucrativeness of greater returns to skill is also instrumental in contributing in this direction. Besides these push factors, there are certain pull factors which encourage women to venture into entrepreneurship, like greater independence, initiative, creativity and flexible working hours, which provide more time for the family.

¹ Madhuri Modekhurti, (2007). "The Normative Context for Women's Participation in Entrepreneurship", The ICFAI Journal of Entrepreneurship Development, Vol. IV, No. 1, p. 95.

Tagoe et al. (2005)² examined the financial challenges facing urban SMEs under financial sector liberalization in Ghana. The main challenge faced by urban SMEs is access to affordable credit over a reasonable period. To manage this challenge, SMEs should keep records in an effective manner. Moreover, the availability of collateral improves SMEs' access to formal credit, but better availability of investment avenues further reduces that accessibility.

² Tagoe et al. (2005), "Financial challenges facing urban SMEs under financial sector liberalization in Ghana", Journal of Small Business Management, 43, pp. 331-343.

While women were considered less willing to delegate responsibility and control to others, the evidence from the agencies and providers suggested that, unlike men, women valued the flexibility derived from owning the business more than the opportunity to be their own boss. They also preferred collegial rather than hierarchical managerial relationships inherent in conventional models of business expansion, a finding supported by Cliff (1998)³.

³ Cliff, J.E. (1998), "Does one size fit all? Exploring the relationship between attitudes towards growth, gender and business size", Journal of Business Venturing, Vol. 13, No. 6, pp. 523-42.

Sabbagh, Z. (1996)⁴ observes that the family lies at the core of society, playing the major role in political, economic, social and religious behaviour. People are conscious of each other's family membership, and family links facilitate access to institutions, jobs and government services.

⁴ Sabbagh, Z. (1996), Arab Women between Defiance and Restraint, Olive Branch Press, New York, pp. 194-95.

Data analysis

Risk is the major factor in the growth of entrepreneurs:

Small and medium enterprises are most important to promote because they generate more jobs and often use labour-intensive methods of production. Industries and businesses of smaller size also work towards promoting better income distribution and the development of entrepreneurship through expansion, diversification and modification of the business to increase profit earnings. The following table shows these aspects of the growth of business.

Table 1: Risk is the major factor in the growth of entrepreneurs

    Factors            Frequency    Valid Percent
    Expansion              76            31.7
    Diversification        92            38.3
    Modification           72            30.0
    Total                 240           100.0

Source: Primary data

Interpretation:

The above table shows that 38.3% of the respondents cite diversification of the business as the growth factor requiring government policy support, 31.7% expansion, and 30% modification of the business venture.
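The frequency and valid-percent columns of Table 1 can be reproduced with a short pandas sketch. The individual responses below are fabricated so that their counts match the reported figures (76, 92, 72); they are not the actual survey data.

```python
import pandas as pd

# Fabricated responses whose counts match Table 1 (76, 92, 72)
responses = ["Expansion"] * 76 + ["Diversification"] * 92 + ["Modification"] * 72
df = pd.DataFrame({"growth_factor": responses})

freq = df["growth_factor"].value_counts()
valid_percent = (freq / freq.sum() * 100).round(1)
print(pd.DataFrame({"Frequency": freq, "Valid Percent": valid_percent}))
```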

Government policy for small scale enterprises:

Development of the small scale sector has been imbued with a multiplicity of objectives. Important among these are: 1) the generation of immediate employment opportunities with relatively low investment, 2) the promotion of a more equitable distribution of national income, 3) effective mobilization of untapped capital and human skills, and 4) dispersal of manufacturing activities all over the country. In leading the growth of women entrepreneurs, the government plays an important role in developing such industries. The following table shows women entrepreneurs' attitudes towards the government.

Table 2: Government policy for small scale enterprises

    Government policy       Frequency    Valid Percent
    Highly satisfied            33            13.8
    Moderately satisfied        81            33.8
    Satisfied                   81            33.8
    Partially satisfied         40            16.7
    Not satisfied                5             2.1
    Total                      240           100.0

Source: Primary data

Interpretation:

The above table shows that 13.8% of respondents are highly satisfied with government policy, 33.8% each are moderately satisfied and satisfied, 16.7% are partially satisfied, and 2.1% are not satisfied with the supportive factors of government policy.

Factor analysis for factors of Motivation focused on Government role:

Table 3(a): Factor analysis for factors of motivation focused on government role

    KMO and Bartlett's Test
    Kaiser-Meyer-Olkin Measure of Sampling Adequacy      .076
    Bartlett's Test of Sphericity: Approx. Chi-Square    923.811
    Degrees of freedom                                   10
    Sig.                                                 .000

Source: Primary data

Table 3(b): Total variance for motivation factors

    Component    Initial Eigenvalues                      Rotation Sums of Squared Loadings
                 Total    % of Variance   Cumulative %    Total    % of Variance   Cumulative %
    1            1.625    32.503          32.503          1.496    29.919          29.919
    2            1.369    27.377          59.879          1.331    26.625          56.545
    3            1.047    20.941          80.820          1.214    24.276          80.820
    4             .950    18.998          99.818
    5             .009      .182         100.000

Extraction Method: Principal Component Analysis.

Interpretation:

From the above table it is found that the KMO measure of sampling adequacy is 0.076. The chi-square value of 923.811 is statistically significant at the 5% level. The 5 motivation variables explained 80.820% of the variance with three components, with eigenvalues of 1.496, 1.331 and 1.214. Individually, the 3 factors explained 29.919%, 26.625% and 24.276% of the variance respectively. This shows that the 5 variables are suitable for the application of factor analysis, with a high variance in the opinions of the women entrepreneurs. It further implies that women entrepreneurs vary in their opinions about the motivational factors which influence them to become successful entrepreneurs. The 5 variables are converted into 3 components with the following loadings.

Table 3(c): Ranking components for the variables

    Ranking factor         Component 1    Component 2    Component 3
    Seminar/Conference        -.924
    Private agency             .761
    Newspaper                                 -.832
    Association                                .792
    Government                                                .997

Interpretation:

Hence the first factor is called 'Basic Motivation', which is based on seminar/conference and private agency, with loadings of 0.924 and 0.761. The second factor comprises newspaper and women's association, with loadings of 0.832 and 0.792 respectively; therefore this factor is predominantly named 'Public Motivation'. The third factor consists of the single variable government, with a loading of 0.997; hence this factor can be named 'Government Motivation'.
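For readers who wish to reproduce the extraction step, the following numpy sketch shows the general procedure under stated assumptions: principal components are extracted from the correlation matrix of five motivation variables and components with eigenvalues above 1 are retained, as in Tables 3(a)-3(c). The data, variable names and seed are placeholders (the real survey responses are not available here), and the rotation reported in Table 3(b) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder data: 240 respondents x 5 motivation variables (not the real survey)
names = ["seminar_conference", "private_agency", "newspaper", "association", "government"]
X = rng.normal(size=(240, len(names)))

# Principal component extraction from the correlation matrix
R = np.corrcoef(X, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(R)
order = np.argsort(eigenvalues)[::-1]                    # sort eigenvalues descending
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

explained = 100 * eigenvalues / eigenvalues.sum()
print("Eigenvalues:           ", np.round(eigenvalues, 3))
print("% of variance:         ", np.round(explained, 3))
print("Cumulative % variance: ", np.round(np.cumsum(explained), 3))

# Kaiser criterion: retain components whose eigenvalue exceeds 1
kept = eigenvalues > 1
loadings = eigenvectors[:, kept] * np.sqrt(eigenvalues[kept])
print("Components retained:", int(kept.sum()))
for name, row in zip(names, np.round(loadings, 3)):
    print(f"{name:20s}", row)
```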


Conclusion :

The factor analysis over the 5 variables shows that business-related technology policies and methods are largely based on the good acquaintance of women entrepreneurs with well-developed ideas and on indispensable motivation from public sources. Both central and state governments are playing a vital role in motivating successful women entrepreneurs in their business ventures.

Findings:

• High satisfaction with government policy was reported by 13.8% of the respondents.

• The majority of respondents (38.3%) view diversification of the business as the factor requiring government policy support.

• The first factor is called 'Basic Motivation', based on seminar/conference and private agency, with loadings of 0.924 and 0.761.

Suggestion:

• NGOs, psychologists, managerial experts and technical personnel should provide assistance to build self-confidence and overcome the lack of success in existing and emerging women entrepreneurs.

• A Women Entrepreneur's Guidance Cell should be set up to handle the various problems (social, psychological and financial) of women entrepreneurs at both the state and central government levels.

• The study points to the involvement of non-governmental and governmental organizations in women's entrepreneurial training programmes; counselling should be given to overcome the factors affecting entrepreneurs while starting their business ventures.

• Central and state governments should come forward to arrange seminars, and meetings should be arranged regularly by the entrepreneur cell to develop women in all sorts of business ventures.

• The study focused on governmental rules for small and medium enterprises of women entrepreneurs, to promote training programmes in suitable managerial capabilities, in handling sources of risk, in developing infrastructural activities and in coping with scarcity of resources.


Conclusion:

The majority of women entrepreneurs, especially in small-sized industries, choose to manufacture food-based and beauty products or to run computer centres, departmental stores and book stalls, because the technical knowledge required is low; in other fields it is necessary to depend on male support for technical and technological needs. Most state governments and their appointed agencies offer financial, technological and technical support, as the development of women entrepreneurship is important both for women's development and for the role of the woman entrepreneur in the economy.

18. COMPONENT ANALYSIS BASED FACIAL EXPRESSION

RECOGNITION

1Naresh Allam, 2Punithavathy Mohan, 3B.V. Santhosh Krishna

Department of Electronics and Communication Engineering

Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India

Abstract

Intelligent human-computer interaction is one of the hot research areas, and facial expression recognition is an important part of human-computer interaction. At present, research on facial expression recognition has reached a new peak. For a real-time facial expression recognition system, the paper presents an expression feature extraction method that combines Canny operator edge detection with the AAM (active appearance model) algorithm. During Canny edge detection, the adaptively generated high and low thresholds increase the capability of noise suppression, and the time complexity of the algorithm is less than that of the traditional Canny operator. Finally, by using the least squares method, we can classify and identify the feature information.

Key words: AAM, Canny detector

1. Introduction

Besides fingerprints and iris, faces are

currently the most important and most popular

biometric characteristics observed to recognize

individuals in a broad range of applications such

as border control, access control and surveillance

scenarios. A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image with a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or iris recognition systems. The method here combines the AAM with adaptive Canny operator edge detection to carry out the expression feature extraction. When using the Canny operator to detect the edges of the facial image, the algorithm can adaptively generate the dynamic threshold according to the edge gradient information of each sub-image combined with the global edge gradient feature information, so no human intervention is needed.
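The paper's adaptive threshold generation for the Canny detector is not reproduced here. As a rough, hedged illustration of the general idea of deriving the high and low thresholds from the image content rather than fixing them by hand, the common median-based heuristic can be sketched with OpenCV; the 0.33 factor and the synthetic input are assumptions, not the authors' scheme.

```python
import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    """Derive Canny thresholds from the image median instead of fixing them by hand.
    This is a generic heuristic, not the adaptive scheme proposed in the paper."""
    v = float(np.median(gray))
    low = int(max(0, (1.0 - sigma) * v))
    high = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(gray, low, high)

# Synthetic placeholder image; in practice a grayscale face image would be loaded,
# e.g. gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
gray = np.tile(np.linspace(0, 255, 200, dtype=np.uint8), (150, 1))
edges = auto_canny(gray)
print("Edge pixels found:", int(np.count_nonzero(edges)))
```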

2. Face Recognition Process

"The variations between images of the same

face due to the illumination and viewing direction

are almost always larger than image variations

due to change in face identity”. Face recognition

scenarios can be classified into two types:

(i) Face verification (or authentication)

and

(ii) Face identification (or recognition).

Enrollment process:

Enrollment process is the process of

collecting the faces of persons. For this purpose,

the system uses sensors or cameras to capture the

images (faces), which were placed at railway

stations, airports, police stations and passport

offices etc. The captured image features are

extracted and compared with the available

database. This process is shown in figure1.

Figure 1: Enrollment process

The template is retrieved when an

individual needs to be identified. Depending on

the context, a biometric system can operate either

in verification (authentication) or an identification

mode.

Face identification:

In an identification application, the

biometric device reads a sample and compares

that sample against every record or template in the

database. This type of comparison is called a

“one-to-many” search (1:N). This is illustrated in figure 2.

Figure 2: Identification process

Face verification: Face verification is a one-to-one match that compares a query face image against a template face image whose identity is claimed. Verification occurs when the biometric system asks and attempts to answer the question "Is this X?" after the user claims to be X. In a verification application, the biometric system requires input from the user, at which time the user claims his identity via a password, token, or user name (or any combination of the three). The system also requires a biometric sample from the user. It then compares the sample against the user-defined template. This is illustrated in figure 3.

Figure 3: Verification process
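To make the one-to-one versus one-to-many distinction concrete, here is a small illustrative sketch using Euclidean distance between feature vectors. The gallery, the feature values and the acceptance threshold are all placeholders; a real system would obtain the features from the recognition pipeline described in this paper.

```python
import numpy as np

THRESHOLD = 0.8  # assumed acceptance threshold on feature distance

def verify(probe, claimed_template):
    """One-to-one match: answers the question 'Is this X?'."""
    return np.linalg.norm(probe - claimed_template) < THRESHOLD

def identify(probe, gallery):
    """One-to-many (1:N) search: return the closest enrolled identity, if any."""
    best_id, best_dist = None, np.inf
    for identity, template in gallery.items():
        dist = np.linalg.norm(probe - template)
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return (best_id, best_dist) if best_dist < THRESHOLD else (None, best_dist)

# Placeholder enrolled templates (in practice these come from the enrollment process)
gallery = {"alice": np.array([0.1, 0.9, 0.3]), "bob": np.array([0.8, 0.2, 0.5])}
probe = np.array([0.12, 0.88, 0.33])
print(verify(probe, gallery["alice"]))   # verification (authentication)
print(identify(probe, gallery))          # identification (recognition)
```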

3. Related works

In recent years face recognition has received substantial attention from both research communities and the market, but it still remains very challenging in real applications. A lot of face recognition algorithms, along with their modifications, have been developed during the past decades. This growth is driven largely by growing application demands, such as identification for law enforcement and authentication for banking and security system access, as well as by advances in signal analysis techniques, such as wavelets and neural networks. At present many of the face recognition methods give good results on static image recognition [1, 2]. Recognition methods for image sequences have also emerged [3]. Another biologically inspired algorithm that is commonly used for face recognition is the genetic algorithm (GA). While a neural network mimics the function of a neuron, genetic algorithms mimic the function of chromosomes.

4. Principal Components Analysis Method

Principal Components Analysis:

PCA is an algorithm that treats face recognition as a two-dimensional recognition problem. The correctness of this algorithm relies on the fact that the faces are uniform in posture and illumination. PCA can handle minor variations in these two factors, but performance is maximized if such variations are limited. The algorithm basically involves projecting a face onto a face space, which captures the maximum variation among faces in a mathematical form. Next, the algorithm finds the eigenvectors of the covariance matrix of normalized faces by using a speedup technique that reduces the number of multiplications to be performed. Lastly, the recognition threshold is computed by using the maximum distance between any two face projections. In the recognition phase, a subject face is normalized with respect to the average face and then projected onto the face space using the eigenvector matrix. Next, the Euclidean distance is computed between this projection and all known projections. The minimum value of these comparisons is selected and compared with the threshold calculated during the training phase. Based on this, if the value is greater than the threshold, the face is new. Otherwise, it is a known face.

Principal component analysis technique

can be used to simplify a huge data set consisting

of a number of correlated data values into a

smaller set of uncorrelated values by reducing the

dimension of the data set while still retaining most

of the inherent data. The process begins with a

collection of 2-D facial images into a database

called the face space. Using the Eigen feature

method, the similar and dissimilar variations of

the faces are obtained as a mean image of all the

faces in the face space. This gives the training set

of images. The difference image is computed by

subtracting the mean image from every image in

the training set, which provides the covariance

matrix. The weight vector for the test image is

also calculated and the distance between them

determines the presence or absence of a similar

face in the face space.

4.1 Mathematical Modeling

Let f (x, y) be a two-dimensional function,

where x and y are spatial coordinates and the

amplitude of f at any pair of coordinates (x, y) is

called the intensity of the image at that point.

Every image I can be expressed as a matrix of f(x, y) values of dimension m x n. Let F be the set of M training images of the same dimension, expressed as an array of dimension (m x n) x M:

F = (I_1, I_2, I_3, ..., I_M)                                   (1)

These M images are converted into vectors X_i, 1 <= i <= M, of dimension N (= m x n), where X_i is an N x 1 vector corresponding to the image I_i in the image space F. Now F becomes

F = (X_1, X_2, ..., X_M)                                        (2)

The mean image X̄, of dimension N x 1, is calculated by summing all the training images and dividing by the number of images:

X̄ = (1/M) Σ_{i=1}^{M} X_i                                       (3)

The difference image is obtained by subtracting the mean image from each training image X_i and is stored in the vector Φ_i:

Φ_i = X_i − X̄                                                   (4)

The main idea behind the eigenfaces technique is to exploit the similarities between the different images. For this purpose the covariance matrix C_v, with dimension N x N, is defined as

C_v = (1/M) Σ_{i=1}^{M} Φ_i Φ_i^T = (1/M) A A^T                  (5)

where A = (Φ_1, Φ_2, Φ_3, ..., Φ_M) is of dimension N x M. This covariance matrix will normally be a huge matrix, and a full eigenvector calculation is impractical.

Figure 4: Training set of face images

4.2 Dimensionality Reduction in Covariance

Matrix

We assume, without loss of generality for the whole training set, that we can reduce the dimensionality of the problem by working with the matrix B of dimension M x M, defined as

B = (1/M) A^T A                                                  (6)

The eigenvectors of the covariance matrix C_v are computed by using the matrix B. The eigenvector Υ_i and the eigenvalue λ_i of B are obtained by solving the characteristic eigenvalue problem |B − λI| = 0:

B Υ_i = λ_i Υ_i                                                  (7)

Substituting the value of B in Eq. (7),

(1/M) A^T A Υ_i = λ_i Υ_i                                        (8)

Multiplying both sides of Eq. (8) by A, we obtain

(1/M) A A^T A Υ_i = λ_i A Υ_i                                    (9)

Since λ_i is a scalar quantity,

[(1/M) A A^T] (A Υ_i) = λ_i (A Υ_i)                              (10)

Substituting C_v = (1/M) A A^T in Eq. (10), we obtain

C_v (A Υ_i) = λ_i (A Υ_i)                                        (11)

Now u_i = A Υ_i and λ_i are the M eigenvectors and eigenvalues of C_v. In practice, a smaller set of M′ (M′ < M) eigenfaces is sufficient for face identification, because the subspace they span is the basis of the face space, i.e., we can represent the original faces as linear combinations of these vectors. Hence, only the M′ significant eigenvectors of C_v, corresponding to the largest eigenvalues, are selected for the eigenfaces computation, thus resulting in a further data compression.

4.3 Recognition

Now, the training images are projected into the eigenface space, and the weight of each eigenvector needed to represent an image in the eigenface space is calculated. The weight is simply the dot product of each image with each of the eigenvectors:

w_k = u_k^T Φ_i = u_k^T (X_i − X̄)                                (12)

where k = 1, 2, ..., M′. All the weights are collected in a vector of dimension M′ x 1:

Ω = [w_1, w_2, w_3, ..., w_{M′}]                                 (13)

To test an unknown image Γ, we evaluate its weights w_i by multiplying the eigenvectors u_i of the covariance matrix C_v with the difference image (Γ − X̄):

w_i = u_i^T (Γ − X̄)                                              (14)

Now the weight vector of the unknown image becomes

Ω_Γ = [w_1, w_2, w_3, ..., w_{M′}]                               (15)

The Euclidean distance ε_k between the unknown image and each face class is defined by

ε_k² = ||Ω_Γ − Ω_k||²,   k = 1, 2, ..., N_c                       (16)

Calculating the Euclidean distance between two data points involves computing the square root of the sum of the squares of the differences between corresponding values. These Euclidean distances, together with the distance of the unknown image from the face space, are compared with an empirical threshold value. If the distance ε_k to every face class exceeds the threshold while the distance from the face space lies below it, the image is recognized as being close to the face space but not representing a known face class. If both distances exceed the threshold, the system classifies the input as an unknown object.
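As a compact, hedged companion to Eqs. (1)-(16), the numpy sketch below implements the eigenface training and recognition steps, including the M x M reduction of Section 4.2. The random "faces", the number of components and the threshold are placeholders; no real face database is loaded.

```python
import numpy as np

def train_eigenfaces(images, num_components):
    """images: array of shape (M, N) with one flattened face per row."""
    mean_face = images.mean(axis=0)                       # Eq. (3)
    A = (images - mean_face).T                            # columns Phi_i, shape (N, M), Eq. (4)
    B = A.T @ A / images.shape[0]                         # reduced M x M matrix, Eq. (6)
    eigvals, Y = np.linalg.eigh(B)                        # eigenpairs of B, Eq. (7)
    order = np.argsort(eigvals)[::-1][:num_components]    # keep the largest eigenvalues
    U = A @ Y[:, order]                                   # eigenfaces u_i = A Y_i, Eq. (11)
    U /= np.linalg.norm(U, axis=0)                        # normalise each eigenface
    weights = U.T @ A                                     # training projections, Eqs. (12)-(13)
    return mean_face, U, weights

def recognise(test_image, mean_face, U, weights, threshold):
    w = U.T @ (test_image - mean_face)                    # Eqs. (14)-(15)
    distances = np.linalg.norm(weights - w[:, None], axis=0)   # Eq. (16)
    k = int(np.argmin(distances))
    return ("known face", k) if distances[k] < threshold else ("unknown", None)

# Placeholder data: 10 random "faces" of 200 x 150 pixels, flattened row-wise
rng = np.random.default_rng(0)
faces = rng.random((10, 200 * 150))
mean_face, U, weights = train_eigenfaces(faces, num_components=5)
print(recognise(faces[3], mean_face, U, weights, threshold=1e-3))
```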

5. Algorithm of PCA Method

Figure 5: Algorithm of PCA method

6. Results and Discussions

The database used in this context contains a set of face images of 130 individuals, each with 3 different expressions and poses under similar illumination. The dimension of each image is 200 x 150 pixels in gray mode. The PCA technique is applied to the database with a test set of random images. Eigenfaces represent prominent features of the face images of the training set. The highest eigenvalues, representing the eigenfaces of 80 individuals with a training set of one face per individual, are plotted in figure 7. In order to find the proper value of M′, the eigenvalues of the covariance matrix are ranked in descending order. Suppose there are M′ eigenvalues λ_1, λ_2, λ_3, ..., λ_{M′}, arranged such that λ_1 ≥ λ_2 ≥ λ_3 ≥ ... ≥ λ_{M′}, where λ_1 is the largest eigenvalue and λ_{M′} is the smallest. The eigenvalue range of the training data provides useful information for the choice of the reduced PCA space M′. It is also observed from figure 7 that the eigenvalue variation of the training face images increases gradually as the face space grows from 1 to 80 in steps of 10. As the number of eigenfaces in the database increases, the eigenvalue is observed to decay exponentially until it reaches zero.

From figure 7 it is also observed that as the distance increases, the recognition rate drops correspondingly until it reaches zero, which means that the recognition rate is inversely proportional to the Euclidean distance, as presented in figure 7.

Figure 6: Edge images with different operators (original and with Prewitt operator)

Figure 7: (a) Eigenfaces vs. eigenvalue variation; (b) Euclidean distance vs. recognition rate

For our experiments we have considered 200 images from the entire face database, for which the timing diagram is illustrated in figure 10. The figure shows timing plots for both cases, with and without covariance matrix reduction. It is observed from the graph that the time for face recognition with covariance matrix reduction is around 34 seconds, while without reduction of the covariance matrix it is around 176 seconds.

Figure 8: Comparison with and without Canny operator

Figure 9: Number of eigenfaces vs. recognition rate

Figure 10: Execution time with and without covariance matrix dimension reduction

This shows that without covariance matrix

reduction, the execution time for the face

recognition algorithm is almost five times that of

with covariance matrix reduction.

6 Conclusion and future work

This paper gives an elaborate discussion of the modeling and results of a face recognition algorithm using principal components analysis with edge detection algorithms. Dealing with eigenfaces, eigenvalues and recognition rates, we plotted the corresponding graphs, which were obtained using MATLAB 7.0. By reducing the dimension of the covariance matrix, we could achieve a faster implementation of the algorithm, which is brought down to a few seconds over a database of 200 images. By using the Canny edge detection algorithm it is found that the recognition rate is improved with less matching time. It is also possible to enhance the reduced-dimensional images without filtering artifacts. This can be accomplished by a wavelet principal components analysis approach with a Hopfield network. The scheme refers to computing principal components for a modified set of wavelet coefficients to find eigenspectra in the spectral domain and then projecting the original image onto the wavelet principal components. In this way the features are indirectly emphasized.

References:

[1] Pantic M, Rothkrantz L, "Facial action recognition for facial expression analysis from static face images", IEEE Transactions on Systems, Man, and Cybernetics-Part B, vol. 14, no. 3, pp. 1449-1461, 2004.

[2] Kotsia I, Zafeiriou S, Pitas I, "Texture and shape information fusion for facial expression and facial action unit recognition", Pattern Recognition, vol. 41, no. 3, pp. 833-851, 2008.

[3] Jin Hui, Gao Wen, "The human facial combined expression recognition system", Chinese Journal of Computers, vol. 23, no. 6, pp. 602-608, 2000.

[4] H. Rowley, S. Baluja, and T. Kanade, "Neural Network-Based Face Detection", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, Jan. 1998.

[5] M. Ogawa, K. Ito, and T. Shibata, "A general-purpose vector-quantization processor employing two-dimensional bit-propagating winner-take-all", in IEEE Symp. on VLSI Circuits Dig. Tech. Papers, pp. 244-247, Jun. 2002.

[6] Lades M., Vorbruggen J., Buhmann J., Lange J., Von Der Malsburg C., Wurtz R., Konen W., "Distortion invariant object recognition in the dynamic link architecture", IEEE Trans. Computers, 42 (1993), pp. 300-311.


19. A Hybrid Cooperative Quantum-Behaved PSO Algorithm for Image Segmentation Using Multilevel Thresholding

J. Manokaran#1, K. Usha Kingsly Devi*2

#1Applied Electronics

Anna University of Technology Tirunelveli

Tirunelveli, India

[email protected]

*2Assistant Professor

Anna University of Technology Tirunelveli

Tirunelveli, India

Abstract - Segmentation is one of the major steps in image processing. Image segmentation is the process of partitioning a digital image into multiple segments. The goal of segmentation is to simplify and change the representation of an image into something that is more meaningful and easier to analyze. Multilevel thresholding is one of the most popular image segmentation techniques because of its simplicity, robustness, and accuracy. In this project, by preserving the fast convergence rate of particle swarm optimization (PSO), a quantum-behaved PSO employing the cooperative method (CQPSO) is proposed to save computation time and to conquer the curse of dimensionality. The measure of separability is maximized on the basis of the between-class variance method (often called the OTSU method), one of the classic thresholding techniques for image segmentation. The experimental results show that, compared with the existing population-based thresholding methods, the proposed PSO algorithm gives more effective and efficient results. It also shortens the computation time of the classic method. Therefore, it can be applied in complex image processing tasks such as automatic target recognition and biomedical imaging.

Keywords - cooperative method, multilevel thresholding, OTSU, particle swarm optimization (PSO), quantum behavior.

I. INTRODUCTION

The goal of image segmentation is to extract meaningful objects from an input image. It is useful in discriminating an object from objects that have distinct gray levels or different colors. Proper segmentation of the region of interest increases the accuracy in the processing result. In recent years, many segmentation methods have been developed—segmentation based on fuzzy C means and its variants, mean shift filters, and nonlinear diffusion [1-5]. Among all the existing segmentation techniques, the thresholding technique is one of the most popular due to its simplicity, robustness, and accuracy [6]. Many thresholding techniques have been proposed. Thresholding then becomes a simple but effective tool to separate objects from the background. Examples of thresholding applications are document image analysis, where the goal is to extract printed characters, logos, graphical content, or musical scores: map processing, where lines, legends, and characters are to be found.

We categorize the thresholding methods in six groups according to the information they are exploiting. These categories are histogram shape-based methods, where, for example, the peaks, valleys and curvatures of the smoothed histogram are analyzed. Clustering-based methods, where the gray-level samples are clustered in two parts as background and

Page 146: Keynote 2011

146

foreground (object), or alternately are modeled as a mixture of two Gaussians. entropy-based methods result in algorithms that use the entropy of the foreground and background regions, the cross-entropy between the original and binaries image, etc. object attribute-based methods search a measure of similarity between the gray-level and the binaries images, such as fuzzy shape similarity, edge coincidence, etc. The spatial methods use higher-order probability distribution and/or correlation between pixels. Local methods adapt the threshold value on each pixel to the local image characteristics. OTSU proposed a nonparametric and unsupervised method of automatic threshold selection for picture segmentation is presented. An optimal threshold is selected by the discriminant criterion, namely, so as to maximize the separability of the resultant classes in gray levels[7]. The procedure is very simple, utilizing only the zeroth- and the first-order cumulative moments of the gray-level histogram. In general, the OTSU method has proved to be one of the best thresholding techniques for uniformity and shape measures. However, the computation time grows exponentially with the number of thresholds due to its exhaustive searching strategy, which would limit the multilevel thresholding applications. Optimization plays an important role in computer science, artificial intelligence, operational research, and other related fields. It is the process of trying to find the best possible solution to an optimization problem within a reasonable time limit. The goal of an optimization problem is to find a valuation for the variables that satisfies the constraints and that maximizes, or minimizes, the fitness (objective) function. The type of fitness function to optimize determines the mathematics model that may be used to solve the problem. At first Ant colony optimization (ACO) method to obtain the optimal parameters of the entropy-based object segmentation approach [8]. The experiment showed that the ACO algorithm is more acceptable than other comparable algorithms. Distributed computation avoids premature convergence. Limitation is No centralized processor to guide the ant towards good solution. Genetic algorithms (GAs) can

accelerate the optimization calculation but suffer from drawbacks such as slow convergence and a tendency to become trapped in local optima [9]. By extracting from several of the highest-performance strings, a strongest schema can be obtained; when the low-performance strings learn from it with a certain probability, the average fitness of each generation increases and the computation time improves. The learning procedure can also improve population diversity, which enhances the stability of the optimization calculation. However, this scheme is application dependent and is suited only to specific applications.

In this paper, we propose a new optimal multilevel thresholding algorithm particularly suitable for multilevel image segmentation, employing the particle swarm optimization (PSO) technique. Compared with other population-based stochastic optimization methods, such as GA and ACO, PSO has comparable or even superior search performance for many hard optimization problems, with faster and more stable convergence rates [10]. It has been proved to be an effective optimization tool for thresholding [11], [12]. However, a lack of population diversity in the PSO algorithm can lead to local optima, which means that convergence to the global optimum cannot be guaranteed. In QPSO, a particle can appear anywhere in the search space during an iteration, which enhances the population diversity, and QPSO is easier to control than PSO because it has only one parameter. However, because QPSO, like PSO, updates the position of a particle as a whole item, it also suffers from the curse of dimensionality: the performance of QPSO deteriorates as the dimensionality of the search space increases. Hence, a new hybrid QPSO algorithm with a cooperative method (CQPSO) is proposed in this paper to solve this problem. The cooperative method is specifically employed to conquer the “curse of dimensionality” by splitting a particle of high dimensionality into several 1-D subparts. The particle in CQPSO then contributes to the population not only as a whole item but also in each dimension. The OTSU method is employed to evaluate the


performance of CQPSO. The experimental results show that the proposed algorithm is more efficient than the traditional OTSU algorithm; it also outperforms the compared population-based segmentation algorithms both in maximizing the fitness function and in stability.

The rest of this paper is organized as follows. Section II presents the proposed hybrid cooperative approach QPSO algorithm in detail. Section III presents the OTSU-based measure employed for optimal thresholding. In Section IV, we evaluate the performance of the proposed algorithm. Finally, Section V concludes the paper.

II. HYBRID COOPERATIVE LEARNING QPSO ALGORITHM

A. PSO Algorithm

PSO is a recently proposed population-based stochastic optimization algorithm [13], which is inspired by the social behavior of animals, like bird flocking. The algorithm searches a solution space by adjusting the trajectories of individual vectors, called “particles.” The target of the particles is to find the best result of the fitness function.

The position of the particles is randomly initialized from the search space Ω. At each iteration, each particle is assigned with a velocity also randomly initialized in Ω. The velocity of each particle is updated using the best position that it visited so far and the overall best position visited by its companions. Then, the position of each particle is updated using its updated velocity per iteration. The achievement of each particle is evaluated by the fitness function. At each iteration, the best position of the particle visited so far and the overall best position by its companions will be saved as personal and population experiences. From these experiences, all the particles will converge to the solution. Thus, as compared to other population-based methods, e.g., GA or differential evolution, PSO has a memory that gives it a fast convergence rate.

In PSO, the position and velocity vectors of the ith particle in the d-dimensional search space can be represented as Xi = (xi1, xi2. . . xid) and Vi = (vi1,

vi2, . . . , vid), respectively. According to a user-defined fitness function, let us say that the best position of each particle (which corresponds to the best fitness value obtained by that particle at iteration t) is Pi = (pi1, pi2, . . . , pid), and that the fittest particle found so far at iteration t is Pg = (pg1, pg2, . . . , pgd). Then, the new velocities and the positions of the particles for the next fitness evaluation are calculated using the following two equations:

V_i(t+1) = ω · V_i(t) + c1 · rand1 · (P_i − X_i) + c2 · rand2 · (P_g − X_i)    (1)

X_i(t+1) = X_i(t) + V_i(t+1)    (2)

where c1 and c2 are positive constants, and rand1 and rand2 are two separately generated random numbers uniformly distributed in the interval (0, 1). The variable ω is the inertia weight that ensures the convergence of the PSO algorithm.
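For illustration, one iteration of the update rules in (1) and (2) can be sketched as follows (a minimal NumPy sketch; the parameter values w = 0.7, c1 = c2 = 1.5 and all array names are illustrative assumptions, not the settings used in the experiments reported here):

```python
import numpy as np

def pso_step(X, V, P, Pg, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO iteration following Eqs. (1)-(2).

    X  : (S, d) current particle positions
    V  : (S, d) current particle velocities
    P  : (S, d) personal best positions
    Pg : (d,)   global best position
    """
    rng = np.random.default_rng() if rng is None else rng
    S, d = X.shape
    r1 = rng.random((S, d))                                   # rand1 in (0, 1)
    r2 = rng.random((S, d))                                   # rand2 in (0, 1)
    V_new = w * V + c1 * r1 * (P - X) + c2 * r2 * (Pg - X)    # Eq. (1)
    X_new = X + V_new                                         # Eq. (2)
    return X_new, V_new
```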

Since the introduction of the PSO algorithm, several improvements have been suggested, many of which have been incorporated into the equations shown in (1) and (2). Each particle must converge to its local attractor Q = (Q1, Q2, . . . , Qd), whose coordinates are given by

Q_d = rand · P_id + (1 − rand) · P_gd    (3)

One problem with the standard PSO is that it can easily fall into local optima in many optimization problems. According to (1), particles are largely influenced by P_i and P_g. Once the best particle is stuck in a local optimum, all the remaining particles will quickly converge to the position of the best particle found so far. Therefore, PSO is not guaranteed to find the global optimum of the fitness function (a phenomenon often called premature convergence).

The search of particles having a quantum behavior mainly focuses on the local area around Q, but differs from PSO in that a particle can also appear far away from Q. This enables QPSO to have a more powerful global searching ability than PSO. In QPSO, the velocity equation of the PSO


algorithm is neglected. Further, each individual particle moves in the search space with a δ potential well on each dimension, centered at the point Q, to prevent the particle from exploding. The particle’s probability distribution in the search space Ω is then set as follows:

D(X_i − Q) = (1 / L_i) · e^(−2|X_i − Q| / L_i)    (4)

where L_i, which is related to the mass of the particle and the intensity of the potential well, is the characteristic length of the distribution function.

The particle position in QPSO is updated by the following equation:

X_i(t+1) = Q ± (L_i(t+1) / 2) · ln(1/u)    (5)

The first term, Q, is the main moving direction of the particles; its equation is presented in (3). The region near Q is the most valuable search area in PSO, so this term gives QPSO fast convergence, like PSO. The second term gives the step size of the particle’s movement, where u is a random number uniformly distributed in [0, 1]. L_i is evaluated from the gap between the particle’s current position and Q, so that the particle gains experience from the other particles:

L_i(t+1) = 2a · |Q − X_i(t)|    (6)

From (4), we can see that during the iterations, particles with a quantum behavior can appear anywhere in the search space, even far away from Q. This effectively enhances the population diversity, so QPSO shows an excellent ability to overcome premature convergence. Furthermore, only one parameter, a, controls the position of the particles, which makes QPSO easier to control than PSO; a decreases linearly from 1.0 to 0.5 from the beginning of the iterations to the end.
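A minimal sketch of the QPSO position update in (3), (5) and (6) is given below (NumPy; the vectorized form and all variable names are illustrative assumptions, and a is assumed to decrease linearly from 1.0 to 0.5 as stated above):

```python
import numpy as np

def qpso_step(X, P, Pg, a, rng=None):
    """One QPSO iteration following Eqs. (3), (5) and (6).

    X  : (S, d) current particle positions
    P  : (S, d) personal best positions
    Pg : (d,)   global best position
    a  : contraction-expansion coefficient (decreases from 1.0 to 0.5)
    """
    rng = np.random.default_rng() if rng is None else rng
    S, d = X.shape
    phi = rng.random((S, d))
    Q = phi * P + (1.0 - phi) * Pg                      # local attractor, Eq. (3)
    u = rng.uniform(np.finfo(float).tiny, 1.0, (S, d))  # u in (0, 1], avoids log(0)
    L = 2.0 * a * np.abs(Q - X)                         # characteristic length, Eq. (6)
    sign = np.where(rng.random((S, d)) > 0.5, 1.0, -1.0)
    return Q + sign * (L / 2.0) * np.log(1.0 / u)       # Eq. (5)
```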

QPSO Algorithm:

Initialize population: random X_i
repeat
    for each particle i ∈ [1, S]
        if f(X_i) < f(P_i) then P_i = X_i
        if f(P_i) < f(P_g) then P_g = P_i
        calculate Q using equation (3)
        if rand > 0.5
            X_i = Q + a · |Q − X_i| · ln(1/u)
        else
            X_i = Q − a · |Q − X_i| · ln(1/u)
        end if
    end for
until termination criterion is satisfied

Here, each particle is formulated as a candidate solution to the multilevel thresholding problem:

X_i = (t_1, t_2, . . . , t_M−1)    (7)

B. Cooperative Method

A weakness of QPSO is that each update step is performed on the full d-dimensional particle. As a result, some components of a particle may be moved closer to the solution while others are actually moved away from it. Cooperative learning is employed to overcome the “curse of dimensionality” by decomposing a high-dimensional swarm into several one-dimensional swarms.

CQPSO Algorithm

Initialize population: random X_i
repeat
    for each particle i ∈ [1, S]
        perform the particle update using the QPSO algorithm
        b = P_g
        for each dimension j ∈ [1, d]
            if f(b(j, X_ij)) < f(b(j, P_ij)) then P_ij = X_ij
            if f(b(j, X_ij)) < f(b) then P_gj = X_ij
        end for
    end for
until termination criterion is satisfied

where b(j, x) denotes the context vector b with its j-th component replaced by x.
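The per-dimension (cooperative) step above can be sketched as follows. This is a minimal sketch, not the authors' implementation: the objective is written here as the fitness of (8) to be maximized, so the "<" comparisons of the pseudocode (which is written for minimization) appear with ">", and all array and function names are illustrative.

```python
import numpy as np

def cooperative_update(X, P, b, fitness):
    """Per-dimension (cooperative) improvement step of CQPSO.

    X       : (S, d) particle positions after the QPSO update
    P       : (S, d) personal best positions
    b       : (d,)   current global best vector, used as the context vector
    fitness : callable scoring a d-vector of thresholds (e.g. Eq. (8)), maximized
    """
    S, d = X.shape
    best_val = fitness(b)
    for i in range(S):
        for j in range(d):
            trial_x = b.copy()
            trial_x[j] = X[i, j]            # b(j, X_ij): context vector with dim j from particle i
            trial_p = b.copy()
            trial_p[j] = P[i, j]            # b(j, P_ij)
            if fitness(trial_x) > fitness(trial_p):
                P[i, j] = X[i, j]           # per-dimension personal-best update
            if fitness(trial_x) > best_val:
                b[j] = X[i, j]              # per-dimension global-best update
                best_val = fitness(trial_x)
    return P, b
```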

By applying the cooperative method to the thresholding segmentation field, each particle in CQPSO contributes to the population not only


as a whole item, but also in each dimension. Therefore, it is not limited by the number of optimal thresholds and is particularly suitable for multilevel thresholding. Another benefit of the cooperative method is that each particle in CQPSO can potentially contribute to the optimal thresholds in each dimension; this ensures that the search space is explored more thoroughly and that the algorithm’s chances of finding better results are improved.

III. OTSU CRITERION BASED MEASURE

The OTSU criterion, as proposed by Otsu, has been widely employed to determine optimal thresholds that provide histogram-based image segmentation with the desired characteristics. The original OTSU algorithm can be described as follows. Consider an image containing N pixels with gray levels from 0 to L. Let h(i) denote the number of pixels with gray level i, and let PR_i be the probability of gray level i. Then, we have

PR_i = h(i) / N,    N = Σ h(i),    i = 0, 1, . . . , L

Assume that there are M−1 thresholds, t_1, t_2, . . . , t_M−1, that divide the original image into M classes:

C_1 = [0, . . . , t_1]
C_2 = [t_1 + 1, . . . , t_2]
· · ·
C_M = [t_M−1 + 1, . . . , L]

Thus, the M−1 thresholds separate the input gray-level values into M classes. The optimal thresholds, t*_1, t*_2, . . . , t*_M−1, chosen by the OTSU method are given as follows:

(t*_1, t*_2, . . . , t*_M−1) = arg max σ_B²(t_1, t_2, . . . , t_M−1),
subject to 0 ≤ t_1 ≤ t_2 ≤ · · · ≤ t_M−1 ≤ L    (8)

where

σ_B² = Σ_k ω_k · (μ_k − μ_T)²

with

ω_k = Σ_{i∈C_k} PR_i,    μ_k = (Σ_{i∈C_k} i · PR_i) / ω_k,    k = 1, 2, . . . , M,

and μ_T denotes the total mean gray level of the image.
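The between-class variance of (8) can be evaluated directly from the gray-level histogram; a minimal NumPy sketch follows (the function name, the histogram argument and the handling of empty classes are illustrative assumptions):

```python
import numpy as np

def otsu_fitness(thresholds, hist):
    """Between-class variance sigma_B^2 of Eq. (8) for increasing thresholds t_1 < ... < t_{M-1}.

    thresholds : sequence of M-1 integer gray-level thresholds
    hist       : histogram h(i) over gray levels 0..L
    """
    prob = hist / hist.sum()                       # PR_i = h(i) / N
    levels = np.arange(prob.size)
    mu_T = np.sum(levels * prob)                   # total mean gray level
    edges = np.concatenate(([0], np.asarray(thresholds, dtype=int) + 1, [prob.size]))
    sigma_B2 = 0.0
    for k in range(len(edges) - 1):                # classes C_1 .. C_M
        p = prob[edges[k]:edges[k + 1]]
        w_k = p.sum()                              # class probability omega_k
        if w_k > 0:
            mu_k = np.sum(levels[edges[k]:edges[k + 1]] * p) / w_k
            sigma_B2 += w_k * (mu_k - mu_T) ** 2   # omega_k * (mu_k - mu_T)^2
    return sigma_B2
```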

IV. PERFORMANCE EVALUATION

To evaluate the performance of the proposed algorithm, we implemented it on a wide variety of images. These include popular test images that are also used by existing techniques, together with 100 images from the Berkeley segmentation data set [14]. The segmentation results for two images are shown in this paper. The performance of the proposed algorithm is evaluated by comparing its results with those of other popular algorithms in the literature, e.g., the traditional OTSU method and the QPSO algorithm.

A. Multilevel thresholding results and efficiency of different methods, with M−1 = 3, 4, 5

Fig. 1 presents the two input images. We employ (8) as the fitness function to provide a comparison of performance. Since the classical OTSU method is an exhaustive searching algorithm, we regard its results as the “standard.” Table I shows the optimal thresholds obtained by the OTSU method, and Table II shows its computation time. Therefore, regardless of which population-based algorithm is used for multilevel thresholding, we strive to obtain the correct threshold values and larger fitness values with fast computation. For CQPSO and the other population-based algorithms, the average fitness values and the corresponding optimal thresholds obtained by the test algorithms (with M−1 = 3, 4, 5) are shown in Tables III and IV; a larger value in Table III means that the algorithm obtains better results.
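Once a threshold vector has been obtained (whether by the exhaustive OTSU search or by CQPSO), producing the M-class segmentation itself is a simple quantization step. A minimal sketch (NumPy; the function name is illustrative, and the example thresholds are the four-segment LENA values from Table I):

```python
import numpy as np

def apply_thresholds(image, thresholds):
    """Map each pixel to one of M class labels 0..M-1 using M-1 thresholds."""
    return np.digitize(image, bins=np.sort(np.asarray(thresholds)), right=True)

# e.g. labels = apply_thresholds(gray_image, [58, 103, 146])   # four classes
```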


Fig. 1 Test images. (a) LENA. (b) PEPPER.

TABLE I

OPTIMAL THRESHOLDS BY THE OTSU METHOD

Image      Three segments    Four segments    Five segments
           (optimal thresholds)
LENA       70,128            58,103,146       50,88,121,156
PEPPER     68,136            61,118,165       52,92,128,170

TABLE II

COMPUTATION TIME BY THE OTSU METHOD

Image      Three segments    Four segments    Five segments
           (computation time)
LENA       0.39314           0.43071          0.539767
PEPPER     0.11013           0.19589          0.254489

TABLE III

COMPARATIVE STUDY OF OBJECTIVE FUNCTION VALUES FOR CQPSO AND QPSO ALGORITHMS

Image      No. of segments    Objective value
                              CQPSO      QPSO
LENA       3                  49146      43563
           4                  61933      57895
           5                  71026      68567
PEPPER     3                  57958      51341
           4                  73067      68346
           5                  83993      81049

TABLE IV

THRESHOLDS FOR TESTING IMAGE DERIVED BY CQPSO AND QPSO ALGORITHMS


Image      No. of segments    Optimal thresholds
                              CQPSO                    QPSO
LENA       3                  13,141,206               13,115,206
           4                  48,93,159,211            48,93,144,211
           5                  38,75,127,169,205        38,75,115,169,205
PEPPER     3                  14,152,221               14,124,221
           4                  52,100,171,227           52,100,154,227
           5                  41,80,137,182,220        41,80,124,182,220

TABLE V

COMPUTATION TIME FOR TESTING IMAGE DERIVED BY CQPSO AND QPSO ALGORITHMS

Image      No. of segments    Computation time
                              CQPSO       QPSO
LENA       3                  0.03817     0.22987
           4                  0.04036     0.34483
           5                  0.06580     0.44756
PEPPER     3                  0.15860     0.19433
           4                  0.15351     0.19555
           5                  0.15345     0.20313

Fig. 2 Thresholded images by the OTSU method and the CQPSO-based algorithm (four segments)


From the results shown in Tables I–V and Figs. 3–6, compared with the other population-based algorithm, the CQPSO-based algorithm obtains results equal or closest to those of the traditional OTSU method with less computation time; CQPSO is therefore more efficient than the OTSU method. Furthermore, among the population-based algorithms tested in this paper, CQPSO achieves the best fitness values with the lowest computation time.

Fig. 3 objective value comparison for Lena image

Fig. 4 objective value comparison for pepper image

Fig. 5 computation time comparison for Lena image

Fig. 6 computation time comparison for pepper image

Fig. 2 shows the images segmented by the traditional OTSU method and by the CQPSO-based algorithm, with M − 1 = 4. We can see that the CQPSO-based algorithm obtains results very similar to those of the OTSU method. Furthermore, in addition to the benchmark images shown in this paper, the proposed algorithm was also tested on 100 further images. The results show that the CQPSO-based algorithm is more efficient and effective than the other population-based algorithms.

V.CONCLUSION

In this paper, we have described an optimal multilevel thresholding algorithm that employs an improved variant of QPSO. By combining the merits of QPSO and the cooperative method, this algorithm was applied to several benchmark images, and the performance on each image shows a significant improvement over several other popular contemporary methods. As a fast population-based method, the CQPSO algorithm can also be incorporated into other popular thresholding segmentation methods based on optimizing a fitness function. In our future work, we aim to find a simpler and more efficient PSO algorithm and to apply it to complex image processing and computer vision problems.

REFERENCES

[1] J. C. Bezdek, “A convergence theorem for the fuzzy ISODATA clustering algorithms,” IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-2, no. 1, pp. 1–8, Jan. 1980.
[2] S. R. Kannan, “A new segmentation system for brain MR image based on fuzzy techniques,” Appl. Soft Comput., vol. 8, no. 4, pp. 1599–1606, Sep. 2008.
[3] M. Li and R. C. Staunton, “A modified fuzzy C-means image segmentation algorithm for use with uneven illumination patterns,” Pattern Recognit., vol. 40, no. 11, pp. 3005–3011, Nov. 2007.



[4] D. Comaniciu and P. Meer, “Mean shift analysis and applications,” in Proc. 7th Int. Conf. Comput. Vis., Kerkyra, Greece, 1999, pp. 1197–1203.
[5] P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, no. 7, pp. 629–639, Jul. 1990.
[6] N. R. Pal and S. K. Pal, “A review on image segmentation techniques,” Pattern Recognit., vol. 26, no. 9, pp. 1277–1294, Sep. 1993.
[7] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst., Man, Cybern., vol. SMC-9, no. 1, pp. 62–66, Jan. 1979.
[8] W. B. Tao, H. Jin, and L. M. Liu, “Object segmentation using ant colony optimization algorithm and fuzzy entropy,” Pattern Recognit. Lett., vol. 28, no. 7, pp. 788–796, May 2008.
[9] L. Cao, P. Bao, and Z. K. Shi, “The strongest schema learning GA and its application to multilevel thresholding,” Image Vis. Comput., vol. 26, no. 5, pp. 716–724, May 2008.
[10] J. Kennedy and R. C. Eberhart, Swarm Intelligence. San Mateo, CA: Morgan Kaufmann, 2001.
[11] P. Y. Yin, “Multilevel minimum cross entropy threshold selection based on particle swarm optimization,” Appl. Math. Comput., vol. 184, no. 2, pp. 503–513, Jan. 2007.
[12] E. Zahara, S. K. S. Fan, and D. M. Tsai, “Optimal multi-thresholding using a hybrid optimization approach,” Pattern Recognit. Lett., vol. 26, no. 8, pp. 1082–1095, Jun. 2005.
[13] J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proc. IEEE Int. Conf. Neural Netw., Perth, Australia, 1995, vol. 4, pp. 1948–1972.
[14] The Berkeley Segmentation Dataset. [Online]. Available: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/

20. BIOMETRIC AUTHENTICATION IN PERSONAL IDENTIFICATION

Rajendiran Manojkanth 1, B.Mari Kesava Venkatesh2 Electronics and Communication Engineering Department

PSNA College of Engineering and Technology,Dindigul-624 622, India

1 [email protected] 2 [email protected]

Abstract

Advances in the field of Information Technology also make Information Security an inseparable part of it. In order to deal with security, authentication plays an important role. This paper presents a review of biometric authentication techniques and some future possibilities in this field. In biometrics, a human being needs to be identified based on some characteristic physiological parameters. A wide variety of systems require reliable personal recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that the rendered services are accessed only by a legitimate user, and not anyone else. By using biometrics it is possible to confirm or establish an individual’s identity. The position of biometrics in the current field of security is depicted in this work. We also outline opinions about the usability of biometric authentication systems, a comparison between different techniques, and their advantages and disadvantages.

Keywords: biometric, pattern, IRIS, authentication.

1. Introduction

Information security is concerned with the assurance of confidentiality, integrity and availability of information in all forms. There are many tools and techniques that can support the management of information security, but systems based on biometrics have


evolved to support some aspects of information security. Biometric authentication supports the facets of identification, authentication and non-repudiation in information security, and it has grown in popularity as a way to provide personal identification. A person’s identification is crucially significant in many applications, and the rise in credit card fraud and identity theft in recent years indicates that this is an issue of major concern in wider society. Individual passwords, PIN identification or even token-based arrangements all have deficiencies that restrict their applicability in a widely networked society. Biometrics is used to establish the identity of an input sample when compared to a template, and is used in cases where specific people must be identified by certain characteristics. The alternative approaches are possession based (using one specific “token” such as a security tag or a card) and knowledge based (the use of a code or password). Standard validation systems often use multiple inputs of samples for sufficient validation, such as particular characteristics of the sample. This is intended to enhance security, as multiple different samples are required, such as security tags, codes and sample dimensions. The advantage claimed by biometric authentication is that it can establish an unbreakable one-to-one correspondence between an individual and a piece of data. In this paper, we present a detailed survey on biometric authentication, and we hope that this work will provide a concrete overview of the past, present and future aspects of this field.

2. Overview

Biometrics (ancient Greek: bios ="life", metron ="measure") refers to two very different fields of study and application. The first, which is the older and is used in biological studies, including forestry, is the collection, synthesis, analysis and

management of quantitative data on biological communities such as forests. Biometrics in reference to biological sciences has been studied and applied for several generations and is somewhat simply viewed as "biological statistics" [1].Authentication is the act of establishing or confirming something (or someone) as authentic, that is, that claims made by or about the thing are true. A short overview in this field can be divided into three parts and they are Past, Present and Future. 2.1. Past European explorer Joao de Barros recorded the first known example of fingerprinting, which is a form of biometrics, in China during the 14th century. Chinese merchants used ink to take children's fingerprints for identification purposes. In 1890, Alphonse Bertillon studied body mechanics and measurements to help in identifying criminals. The police used his method, the Bertillonage method, until it falsely identified some subjects. The Bertillonage method was quickly abandoned in favor of fingerprinting, brought back into use by Richard Edward Henry of Scotland Yard. Karl Pearson, an applied mathematician studied biometric research early in the 20th century at University College of London. He made important discoveries in the field of biometrics through studying statistical history and correlation, which he applied to animal evolution. His historical work included the method of moments, the Pearson system of curves, correlation and the chi-squared test. In the 1960s and '70s, signature biometric authentication procedures were developed, but the biometric field remained fixed until the military and security agencies researched and developed biometric technology beyond fingerprinting. 2.2. Present Biometrics authentication is a growing and controversial field in which civil liberties groups express concern over privacy and


identity issues. Today, biometric laws and regulations are in process and biometric industry standards are being tested. Face recognition biometrics has not reached the prevalent level of fingerprinting, but with constant technological pushes and with the threat of terrorism, researchers and biometric developers will stimulate this security technology for the twenty-first century. In modern approach, Biometric characteristics can be divided in two main classes: a. Physiological are related to the shape of the body and thus it varies from person to person Fingerprints, Face recognition, hand geometry and iris recognition are some examples of this type of Biometric. b. Behavioral are related to the behavior of a person. Some examples in this case are signature, keystroke dynamics and of voice . Sometimes voice is also considered to be a physiological biometric as it varies from person to person.

Recently, a new trend has been developed that merges human perception to computer database in a brain-machine interface. This approach has been referred to as cognitive biometrics. Cognitive biometrics is based on specific responses of the brain to stimuli which could be used to trigger a computer database search. 2.3. Future A biometric system can provide two functions. One of which is verification and the other one is Authentication. So, the techniques used for biometric authentication has to be stringent enough that they can employ both these functionalities simultaneously. Currently, cognitive

biometrics systems are being developed to use brain response to odor stimuli, facial perception and mental performance for search at ports and high security areas. Other biometric strategies are being developed such as those based on gait (way of walking), retina, Hand veins, ear canal, facial thermogram, DNA, odor and scent and palm prints. In the near future, these biometric techniques can be the solution for the current threats in world of information security. Of late after a thorough research it can be concluded that approaches made for simultaneous authentication and verification is most promising for iris, finger print and palm vain policies. But whatever the method we choose, main constraint will be its performance in real life situation. So, application of Artificial System can be a solution for these cases. We have given emphasis on the Iris recognition. According to us, after detection of an iris pattern, the distance between pupil and the iris boundary can be computed. This metric can be used for the recognition purposes because this feature remains unique for each and every individual. Again, an artificial system can be designed which will update the stored metric as the proposed feature may vary for a particular person after certain time period. After doing the manual analysis of the above discussed method, we have got a satisfactory result. Due to the dynamic modification of the proposed metric, the rejection ration for a same person reduces by a lot. The work is being carried out to make the system viable. 3. Detail, Techniques and Technologies We have already stated that there exist two kind of biometric characteristics. So, techniques for biometric authentication have been developed based on these characteristics. Details of different techniques are discussed below. 3.1. Finger Print Technology A fingerprint is an impression of the friction ridges of all or any part of the finger. A friction ridge is a raised portion of the on the


palmar (palm) or digits (fingers and toes) or plantar (sole) skin, consisting of one or more connected ridge units of friction ridge skin. These ridges are sometimes known as "dermal ridges" or "dermal ". The traditional method uses the ink to get the finger print onto a piece of paper. This piece of paper is then scanned using a traditional scanner. Now in modern approach, live finger print readers are used .These are based on optical, thermal, silicon or ultrasonic principles [22, 23, 27]. It is the oldest of all the biometric techniques. Optical finger print reader is the most common at present. They are based on reflection changes at the spots where finger papilar lines touch the reader surface. All the optical fingerprint readers comprise of the source of light, the light sensor and a special reflection surface that changes the reflection according to the pressure. Some of the readers are fitted out with the processing and memory chips as well. The finger print obtained from an Optical Fingerprint Reader is shown in figure 1. The size of optical finger is around 10*10*15. It is difficult to minimize them much more as the reader has to comprise the source on light reflection surface and light sensor. Optical Silicon Fingerprint Sensor is based on the capacitance of finger. Dc-capacitive finger print sensor consists of rectangular arrays of capacitors on a silicon chip. One plate of the capacitors is finger, other plate contains a tiny area of metallization on the chips surfaces on placing finger against the surfaces of a chip, the ridges of finger print are close to the nearby pixels and have high capacitance to them. The valleys are more distant from the pixels nearest them and therefore have lower capacitance. Ultrasound finger print is newest and least common. They use ultrasound to monitor the figure surfaces, the user places the finger

on a piece of glass and the ultrasonic sensor moves and reads the whole fingerprint. This process takes 1 or 2 seconds. Fingerprint matching techniques can be placed into two categories: minutiae based and correlation based. Minutiae-based techniques find the minutiae points first and then map their relative placement on the finger. Correlation-based techniques require the precise location of a registration point and are affected by image translation and rotation [2, 3, 19, 24].

3.2. Face Recognition Technology
A facial recognition technique is an application of computers for automatically identifying or verifying a person from a digital image or a video frame from a video source. It is the most natural means of biometric identification [6]. Facial recognition technologies have recently developed into two areas: facial metrics and eigenfaces. Facial metric technology relies on the measurement of specific facial features (the system usually looks for the positioning of the eyes, nose and mouth and the distances between these features), as shown in figures 2 and 3. The face region is rescaled to a fixed pre-defined size (e.g. 150×100 points). This normalized face image is called the canonical image. Then the facial metrics are


computed and stored in a face template. The typical size of such a template is between 3 and 5 KB, but there exist systems with the size of the template as small as 96 bytes. The figure for the normalized face is given below. The Eigen Face method (figure 4) is based on categorizing faces according to the degree of it with a fixed set of 100 to 150 eigen faces. The eigen faces that are created will appear as light and dark areas that are arranged in a specific pattern. This pattern shows how different features of a face are singled out. It has to be evaluated and scored. There will be a pattern to evaluate symmetry, if there is any style of facial hair, where the hairline is, or evaluate the size of the nose or mouth. Other eigen faces have patterns that are less simple to identify, and the image of the eigen face may look very little like a face. This technique is in fact similar to the police method of creating a portrait, but the image processing is automated and based on a real picture. Every face is assigned a degree of fit to each of 150 eigen faces, only the 40 template eigen faces with the highest degree of fit are necessary to reconstruct the face with the accuracy of 99 percent. The whole thing is done using Face Recognition softwares [24, 25, 32, 39].
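As an illustration of the eigenface idea described above (projecting a normalized canonical face onto a small set of eigenfaces and keeping the degree-of-fit coefficients as a compact template), the following minimal NumPy sketch may help; the matrix shapes, the SVD-based construction and the choice of 40 components are illustrative assumptions, not the procedure of any specific system surveyed here.

```python
import numpy as np

def build_eigenfaces(faces, n_components=40):
    """faces: (N, H*W) matrix of flattened, normalized canonical face images."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # Right singular vectors of the centered data matrix serve as eigenfaces
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]             # (n_components, H*W)
    return mean_face, eigenfaces

def face_template(face, mean_face, eigenfaces):
    """Degree-of-fit coefficients of one face with respect to each eigenface."""
    return eigenfaces @ (face - mean_face)     # compact template vector
```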

3.3. IRIS Technology
This recognition method uses the iris of the eye, which is the colored area that surrounds the pupil. Iris patterns are unique and are obtained through a video-based image acquisition system. Each iris structure features a complex pattern. This can be a combination of specific characteristics known as corona, crypts, filaments, freckles, pits, furrows, striations and rings [7]. An iris image is shown in figure 5.

The iris pattern is taken by a special gray scale camera in the distance of 10- 40 cm of

camera. Once the gray scale image of the eye is obtained then the software tries to locate the iris within the image. If an iris is found then the software creates a net of curves covering the iris. Based on the darkness of the points along the lines the software creates the iris code. Here, two influences have to take into account. First, the overall darkness of image is influenced by the lighting condition so the darkness threshold used to decide whether a given point is dark or bright cannot be static, it must be dynamically computed according to the overall picture darkness. Secondly, the size of the iris changes as the size of the pupil changes. Before computing the iris code, a proper transformation must be done. In decision process, the matching software takes two iris codes and compute the hamming distance based on the number of different bits. The hamming distances score (within the range 0 means the same iris codes), which is then compared with the security threshold to make the final decision. Computing the hamming distance of two iris codes is very fast (it is the fact only counting the number of bits in the exclusive OR of two iris codes). We can also implement the concept of template matching in this technique. In template matching, some statistical calculation is done between a stored iris template and a produced. Depending on the result decision is taken [27, 30, 34]. 3.4. Hand Geometry Technology It is based on the fact that nearly every person’s hand is shaped differently and that the shape of a person’s hand does not change after certain age. These techniques include the estimation of length, width, thickness and surface area of the hand. Various method are used to measure the hands- Mechanical or optical principle [8, 20]. There are two sub-categories of optical scanners. Devices from first category create


a black and white bitmap image of the hand’s shape. This is easily done using a source of light and a black and white camera. The bitmap image is processed by the computer software. Only 2Dchararcteristics of hand can be used in this case. Hand geometry systems from other category are more complicated. They use special guide marking to portion the hand better and have two (both vertical and horizontal) sensors for the hand shape measurements. So, sensors from this category handle data of all 3D features [5, 24, 33]. Figure 6 and 7 shows the hand geometry system. Some of hand geometry scanners produce only the video signal with the hand shape. Image digitalization and processing is then done in the computer to process those signals in order to obtain required video or image of the hand [14, 30]. 3.5. Retina Geometry Technology It is based on the blood vessel pattern in the retina of the eye as the blood vessels at the back of the eye have a unique pattern, from eye to eye and person to person (figure 8). Retina is not directly visible and so a coherent infrared light source is necessary to illuminate the retina. The infrared energy is absorbed faster by blood vessels in the retina than by the surrounding tissue. The image of the retina blood vessel pattern is then analyzed. Retina scans require that the person removes their glasses, place their eye close to the scanner, stare at a specific point, and remain still, and focus on a specified location for approximately 10 to 15 seconds while the scan is completed. A retinal scan involves the use of a low-intensity coherent light source, which is projected onto the retina to illuminate the blood vessels which are then photographed and analyzed. A coupler is used to read the blood vessel patterns. A retina scan cannot be faked as it is currently impossible to forge a human retina. Furthermore, the retina of a deceased person decays too rapidly to be used to deceive a retinal scan. A retinal scan has an

error rate of 1 in 10,000,000, compared to fingerprint identification error being sometimes as high as 1 in 500 [9, 30]. 3.8. Other Techniques Some other available techniques for biometric authentication are described below. 3.8.1. Palmprint: Palmprint verification is a slightly different implementation of the fingerprint technology. Palmprint scanning uses optical readers that are very similar to those used for fingerprint scanning, their size is, however, much bigger and this is a limiting factor for the use in workstations or mobile devices [40, 15]. 3.8.2. Hand Vein: Hand vein geometry is based on the fact that the vein pattern is distinctive for various individuals Vein pattern matching involves scanning the vein patterns on the back of a user’s hand. The user places their hand into a reader. Inside a small black and white camera and LED array are used to capture the digital image. The LED array, combined with filters also inside the reader, is used to magnify the contrast of the veins under the skin. This results in a vein distribution pattern that may be used for authentication. Certain vein aspects or points, as well as the whole distribution, are used for verification.. One such system is manufactured by British Technology Group. The device is called Vein check and uses a template with the size of 50 bytes [4, 13, 20]. 3.8.3. Ear Shape: Identifying individuals by the ear shape is used in law enforcement applications where ear markings are found at crime scenes. Whether this technology will progress to access control applications is yet to be seen. An ear shape verifier (Optophone) is produced by a French company ART Techniques. It is a telephone type handset within which is a lighting unit and cameras which capture two images of the ear [4, 18]. 4. Applications


Biometric authentication is highly reliable, because physical human characteristics are much more difficult to forge than security codes, passwords and hardware keys; however, sensors, fast processing equipment and substantial memory capacity are needed, so the systems are costly. Biometric-based authentication applications include workstation and network access, single sign-on, application logon, data protection, remote access to resources, transaction security, and Web security. The promises of e-commerce and e-government can be achieved through the utilization of strong personal authentication procedures. Secure electronic banking, investing and other financial transactions, retail sales, law enforcement, and health and social services are already benefiting from these technologies. Biometric technologies are expected to play a key role in personal authentication for large-scale enterprise network authentication environments, Point-of-Sale, and for the protection of all types of digital content, such as in Digital Rights Management and Health Care applications. Utilized alone or integrated with other technologies such as smart cards, encryption keys and digital signatures, biometrics is anticipated to pervade nearly all aspects of the economy and our daily lives. For example, biometrics is used in various schools, such as in lunch programs in Pennsylvania and in a school library in Minnesota. Examples of other current applications include verification of annual pass holders in an amusement park, speaker verification for television home shopping, Internet banking, and users’ authentication in a variety of social services [4].

5. Evaluation
When it is time to use biometric authentication, the degree of security is a concern. In this paper, we have discussed the various types of biometric authentication techniques. In this section, we will evaluate

different techniques and find degree of security. There are various parameters with the help of which we can measure the performance of any biometric authentication techniques. These factors are described below [28, 29, 30]. Table 1 shows the evaluated vales of various evaluation techniques. 5.1. Factors of Evaluation 5.1.1. False Accept Rate (FAR) and False Match Rate (MAR): The probability that the system incorrectly declares a successful match between the input pattern and a non matching pattern in the database. It measures the percent of invalid matches. These systems are critical since they are commonly used to forbid certain actions by disallowed people. 5.1.2. False Reject Rate (FRR) or False Non-Match Rate (FNMR): The probability that the system incorrectly declares failure of match between the input pattern and the matching template in the database. It measures the percent of valid inputs being rejected. 5.1.3. Relative Operating Characteristic (ROC): In general, the matching algorithm performs a decision using some parameters (e.g. a threshold). In biometric systems the FAR and FRR can typically be traded off against each other by changing those parameters. The ROC plot is obtained by graphing the values of FAR and FRR, changing the implicitly. A common variation is the Detection Error Tradeoff (DET), which is obtained using normal deviate scales on both axes. This more linear graph illuminates the differences for higher performances (rarer errors) . 5.1.4. Equal Error Rate (EER): The rates at which both accept and reject errors are equal. ROC or DET plotting is used because how FAR and FRR can be changed, is shown clearly. When quick comparison of two systems is required, the ERR is


commonly used. Obtained from the ROC plot by taking the point where FAR and FRR have the same value. The lower the EER, the more accurate the system is considered to be. 5.1.5. Failure to Enroll Rate (FTE or FER): The percentage of data input is considered invalid and fails to input into the system. Failure to enroll happens when the data obtained by the sensor are considered invalid or of poor quality. 5.1.6. Failure to Capture Rate (FTC): Within automatic systems, the probability that the system fails to detect a biometric characteristic when presented correctly is generally treated as FTC. 5.1.7. Template Capacity: It is defined as the maximum number of sets of data which can be input in to the system. 5.2 Results of Evaluation The evaluations of various techniques using the above parameters are presented in a tabular format. 5.2.1. Finger Print Technology: The finger print bit map obtained from the reader is affected by the finger moisture as the moisture significantly influences the capacitance .This means that too wet or dry fingers do no produce bitmaps with sufficient quality and so people with unusually wet or dry figures have problems with these silicon figure print readers. 5.2.2. Face Recognition Technology: The accuracy of face recognition systems improves with time, but it has not been very satisfying so far. There is need to improve the algorithm for face location.. The current software often doesn’t find the face at all or finds “a face” at an incorrect place .This makes result worse. The systems also have problems to distinguish very similar person like twins and any significant change in hair

or beard style requires re – enrollment .glasses also causes additional difficulties .It doesn’t require any contact with person and cab be fooled with a picture if no countermeasures are active The liveness detection is based most commonly on facial mimics. The user is asked to blink or smile .If the image changes properly then the person is considered “live”. 5.2.3. Iris Technology: The artificial duplication of the iris is virtually impossible because of unique properties .The iris is closely connected to the human brain and it is said to be one of the first parts of the body to decay after the death. It should be therefore very difficult to create an artificial iris to fraudulently bypass the biometric systems if the detection of the iris liveness is working properly. 5.2.4. Hand Geometry Technique: Its condition to be used is hand must be placed accurately, guide marking have been incorporated and units are mounted so that they are at a comfortable height for majority of the population. The noise factors such as dirt and grease do not pose a serious problem, as only the silhouette of the hand shape is important. Hand geometry doesn’t produce a large data set. Therefore, give a large no. of records, hand geometry may not be able to distinguish sufficiently one individual from another. The size of hand template is often as small as 9 bytes. Such systems are not suitable for identification at all. It shows lower level security application.5.2.5. Retina Geometry: The main drawbacks of the retina scan are its intrusiveness. The method of obtaining a retina scan is personally invasive. A laser light must be directed through the cornea of edge. Also the operation of retina scanner is not easy. A skilled operator is required and the person being scanned has to follow his


or her directions. However, retina scanning systems are said to be accurate, and they are used where high security is concerned.

6. Considerations

How well a system performs its stated purpose is probably the most important fact to consider when looking at any authentication solution. For example, does it have a suitable and realistic false acceptance rate that meets your organization’s needs? These performance measurements were discussed in detail earlier. However, when an organization is making the decision on whether to use biometrics or not, there are some additional considerations that should be made. Some of these considerations may be more or less important depending on the need of the organization. Additionally, there may be other facts not listed here that need to be taken into account when implementing a biometric solution. In general, the following should be considered any time it is decided that biometrics are a viable solution for your organization. 6.1. User Acceptability : Next to performance, and in some cases before, this is usually the main concern of users. Is the system intrusive and hard to use? Will users feel that their privacy is being violated or will they feel like criminals (e.g. fingerprinting)? Will the users feel that they are potentially exposing themselves to harm (e.g. light scanning their retina)? In most cases user acceptability can be increased in some degree by education and information. 6.2. Uniqueness of the Biometric : The key to the strength and security of any biometric is its uniqueness. Fingerprinting and iris and retinal scans are typically considered the most unique. This does not necessarily mean that they cannot be replicated; rather it means that they have a sufficient number of complex patterns or traits that can be used to build a strong template for authentication.

6.3. Resistance to Counterfeiting : Can the biometric be easily replicated by an intruder? Again, this is different from #2 above. Fingerprints, for example, are very unique. However, there have been many detailed successful attempts at replicating fingerprints. Iris scans on the other hand, are considered to be almost impossible to replicate as well as extremely unique. 6.4. Reliability : The system should experience minimal downtime. If the system where to go down, it should be fairly easy to bring it back up with minimal loss of productivity and data. In order for a biometric system to perform its stated purpose it is imperative that it be reliable. If users cannot authenticate or gain access, they most likely will be unable to do their jobs. If consumers or customers are unable to authenticate they will be angry and revenue and faith may be lost. 6.5. Data Storage Requirements : How big are the templates being stored on the system? New hardware may be required if there is not sufficient storage space. This may also have an impact on processing speeds and network traffic. 6.6. Enrollment Time : How long will it take the organization to enroll its users? Some systems make require several hours. An organization will have to look at this lost “productivity” and see if it is justified. 7. Conclusion While biometric authentication can offer a high degree of security, they are far from perfect solution. Sound principles of system engineering are still required to ensure a high level of security rather than the assurance of security coming simply from


the inclusion of biometrics in some form. The risks of compromise of distributed database of biometrics used in security application are high- particularly where the privacy of individuals and hence non-repudiation and irrevocability are concerned. It is possible to remove the need for such distributed databases through the careful application of biometric infrastructure without compromising security. The influences of biometric technology on society and the risks to privacy and threat to identify will require mediation through legislation. For much of the short history of biometrics the technology developments have been in advance of ethical or legal ones. Careful consideration of the importance of biometrics data and how it should be legally protected is now required on a wider scale. References [1] Smart Cart Alliance Identity Council (2007): Identity and Smart Card Technology and Application Glossary, http://www.smartcardalliance.org, as visited on 25/10/2008. [2] Jain, A. K.; Ross, A. & Pankanti, S., "Biometrics: A Tool for Information Security", IEEE Transactions on Information Forensics And Security, Volume 1, issue 2, Jun. 2006, pp 125 – 144. [3] R. Cappelli, D. Maio, D. Maltoni, J. L. Wayman, and A. K. Jain, “Performance evaluation of fingerprint verification systems”, IEEE Trans. Pattern Anal. Mach. Intell., Volume 28, issue 1, Jan. 2006, pp. 3–18. [4] A. K. Jain, A. Ross, and S. Prabhakar, “An introduction to biometric recognition,” IEEE Trans. Circuits Syst. Video Technology, Special Issue Image- and Video-Based Biomet., Volume 14, Issue 1, Jan. 2004, pp. 4–20. [5] R. Sanchez-Reillo, C. Sanchez-Avila, and A. Gonzales-Marcos, “Biometric

identification through hand geometry measurements,”, IEEE Trans. Pattern Anal. Mach. Intell., Volume 22, Issue. 10, Oct. 2000, pp. 1168–1171. [6] M. A. Dabbah, W. L. Woo, and S. S. Dlay, "Secure Authentication for Face Recognition," In Proc. of IEEE Symposium on Computational Intelligence in Image and Signal Processing, Apr. 2007. USA, pp. 121 - 126. [7] Sanjay R. Ganorkar, Ashok A. Ghatol, “Iris Recognition: An Emerging Biometric Technology”, In Proc. of the 6th WSEAS International Conference on Signal Processing, Robotics and Automation, Greece, Feb. 2007, pp. 91 – 96. [8] E. Kukula, S. Elliott, “Implementation of Hand Geometry at Purdue University's Recreational Center: An Analysis of User Perspectives and System Performance”, In Proc. of 35th Annual International Carnahan Conference on Security Technology, UK, Oct. 2001, pp. 83 – 88. [9] C. Marin˜ o Æ M. G. Penedo Æ M. Penas Æ M. J. Carreira F. Gonzalez, “Personal authentication using digital retinal images”, Journal of Pattern Analysis and Application, Springer, Volume 9, Issue 1, May. 2006, pp. 21–33. [10] Kar, B. Kartik, B. Dutta, P.K. “Speech and Face Biometric for Person Authentication”, In Proc. of IEEE International Conference on Industrial Technology, India, Dec.2006, pp. 391 - 396. [11] Samir K. Bandopadhaya, Debnath Bhattacharyya, Swarnendu Mukherjee, Debashis Ganguly, Poulumi Das, “Statistical Approach for Offline Handwritten Signature Verification”, Journal of Computer Science, Science Publication, Volume 4, Issues 3, May. 2008, pp. 181 – 185. [12] S. Hocquet, J. Ramel, H. Cardot, “Fusion of Methods for Keystroke Dynamic Authentication”, In Proc. of 4th


IEEE Workshop on Automatic Identification Advanced Technologies, USA, Oct. 2005, pp. 224 – 229. [13] S. Im, H. Park, Y. Kim, S. Han, S. Kim, C. Kang, and C. Chung, “A Biometric Identification System by Extracting Hand Vein Patterns”, Journal of the Korean Physical Society, Korean Publication, Volume 38, Issue 3, Mar. 2001, pp. 268-272.[14] R. Sanchez-Reillo, C. Sanchez-Avilla, and A. Gonzalez-Macros, “Biometrics Identification Through Hand Geometry Measurements”, IEEE Transactions on Pattern Anakysis and Machine Intelligence, Volume 22, Issue 18, Oct. 2000, pp. 1168-1171. [15] Zhang, D.; Wai-Kin Kong; You, J.; Wong, M, “Online palmprint identification”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 25, Issue 9, Sep. 2003, pp. 1041 – 1050. [16] Alfred C. Weaver, “Biometric Authentication”, IEEE Computer Society, Feb. 2006, Volume 39, No. 2, pp. 96-97. [17] C. Lin and K. Fan, “Biometric Verification Using Thermal Images of Palm-Dorsa Vein Patterns”, IEEE Transactions on Circuits and systems for Video Technology Volume 14, No. 2, Feb. 2004, pp. 191- 213 . [18] Hui Chen, Bhanu, B, “Shape Model-Based 3D Ear Detection from Side Face Range Images”, In Proc. Of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, USA, Jun. 2005, pp. 122 – 122. [19] A. K. Jain, L. Hong, and R. Bolle, “On-line fingerprint verification”, IEEE Transactions on Pattern Recognition and Machine Intelligence, Volume 19, No. 4, Aug. 1996, pp. 302–314. [20] A. Kumar, D. C. Wong, H. C. Shen, and A. K. Jain, “Personal Verification using Palmprint and Hand Geometry Biometric”, In Proc. of 4th International Conference on Audio- and Video-based Biometric Person

Authentication, Guildford, UK, Jun. 2003, pp. 668 - 678. [21] J. L. Wayman, “Fundamentals of Biometric Authentication Technologies”, International Journal of Image and Graphics, World Scientific Publication, Volume 1, No. 1, Jan. 2001, pp. 93-113. [22] A. Ross, S. Dass, and A. K. Jain, “A deformable model for fingerprint matching”, Journal of Pattern Recognition, Elsevier, Volume 38, No. 1, Jan. 2005, pp. 95–103. [23] T. Matsumoto, H. Hoshino, K. Yamada, and S. Hasino, “Impact of artificial gummy fingers on fingerprint systems”, In Proc. of SPIE, Volume 4677, Feb. 2002, pp. 275–289. [24] L. Hong and A. K. Jain, “Integrating faces and fingerprints for personal identification”, IEEE Trans. Pattern Anal. Mach. Intell., Volume 20, No. 12, Dec. 1998, pp. 1295–1307. [25] A. Ross and R. Govindarajan, “Feature level fusion using hand and face biometrics”, In Proc. of SPIE Conf. Biometric Technology for Human Identification II, Mar. 2005, pp. 196–204. [26] Abiyev, R.H. Altunkaya, K., “Neural Network Based Biometric Personal Identification”, Frontiers in the Convergence of Bioscience and Information Technologies, Jeju, Oct. 2007, pp. 682 – 687. [27] A. K. Jain, A. Ross, and S. Pankanti, “Biometric: A Tool for Information Security”, IEEE Trans. Information Forensics and Security, Volume 1, No. 2, Jun. 2006, pp. 125–144. [28] A. K. Jain, S. Pankanti, S. Prabhakar, L. Hong, A. Ross, and J. L. Wayman, “Biometrics: a grand challenge”, In Proc. of International Conference on Pattern Recognition, Cambridge, U.K., Aug. 2004, pp. 935 - 942. [29] J. Phillips, A. Martin, C. Wilson, and M. Przybocki, “An introduction to evaluating biometric systems”, IEEE


Computer Society., Volume 33, No. 2, Feb. 2000, pp. 56–63. [30] J. L. Wayman, A. K. Jain, D. Maltoni, and D. Maio, Eds., “Biometric Systems: Technology, Design and Performance Evaluation”, New York: Springer Verlag, 2005. [31] A. Eriksson and P. Wretling, “How flexible is the human voice? A case study of mimicry,” In Proc. Of European Conference on Speech Technology, Rhodes, Greece, Sep. 1997, pp. 1043–1046. [32] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition. New York: Springer Verlag, 2004. [33] R. Sanchez-Reillo, C. Sanchez-Avila, and A. Gonzales-Marcos, “Biometric identification through hand geometry measurements”, IEEE Transaction on Pattern Analysis Machine Intelligence, Volume 22, No. 10, Oct. 2000, pp. 1168–1171,. [34] J. Daugman, “The importance of being random: statistical principles of iris recognition”, Journal of Pattern Recognition, Elsevier, Volume 36, No. 2, Feb. 2003, pp. 279–291. [35] F. Monrose and A. Rubin, “Authentication via keystroke dynamics”, In

Proc. of 4th ACM Conference on Computer and Communications Security, Switzerland, Apr. 1997, pp. 48–56. [36] V. S. Nalwa, “Automatic on-line signature verification”, In Proc. of IEEE, Volume 85, No. 2, Feb.1997, pp. 213–239. [37] S. Furui, "Recent Advances in Speaker Recognition", In Proc. of First International Conference on Audio and Video based Biometric Person Authentication, UK, Mar. 1997, pp. 859-872. [38] Z. Korotkaya, “Biometric Person Authentication: Odor”, Pages: 1 – 6,http://www.it.lut.fi/kurssit/03-04/010970000/seminars/Korotkaya.pdf as visited on 10/08/2008 [39] F. Cardinaux, C. Sanderson, and S. Bengio, “User Authentication via Adapted Statistical Models of Face Images”, IEEE Transaction on Signal Processing, Volume 54, Issue 1, Jan. 2006, pp. 361 - 373. [40] D. Zhang and W. Shu, “Two Novel Characteristic in Palmprint Verification: Datum Point Invariance and Line Feature Matching”, Pattern Recognition, Vol. 32, No. 4, Apr. 1999, pp. 691-702.

Page 165: Keynote 2011

165

21. RESEARCH ON FIRE DETECTION ALGORITHMS USING MULTIMODAL SIGNAL AND IMAGE ANALYSIS

K. Baskar, Dept. of Statistics, Periyar E.V.R College, Trichy-23

K. GNANATHANGAVELU*, Dept. of Computer Science & Eng., Bharathidasan University, Trichy-23

S. Pradeep**, Dept. of Computer Science & Eng., Bharathidasan University, Trichy-23

Abstract:

Dynamic textures are common in natural scenes. Examples of dynamic textures in video include fire, smoke, clouds, volatile organic compound (VOC) plumes in infra-red (IR) videos, trees in the wind, and sea and ocean waves. Researchers have extensively studied 2-D textures and related problems in the fields of image processing and computer vision. On the other hand, there is very little research on dynamic texture detection in video. In this dissertation, signal and image processing methods developed for the detection of a specific set of dynamic textures are presented. Signal and image processing methods are developed for the detection of flames and smoke in open and large spaces, with a range of up to 30 m to the camera, in visible-range and infra-red (IR) video. Smoke is semi-transparent at the early stages of a fire. Edges present in image frames with smoke start losing their sharpness, and this leads to an energy decrease in the high-band frequency content of the image. Local extrema in the wavelet domain correspond to the edges in an image. The decrease in the energy content of these edges is an important indicator of smoke in the viewing range of the camera. Image regions containing flames appear as fire-colored (bright) moving regions in visible-range and IR video. In addition to motion and color (brightness) clues, the flame flicker process is also detected by using a Hidden Markov Model (HMM) describing the temporal behavior. Image frames are also analyzed spatially. Boundaries of flames are represented in the wavelet domain. The high frequency nature of the boundaries of fire regions is also used as a clue to model the flame flicker. Temporal and spatial clues extracted from the video are combined to reach a final decision.

Introduction


Dynamic textures are common in many image sequences of natural scenes. Examples of dynamic textures in video include fire, smoke, clouds, volatile organic compound (VOC) plumes in infra-red (IR) videos, trees in the wind, and sea and ocean waves, as well as traffic scenes and the motion of crowds, all of which exhibit some sort of spatio-temporal stationarity. They are also called temporal or 3-D textures in the literature. Researchers have extensively studied 2-D textures and related problems in the fields of image processing and computer vision [1]. On the other hand, there is comparably less research conducted on dynamic texture detection in video [2]. There are several approaches in the computer vision literature aiming at recognition and synthesis of dynamic textures in video independent of their types [3]. Some of these approaches model dynamic textures as linear dynamical systems [4], while others use spatio-temporal auto-regressive models [5]. Other researchers in the field analyze and model optical flow vectors for the recognition of generic dynamic textures in video [6]. In this dissertation, we do not attempt to characterize all dynamic textures; instead, we present smoke and fire detection methods that take advantage of specific properties of smoke and fire. The motivation behind attacking a specific kind of recognition problem is influenced by the notion of the `weak' Artificial Intelligence (AI) framework, which was first introduced by Hubert L. Dreyfus in his critique of so-called `generalized' AI [7]. Dreyfus presents solid philosophical and scientific arguments on why the search for `generalized' AI is futile. Current content-based general image and video understanding methods are not robust enough to be deployed for fire detection [8]. Instead, each specific problem should be addressed as an individual engineering problem which has its own characteristics. In this study, both temporal and spatial characteristics related to flames and smoke are utilized as clues for developing solutions to the detection problem.

Another motivation for video and pyroelectric infra-red (PIR) sensor based detection is that conventional point smoke and fire detectors typically detect the presence of certain particles generated by smoke and fire by ionization or photometry. An important weakness of point detectors is that they cannot provide quick responses in large spaces. Furthermore, conventional point detectors cannot be utilized to detect smoldering fires in open areas. In this thesis, novel image processing methods are proposed for the detection of flames and smoke in open and large spaces with ranges up to 30 m. The flicker process inherent in fire is used as a clue for the detection of flames and smoke in visible-range and IR video. A similar technique modeling flame flicker is developed for the detection of flames using PIR sensors. Wildfire smoke appearing far away from the camera has different spatio-temporal characteristics than nearby smoke. Algorithms for detecting smoke due to wildfires are also proposed. Each detection algorithm consists of several sub-algorithms, each of which tries to estimate a specific feature of the problem at hand. For example, the long-distance smoke detection algorithm consists of four sub-algorithms: (i) slow moving video object detection, (ii) smoke-colored region detection, (iii) rising video object detection, and (iv) shadow detection and elimination. A framework for active fusion of decisions from these sub-algorithms is developed based on the least-mean-square (LMS) adaptation algorithm.
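To make the fusion step concrete, the following minimal Python sketch shows how the weights of the sub-algorithm decisions could be adapted with the least-mean-square rule. The confidence range [-1, 1], the +/-1 supervision signal, and the step size are illustrative assumptions, not the author's implementation.

import numpy as np

def lms_fusion(decisions, weights, truth=None, mu=0.05):
    """Fuse sub-algorithm decisions with weights adapted by the LMS rule.

    decisions : per-frame confidence values in [-1, 1], one per sub-algorithm
                (e.g. slow motion, smoke color, rising region, shadow check).
    truth     : optional supervision label (+1 fire / -1 no fire); when given,
                the weights are adapted towards it.
    """
    decisions = np.asarray(decisions, dtype=float)
    y = float(np.dot(weights, decisions))          # fused decision value
    if truth is not None:
        e = truth - y                              # estimation error
        weights = weights + mu * e * decisions     # LMS weight update
    return y, weights

# Illustrative use: four sub-algorithm confidences for one frame.
w = np.full(4, 0.25)                               # start from equal weights
score, w = lms_fusion([0.8, 0.6, 0.9, -0.2], w, truth=1.0)
print("fused score:", score, "-> fire" if score > 0 else "-> no fire")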

Related work

Video based fire detection systems can be useful for detecting fires in covered areas including auditoriums, tunnels, atriums, etc., in which conventional chemical sensors cannot provide quick responses to fire. Furthermore, closed circuit television (CCTV) surveillance systems are currently installed in various public places monitoring indoors and outdoors. Such systems may gain an early fire detection capability with the use of fire detection software processing the outputs of the CCTV cameras in real time.

There are several video-based fire and flame detection algorithms in the literature [9]. These methods make use of various visual signatures including color, motion and geometry of fire regions. Healey et al. [38] use only color clues for flame detection. Phillips et al. [9] use pixel colors and their temporal variations. Chen et al. [17] utilize a change detection scheme to detect flicker in fire regions. Fast Fourier Transforms (FFT) of temporal object boundary pixels have also been computed to detect peaks in the Fourier domain, because a mechanical engineering paper [10] claims that turbulent flames flicker with a characteristic flicker frequency of around 10 Hz independent of the burning material and the burner. We observe that the flame flicker process is a wide-band activity below 12.5 Hz in the frequency domain for a pixel at the boundary of a flame region in a color-video clip recorded at 25 fps. Liu and Ahuja [48] also represent the shapes of fire regions in the Fourier domain. However, an important weakness of Fourier domain methods is that flame flicker is not purely sinusoidal but random in nature. Therefore, there may not be any peaks in FFT plots of fire regions. In addition, the Fourier Transform does not carry any time information. Therefore, the Short-Time Fourier Transform (STFT) can be used, which requires a temporal analysis window. In this case, the temporal window size becomes an important parameter for detection. If the window size is too long, one may not observe peakiness in the FFT data. If it is too short, one may completely miss flicker cycles and therefore no peaks can be observed in the Fourier domain. Our method not only detects fire and flame colored moving regions in video but also analyzes the motion of such regions in the wavelet domain for flicker estimation. The appearance of an object whose contours, chrominance or luminosity values oscillate at a frequency higher than 0.5 Hz in video is an important sign of the possible presence of flames in the monitored area [11]. High-frequency analysis of moving pixels is carried out in the wavelet domain in our work. There is an analogy between the proposed wavelet domain motion analysis and the temporal templates of [12] and the motion recurrence images of [13], which are ad hoc tools used by computer scientists to analyze dancing people and periodically moving objects and body parts. However, temporal templates.

4. Markov Models Using Wavelet Domain Features for Short Range Flame and Smoke Detection

A common feature of all the algorithms developed in this thesis is the use of wavelets and Markov models. In the proposed approach, wavelet or sub-band analysis is used in dynamic texture modeling. This leads to computationally efficient algorithms for texture feature analysis, because computing wavelet coefficients is an O(N) operation. In addition, we do not try to determine edges or corners in a given scene. We simply monitor the decay or increase in the sub-band energies of the wavelet coefficients, both temporally and spatially. Another important feature of the proposed smoke and fire detection methods is the use of Markov models to characterize temporal motion in the scene. Turbulent fire behavior is a random phenomenon which can be conveniently modeled in a Markovian setting. In the following sub-sections, general overviews of the proposed algorithms are given. In all methods, it is assumed that a stationary camera monitors the scene.

2. Flame Detection in Visible Range Video

Novel real-time signal processing techniques are developed to detect flames by processing the video data generated by a visible range camera. In addition to the motion and color clues used by [64], flame flicker is detected by analyzing the video in the wavelet domain. Turbulent behavior in flame boundaries is detected by performing a temporal wavelet transform. Wavelet coefficients are used as feature parameters in Hidden Markov Models (HMMs). One Markov model is trained with flames and another is trained with the ordinary activity of human beings and other flame colored moving objects. The flame flicker process is also detected by using an HMM. The Markov models representing flames and flame colored ordinary moving objects are used to distinguish the flame flicker process from the motion of flame colored moving objects. Other clues used in the detection algorithm include the irregularity of the boundary of the fire colored region and the growth of such regions in time. All these clues are combined to reach a final decision. The main contribution of the proposed video based fire detection method is the analysis of the flame flicker and the integration of this clue as a fundamental step in the detection process.
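The following small Python sketch illustrates the temporal-wavelet/HMM idea: a pixel's temporal signal is high-pass filtered, the coefficient magnitudes are quantized into a few symbols, and the symbol sequence is scored under a "flame" model and an "ordinary motion" model. The transition and emission tables below are placeholders for illustration, not the trained models of the proposed method.

import numpy as np

def log_likelihood(obs, A, B, pi):
    """Log-likelihood of a discrete observation sequence under an HMM
    (A: state transitions, B: emission probabilities, pi: initial state
    distribution), computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Hypothetical 2-state models over 3 quantized wavelet-coefficient symbols
# (0: small, 1: medium, 2: large magnitude). Flames flicker, so the "flame"
# model mixes the symbols; ordinary motion mostly produces small coefficients.
A_flame = np.array([[0.5, 0.5], [0.5, 0.5]])
B_flame = np.array([[0.4, 0.3, 0.3], [0.2, 0.3, 0.5]])
A_ord   = np.array([[0.9, 0.1], [0.4, 0.6]])
B_ord   = np.array([[0.8, 0.15, 0.05], [0.6, 0.3, 0.1]])
pi0     = np.array([0.5, 0.5])

# Temporal high-pass (Haar detail) of a pixel's intensity signal, quantized.
signal = np.random.rand(64) * 255
detail = (signal[0::2] - signal[1::2]) / 2.0
obs = np.digitize(np.abs(detail), [5.0, 20.0])   # -> symbols 0, 1, 2

flame_score = log_likelihood(obs, A_flame, B_flame, pi0)
ordinary_score = log_likelihood(obs, A_ord, B_ord, pi0)
print("flame" if flame_score > ordinary_score else "ordinary motion")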

3. Flame Detection in IR Video

A novel method to detect flames in videos captured with Long-Wavelength Infra-Red (LWIR) cameras is proposed as well. These cameras cover the 8-12 um range of the electromagnetic spectrum. Image regions containing flames appear as bright regions in IR video. In addition to motion and brightness clues, the flame flicker process is also detected by using an HMM describing the temporal behavior, as in the previous algorithm. IR image frames are also analyzed spatially. Boundaries of flames are represented in the wavelet domain, and the high frequency nature of the boundaries of fire regions is also used as a clue to model the flame flicker. All of the temporal and spatial clues extracted from the IR video are combined to reach a final decision. False alarms due to ordinary bright moving objects are greatly reduced because of the HMM based flicker modeling and the wavelet domain boundary modeling. The contribution of this work is the introduction of a novel sub-band energy based feature for the analysis of flame region boundaries.

5. Short-range Smoke Detection in Video

Smoldering smoke appears first, even before flames, in most fires. Contrary to common belief, smoke cannot be visualized in LWIR (8-12 um range) video. A novel method to detect smoke in video is developed. Smoke is semi-transparent at the early stages of a fire. Therefore, edges present in image frames start losing their sharpness, and this leads to a decrease in the high frequency content of the image. The background of the scene is estimated, and the decrease in the high frequency energy of the scene is monitored using the spatial wavelet transforms of the current and the background images. Edges of the scene produce local extrema in the wavelet domain, and a decrease in the energy content of these edges is an important indicator of smoke in the viewing range of the camera. Moreover, the scene becomes grayish when there is smoke, and this leads to a decrease in the chrominance values of pixels. Random behavior in smoke boundaries is also analyzed using an HMM mimicking the temporal behavior of the smoke. In addition, boundaries of smoke regions are represented in the wavelet domain, and the high frequency nature of the boundaries of smoke regions is also used as a clue to model the smoke flicker. All these clues are combined to reach a final decision. Monitoring the decrease in the sub-band image energies corresponding to smoke regions in video constitutes the main contribution of this work.
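As a rough illustration of this energy-monitoring idea, the Python sketch below computes the single-level Haar detail-band energy of the current frame and of a running-average background estimate, and flags possible smoke when the ratio drops. The block size, background update rate and threshold are assumptions, not the parameters of the proposed method.

import numpy as np

def highband_energy(img):
    """Energy of the single-level Haar detail sub-bands (LH, HL, HH) of a
    grayscale image with even width and height."""
    img = np.asarray(img, dtype=float)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0      # row low-pass
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0      # row high-pass
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return float((lh ** 2).sum() + (hl ** 2).sum() + (hh ** 2).sum())

def update_background(background, frame, alpha=0.05):
    """Simple running-average background estimate (one common choice)."""
    return (1.0 - alpha) * background + alpha * np.asarray(frame, dtype=float)

def smoke_suspected(frame, background, ratio_threshold=0.7):
    """Possible smoke: the high-band energy of the current frame falls well
    below that of the background, i.e. the edges have been blurred."""
    e_bg = highband_energy(background)
    return e_bg > 0 and highband_energy(frame) / e_bg < ratio_threshold

# Toy example: a sharp checkerboard background versus a flat-looking frame.
bg = np.indices((16, 16)).sum(axis=0) % 2 * 255.0
frame = np.full((16, 16), bg.mean())
print(smoke_suspected(frame, bg))                  # -> True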


6. Flame Detection Using PIR Sensors

A flame detection system based on a pyroelectric (or passive) infrared (PIR) sensor is developed. The algorithm is similar to the video flame detection algorithm. The PIR sensor can be considered as a single-pixel camera. Therefore, an algorithm for flame detection with PIR sensors can be developed by simply removing the spatial analysis steps of the video flame detection method. The flame detection system can be used for fire detection in large rooms. The flame flicker process of an uncontrolled fire and the ordinary activity of human beings and other objects are modeled using a set of Hidden Markov Models (HMMs), which are trained using the wavelet transform of the PIR sensor signal. Whenever there is an activity within the viewing range of the PIR sensor system, the sensor signal is analyzed in the wavelet domain and the wavelet signals are fed to the set of HMMs. A fire or no-fire decision is made according to the HMM producing the highest probability.

7. Detection of Rising Regions

Wildfire smoke regions tend to rise up into the sky at the early stages of the fire. This characteristic behavior of smoke plumes is modeled with three-state Hidden Markov Models (HMMs) in this chapter. The temporal variation in the row number of the upper-most pixel belonging to a slow moving region is used as a one dimensional (1-D) feature signal, F = f(n), and fed to the Markov models shown in Fig. 1. One of the models (λ1) corresponds to genuine wildfire smoke regions and the other one (λ2) corresponds to regions with clouds and cloud shadows. Transition probabilities of these models are estimated off-line from actual wildfires, test fires, and clouds. The state S1 is attained if the row value of the upper-most pixel in the current image frame is smaller than that of the previous frame (rise-up). If the row value of the upper-most pixel in the current image frame is larger than that of the previous frame, then S2 is attained, and this means that the region moves down. No change in the row value corresponds to S3.

Figure 1: Markov model λ1 corresponding to wildfire smoke (left) and the Markov model λ2 of clouds (right). Transition probabilities aij and bij are estimated off-line.
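A compact Python sketch of this model comparison is given below; the two transition matrices stand in for λ1 and λ2 (which in the proposed method are estimated off-line), and the feature signal is the row index of a region's upper-most pixel.

import numpy as np

# States: 0 = S1 (top pixel rises), 1 = S2 (moves down), 2 = S3 (no change).
LAMBDA1 = np.array([[0.6, 0.2, 0.2],    # placeholder wildfire-smoke model
                    [0.3, 0.4, 0.3],
                    [0.4, 0.2, 0.4]])
LAMBDA2 = np.array([[0.2, 0.4, 0.4],    # placeholder cloud/shadow model
                    [0.2, 0.5, 0.3],
                    [0.3, 0.3, 0.4]])

def state_sequence(top_rows):
    """Map the row index of a region's upper-most pixel (one value per frame)
    to the states S1/S2/S3. A smaller row index means higher in the image."""
    diffs = np.diff(np.asarray(top_rows))
    return np.where(diffs < 0, 0, np.where(diffs > 0, 1, 2))

def log_prob(states, A):
    """Log-probability of a state sequence under a Markov chain with
    transition matrix A (uniform initial distribution assumed)."""
    lp = np.log(1.0 / A.shape[0])
    for s, t in zip(states[:-1], states[1:]):
        lp += np.log(A[s, t])
    return lp

top_rows = [120, 118, 118, 115, 113, 114, 110, 108]   # example feature signal
seq = state_sequence(top_rows)
label = "smoke" if log_prob(seq, LAMBDA1) > log_prob(seq, LAMBDA2) else "cloud"
print(seq, "->", label)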

Conclusion

Fire detection systems are vital in saving people's lives and in preventing hazards before they get out of control. Particle based detectors are commonly used in such systems. These sensors are sensitive to the particles produced during a fire. An alarm is issued only if these chemicals physically reach the sensor and their presence is detected by either ionization or photometry. This requirement makes these sensors dependent on the distance to the fire as well as on their location. Besides, these sensors cannot be used outdoors. Image and video based systems can be an alternative to particle sensors for fire detection. In this work, we developed novel signal and image analysis techniques for automatic fire detection using cameras and infra-red sensors. The fire detection problem can also be viewed as a problem of recognition of a specific dynamic texture type in video. Dynamic texture and flame and smoke behavior can be modeled using stochastic methods. In this research, we used Markov models which are tailored for flame and smoke detection. We developed specific methods for flame, smoke and wildfire smoke, which have various spatio-temporal characteristics.

References

[1] D. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, 2002.
[2] D. Chetverikov and R. Peteri, "A brief survey of dynamic texture description and recognition", In Proc. of the Fourth International Conference on Computer Recognition Systems (CORES), pp. 17-26, 2005.
[3] A. Chan and N. Vasconcelos, "Layered dynamic textures", In Proc. of the Conference on Neural Information Processing Systems (NIPS), pp. 203-210, December 2005.
[4] C.-B. Liu, R.-S. Lin, N. Ahuja, and M.-H. Yang, "Dynamic textures synthesis as nonlinear manifold learning and traversing", In Proc. of the 17th British Machine Vision Conference (BMVC), pp. 859-868, 2006.
[5] C. Liu and N. Ahuja, "Vision based fire detection", In Proc. of the International Conference on Pattern Recognition (ICPR), volume 4, 2004.
[6] R. Vidal and A. Ravichandran, "Optical flow estimation and segmentation of multiple moving dynamic textures", In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pp. 516-521, 2005.
[7] H. Dreyfus, What Computers Still Can't Do: A Critique of Artificial Reason, MIT Press, 1992.
[8] M. Naphade, T. Kristjansson, B. Frey, and T. Huang, "Multimedia Objects (Multijects): A Novel Approach to Video Indexing and Retrieval in Multimedia Systems", In Proc. of the IEEE International Conference on Image Processing (ICIP), pp. 536-540, 1998.
[9] W. Phillips, M. Shah, and N. Lobo, "Flame recognition in video", Pattern Recognition Letters, 23:319-327, 2002.
[10] B. Albers and A. Agrawal, "Schlieren analysis of an oscillating gas-jet diffusion flame", Combustion and Flame, 119:84-94, 1999.
[11] W. Straumann, D. Rizzotti, and N. Schibli, "Method and Device for Detecting Fires Based on Image Analysis", European Patent EP 1,364,351, 2002.
[12] J. Davis and A. Bobick, "The representation and recognition of action using temporal templates", In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 928-934, 1997.
[13] O. Javed and M. Shah, "Tracking and object classification for automated surveillance", In Proc. of the European Conference on Computer Vision (ECCV), pp. 343-357, 2002.


22.

S. Balamurugan [1] & G. Saravanan [2]

[1] [email protected], PG Scholar, Prist University, Thanjavur-613403, Tamil Nadu

[2] [email protected], Lecturer, Prist University, Thanjavur-613403, Tamil Nadu


23. AN ADAPTIVE MOBILE LIVE VIDEO STREAMING

K. SENTHIL, M.E. (Computer Science), Final Year, [email protected]

Mr. D. SARAVANAN, M.E., Asst. Professor, CSE, PABCET

PAVENDHAR BHARATHIDHASAN COLLEGE OF ENGINEERING AND TECHNOLOGY, TRICHY

ABSTRACT

Mobile video learning is technically challenging due to the limited computational power, storage and bandwidth of mobile devices. Mobile learning is an active and vast area of research since mobile phones have a much higher penetration rate than laptop and desktop computers. The aim of this proposed work is to design a mobile learning system that streams lectures to the students' mobile devices. Due to the synchronous nature of the system, students can interact with the teacher during the lecture. The proposal is to create a framework that incorporates the third generation of cellular networks (3G), for its increased operational bandwidth of over 3 Mbps, with a failover mode of operation in General Packet Radio Service (GPRS), which is used as the streaming medium for the video lectures. All the mobile devices are connected to the central server through this framework. This framework also hosts an adaptive codec choosing mechanism that dynamically updates itself depending upon the type of the mobile phone and its connectivity medium. Codecs play a vital role since they are responsible for compression of the video and audio signals to reduce bandwidth. By adaptively choosing the appropriate type of codec dynamically, this framework proposes to provide a reliable and adaptive delivery of video lectures to the mobile clients.

Index Terms - E-Learning, Mobile Computing, Mobile Learning, Mobile Multimedia Applications, Mobile Video.

1. INTRODUCTION

1.1 BASIC TECHNOLOGY

This section describes the technological foundations of the MLVLS system design. We start by discussing the development platform, followed by an overview of GPRS, which is used as the streaming medium for the video lectures. Finally, we present the employed codecs, that is, the software used for compressing the video and audio signal to reduce bandwidth.

1.2 MOBILE USABILITY

Developing for a mobile setting brings its own challenges. The limited size of mobile devices enables portability but has consequences for usability. The small screen size, compared to laptop/desktop computers, is often quoted as making reading and viewing information difficult, even though this is not automatically the case. The limitations depend on the modality: text is read fine, but details get lost in images and video. This needs to be taken into account when transmitting visual information such as slides. In any case, designing learning materials specifically for mobile devices is a difficult and resource-intensive task [2]. In our setting, where we aim at wide adoption, it was not an option to require the teachers to author additional learning material just for the mobile learners. In such a case, only very few teachers would use the MVLS. We therefore opted for an approach that streams the standard video lectures to the mobile devices. To address readability issues, the system transmits three different views to the clients:

• Large resolution view of the teaching screen (slide-view). This view typically displays the slides and thus needs to be clearly readable. We decided to broadcast this screen in 320x240 px, a format higher than the typical screen resolution of a mobile device (usually 176x144 px). Within this screen, the user can zoom out to get a complete overview and zoom in to see details.

• Slide- and teacher-view. In this view, the user sees a close-up of the lecturer superimposed on the teaching screen (64x80 px). Thus, the student sees both the learning material as well as the teacher explaining it.

• Teacher-view. The teacher-view consists of a larger resolution view of the lecturer (176x144 px).

Fig 1.2 Three views of the mobile video

1.3 CLIENT PLATFORM

Symbian is a real-time, multi-tasking, 32-bit operating system with low power consumption and a small memory footprint. It is a stable and mature system, with support for all existing wireless data communication protocols. For these reasons, we selected it as the operating system platform for our mobile learning software.

1.4 THE GPRS NETWORK

GPRS (General Packet Radio Service) is a mobile data service used for mobile Internet access. Compared to the original GSM dial-up access based on circuit data transmission, GPRS is a packet switching technology, which provides relatively fast transmission times and offers highly efficient, low-cost wireless data services.

1.5 REUSED VIDEO AND AUDIO CODECS

Codecs compress and decompress digital data. The purpose of video/audio codecs is to reduce the size of the data to lower the bandwidth load while retaining sufficient information for the data to be understood by its consumers. Compression is especially important in a mobile context since mobile transmission channels still have limited bandwidth compared to desktop Internet connections. The H.264 video codec was chosen for the following properties:

• Low bit rate: compared to MPEG-2 and MPEG-4, the size of data compressed with H.264 is 1/8 of that of MPEG-2 and 1/3 of MPEG-4, with similar image quality. Thus, using H.264 reduces download time and data transfer costs.

• High quality: standard video data processed with H.264 has high quality when played back.


• Strong fault-tolerance: H.264 deals efficiently with errors due to unstable network environments, such as packet loss.

• Strong network adaptation ability: H.264 integrates a network adaptation layer that simplifies the transmission of H.264 documents in different networks.

Even though audio information requires less space than video, it is still large and requires compression too. The standard MPEG-4 aacPlus [30] combines three kinds of technologies: Advanced Audio Coding (AAC), Spectral Band Replication (SBR) from Coding Technologies, and Parametric Stereo (PS).

2. FRAMEWORK

In the existing system, a large number of users accessing the services can lead to high cost, high delay, etc. To avoid this, our idea is to implement the complete p2p video platform in cloud computing, which is an emerging field. By implementing it in cloud computing, many users can occupy whatever resources they want by paying for them; as a result, scalability increases and bandwidth consumption is reduced.

2.1 USER IDENTIFICATION

Three types of user can enter the system: Mobile, PC or IPTV. Mobile users connect to the server using a WAP (Wireless Application Protocol) browser, PC users via a Web browser, and IPTV users through an EGP (Exterior Gateway Protocol) server.

2.2 IDENTIFYING COMMUNICATION MODE

The user can communicate via various forms using GSM, GPRS, Bluetooth and 3G technologies. General Packet Radio Service (GPRS) is a packet-based wireless communication service that promises data rates from 56 up to 114 Kbps and continuous connection to the Internet for mobile phone and computer users. The higher data rates allow users to take part in video conferences and interact with multimedia Web sites and similar applications using mobile handheld devices as well as notebook computers. GPRS is based on Global System for Mobile (GSM) communication and complements existing services such as circuit-switched cellular phone connections and the Short Message Service (SMS). Bluetooth is an open wireless technology standard for exchanging data over short distances (using short wavelength radio transmissions) from fixed and mobile devices, creating personal area networks (PANs) with high levels of security. International Mobile Telecommunications-2000 (IMT-2000), better known as 3G or 3rd Generation, is a generation of standards for mobile phones and mobile telecommunications services fulfilling specifications by the International Telecommunication Union [1]. Application services include wide-area wireless voice telephony, mobile Internet access, video calls and mobile TV, all in a mobile environment. Compared to the older 2G and 2.5G standards, a 3G system must allow simultaneous use of speech and data services, and provide peak data rates of at least 200 Kbit/s according to the IMT-2000 specification. Recent 3G releases, often denoted 3.5G and 3.75G, also provide mobile broadband access of several Mbit/s to laptop computers.

Fig 2.1 Video Streaming To Multi Terminal Users

2.3 VIDEO CONVERSION

FFMPEG is a command line converter that implements a decoder and then an encoder, enabling the user to convert files from one container/codec combination to another format. FFMPEG can also do a few basic manipulations on the audio and video data just before it is re-encoded by the target codecs. These manipulations include changing the sample rate of the audio and advancing or delaying it with respect to the video. They also [5] include changing the frame rate of the resulting video, cropping it, resizing it, placing bars left and right and/or top and bottom in order to pad it when necessary, or changing the aspect ratio of the picture (if the container supports anamorphic display, which not all do). Furthermore, FFMPEG allows importing audio and video from different sources.
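As a rough illustration only (the exact codecs, resolutions and options used by the system are not given here, so the values below are assumptions), a conversion to a mobile-friendly H.264/AAC file at the slide-view resolution could be driven from Python like this:

import subprocess

def convert_for_mobile(src, dst, size="320x240", fps=15, vbitrate="300k"):
    """Re-encode a lecture recording for a mobile client with FFMPEG.
    The codec and rate choices are illustrative, not the system's settings."""
    cmd = [
        "ffmpeg",
        "-i", src,            # input container/codec is auto-detected
        "-s", size,           # resize to the slide-view resolution
        "-r", str(fps),       # reduce the frame rate
        "-b:v", vbitrate,     # target video bit rate
        "-c:v", "libx264",    # H.264 video
        "-c:a", "aac",        # AAC audio
        dst,
    ]
    subprocess.run(cmd, check=True)

# convert_for_mobile("lecture.avi", "lecture_mobile.mp4")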

2.4 VIDEO STREAMING IN PC

2.4.1 True streaming

The Real Time Streaming Protocol (RTSP) is a network control protocol designed for use in entertainment and communications systems to control streaming media servers. The protocol is used for establishing and controlling media sessions between end points. Clients of media servers issue VCR-like commands, such as play and pause, to facilitate real-time control of the playback of media files from the server. The transmission of the streaming data itself is not a task of RTSP. Most RTSP servers use the Real-time Transport Protocol (RTP) for media stream delivery; however, some vendors implement proprietary transport protocols.

2.4.2 HTTP STREAMING

This is the simplest and cheapest way to stream video from a website. Small to medium-sized websites are more likely to use this method than the more expensive streaming servers. To create HTTP streaming video:
* Create a video file in a common streaming media format.
* Upload the file to your web server.
* Make a simple hyperlink to the video file, or use special HTML tags to embed the video in a web page.

2.5 USER AUTHENTICATION

User authentication is done using the username and password entered by the user on a mobile or a PC. The information entered by the user is verified by connecting to a database. The database connection is made with or without a Data Source Name. Once the user has been verified, the FFMPEG command line converter lets the user receive video in the many formats supported by their mobile.

CONCLUSION:

Video streaming on a personal computer, video conversion from one format to another, and database connection for authentication have been implemented. The modules still to be completed are video streaming on mobile as well as IPTV streaming, and the whole p2p video platform is to be implemented in cloud computing. In future, the system can be connected with pervasive computing to make it easy to use.


REFERENCES:

[1] Y. Liu, Y. Guo, and C. Liang, "A Survey on Peer-to-Peer Video Streaming Systems".
[2] H. Yin, C. Lin, Q. Zhang, Z. J. Chen, and D. Wu, "TrustStream: A Secure and Scalable Architecture for Large-scale Internet Media Streaming", IEEE Transactions on Circuits and Systems for Video Technology (CSVT), Vol. 18, No. 12, December 2008.
[3] G. Marfia, G. Pau, P. Di Rico, and M. Gerla, "P2P Streaming Systems: A Survey and Experiments", Computer Science Department, University of California Los Angeles, CA 90095, and Dipartimento di Ingegneria Informatica, Elettronica e delle Telecomunicazioni, Universita degli Studi di Pisa.
[4] J. C. Liu, S. Rao, B. Li, and H. Zhang, "Opportunities and Challenges of Peer-to-Peer Internet Video Broadcast", Proceedings of the IEEE, Special Issue on Recent Advances in Distributed.
[5] C. Huang, J. Li, and K. W. Ross, "Can Internet VoD be Profitable?", in Proc. of ACM SIGCOMM 2007, Kyoto, 2007.
[6] S. Banerjee, B. Bhattacharjee, and C. Kommareddy, "Scalable Application Layer Multicast", in ACM SIGCOMM 2002, pp. 43-51.
[7] Y. Chu, S. Rao, and H. Zhang, "A Case For End System Multicast", Proceedings of ACM Sigmetrics, Santa Clara, CA, pp. 1-12, 2000.
[8] D. Wu, C. Liang, Y. Liu, and K. W. Ross, "View-Upload Decoupling: A Redesign of Multi-Channel Video Streaming Systems", IEEE Infocom Mini-Conference, Rio de Janeiro, April 2009.
[9] C. Feng and B. C. Li, "On Large-Scale Peer-to-Peer Streaming Systems with Network Coding", in Proceedings of ACM Multimedia 2008, Vancouver, British Columbia, October 27 - November 1, 2008.


24. A comprehensive study of Credit Card Fraud Detection Techniques using statistical tools

V. MAHALAKSHMI, Dept. of Computer Science & Eng., BUTP, Trichy-23

J. MARIA SHILPA*, Dept. of Computer Science & Eng., BUTP, Trichy-23

K. BASKAR**, Dept. of Computer Science & Eng., BUTP, Trichy-23

ABSTRACT

In this advanced electronic world, the usage of credit cards has increased significantly, and the credit card has become the most important and most popular mode of payment for online as well as offline transactions; cases of fraud associated with it are also rising. In this survey paper we analyze, categorize, compare, summarize and evaluate, based on recently published research papers, credit card fraud detection methods and techniques such as the Distributed Data Mining method, the Neural Data Mining method, the Behavior-Based Credit Card Fraud Detecting Model, the Detection Model Based on Similar Coefficient Sum, the Detection Model Based on Distance Sum, the BLAST-SSAHA Hybridization Model, the Web Services-Based Collaborative Scheme, and fraud detection using the Hidden Markov Model.

Keywords: Data mining, credit card fraud, Hidden Markov Model, Neural Network.

1. INTRODUCTION & MOTIVATION

The popularity of online shopping is growing day by day. According to an ACNielsen study conducted in 2005, one-tenth of the world's population is shopping online [1]. Germany and Great Britain have the largest number of online shoppers, and the credit card is the most popular mode of payment (59 percent). About 350 million transactions per year were reportedly carried out by Barclaycard, the largest credit card company in the United Kingdom, toward the end of the last century [2]. Retailers like Wal-Mart typically handle a much larger number of credit card transactions, including online and regular purchases. As the number of credit card users rises world-wide, the opportunities for attackers to steal credit card details and, subsequently, commit fraud are also increasing. The total credit card fraud in the United States itself is reported to be $2.7 billion in 2005 and estimated to be $3.0 billion in 2006, out of which $1.6 billion and $1.7 billion, respectively, are the estimates of online fraud [3]. Credit-card-based purchases can be categorized into two types: 1) physical card and 2) virtual card. In a physical-card-based purchase, the cardholder presents his card physically to a merchant for making a payment. To carry out fraudulent transactions in this kind of purchase, an attacker has to steal the credit card. If the cardholder does not realize the loss of the card, it can lead to a substantial financial loss to the credit card company. In the second kind of purchase, only some important information about a card (card number, expiration date, secure code) is required to make the payment. Such purchases are normally done on the Internet or over the telephone. To commit fraud in these types of purchases, a fraudster simply needs to know the card details. Most of the time, the genuine cardholder is not aware that someone else has seen or stolen his card information. The only way to detect this kind of fraud is to analyze the spending patterns on every card and to figure out any inconsistency with respect to the "usual" spending patterns. Fraud detection based on the analysis of existing purchase data of the cardholder is a promising way to reduce the rate of successful credit card frauds.


Since humans tend to exhibit specific behaviorist profiles, every cardholder can be represented by a set of patterns containing information about the typical purchase category, the time since the last purchase, the amount of money spent, etc. Deviation from such patterns is a potential threat to the system. Several techniques for the detection of credit card fraud have been proposed in the last few years. We briefly review some of them.

2. RELATED WORK ON CREDIT CARD FRAUD DETECTION

Credit card fraud detection has drawn a lot of research interest, and a number of techniques, with special emphasis on data mining and neural networks, have been suggested. Ghosh and Reilly [4] have proposed credit card fraud detection with a neural network. They have built a detection system which is trained on a large sample of labeled credit card account transactions. These transactions contain example fraud cases due to lost cards, stolen cards, application fraud, counterfeit fraud, mail-order fraud, and non-received issue (NRI) fraud. Recently, Syeda et al. [5] have used parallel granular neural networks (PGNNs) for improving the speed of the data mining and knowledge discovery process in credit card fraud detection. A complete system has been implemented for this purpose. Stolfo et al. [6] suggest a credit card fraud detection system (FDS) using metalearning techniques to learn models of fraudulent credit card transactions. Metalearning is a general strategy that provides a means for combining and integrating a number of separately built classifiers or models. A metaclassifier is thus trained on the correlation of the predictions of the base classifiers. The same group has also worked on a cost-based model for fraud and intrusion detection [7]. They use Java Agents for Metalearning (JAM), which is a distributed data mining system for credit card fraud detection. A number of important performance metrics like True Positive-False Positive (TP-FP) spread and accuracy have been defined by them. Aleskerov et al. [8] present CARDWATCH, a database mining system used for credit card fraud detection. The system, based on a neural learning module, provides an interface to a variety of commercial databases. Kim and Kim have identified the skewed distribution of data and the mix of legitimate and fraudulent transactions as the two main reasons for the complexity of credit card fraud detection [9]. Based on this observation, they use the fraud density of real transaction data as a confidence value and generate a weighted fraud score to reduce the number of misdetections. Fan et al. [10] suggest the application of distributed data mining in credit card fraud detection. Brause et al. [11] have developed an approach that involves advanced data mining techniques and neural network algorithms to obtain high fraud coverage. Chiu and Tsai [12] have proposed Web services and data mining techniques to establish a collaborative scheme for fraud detection in the banking industry. With this scheme, participating banks share knowledge about the fraud patterns in a heterogeneous and distributed environment. To establish a smooth channel of data exchange, Web services techniques such as XML, SOAP, and WSDL are used. Phua et al. [13] have done an extensive survey of existing data-mining-based FDSs and published a comprehensive report. Prodromidis and Stolfo [14] use an agent-based approach with distributed learning for detecting frauds in credit card transactions. It is based on artificial intelligence and combines inductive learning algorithms and metalearning methods for achieving higher accuracy. Phua et al. [15] suggest a metaclassifier approach for skewed fraud data, with Back Propagation neural networks among the base classifiers; the metaclassifier is used to determine which classifier should be considered based on the skewness of the data. Although they do not directly use credit card fraud detection as the target application, their approach is quite generic. Vatsa et al. [16] have recently proposed a game-theoretic approach to credit card fraud detection. They model the interaction between an attacker and an FDS as a multistage game between two players, each trying to maximize his payoff.


The problem with most of the above-mentioned approaches is that they require labeled data for both genuine as well as fraudulent transactions to train the classifiers. Getting real-world fraud data is one of the biggest problems associated with credit card fraud detection. Also, these approaches cannot detect new kinds of frauds for which labeled data is not available. In contrast, we present a Hidden Markov Model (HMM)-based credit card FDS, which does not require fraud signatures and yet is able to detect frauds by considering a cardholder's spending habit. We model a credit card transaction processing sequence by the stochastic process of an HMM. The details of items purchased in individual transactions are usually not known to an FDS running at the bank that issues credit cards to the cardholders. This can be represented as the underlying finite Markov chain, which is not observable. The transactions can only be observed through the other stochastic process that produces the sequence of the amount of money spent in each transaction. Hence, an HMM is an ideal choice for addressing this problem. Another important advantage of the HMM-based approach is a drastic reduction in the number of False Positives (FPs), that is, transactions identified as malicious by an FDS although they are actually genuine. Since the number of genuine transactions is a few orders of magnitude higher than the number of malicious transactions, an FDS should be designed in such a way that the number of FPs is as low as possible. Otherwise, due to the "base rate fallacy" effect [17], bank administrators may tend to ignore the alarms.

3. BACKGROUND

This section highlights the types of fraudsters and the affected industries.

3.1 Fraudsters

Figure 1: Hierarchy chart of white-collar crime perpetrators from both firm-level and community-level perspectives.

With reference to Figure 1, the profit-motivated fraudster interacts with the affected business. Traditionally, each business is always susceptible to internal fraud or corruption from its management (high-level) and non-management employees (low-level). In addition to internal and external audits for fraud control, data mining can also be utilised as an analytical tool. The fraudster can also be an external party, or parties, committing fraud in the form of a prospective/existing customer (consumer) or a prospective/existing supplier (provider). The external fraudster has three basic profiles: the average offender, the criminal offender, and the organised crime offender. Average offenders display random and/or occasional dishonest behavior when there is opportunity or sudden temptation, or when suffering from financial hardship.

3.2 Schematic processing path

Fig 2 Schematic processing path

3.3 Affected Commercial Industries


Figure 3: Bar chart of fraud types from 51 unique and published fraud detection papers. The most recent publication is used to represent previous similar publications by the same author(s).

Figure 3 details the subgroups of internal, insurance, credit card, and telecommunications fraud detection. Internal fraud detection is concerned with determining fraudulent financial reporting by management (Lin et al, 2003; Bell and Carcello, 2000; Fanning and Cogger, 1998; Summers and Sweeney, 1998; Beneish, 1997; Green and Choi, 1997), and abnormal retail transactions by employees (Kim et al, 2003). There are four subgroups of insurance fraud detection: home insurance (Bentley, 2000; Von Altrock, 1997), crop insurance (Little et al, 2002), automobile insurance (Phua et al, 2004; Viaene et al, 2004; Brockett et al, 2002; Stefano and Gisella, 2001; Belhadji et al, 2000; Artis et al, 1999), and medical insurance (Yamanishi et al, 2004; Major and Riedinger, 2002; Williams, 1999; He et al, 1999; Cox, 1995). Credit fraud detection refers to screening credit applications (Wheeler and Aitken, 2000) and/or logged credit card transactions (Fan, 2004; Chen et al, 2004; Chiu and Tsai, 2004; Foster and Stine, 2004; Kim and Kim, 2002; Maes et al, 2002; Syeda et al, 2002; Bolton and Hand, 2001; Bentley et al, 2000; Brause et al, 1999; Chan et al, 1999; Aleskerov et al, 1997; Dorronsoro et al, 1997; Kokkinaki, 1997; Ghosh and Reilly, 1994). Similarly, in telecommunications fraud detection, subscription data (Cortes et al, 2003; Cahill et al, 2002; Moreau and Vandewalle, 1997; Rosset et al, 1999) and/or wire-line and wire-less phone calls (Kim et al, 2003; Burge and Shawe-Taylor, 2001; Fawcett and Provost, 1997; Hollmen and Tresp, 1998; Moreau et al, 1999; Murad and Pinkas, 1999; Taniguchi et al, 1998; Cox, 1997; Ezawa and Norton, 1996) are monitored.

4. METHODS AND TECHNIQUES

This section examines seven major methods commonly used, and their corresponding techniques and algorithms, for credit card fraud detection.

Figure 4: Structured diagram of the possible data for analysis. Data mining approaches can utilise training/testing data with labels, with only legal examples, or with no labels to predict/describe the evaluation data.

Figure 4 shows that many existing fraud detection systems typically operate by adding fraudulent claims/applications/transactions/accounts/sequences (A) to "black lists" used to match likely frauds in new instances (E). Some use hard-coded rules which each transaction should meet, such as matching addresses and phone numbers, and price and amount limits (Sherman, 2002). An interesting idea borrowed from spam detection (Fawcett, 2003, p. 144, figure 5) is to understand the temporal nature of fraud in the "black lists" by tracking the frequency of terms and the category of terms (style or strategy of the fraudster) found in the attributes of fraudulent examples over time. The complex nature of the data used for fraud detection in general is outlined below (Fawcett, 2003; 1997):

• The volume of both the fraud and legal classes will fluctuate independently of each other; therefore class distributions (the proportion of illegitimate examples to legitimate examples) will change over time.

• Multiple styles of fraud can happen at around the same time.

• Each style can have a regular, occasional, seasonal, or once-off temporal characteristic.

• Legal characteristics/behaviour can change over time.

• Within the near future after uncovering the current modus operandi of professional fraudsters, these same fraudsters will continually supply new or modified styles of fraud until the detection systems start generating false negatives again.

4.1 Behavior-Based Credit Card Fraud Detecting Model

In this approach, a credit card fraud detection model is established based on the behavior patterns of the credit card holder. Unlike traditional models, which were based on demographic and economic information, this model detects credit card fraud using the historical behavior patterns of the credit card.

4.1.1 Detection Model

Figure 5 Credit Card Fraud Detection Process

When a credit card is used to withdraw money from a bank or an Automatic Teller Machine (ATM), the behavior information is collected and the historical transaction data of this credit card are queried from the credit card transaction database. These data are then pre-processed so that the fraud detection algorithms can work on them. The processed data are sent as input to the fraud detection algorithms, which compare the new behavior with the historical behavior pattern. If the new behavior is similar to the historical behavior pattern, it is considered legal; otherwise it is a suspected behavior. With the detection results, the bank can take planned actions, such as declining the request or calling the cardholder.
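A toy Python sketch of this comparison step is given below; the feature choice (amount, hour of day, merchant category) and the z-score threshold are assumptions made for illustration, not the model described above.

import numpy as np

def is_suspicious(history, new_txn, z_threshold=3.0):
    """Compare a new transaction with the cardholder's historical behavior.

    history : array of shape (n, d) with one feature vector per past
              transaction (e.g. amount, hour of day, merchant category code).
    new_txn : feature vector of the incoming transaction.
    Returns True when the new behavior deviates strongly from the history.
    """
    history = np.asarray(history, dtype=float)
    mean = history.mean(axis=0)
    std = history.std(axis=0) + 1e-9          # avoid division by zero
    z = np.abs((np.asarray(new_txn, dtype=float) - mean) / std)
    return bool(z.max() > z_threshold)

# Example: mostly small daytime purchases, then a large late-night one.
past = [[25, 14, 5411], [40, 12, 5411], [18, 19, 5812], [32, 13, 5411]]
print(is_suspicious(past, [900, 3, 7995]))    # -> True (flag for review)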

4.4.2 Neural Data Mining Approach for Credit Card Fraud Detection

In this research, one major obstacle to using neural network training techniques is the high diagnostic quality that is necessary: because of the class proportions of credit card transactions, completely new concepts had to be developed and tested on real credit card data. This research shows how advanced data mining techniques and a neural network algorithm can be combined successfully to obtain high fraud coverage together with a low false alarm rate.

4.4.3 Generalizing and weighting the association rules

In contrast to standard basket prediction association rules [19], [20], the research goal does not consist of generating long association rules but of shortening the raw associations by generalizing them to the most common types of transactions. Although generalizations are common in symbolic AI, here one starts with the database of fraud transactions and compares each transaction with all others in order to find pairs of similar ones. Each pair is then merged into a generalized rule by replacing a non-identical feature with a 'don't-care' symbol '*'. By doing so, a generalization process evolves, as explained in Fig. 6. Here, the generalization of two transactions with the feature tuples x1 = (F,D,C,D,A) and x2 = (F,D,G,D,A) (dotted circle) to the rule (F,D,*,D,A), and further up to (F,*,*,D,A) and to (*,*,*,D,*), is shown. Thus, each generalization provides at least one 'don't-care' symbol for an unimportant feature, increases the generalization level by one, and shortens the rule by excluding one feature. All generalizations which have not been generalized themselves are the roots of the sub-graphs, forming a tree.


Fig.6 The generalization graph
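A small Python sketch of this pairwise generalization is shown below; the tuple encoding and the similarity criterion (exactly one differing feature per merge) are assumptions used for illustration.

def generalize(t1, t2):
    """Merge two transaction feature tuples into a generalized rule by
    replacing every non-identical feature with the don't-care symbol '*'.
    Returns None unless the tuples differ in exactly one position."""
    if len(t1) != len(t2):
        return None
    merged = tuple(a if a == b else "*" for a, b in zip(t1, t2))
    return merged if sum(a != b for a, b in zip(t1, t2)) == 1 else None

x1 = ("F", "D", "C", "D", "A")
x2 = ("F", "D", "G", "D", "A")
rule = generalize(x1, x2)
print(rule)                                          # ('F', 'D', '*', 'D', 'A')
print(generalize(rule, ("F", "B", "*", "D", "A")))   # ('F', '*', '*', 'D', 'A')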

4.4.4 Mining the analog data

Each transaction is characterized by symbolic and analog data. So far only the symbolic part of the transactions has been used; the analog part contains the transaction time, credit amount, etc. The problem of fraud diagnosis can be seen as separating two kinds or classes of events: the good and the bad transactions. The problem is indeed a classification problem. One major approach for dynamic classification with demand-driven classification boundaries is to learn the classification parameters, i.e. the classification boundaries, by an adaptive process. Learning is the domain of artificial neural networks, and a special model of them is used to perform the task.

4.5 A Web Services-Based Collaborative Scheme for Credit Card Fraud Detection

A web services-based collaborative scheme for credit card fraud detection is proposed in this approach. With the proposed scheme, participating banks can share knowledge about fraud patterns in a heterogeneous and distributed environment and further enhance their fraud detection capability and reduce financial loss.

4.6 BLAST-SSAHA Hybridization for Credit Card Fraud Detection

This approach proposes a two-stage sequence alignment in which a profile analyzer (PA) first determines the similarity of an incoming sequence of transactions on a given credit card with the genuine cardholder's past spending sequences. The unusual transactions traced by the profile analyzer are then passed on to a deviation analyzer (DA) for possible alignment with past fraudulent behavior. The final decision about the nature of a transaction is taken on the basis of the observations by these two analyzers. In order to achieve online response time for both PA and DA, the approach combines the two sequence alignment algorithms BLAST and SSAHA. In the credit card application, the cardholder's past transactions form profile sequences. The last few transactions, including the incoming one, are taken as a query sequence which is aligned with the profile sequences. Fraudulent transactions in the query sequence that do not match the cardholder's profile form a fraud sequence.

4.7 Credit Card Fraud Detection Using Hidden Markov Model

In this approach, the sequence of operations in credit card transaction processing is modeled using a Hidden Markov Model (HMM), and it is shown how the model can be used for the detection of frauds. An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, the system tries to ensure that genuine transactions are not rejected. Detailed experimental results show the effectiveness of this approach and compare it with other techniques available in the literature.

4.7.1 Use of HMM for Credit Card Fraud Detection

An FDS runs at a credit card issuing bank. Each incoming transaction is submitted to the FDS for verification. The FDS receives the card details and the value of the purchase to verify whether the transaction is genuine or not. The types of goods that are bought in that transaction are not known to the FDS. It tries to find any anomaly in the transaction based on the spending profile of the cardholder, the shipping address, the billing address, etc. If the FDS confirms the transaction to be malicious, it raises an alarm, and the issuing bank declines the transaction.


The concerned cardholder may then be contacted and alerted about the possibility that the card is compromised. To map the credit card transaction processing operation in terms of an HMM, we first decide the observation symbols in the model. We quantize the purchase values x into M price ranges V1, V2, ..., VM, forming the observation symbols at the issuing bank. The actual price range for each symbol is configurable based on the spending habit of individual cardholders. These price ranges can be determined dynamically by applying a clustering algorithm on the values of each cardholder's transactions. We use Vk, k = 1, 2, ..., M, to represent both the observation symbol and the corresponding price range. In this work, only three price ranges are considered, namely low (l), medium (m), and high (h). The set of observation symbols is therefore V = {l, m, h}, making M = 3. A credit cardholder makes different kinds of purchases of different amounts over a period of time. One possibility is to consider the sequence of transaction amounts and look for deviations in them.
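The Python sketch below illustrates this symbol mapping and the acceptance test with a hand-rolled forward algorithm; the percentile-based price ranges, the model matrices, and the likelihood-drop threshold are all illustrative assumptions, not the trained parameters of the FDS.

import numpy as np

def quantize(amounts, history):
    """Map transaction amounts to symbols 0 (low), 1 (medium), 2 (high)
    using the cardholder's own spending history (a percentile split is an
    illustrative stand-in for the clustering step)."""
    lo, hi = np.percentile(history, [33, 66])
    return np.digitize(amounts, [lo, hi])

def forward_loglik(obs, A, B, pi):
    """Scaled forward algorithm: log P(obs | HMM with parameters A, B, pi)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Placeholder parameters; in the described approach they come from
# Baum-Welch training on the cardholder's genuine transaction sequence.
A = np.array([[0.6, 0.3, 0.1], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
B = np.array([[0.7, 0.2, 0.1], [0.3, 0.5, 0.2], [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.3, 0.2])

history = [20, 35, 15, 50, 42, 28, 33, 60, 25, 45]
old_seq = quantize(history[-5:], history)
new_seq = quantize(history[-4:] + [950], history)   # incoming large purchase

# Flag the incoming transaction if adding it drops the likelihood too much.
drop = forward_loglik(old_seq, A, B, pi) - forward_loglik(new_seq, A, B, pi)
print("fraud alarm" if drop > 2.0 else "accept")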

Fig. 7. HMM for credit card fraud detection

After deciding the state and symbol representations, the next step is to determine the probability matrices A, B, and π so that the representation of the HMM is complete. These three model parameters are determined in a training phase using the Baum-Welch algorithm [18]. The initial choice of parameters affects the performance of this algorithm and, hence, they should be chosen carefully. We consider the special case of a fully connected HMM in which every state of the model can be reached in a single step from every other state, as shown in Fig. 7. Gr, El, Mi, etc., are names given to the states to denote purchase types like Groceries, Electronic items, and Miscellaneous purchases. Spending profiles of the individual cardholders are used to obtain an initial estimate of the probability matrix.

5. CONCLUSION & FUTURE WORK

given to the states to denote purchase types like Groceries, Electronic items, and Miscellaneous purchases. Spending profiles of the individual cardholders are used to obtain an initial estimate for probability matrix. 5. CONCLUSION & FUTURE WORK

This research presents methods and techniques together with their problems. Compared to all related reviews of credit card fraud detection, this survey paper analyzes and evaluates the following credit card fraud detection methods and techniques: the Distributed Data Mining method, the Neural Data Mining method, the Behavior-Based Credit Card Fraud Detecting Model, the Detection Model Based on Similar Coefficient Sum, the Detection Model Based on Distance Sum, the BLAST-SSAHA Hybridization Model, the Web Services-Based Collaborative Scheme, and fraud detection using the Hidden Markov Model. It is the only one which proposes the Hidden Markov Model as one of the most effective solutions for credit card fraud detection, because an HMM can detect whether an incoming transaction is fraudulent or not. Experimental results show the performance and effectiveness of this system and demonstrate the usefulness of learning the spending profile of the cardholders. Comparative studies reveal that the accuracy of the system is higher than that of other methods.

REFERENCES

[1] "Global Consumer Attitude Towards On-Line Shopping," http://www2.acnielsen.com/reports/documents/2005_cc_online shopping.pdf, Mar. 2007.
[2] D. J. Hand, G. Blunt, M. G. Kelly, and N. M. Adams, "Data Mining for Fun and Profit," Statistical Science, vol. 15, no. 2, pp. 111-131, 2000.
[3] "Statistics for General and On-Line Card Fraud," http://www.epaynews.com/statistics/fraud.html, Mar. 2007.


[4] S. Ghosh and D.L. Reilly, “Credit Card Fraud Detection with a Neural-Network,” Proc. 27th Hawaii Int’l Conf. System Sciences: Information Systems: Decision Support and Knowledge-Based Systems, vol. 3, pp. 621-630, 1994. [5] M. Syeda, Y.Q. Zhang, and Y. Pan, “Parallel Granular Networks for Fast Credit Card Fraud Detection,” Proc. IEEE Int’l Conf. Fuzzy Systems, pp. 572-577, 2002. [6] S.J. Stolfo, D.W. Fan, W. Lee, A.L. Prodromidis, and P.K. Chan, “Credit Card Fraud Detection Using Meta-Learning: Issues and Initial Results,” Proc. AAAI Workshop AI Methods in Fraud and Risk Management, pp. 83-90, 1997. [7] S.J. Stolfo, D.W. Fan, W. Lee, A. Prodromidis, and P.K. Chan, “Cost-Based Modeling for Fraud and Intrusion Detection: Results from the JAM Project,” Proc. DARPA Information Survivability Conf. and Exposition, vol. 2, pp. 130-144, 2000. [8] E. Aleskerov, B. Freisleben, and B. Rao, “CARDWATCH: A Neural Network Based Database Mining System for Credit Card Fraud Detection,” Proc. IEEE/IAFE: Computational Intelligence for Financial Eng., pp. 220-226, 1997. [9] M.J. Kim and T.S. Kim, “A Neural Classifier with Fraud Density Map for Effective Credit Card Fraud Detection,” Proc. Int’l Conf. Intelligent Data Eng. and Automated Learning, pp. 378-383, 2002. [10] W. Fan, A.L. Prodromidis, and S.J. Stolfo, “Distributed Data Mining in Credit Card Fraud Detection,” IEEE Intelligent Systems, vol. 14, no. 6, pp. 67-74, 1999. [11] R. Brause, T. Langsdorf, and M. Hepp, “Neural Data Mining for Credit Card Fraud Detection,” Proc. IEEE Int’l Conf. Tools with Artificial Intelligence, pp. 103-106, 1999.

[12] C. Chiu and C. Tsai, "A Web Services-Based Collaborative Scheme for Credit Card Fraud Detection," Proc. IEEE Int'l Conf. e-Technology, e-Commerce and e-Service, pp. 177-181, 2004.
[13] C. Phua, V. Lee, K. Smith, and R. Gayler, "A Comprehensive Survey of Data Mining-Based Fraud Detection Research," http://www.bsys.monash.edu.au/people/cphua/, Mar. 2007.
[14] S. Stolfo and A.L. Prodromidis, "Agent-Based Distributed Learning Applied to Fraud Detection," Technical Report CUCS-014-99, Columbia Univ., 1999.
[15] C. Phua, D. Alahakoon, and V. Lee, "Minority Report in Fraud Detection: Classification of Skewed Data," ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 50-59, 2004.
[16] V. Vatsa, S. Sural, and A.K. Majumdar, "A Game-Theoretic Approach to Credit Card Fraud Detection," Proc. First Int'l Conf. Information Systems Security, pp. 263-276, 2005.
[17] Teuvo Kohonen, Self-Organizing Maps, Springer, 2000.
[19] Wang Xi, "Some Ideas about Credit Card Fraud Prediction," China Trial, Apr. 2008, pp. 74-75.
[20] Liu Ren, Zhang Liping, and Zhan Yinqiang, "A Study on Construction of Analysis Based CRM System," Computer Applications and Software, vol. 21, Apr. 2004, pp. 46-47.
[21] S. Ramaswamy, R. Rastogi, and K. Shim, "Efficient Algorithm for Mining Outliers from Large Data Sets," Proc. ACM SIGMOD Conference, 2000, pp. 473-438.
[22] R. Agrawal, T. Imielinski, and A.N. Swami, "Mining Association Rules Between Sets of Items in Large Databases," Proc. 1993 ACM SIGMOD Int. Conf. on Management of Data, pp. 207-216, 1993.


[23] XML Schema, http://www.w3.org/xml/schema
[24] Predictive Model Markup Language, http://www.dmg.org/pmml-v2-0.htm
[26] L. Kaufman and P.J. Rousseeuw, Finding Groups in Data: An Introduction to Cluster Analysis, Wiley Series in Probability and Math. Statistics, 1990.
[27] L.R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proc. IEEE, vol. 77, no. 2, pp. 257-286, 1989.

25. A Comparative Study of Algorithms for Forest Fire Detection

K. BASKAR (Research Student, Dept. of Computer Science & Eng., Bharathidasan University, Trichy-23), Prof. GEETHA (Professor, Dept. of Computer Science & Eng., Bharathidasan University, Trichy-23), and Prof. SHANTHA ROBINSON (Professor and HOD of Statistics, Periyar E.V.R. College, Trichy-23)

Abstract.

In this e-world, the environment is being polluted day by day due to various factors; one of the causes is forest fire. The application of remote sensing and satellites is at present a significant method for forest fire monitoring, particularly in huge and remote areas. Different methods have been presented by researchers for forest fire detection. Forest fire is related to local climate, radiation balance, biogeochemistry, hydrology, and the diversity and abundance of terrestrial species. This paper presents a comparative study of three time-series-based algorithms for detecting forest fires. In particular, one of the algorithms significantly outperforms the other two since it accounts for variability in the time series.

Keywords: Artificial Neural Networks, Forest Fire Detection, Remote Sensing, Spatial Data Mining, Time Series

1. Introduction

The climate and earth sciences have recently undergone a rapid transformation from a data-poor to a data-rich environment. In particular, climate- and ecosystem-related observations from remote sensors on satellites, as well as outputs of climate or earth system models from large-scale computational platforms, provide terabytes of temporal, spatial and spatio-temporal data. These massive and information-rich datasets offer huge potential for advancing the science of land cover change, climate change and anthropogenic impacts.

The process of identifying previously hidden but valuable information in vast spatial databases is known as spatial data mining [1]. The development of novel techniques and tools that assist humans in transforming data into useful knowledge has been the heart of the comparatively new and interdisciplinary research area called Knowledge Discovery in Databases (KDD) [2]. Data mining techniques benefit a number of fields such as marketing, manufacturing, process control, fraud detection and network management. Beyond this, data mining handles a huge variety of data sets such as market basket data, web data, DNA data, text data, and spatial data [3].




Spatial data mining is more complex than conventional numeric and categorical data mining because of the nature of spatial data types, spatial relationships, and spatial autocorrelation. Spatial data mining techniques involve visual interpretation and analysis, spatial and attribute query and selection, characterization, generalization and classification, detection of spatial and non-spatial association rules, clustering analysis and spatial regression, in addition to a wide variety of other methods [4]. Data mining combines machine learning, pattern recognition, statistics, databases, and visualization techniques into a single unit so as to enable efficient information extraction from large databases [5]. Ecosystem-related observations from remote sensors on satellites offer huge potential for understanding the location and extent of global land cover change; Boriah et al. [6] present a comparative study of three time-series-based algorithms for detecting changes in land cover. Some distinguishing characteristics of spatial data that forbid the usage of regular data mining algorithms include: (i) rich data types (e.g., extended spatial objects), (ii) inherent spatial relationships between the variables, (iii) other factors that influence the observations, and (iv) spatial autocorrelation among the characteristics [7]. In general, data mining is classified into two categories, namely descriptive and predictive data mining.

The process of drawing out the essential features and properties of the data from the database is called descriptive data mining. Examples of descriptive mining techniques include clustering, association and sequential mining. In predictive mining, patterns are inferred from data so as to make predictions. Common predictive mining techniques include classification, regression and deviation detection [8]. More recently, several time series change detection techniques have been explored in the context of land cover change detection. Lunetta et al. [9] presented a change detection study that uses MODIS data and evaluated its performance for identifying land cover change in North Carolina.


Kucera et al. [10] describe the use of CUSUM for land cover change detection; however, no qualitative or quantitative evaluation was performed. The Recursive Merging algorithm proposed by Boriah et al. [11] follows a segmentation approach to the time series change detection problem and takes the characteristics of ecosystem data into account. They provide a qualitative evaluation using MODIS EVI (Enhanced Vegetation Index) data for the state of California and MODIS FPAR (Fraction of Photosynthetically Active Radiation) data globally.

2. Related Work

There is an extensive literature on time series change detection that can, in principle, be applied to the forest fire detection problem. Time-series-based change detection has significant advantages over the comparison of snapshot images of selected dates, since it can take into account information about the temporal dynamics of landscape changes. In these schemes, detection of changes is based on the pattern of spectral response of the landscape over time rather than the differences between two or more images collected on different dates. Therefore, additional parameters such as the rate of the change (e.g. a sudden forest fire vs. gradual logging), the extent, and the pattern of regrowth can be derived. By contrast, for image-based approaches, changes that occur outside the image acquisition windows are not detected, it is difficult to identify when the changes occurred, and information about ongoing landscape processes cannot be derived.

Parameter change: In this setting, the time series is expected to follow a particular distribution and any significant departure from that distribution is flagged as a change. Fang et al. [19] presented a parameter-change-based approach for land cover change detection. CUSUM (and its variants) is the most well-known technique of the parameter change approach.

Segmentation: The goal of the segmentation problem is to partition the input time series into homogeneous segments (the subsequence within a segment is contiguous). Segmentation is essentially a special case of change detection since by definition, successive segments are not homogeneous, which means there is likely to be a change point between the segments. Recursive merging follows a segmentation-based approach to change detection.

Predictive: These approaches to change detection are based on the assumption that one can learn a model for a portion of the input time series and detect change based on deviation from the model. The underlying model can range from relatively simple smoothing models to more sophisticated filtering and state-space models.

The change detection algorithm used to generate the Burned Area Product (a well-known MODIS data set) follows a predictive approach.

3. Forest fire detection

Fires have been a source of trouble for a long time. Being a principal constituent of a huge number of forest ecosystems, fires have a remarkable influence over the ecological and economic utility of the forest [12]. Forest and wildland fires have been witnessed throughout history and have played a remarkable role in determining landscape structure, pattern and, eventually, the species composition of ecosystems. Controlling factors such as plant community development, soil nutrient availability and biological diversity form an integral part of the ecological role of forest fires [13].

Fig 1

Since forest fires cause notable economic and ecological damage besides jeopardizing human lives, they are considered a significant environmental issue [12]. Annually, several hundred million hectares (ha) of forest and other vegetation are destroyed by forest fires [14]. Intermittently, forest fires have compelled the evacuation of vulnerable communities, besides causing heavy damage amounting to millions of dollars. As per the report of the Forest Survey of India [15], 19.27% of the Indian land area, or 63.3 million ha, is classified as forest, of which only 38 million ha are well stocked with resources (crown density above 40%). Thus the country's forests face immense pressure, and Indian forests are also in jeopardy due to forest fires leading to their degradation [16]. Fires caused huge damage in the year 2007, affecting huge territories besides a notable number of human casualties [17]. Forest fires pose continuous threats to ecological systems, infrastructure and human lives. Detecting fires at their early stages and reacting quickly so as to prevent their spread is the only viable and effective option to minimize the damage. Hence, huge efforts have been put forth to facilitate early detection of forest fires, conventionally carried out with the aid of human surveillance [18]. Forest fire, drought, flood and many other phenomena, particularly the ones with huge spatial extent, possess predictable spatial patterns that are noticeable through remote sensing images and products.

4. Algorithms

4.1. Lunetta et al. Scheme. This anomaly-based method for identifying changes relies on the fact that, in a spatial neighborhood, most of the locations remain unchanged and only a few locations change in any particular time interval. For every location, the algorithm computes the annual sum of the vegetation index for each year. The difference between the annual sums of consecutive years is then computed; we will refer to this as diff-sum. This is equivalent to applying first-order differencing [7] to the time series of annual sums. High values of the difference in the annual sum for consecutive years indicate a possible change. To determine the "strength" or "significance" of this change, Lunetta et al. compute a z-score for the diff-sum value for the combination of each year boundary and spatial location. When computing the z-score, Lunetta et al. take the standard deviation across all the spatial neighbors of the pixel for that time window in the data set, and they further assume that diff-sum is normally distributed with a mean of 0 in the spatial neighborhood. An implicit assumption made by the scheme (due to this method of z-score computation) is that at each yearly boundary the same fraction of locations undergoes land cover change. Note that high values of the z-score indicate a decrease in vegetation and vice versa. In subsequent discussions, we will refer to the scheme described above as the LUNETTA0 scheme.
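A minimal sketch of the LUNETTA0 computation described above, under simplifying assumptions of my own: the array evi holds monthly vegetation-index values with shape (locations, years x 12), and the spread over all locations stands in for the spatial neighbourhood.

import numpy as np

def lunetta_scores(evi, months_per_year=12):
    """z-scores of the year-to-year drops in the annual vegetation-index sum."""
    n_loc, n_obs = evi.shape
    n_years = n_obs // months_per_year
    annual = evi[:, :n_years * months_per_year].reshape(n_loc, n_years, months_per_year).sum(axis=2)
    diff_sum = annual[:, :-1] - annual[:, 1:]        # first-order differencing of the annual sums
    sigma = diff_sum.std(axis=0, keepdims=True)      # spread across the locations (the "neighbourhood")
    return diff_sum / sigma                          # z-score, assuming a zero-mean distribution

# Locations with large positive z-scores (vegetation decrease) are change candidates.
scores = lunetta_scores(np.random.rand(100, 60))     # 100 synthetic locations, 5 years of monthly data
print(scores.shape, int(np.argmax(scores.max(axis=1))))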

4.2. CUSUM. Statistical parameter change techniques assume that the data is produced by some generative mechanism. If the generative mechanism changes, then the change will cause one of the parameters of the data distribution to change; thus, changes can be detected by monitoring this parameter. CUSUM is a parameter change technique that uses the mean of the observations as the parameter describing the distribution of the data values. The basic CUSUM scheme has an expected value for the process. It then compares the deviation of every observation from the expected value and maintains a running statistic, the cumulative sum CS of deviations from the expected value. If there is no change in the process, CS is expected to be approximately 0. Unusually high or low values of CS indicate a change: a large positive value of CS indicates an increase in the mean value of the vegetation (and vice versa). We will refer to this scheme as CUSUM MEAN.
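A small illustrative CUSUM-on-the-mean sketch (my own, not the authors' code): the cumulative sum of deviations from the expected value stays near zero for a stable series and drifts once the mean shifts. The toy series and the use of the sample mean as the expected value are assumptions.

import numpy as np

def cusum(series, expected=None):
    """Running cumulative sum of deviations from the expected value."""
    x = np.asarray(series, dtype=float)
    mu = x.mean() if expected is None else expected
    return np.cumsum(x - mu)

series = [1.0, 1.1, 0.9, 1.0, 0.4, 0.5, 0.45, 0.4]   # mean drops halfway through
cs = cusum(series)
print(cs)                                            # large drift after the change point
print(int(np.argmax(np.abs(cs))))                    # crude estimate of where the change occurred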

4.3. Recursive Merging Algorithm. The main idea behind the recursive merging algorithm is to exploit seasonality in order to distinguish between points that have had a land cover change and those that have not. In particular, if a given location has not had a land cover change, then we expect the seasonal cycles to look very similar from one year to the next; if this is not the case, then, based on the extent to which the seasons are different, one can assign a change score to the location. Recursive Merging follows a bottom-up strategy of merging annual segments that are consecutive in time and similar in value. A cost corresponding to each merge is defined as a notion of the distance between the segments. We use Manhattan distance in our implementation of the algorithm, although other distance measures can be used. One of the strengths of the Manhattan distance is that it takes the seasonality of the time series into account, because it takes the difference between the corresponding months. The key idea is that the algorithm will merge similar annual cycles, and most likely the final merge will correspond to the change (if a change happened) and will have the highest cost of merging. If the maximum cost of merging is low, it is likely that no change occurred in the time series. The algorithm described above takes into account the seasonality of the data but not the variability. A high merge cost in a highly variable time series is perhaps not as reliable an indicator of change as a moderate score in a highly stable time series. In the recursive merging algorithm, the cost of the initial merges can be used as an indicator of the variability within each model. To account for this variability, the change score is defined as the ratio of the maximum merge cost (corresponding to the difference between models) to the minimum merge cost (corresponding to the intra-model variability). Time series with high natural variability, or time series that are noisy due to inaccurate measurement, also have a high minimum cost of merging and thus a smaller change score. We will refer to this scheme as RM0.
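The following compact sketch (my own simplification, with an assumed 12-month period and synthetic data) illustrates the recursive merging idea: annual segments are merged bottom-up by Manhattan distance between their mean cycles, and the change score is the ratio of the largest merge cost to the smallest.

import numpy as np

def recursive_merge_score(series, period=12):
    """Ratio of the largest to the smallest merge cost over annual segments."""
    x = np.asarray(series, dtype=float)
    usable = len(x) - len(x) % period
    segments = [x[i:i + period] for i in range(0, usable, period)]
    costs = []
    while len(segments) > 1:
        # Manhattan distance between the mean annual cycles of consecutive segments.
        dists = [np.abs(a.reshape(-1, period).mean(axis=0) - b.reshape(-1, period).mean(axis=0)).sum()
                 for a, b in zip(segments[:-1], segments[1:])]
        j = int(np.argmin(dists))                    # cheapest merge first (bottom-up strategy)
        costs.append(dists[j])
        segments[j:j + 2] = [np.concatenate([segments[j], segments[j + 1]])]
    return max(costs) / max(min(costs), 1e-9)        # high ratio => likely change

rng = np.random.default_rng(0)
stable = np.tile(np.sin(np.linspace(0, 2 * np.pi, 12)), 6) + rng.normal(0, 0.05, 72)
changed = stable.copy()
changed[36:] *= 0.2                                  # vegetation collapses from year 4 onward
print(recursive_merge_score(stable), recursive_merge_score(changed))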

Figure 2. Comparison of algorithms on DS1.

Figure 3. Comparison of algorithms with noisy data (DS3).

Ground Truth Data

Figure 4. Example of a polygon representing the boundary of a fire.


Source: MODIS Land Group, Alfredo Huete and Kamel Didan, University of Arizona.

Evaluation Data Set. Since our ground truth concerns forest fires in California, we created two data sets, DS1 and DS3, which consist of forest pixels in California as described below (see Fig. 5).

Figure 5. The above MODIS Enhanced Vegetation Index (EVI) map shows the density of plant growth over the entire globe for October 2000. Very low values of EVI (white and brown areas) correspond to barren areas of rock, sand, or snow. Moderate values (light greens) represent shrub and grassland, while high values indicate temperate and tropical rainforests (dark greens).

5. Conclusion

A number of insights can be derived from the quantitative evaluation of the algorithms and their variations presented in this paper. On relatively high quality datasets, all three schemes perform reasonably well, but their ability to handle noise and natural variability in the vegetation data differs dramatically. In particular, the Recursive Merging algorithm significantly outperforms the other two algorithms since it accounts for variability in the time series. However, the algorithm has several limitations that need to be addressed in future work. For example, due to the manner in which the segments are constructed from annual cycles, changes occurring in the middle of a segment are given lower scores than changes occurring at segment boundaries. The algorithm normalizes the change score for a given time series by the estimated variability. This normalization is currently performed using the minimum distance between a pair of segments, which is not optimal; for example, it can lead to false positives when a time series with a relatively low mean undergoes a small shift. Additionally, there are several limitations of the experimental evaluation in this study. For example, the ground truth data set consists of only one type of forest fire, thus excluding many other changes of interest. Furthermore, the nature of vegetation data in California can be quite different from other parts of the world such as the tropics, where the issues of noise are acute because of persistent cloud cover.

References

[1] Shekhar, S., Zhang, P., Huang, Y. and Vatsavai, R.R., Trends in spatial data mining. In: Kargupta, H., Joshi, A. (Eds.), Data Mining: Next Generation hallenges and Future Directions, AAAI/MIT Press, pp. 357-380, 2003.

[2] Martin Ester, Hans-Peter Kriegel, Jörg Sander, “Algorithms and Applications for Spatial Data Mining", Published in Geographic Data Mining and Knowledge Discovery, Research Monographs in GIS, Taylor and Francis, 2001.

[3] K. Julisch, "Data Mining for Intrusion Detection: A Critical Review," in Applications of Data Mining in Computer Security, D. Barbara and S. Jajodia (Eds.), Kluwer Academic Publishers, 2002.

[4] Hong Tang, and Simon McDonald, "Integrating GIS and spatial data mining technique for target marketing of university courses", ISPRS Commission IV, Symposium 2002, Ottawa Canada, July 9-12 2002.

[5] Tadesse, T., J.F. Brown, and M.J. Hayes, "A new approach for predicting drought-related vegetation stress: Integrating satellite, climate, and biophysical data over the U.S. central plains," ISPRS Journal of Photogrammetry and Remote Sensing, 59(4): 244-253, 2005.

[6] S. Boriah, V. Mithal, A. Garg, V. Kumar, M. Steinbach, C. Potter, and S. Klooster, "A Comparative Study of Algorithms for Land Cover Change."

[7]. Shekhar, S., Zhang, P., Huang, Y. and Vatsavai, R.R., Trends in spatial data mining. In: Kargupta, H., Joshi, A. (Eds.), Data Mining: Next Generation Challenges and Future Directions, AAAI/MIT Press, pp. 357-380, 2003.

[8] U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy, Advances in Knowledge Discovery and Data Mining, AAAI Press/The MIT Press, 560 pp., 1996, ISBN-10: 0262560976.

[9] R. S. Lunetta, J. F. Knight, J. Ediriwickrema, J. G. Lyon, and L. D. Worthy, "Land-cover change detection using multi-temporal MODIS NDVI data," Remote Sensing of Environment, 105(2): 142-154, 2006.

[10] J. Kucera, P. Barbosa, and P. Strobl, "Cumulative sum charts - a novel technique for processing daily time series of MODIS data for burnt area mapping in Portugal," in MultiTemp 2007: International Workshop on the Analysis of Multi-temporal Remote Sensing Images, pp. 1-6, 2007.

[11] S. Boriah, V. Kumar, M. Steinbach, C. Potter, and S. Klooster, "Land cover change detection: A case study," in KDD '08: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 857-865, 2008.

[12] González, J.R., Palahí, M., Trasobares, A., Pukkala, T., "A fire probability model for forest stands in Catalonia (northeast Spain)," Annals of Forest Science, 63: 169-176, 2006.

[13] P.S. Roy , "Forest Fire and Degradation Assessment Using Satellite Remote Sensing and geographic Information System", Proceedings of a Training Workshop Satellite Remote Sensing and GIS Applications in Agricultural Meteorology, pages 361-400, 2003.

[14] Y. Rauste, "Forest Fire Detection with Satellites for Forest Fire Control," Proc. XVIII Congress of ISPRS, Int'l Soc. For Photogrammetry and Remote Sensing, Vol.31, No. B7, pp. 584-588, 1996.

[15] Dr. D. Pandey and Anoop Kumar, "Valuation and Evaluation of Trees-Outside-Forests (TOF) of India", Forest Survey of India, Ministry of Environment and Forests, Kaulagarh Road, PO: IPE, Dehradun, India, February 2000.

[16] Bahuguna, V.K. & Singh, S., “The forest fire situation in India”, Int. Forest Fire News, no. 26, pp. 23-27, 2001.

[17]. European-Commission. “Forest Fires in Europe 2007”, Technical report, Report No-8, 2008.

[18] D. Stipaničev, T. Vuko, D. Krstinić, M. Štula, and Lj. Bodrožić, "Forest Fire Protection by Advanced Video Detection System - Croatian Experiences," Third TIEMS Workshop - Improvement of Disaster Management System, 10 pages, Sept. 26-27, 2006.

[19] Y. Fang, A. R. Ganguly, N. Singh, V. Vijayaraj, N. Feierabend, and D. T. Potere, "Online change detection: Monitoring land cover from remotely sensed data," in ICDM Workshops, pp. 626-631, 2006.

26. IMAGE STEGANOGRAPHY BASED ON RGB INTENSITY

A.SANGEETHA IInd YEAR, M.Tech,

BHARATHIDASAN UNIVERSITY THIRUCHIRAPPALLI.

E-MAIL : [email protected]

ABSTRACT

Image-based steganography uses images as the cover media. LSB is a commonly used technique in this field. "The goal of information hiding is to hide messages inside other harmless messages in a way that does not allow any enemy to even detect that there is a second secret message present." In this paper, we present a new algorithm for image-based steganography. Our algorithm introduces a way of storing a variable number of bits in each channel (R, G or B) of a pixel based on the actual color values of that pixel: a lower color component stores a higher number of bits. Hiding a variable number of bits in each channel gives additional security to the message to be hidden. This algorithm offers very high capacity for the cover media compared to other existing algorithms. The size of the original image and the image with the embedded information are the same.

No one can detect the existence of the message or information in the image, thereby providing strong security for the hidden information.

Figure 1. Steganography tradeoff parameter

Keywords: Steganography, RGB image, cryptography, information hiding.

1. INTRODUCTION

Steganography is the art and science of hiding information by embedding messages within other, seemingly harmless messages. Steganography, meaning covered writing, dates back to ancient Greece, where common practices consisted of etching messages in wooden tablets and covering them with wax, and tattooing a shaved messenger's head, letting his hair grow back, then shaving it again when he arrived at his contact point. Different types of steganographic techniques employ invisible inks, microdots, character arrangement, digital signatures, covert channels, and spread spectrum communications.


Steganography techniques require two files: the cover media and the data to be hidden. When combined, the cover image and the embedded message make a stego file, which in our work on image steganography is known as the stego-image [3]. One of the commonly used techniques is LSB, where the least significant bit of each pixel is replaced by bits of the secret message until the message is exhausted. The risk with this method, as is, is that it is susceptible to all 'sequential scanning' based attacks [1], which threatens its security.

Steganography deals with embedding information in a given media (called the cover media) without making any visible changes to it [1]. The goal is to hide an embedded file within the cover media such that the embedded file's existence is concealed. Image-based steganography uses images as the cover media. Several methods have been proposed for image-based steganography, LSB being the simplest one. Techniques involving bitmap images (RGB images) as cover media use single- or multi-channel hiding, RNG or color-cycle methods [2], etc. In the pixel indicator technique, one channel is used to indicate which channel stores the data.

In this paper, a new technique for RGB image steganography is introduced, where the color intensity (the values of R, G and B) is used to decide the number of bits to store in each pixel. Channels containing lower color values can store a higher number of data bits. The sequence of channels is selected randomly based on a shared key. Our technique ensures a minimum capacity (as opposed to [3]) and can accommodate large amounts of data. Experimental results show that our algorithm performs much better than existing algorithms. Our algorithm can also be used to store a fixed number of bits per channel, while still offering very high capacity for the cover media.

2. REQUIREMENTS

Designing any stego algorithm should take into consideration the following three aspects:

• Capacity: the amount of data that can be hidden without significantly changing the cover medium.

• Robustness: the resistance to possible modification or destruction of the unseen data.

• Invisibility (security or perceptual transparency): the hiding process should be performed in a way that does not raise any suspicion of eavesdroppers.

Figure 1 shows the relation between these three main parameters. If we increase the capacity of any cover to store more data than a practically possible threshold, then its transparency or robustness will be affected, and vice versa. Similarly, transparency and robustness are related: if either of these two parameters is influenced, it can affect performance in the other one. The trade-off between capacity, robustness and security can be driven by the application's needs and priorities.

3. Review

In this section, we review the pixel indicator technique of [3]. The pixel indicator technique uses the two least significant bits of one of the channels (Red, Green or Blue) as an indicator of the existence of data in the other two channels. The indicator channels are chosen in sequence, with R being the first. Table 1 shows the relation between the indicator bits and the amount of hidden data stored in the other channels.

Table 1: Relation between the indicator bits and the amount of hidden data.


Indicator bits    Channel 1                  Channel 2
00                No hidden data             No hidden data
01                No hidden data             2 bits of hidden data
10                2 bits of hidden data      No hidden data
11                2 bits of hidden data      2 bits of hidden data
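For illustration, the indicator rule of Table 1 can be written as a small lookup; this is a sketch of the reviewed technique of [3], not of our algorithm, and the function name is mine.

INDICATOR_RULE = {
    0b00: (0, 0),   # (bits hidden in channel 1, bits hidden in channel 2)
    0b01: (0, 2),
    0b10: (2, 0),
    0b11: (2, 2),
}

def hidden_bits(indicator_channel_value):
    """Amount of hidden data implied by the two LSBs of the indicator channel."""
    return INDICATOR_RULE[indicator_channel_value & 0b11]

print(hidden_bits(0b10110111))   # last two bits are 11 -> (2, 2)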

The disadvantage of the algorithm in [3] is that the capacity depends on the indicator bits and, depending on the cover image, the capacity can be very low. Also, the algorithm in [3] uses a fixed number of bits per channel (2 bits) to store data, and the image may get distorted if more bits are used per channel. Our algorithm guarantees a minimum capacity for every cover image, and the number of bits stored in each channel varies depending on the intensity.

Figure 2: Effect in colors for changes in the Red values.

4. The algorithm

In this section, we briefly outline our proposed algorithm for RGB-image-based steganography. The idea behind our algorithm is that, for 'insignificant' colors, significantly more bits can be changed per channel of an RGB image. Figure 2 shows one example. In (a) the color is WHITE (R=255, G=255, B=255). In (b), we only change the last 4 bits of R to zeros, resulting in a color where the distortion can be noticed. In (c), the color components are the same as in (a), except that R = 55. In (d), we again set the least 4 bits of R to zeros, resulting in a color which seems to be the same as (c). Our idea is that a lower color value of a channel has less effect on the overall color of the pixel than a higher value. Therefore, more bits can be changed in a channel having a 'low' value than in a channel with a 'high' value. Accordingly, we propose the following algorithm.

• Use one of the three channels as the indicator. The indicator sequence can be made random, based on a shared key between sender and receiver.

• Data is stored in one of the two channels other than the indicator. The channel whose color value is lowest among the two channels other than the indicator stores the data in its least significant bits.

• Instead of storing a fixed number of data bits per channel, the number of bits to be stored depends on the color value of the channel: the lower the value, the more data bits are stored. Therefore a partition of the color values is needed. Through experimentation, we show that the optimal partition may depend on the actual cover image used.

• To retrieve the data, we need to know which channel stores the data bits. This is done by looking at the least significant bits of the two channels other than the indicator:

  o If the bits are the same, then the channel following the indicator in cyclic order stores the data.

  o Otherwise, the channel which precedes the indicator in cyclic order stores the data.


Here, the cyclic order is assumed to be R-G-B-R-G-B and so on. The appropriate bits can be set while the data is stored.
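The following Python sketch is my own illustration of the embedding steps above, not the authors' Java implementation; the demo pixel, the key value and the helper names are assumptions, and the partition scheme is the 3-to-4-bits scheme used later in the experiments.

import random

PARTITION = [256, 256, 96, 0, 0, 0, 0, 0]   # 3 to 4 bits per channel (values below 96 carry 4 bits)

def bits_for_value(value, partition=PARTITION):
    """Number of data bits a channel with this color value may carry:
    the first i (1-based) such that value >= partition[i-1]."""
    for i, a in enumerate(partition, start=1):
        if value >= a:
            return i
    return len(partition)

def embed_pixel(pixel, data_bits, key_rng):
    """Embed a prefix of `data_bits` (a string of '0'/'1') into one RGB pixel."""
    r_idx = key_rng.randrange(3)                     # indicator channel chosen from the shared key
    others = [i for i in range(3) if i != r_idx]
    data_ch = min(others, key=lambda i: pixel[i])    # lower-valued channel stores the data
    flag_ch = others[0] if others[1] == data_ch else others[1]
    n = bits_for_value(pixel[data_ch])
    chunk, rest = data_bits[:n], data_bits[n:]
    pixel = list(pixel)
    if chunk:                                        # overwrite the lowest len(chunk) bits with data
        pixel[data_ch] = (pixel[data_ch] >> len(chunk)) << len(chunk) | int(chunk, 2)
    # Signal the data channel through the LSB of the remaining channel:
    # equal LSBs mean "the channel following the indicator (R->G->B cyclic) holds the data".
    follows = data_ch == (r_idx + 1) % 3
    data_lsb = pixel[data_ch] & 1
    pixel[flag_ch] = (pixel[flag_ch] & ~1) | (data_lsb if follows else 1 - data_lsb)
    return tuple(pixel), rest

stego, remaining = embed_pixel((200, 40, 180), "10110", random.Random(17))
print(stego, remaining)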

Figures 3a and 3b show the encoding (at the sender's side) and decoding (at the receiver's side) parts of our algorithm as flow charts. Note that it is assumed that a shared key and partition scheme have already been agreed upon by the two parties.

Figure 4 demonstrates one example of storing data bits in a channel. In step 1, the indicator channel (here G) is selected randomly. In step 2, the data channel is chosen (R). In step 3, the number of data bits to store is determined from the current channel value and the partition scheme. Step 4 stores the data bits, and step 5 modifies the LSB of the other channel (B), which is used while retrieving the data.
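A matching retrieval sketch, under the same assumptions as the embedding sketch above: the receiver regenerates the indicator from the shared key, applies the LSB rule to find the data channel, and reads the number of bits implied by the partition scheme. The example stego pixel is hypothetical.

import random

PARTITION = [256, 256, 96, 0, 0, 0, 0, 0]   # must match the scheme used by the sender

def bits_for_value(value, partition=PARTITION):
    for i, a in enumerate(partition, start=1):
        if value >= a:
            return i
    return len(partition)

def extract_pixel(pixel, key_rng):
    """Read the data bits hidden in one stego pixel."""
    r_idx = key_rng.randrange(3)                 # regenerate the indicator from the shared key
    follower, preceder = (r_idx + 1) % 3, (r_idx + 2) % 3
    # Equal LSBs => the channel following the indicator holds the data.
    data_ch = follower if (pixel[follower] & 1) == (pixel[preceder] & 1) else preceder
    n = bits_for_value(pixel[data_ch])
    return format(pixel[data_ch] & ((1 << n) - 1), "0{}b".format(n))

stego_pixel = (200, 43, 181)                     # hypothetical stego pixel written with key 17
print(extract_pixel(stego_pixel, random.Random(17)))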

4.1. Partition schemes

In our algorithm, a partition scheme is defined as a monotonically decreasing sequence [ai], i = 1 to 8. Assume that the color value of a channel is c. Then that channel with value c stores i data bits if c >= ai and, for all j < i, c < aj. For correctness of the algorithm, we only use valid partition schemes. We define a valid partition scheme as follows: let [ai] be a partition scheme where the lower i bits of ai are all 0. Let [bi], i = 1 to 8, be another sequence, where bi is generated by setting the lower i bits of ai to all 1. If ai > bi+1 for i = 1 to 7, then [ai] is a valid partition scheme. This simple condition ensures that the same number of data bits is read from a channel on the receiver's side as was stored on the sender's side.

5. Experimentations

We implemented our algorithm in JAVA and carried out our experiments with different cover images and data files, exploring different values for the shared keys and partition schemes. Here, we only present the results obtained using the cover image and the data file shown in Figure 5.

Figure 6 shows the stego files for 4 different partition schemes. The value of the key was 17, and the indicator sequence was chosen randomly using this shared key.


Figure 3a: Flow chart of the encoding part of our algorithm.

Figure 3b: Flow chart of the decoding part of our algorithm.

Figure 4: An example of hiding data bits inside a channel.

Figure 5: Cover media and data file used in the experimentations.
(a) Cover image. Image size: 640 x 480; number of pixels = 307200.
(b) Data file (a bitmap). Image size: 150 x 117; data length = 150896 bits.

(a) Constant 3 bits per channel. Pixels utilized in cover media: 50939. Partition scheme: [256, 256, 0, 0, 0, 0, 0, 0].
(b) 3 to 4 bits per channel. Pixels utilized in cover media: 41061. Partition scheme: [256, 256, 96, 0, 0, 0, 0, 0].

The same set of images was used to evaluate the algorithm in [3], the pixel indicator technique. Table 2 shows the comparative results of the two algorithms.

Table 2 clearly shows that the intensity-based variable-bits algorithm performs better than the pixel indicator algorithm in terms of capacity utilization.

For comparison purposes, we modified the pixel indicator algorithm to store up to 3 or 4 data bits per channel, but this modification leads to visible distortions of the cover image.

Our algorithm ensures a minimum capacity for any cover image, whereas for the pixel indicator method the capacity may sometimes become very low. As an example, for the cover image and data file of Figure 7, our algorithm uses only 16.5% of the capacity of the cover image (for fixed 3-bit data per channel), whereas the pixel indicator method cannot store the data file at all due to capacity shortage.


Table 2: Comparison of performance of two steganography algorithms.

6. Conclusion and future work

In this paper, a new idea in image-based steganography is introduced, where a variable number of bits can be stored in each channel. Our algorithm uses the actual color of the channel to decide the number of data bits to store. This approach leads to very high capacity with low visual distortion. Experimental results demonstrate that our algorithm performs better than other similar algorithms.

There are several ways to improve our variable-bits algorithm:

• Select the partition at run time, based on the cover media, rather than using a static (fixed) partition scheme for all cover images.

• Use the color information of all three channels to determine the partition. This will lead to using different partition schemes for different parts of the image.

7. References

[1] N. Provos and P. Honeyman, "Hide and Seek: An Introduction to Steganography," IEEE Security & Privacy Magazine, vol. 1 (2003), pp. 32-44.

[2] Karen Bailey and Kevin Curran, "An Evaluation of Image Based Steganography Methods Using Visual Inspection and Automated Detection Techniques," Multimedia Tools and Applications, vol. 30, issue 1 (2006), pp. 55-88.

[3] Adnan Gutub, Mahmoud Ankeer, Muhammad Abu-Ghalioun, Abdulrahman Shaheen, and Aleem Alvi, "Pixel Indicator High Capacity Technique for RGB Image Based Steganography," WoSPA 2008 - 5th IEEE International Workshop on Signal Processing and its Applications, University of Sharjah, Sharjah, U.A.E., 18-20 March 2008.


[4] D. Artz, "Digital Steganography: Hiding Data within Data," IEEE Internet Computing: Spotlight, pp. 75-80, May-June 2001.

27. CLOUD COMPUTING SYSTEM ARCHITECTURE

1.A.JAINULABUDEEN.M.Sc.,M.Tech., M.Phil.,

2.B.MOHAMED FAIZE BASHA,MCA.,M.Phil.,

ASST. PROF. OF COMPUTER SCIENCE, JMC, TRICHY.

ABSTRACT

In general, a cloud computing system can be divided into two sections: the front end and the back end. They connect to each other through a network, usually the Internet. The front end is the side the computer user, or client, sees; the back end is the "cloud" section of the system. The front end includes the client's computer (or computer network) and the application required to access the cloud computing system. Not all cloud computing systems have the same user interface. Services like Web-based e-mail programs leverage existing Web browsers like Internet Explorer or Firefox, while other systems have unique applications that provide network access to clients. This paper expands on the system, its architecture, and related topics in information technology.

Keywords: network, Internet, services, technology.

Introduction:

A central server administers the system, monitoring traffic and client demands to ensure everything runs smoothly. It follows a set of rules called protocols and uses a special kind of software called middleware. Middleware allows networked computers to communicate with each other. The front end includes the client's computer (or computer network) and the application required to access the cloud computing system.

Not all cloud computing systems have the same user interface. Services like Web-based e-mail programs leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique applications that provide network access to clients. On the back end of the system are the various computers, servers and data storage systems that create the "cloud" of computing services. In theory, a cloud computing system could include practically any computer program you can imagine, from data processing to video games. Usually, each application will have its own dedicated server.

If a cloud computing company has a lot of clients, there is likely to be a high demand for a lot of storage space; some companies require hundreds of digital storage devices. A cloud computing system needs at least twice the number of storage devices it would otherwise require to keep all its clients' information stored, because these devices, like all computers, occasionally break down. A cloud computing system must make a copy of all its clients' information and store it on other devices. The copies enable the central server to access backup machines to retrieve data that would otherwise be unreachable. Making copies of data as a backup is called redundancy.

Cloud computing:


The success of cloud computing is largely based on the effective implementation of its architecture. In cloud computing, architecture is not just about how the application will work with the intended users; cloud computing also requires an intricate interaction with the hardware, which is essential to ensure uptime of the application.

Applications:

The following diagram shows how cloud computing is used in various fields.

Architecture:

Cloud computing is no longer considered an emerging technology. It is now a reality, and this low-cost computing power is gaining popularity among businesses, especially small and medium-sized ones, and governmental organizations, as people realize the power of cloud environments.

Though there is no official definition or straightforward way to explain what exactly cloud computing is, it can be expressed in general by the following statement:

"Cloud computing is a type of computing environment where business owners outsource their computing needs, including application software services, to a third party; when they need to use the computing power, or employees need to use application resources like databases, e-mail etc., they access the resources via the Internet."

Cloud Computing Service Architecture:

Mainly, three types of services can be obtained from a cloud service provider.

1. Infrastructure as a Service: the service provider bears all the cost of servers, networking equipment, storage, and back-ups. You just have to pay for the computing service, and the users build their own application software. Amazon EC2 is a great example of this type of service.

2. Platform as a Service: the service provider provides only a platform or a stack of solutions for your users. It helps users save investment on hardware and software. Google App Engine and Force.com provide this type of service.

In data centers:

One of the most distinguishing characteristics of cloud computing architecture is its close dependency on the hardware components. An online application is just a simple application that could be launched on different servers, but when the application is considered with cloud computing, it requires massive data centers that ensure the processes are done as expected and on time.

Data centers for cloud computing architecture are not your run-of-the-mill data processing centers. They are composed of different servers with optimal storage capacity and processing speed, which work together to ensure that the application operates as expected.

The area is usually a highly controlled environment, constantly monitored through various applications and manually checked for actual physical problems.

Features:

Microsoft India today announced the commercial availability of its Windows Azure platform, an Internet-scale cloud services platform, which will help it provide software, platform and infrastructure services to businesses through cloud computing.

Conclusion:

Cloud computing is closely related to grid computing and utility computing. In a grid computing system, networked computers are able to access and use the resources of every other computer on the network. In cloud computing systems, that usually only applies to the back end. Utility computing is a business model in which one company pays another company for access to computer applications or data storage.

References:

i) "The Vision of Autonomic Computing," by Jeffrey O. Kephart and David M. Chess, IBM Thomas J. Watson Research Center, 2001.
ii) "Autonomic Computing Manifesto," http://www.research.ibm.com/autonomic/manifesto/autonomic_computing.pdf, International Business Machines Corporation, 2001.
iii) "Web Services 2.0," by Thomas B. Winans and John Seely Brown, Deloitte, 2008.
iv) Farber, R., "Cloud Computing: Pie in the Sky?" ScientificComputing.com, November/December 2009.
v) Davies, K., "Amylin, Amazon, and the Cloud," BioIT World, November/December 2009, pp. 35, 42.


28. Issues in Cloud computing solved using middle server technology


Mr. Palivela Hemant

Computer Science and Engineering Dept. Annasaheb Chudaman Patil College of Engineering,

Navi Mumbai, India [email protected]

Mr. Nitin.P.Chawande

Computer Science and Engineering Dept. Annasaheb Chudaman Patil College of Engineering,

Navi Mumbai, India [email protected]

Mr. Hemant Wani

Computer Science and Engineering Dept. Annasaheb Chudaman Patil College of Engineering,

Navi Mumbai, India [email protected]

Abstract— Location independence can be achieved through distributed systems, but cloud computing has become a new era altogether for distributed systems. However, there are a lot of hurdles in accomplishing this idea, in the form of security parameters and backup issues. In this paper we discuss a solution to these issues by integrating encryption and server management techniques in order to make the transactions between the user and the server smooth. We propose a new prototype system in which we introduce a governance body that handles all the transactions from the user to the actual server from which the user is requesting. We introduce a routing table at each end server and at the middle server (governance server) so as to maintain the database of client-to-server connectivity.

Keywords- Central server, Application level firewall, Service level agreement, SSL technology.

I. INTRODUCTION

Cloud computing emerged from so-called distributed computing and grid computing. Here the user can access any service he or she wants for a specific task and for a specific amount of time. Cloud computing provides us with a facility for sharing and interoperating resources between different users and the systems owned by organizations. Security is a major hindrance in such systems, because if users store their data in a remote location owned by an unknown person or organization, then their data is not protected. Members communicating with each other should have a good level of trust so as to share data and resources with each other.

In the actual scenario, the cloud is the concept of virtualizing the local system of the user using a remote cloud operating system, to get a virtual desktop with a specific operating system or a choice of operating systems, and to store personal data and execute applications from anywhere. The customers or users purchase computing power depending on their demand and are not concerned with the underlying technologies used. The resources used and the data accessed are owned by a third party and operated by them. This third party may not be located in the same area the user lives in; it may be in another state or country.

In this paper we propose a new way that is very important to improve security and dependable computing in the cloud, and various backup issues are sorted out. We use a middle server, named the governance server, a trusted platform which performs the role of a central server.

II. CLOUD STRUCTURE AND TYPES

A. Public cloud

The public cloud is basically used by a lot of users around the whole world, and security aspects act as the utmost hindrance in such situations. It is basically a pay-per-use model in which users pay as per their use, which becomes very useful and cost effective for the companies they are working for and for themselves. But the web services the user is using can get spikes in their amount of usage. Even though the model works with the utmost ease, it still has a lot of security issues. These can be handled fairly well with an SLA (Service Level Agreement), but some strategy has to be taken into consideration to provide additional support and security. The cloud vendor and the user should agree upon a system which can be mutually coordinated, but with some more security aspects taken into account.

B. Private Cloud

The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise [1].


There are hybrid solutions such as the Nasuni Filer or Nirvanix hNode to tier storage between your on-premises storage products and public clouds, but those still use some amount of public cloud storage. Now there is an alternative that brings it all internal in terms of where the data resides, yet you can still get the scalability and ease of access associated with public cloud storage technologies. In a private cloud we get additional benefits, like additional security, as the company has the server at its end. As a way to exercise greater control over security and application availability, some enterprises are moving toward building private clouds. With the right approach and expertise in place, this type of setup can offer the best of both worlds: the cost-effectiveness of cloud computing and the assurance that comes with the ability to manage data and applications more closely. Cloud services are owned and delivered only within an enterprise or organization. Private clouds employ advances in virtualization technologies and automated management technologies to enhance scalability and utilization of local data centers while reducing administrative and management tasks. Private clouds can be built on existing on-premises computing infrastructures using open-source cloud software (cloudware) or third-party software.

C. Hybrid cloud

A hybrid cloud provides services by combining private and public clouds that have been integrated to optimize service. The promise of the hybrid cloud is to provide the local data benefits of private clouds with the economies, scalability, and on-demand access of the public cloud. The hybrid cloud remains somewhat undefined because it specifies a midway point between the two ends of the continuum: services provided strictly over the Internet and those provided through the data centre or on the desktop. Today, almost every enterprise could be said to have an IT infrastructure containing some elements of both extremes. Meshing them into a common structure is what becomes interesting and offers a range of new possibilities in handling local and cloud-based data, but it also introduces a range of complexities in data transfer and integration [2]. In our proposed system we have included a strategy by which we can have connectivity between all the clouds and have provided additional security, as we have introduced a governance body, like a central server, which holds the routing table, as shown in Fig. 1. The primary difference between the hybrid cloud and the private cloud is the extension of service-provider-oriented low-cost cloud storage to the enterprise. The service-provider-based cloud may be a private cloud (single tenant) or a public cloud (multi-tenant) [3].

III. MODELS OF CLOUD COMPUTING

A. Model 1: Infrastructure as a Service (IaaS)


The key aspects of IT infrastructure, hardware, facilities, and administration have traditionally been the domain of IT departments within each company. Dedicated personnel install and configure servers, routers, firewalls, and other devices in support of their respective employers. This equipment requires dedicated housing as well as environmental controls, emergency power, and security systems to keep it functioning properly. Finally, every company allocates additional space where IT personnel work to support the infrastructure that is in place. Every aspect of IT infrastructure has evolved on its own, yet until now has not moved toward integration. For example, a company purchases software it needs and then purchases a server to run it. If data storage is necessary for files or databases, disk arrays and hard drives are added into the mix to accommodate the needs of the company. A local network is maintained to provide employees access to IT resources, and high-speed internet connectivity for voice and data is added to the company account as necessary. Practically speaking, each IT system has its own management system, with some systems requiring the addition of a specialized worker to the staff. Infrastructure as a service takes the traditional components of IT infrastructure, takes them off site, and offers them in one unified, scalable package to companies who can manage them through one management interface. Infrastructure as a service results in IT services that easily conform to the changing requirements of a business. Because the infrastructure does not reside on the premises, obsolete equipment, upgrades, and retrofits no longer play a role in the company's decision to adopt new technology: the IaaS provider takes care of that seamlessly, allowing the business to focus on its mission. Cost effectiveness augments the convenience of IaaS. Because the IaaS provider has massive platforms segmented for each customer, the economies of scale are enormous, providing significant cost savings through efficiency. The need for every company to maintain its own infrastructure is eliminated through IaaS. The power of IaaS brings the resources needed to service government and enterprise contracts to businesses of every size. IaaS improves reliability because service providers have specialized workers that ensure nearly constant uptime and state-of-the-art security measures. Infrastructure as a Service is a form of hosting. It includes network access, routing services and storage. The IaaS provider will generally provide the hardware and administrative services needed to store applications and a platform for running applications. Scaling of bandwidth, memory and storage is generally included, and vendors compete on the performance and pricing offered on their dynamic services. IaaS can be purchased either with a contract or on a pay-as-you-go basis. However, most buyers consider the key benefit of IaaS to be the flexibility of the pricing, since you should only need to pay for the resources that your application delivery requires [4].

During many phases of the SDLC, there is a need to create hardware and software environments repeatedly to perform development, integration testing, system testing and user acceptance testing. This repeated deployment of hardware and software takes a lot of time and eats into the bandwidth of today's delivery organization, which should be focusing on its core responsibility of delivering tested applications. The delivery organization also needs to provision and re-provision hardware and software for the ever increasing requirements of development and testing, in addition to the resources. Also, setting up separate hardware and software takes time and is prone to configuration errors. Plus, separate hardware installations consume space, which is at a premium. The solution to the above mentioned usage pain points is to set up a private cloud. Eucalyptus, an open-source private cloud provider, provides software that can be deployed to set up a private cloud within the organization's internal premises. Eucalyptus is software that implements scalable IaaS-style private and hybrid clouds. Eucalyptus implements the Amazon Web Services (AWS) API, which allows interoperability with existing AWS-compatible services and tools. You can read more here -

B. Model 2: Software as a Service (SaaS)

Software is ubiquitous in today’s business world,

where

software applications can help us track shipments

across

multiple countries, manage large inventories, train

employees, and even help us form good working

relationships with customers. For decades,

companies have

run software on their own internal infrastructures

or

June 2009, a study conducted by VersionOne found that

41% of senior IT professionals actually don't know what

cloud computing is and two-thirds of senior finance

professionals are confused by the concept, highlighting the

young nature of the technology. In Sept 2009, an Aberdeen

Group study found that disciplined companies achieved on

average an 18% reduction in their IT budget from cloud

computing and a 16% reduction in data center power costs.

TABLE I. COMPARISON CHART

computer networks. In recent years, traditional

software

license purchases have begun to seem antiquated,

as many

vendors and customers have migrated to software

as a

service business model. Software as a service, or

'SaaS', is a

software application delivery model by which an

enterprise

vendor develops a web-based software application,

and then

hosts and operates that application over the Internet

for use

by its customers. So far the applications segment

of cloud

computi

ng are

the only

segment

that has

proven

successf

ul

as a

business

model.

By

running

business

applicati

ons over

the

internet

from

centrali

zed

servers

rather

than

from

on-site

servers,

compan

ies can

cut

some

serious

costs.

Furtherm

ore,

while

avoiding

maintena

nce

costs,

licensing

costs

and the

costs of

the

hardware

required

for

running servers

on-site,

companies are

able to run

applications

much more

efficiently from

a computing

standpoint [8].

Customers do

not need to buy

software

licenses or

additional

infrastructure

equipment, and

Page 215: Keynote 2011

215

typically only pay monthly fees (also

referred to as annuity payments) for using the

software. It is

important to note that SaaS typically encapsulates

enterprise

as opposed to consumer-oriented web-hosted

software,

which is generally known as web 2.0. According

to a

leading research firm, the SaaS market reached

$6.3B in

2006; still a small fraction of the over $300B

licensed

software industry. However, growth in SaaS since

2000 has

averaged 26% CAGR, while licensed software

growth has

remained relatively flat. Demand for SaaS is being

driven

by real business needs — namely its ability to drive

down

IT-related costs, decrease deployment times, and

foster

Amazon

Elastic

Compute

Cloud

Platforms:Li

nux Based

Systems

Cost: 10 cent

per hour for

a small

instance

(1.7

GB of

memory, 160

GB of

instance

storage), 15

cents per GB

of data

storage per

month, 10

to

17cents per

gigabyte of

data

transferred.

Google

Application

Engine

Platforms:

Any python

2.5 web

application

that

operates in

a sand box.

About 5

million

free

page views

after that,

10 to 12

cents per

hour of

CPU core,

15 to 18

cents per

GB of

storage per

month and

9 to 13

cents per

GB of

data

transfer.

Go Grid

Platforms:CE

NT OS and

Red Hat

Enterprise

Linux with

Apache,

PHP,

MYSQL.

12 cents per

hour of a

CPU core.

Load

balancing,

DNS, 500

GB of

storage and

incoming

data transfers

are free,

outbound

data transfers

cost 25 cents

per GB.

AppNexus

Platforms:Li

nux Based

Systems

22 cents per

hour for a

2.83 Ghz

server. High

throughput

NAS cost 50

cents per GB

per month,

while

archival

storage costs

significantly

lower.

innovation [5]. Both public and private cloud

models are

now in use. Available to anyone with Internet access,

public

Page 216: Keynote 2011

216

models include Software as a Service (SaaS) clouds

like

IBM LotusLive™, Platform as a Service (PaaS) clo

These hybrids are designed to meet specific

business and

technology requirements, helping to optimize

security and

privacy with a minimum investment in fixed IT

costs. In

All these services are cost effective but have a lot of issues

regarding security and backup. Depending upon the

implementation and platform needed the central server can

send the request to the respective server.
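To make the pricing in Table I concrete, the short sketch below estimates a monthly bill for a single small EC2-style instance using the cents-per-hour figures quoted above; the usage numbers (hours, storage, transfer) are made-up example inputs, not data from the paper.

```python
# Rough monthly cost estimate for one small instance, using the Table I rates
# (10 cents/hour, 15 cents/GB-month storage, ~15 cents/GB transfer).
# The usage figures below are illustrative assumptions only.
HOURS_PER_MONTH = 24 * 30          # 720 hours of continuous uptime
STORAGE_GB = 50                    # assumed persistent storage
TRANSFER_GB = 100                  # assumed outbound data transfer

compute = HOURS_PER_MONTH * 0.10   # $0.10 per instance-hour
storage = STORAGE_GB * 0.15        # $0.15 per GB-month
transfer = TRANSFER_GB * 0.15      # mid-point of the 10-17 cent range

total = compute + storage + transfer
print(f"compute=${compute:.2f} storage=${storage:.2f} "
      f"transfer=${transfer:.2f} total=${total:.2f}")
# -> compute=$72.00 storage=$7.50 transfer=$15.00 total=$94.50
```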

IV. REQUIREMENTS OF SECURITY

This section gives a general description of security services and related mechanisms, which can be ensured by the Reference Model, and of the positions within the Reference Model where the services and mechanisms may be provided. It extends the field of application of ISO 7498 [6] to cover secure communications between open systems, and adds to the concepts and principles included in ISO 7498 without modifying them. Fig. 1 shows how these requirements are fulfilled in our proposed system.

A. Authentication and Authorisation

The user can be identified in this model because we use SSL security for that purpose. A governance body acts as an interface between the user and the cloud servers. There is encryption between the user and the central server, and between the central server and the cloud of servers. User details are stored within the central server in the form of a UserID and related attributes, and validation is done accordingly; hence this requirement is fulfilled. Authorization is not a big issue in a private cloud because the system administrator can handle it by granting access only to those who are authorized to access the data, whereas in a public cloud it becomes more difficult because requests from ordinary users also have to be taken into consideration. Privileges over the process flow have to be considered, as control may pass from one server to another. The respective UserID is saved in the central server after registration, and authorization can be done easily because the corresponding rights are stated there.
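As a concrete illustration of the SSL leg between the user and the central server, the sketch below opens a certificate-verified TLS connection using Python's standard ssl module; the host name, port and payload are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch: a client opening a verified TLS (SSL) connection to the
# central server before sending its UserID. Host, port and data are placeholders.
import socket
import ssl

CENTRAL_SERVER = "central.cloud.example"    # hypothetical governance server
PORT = 443

context = ssl.create_default_context()       # verifies certificate and hostname

with socket.create_connection((CENTRAL_SERVER, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=CENTRAL_SERVER) as tls:
        print("negotiated", tls.version(), "with", tls.getpeercert()["subject"])
        tls.sendall(b"USERID=1216\n")         # example credential exchange
        reply = tls.recv(1024)
```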

B. Confidentiality

Confidentiality plays a very important role, as the data has to be secure and should not be revealed anywhere. This is achieved in this system because we use dual SSL technology. Users' data, profiles and so on have to be maintained, and as they are accessed virtually, various security protocols have to be enforced. If we standardize the whole cluster of a particular sector, this can be easily imposed. With regard to data-in-transit, the primary risk is in not using a vetted encryption algorithm. Although this is obvious to information security professionals, it is not common for others to understand this requirement when using a public cloud, regardless of whether it is IaaS, PaaS or SaaS. It is also important to ensure that a protocol provides confidentiality as well as integrity (e.g., FTP over SSL [FTPS], Hypertext Transfer Protocol Secure [HTTPS], and Secure Copy Program [SCP]), particularly if the protocol is used for transferring data across the Internet. Merely encrypting data and using a non-secured protocol (e.g., "vanilla" or "straight" FTP or HTTP) can provide confidentiality, but does not ensure the integrity of the data (e.g., with the use of symmetric streaming ciphers) [6].
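The point that encryption alone does not guarantee integrity can be illustrated with an authenticated-encryption mode such as AES-GCM, which provides both; the sketch below uses the third-party `cryptography` package purely as an example and is not part of the system described in this paper.

```python
# Minimal sketch: AES-GCM gives confidentiality AND integrity in one step,
# unlike a plain streaming cipher. Requires the 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                       # never reuse a nonce with a key

plaintext = b"user profile record"
associated = b"UserID=1216"                  # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated)

# Decryption raises InvalidTag if either the ciphertext or the associated
# data was tampered with in transit.
recovered = aesgcm.decrypt(nonce, ciphertext, associated)
assert recovered == plaintext
```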

C. Integrity

Integrity is maintained because hashing is performed as part of the SSL technology. The major drawback of this technology is the excessive redundant data, which uses up bandwidth and increases the packet size. From a privacy and confidentiality perspective, the terms of service may be the most important feature of cloud computing for an average user who is not subject to a legal or professional obligation. It is common for a cloud provider to offer its facilities to users without individual contracts and subject to the provider's published terms of service. A provider may offer different services, each of which has distinct terms of service. A cloud provider may also have a separate privacy policy. It is also possible for a cloud provider to conduct business with users subject to specific contractual agreements between the provider and the user that provide better protections for users. The contractual model is not examined further here. If the terms of service give the cloud provider rights over a user's information, then a user is likely bound by those terms. A cloud provider may acquire through its terms of service a variety of rights, including the right to copy, use, change, publish, display, distribute, and share with affiliates or with the world the user's information. There may be few limits to the rights that a cloud provider may claim as a condition of offering services to users. Audits and other data integrity measures may be important if a user's local records differ from the records maintained on the user's behalf by a cloud provider.
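One simple way to support the kind of integrity audit mentioned above, comparing local records with those held by a provider, is to keep keyed digests of each record; the sketch below uses Python's standard hmac and hashlib modules and only illustrates the idea, it is not the mechanism specified in this paper.

```python
# Minimal sketch: detect whether a record held by the provider still matches
# the user's local copy by comparing keyed SHA-256 digests.
import hmac
import hashlib

AUDIT_KEY = b"shared-audit-key"              # placeholder key

def record_digest(record: bytes) -> str:
    return hmac.new(AUDIT_KEY, record, hashlib.sha256).hexdigest()

local_record = b"invoice #42: amount=100.00"
provider_record = b"invoice #42: amount=100.00"   # as returned by the cloud

if hmac.compare_digest(record_digest(local_record),
                       record_digest(provider_record)):
    print("records match: integrity preserved")
else:
    print("mismatch: record changed while in the provider's custody")
```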

D. Non-repudiation

Non-repudiation is the requirement that neither the sender nor the receiver of data can later deny having sent or received it. In our proposed system this requirement is fulfilled by the central server, because it holds the routing table as well as a table of contents for all the servers in the cloud, with the corresponding server ID, name, location and so on. Since the routing table records the server IP together with the sender and receiver IPs, a user who has sent a request cannot deny it, and a receiver who returns an acknowledgement or response cannot deny having given it.

V. USE OF PROPOSED MODEL

In the proposed system we have introduced a central server that maintains a routing table containing the cloud ID, the corresponding user ID, and the actual server ID to which the user is connecting. The source IP and the destination IP are also placed in the table, along with the actual amount of data flow, that is, the packet transfer rate per second. On the user's end there is a personal firewall, and the connectivity between the user and the central server is encrypted using the SSL encryption standards that are in common use today. At the central server's end there is an application-level firewall which checks whether the packets are malicious. Application-level firewalls (sometimes called proxies) look more deeply into the application data going through their filters. Fig. 2 shows the architectural diagram of the proposed system. By considering the context of client requests and application responses, these firewalls attempt to enforce correct application behavior, block malicious activity and help organizations ensure the safety of sensitive information and systems. They can log user activity too. Application-level filtering may also include protection against spam and viruses, and can block undesirable web sites based on content rather than just their IP address [6]. Further, we have suggested making a separate cluster of clouds for the banking sector, the educational sector and government bodies (which will not contain confidential data). The user has a personal firewall at his end. The central server, say for banks as an example, holds a table containing the userID, server ID, server name and all the related information through which a governance body can trace back both the server and the user. When a user tries to connect to a particular server in the cloud, his or her user ID, server ID, source IP and destination IP are saved. The total synchronization time, the packet size being transferred, the server name and the total lease time of a secure connection are also saved in the table. If the user is not able to connect to a server, that is, if a ping shows a connection time-out, we can easily track the server from the central server's routing table. The user credentials and the session are also secured by SSL technology, and further security can be achieved by combining other security algorithms with SSL.

There is secured connectivity between the user and the central server and between the cloud's servers. Due to this double encryption, all the security requirements are fulfilled in this model. Tracking a server is also simple because there is a table recording the cloud ID, server name, server ID and the name of the organization that owns the server, so if a server cannot be reached we can track it. We also have to standardize all the servers in the cloud for a particular sector; for the banking sector, for example, banks like ICICI, HDFC and all the centralized and co-operative banks have to come together and use standardized protocols to realise this proposal. By standardizing the education sector as well, we can achieve a common place to gain knowledge and use the services accordingly. We have also included the routing table below, which depicts the actual scenario.

TABLE II. ROUTING TABLE

UID   SID  Source IP       Destn IP        Time    Cloud ID  Packet Size  Server Name  Lease Time
1216  3    123.138.63.63   101.123.22.25   500 ms  124       255 kb       Rudra        15 mins
2532  5    111.125.25.23   102.124.12.35   800 ms  263       500 kb       Indra        25 mins

Figure 1. Architecture diagram of proposed model
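The routing-table entries shown in Table II could be represented in code roughly as follows; this is a hypothetical sketch of the data structure the paper describes, with field names chosen for illustration only.

```python
# Minimal sketch: an in-memory routing table for the central server, with the
# fields described above (UID, SID, IPs, time, cloud id, packet size, server
# name, lease time). Field names and sample data mirror Table II.
from dataclasses import dataclass

@dataclass
class RouteEntry:
    uid: int
    sid: int
    source_ip: str
    destn_ip: str
    time_ms: int
    cloud_id: int
    packet_size_kb: int
    server_name: str
    lease_minutes: int

routing_table = [
    RouteEntry(1216, 3, "123.138.63.63", "101.123.22.25", 500, 124, 255, "Rudra", 15),
    RouteEntry(2532, 5, "111.125.25.23", "102.124.12.35", 800, 263, 500, "Indra", 25),
]

def track_server(server_name: str):
    """Find every entry for a server, e.g. when a ping to it times out."""
    return [e for e in routing_table if e.server_name == server_name]

print(track_server("Rudra"))
```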

VI. CONCLUSION

The proposed model has its own advantages in terms of security and backup. Because a governance body sits between the user and the cloud server, we can easily track both the user and the server in the cloud. We can also join public and private clouds together as hybrid clouds. Due to the SSL security, the security parameters are also taken into consideration. This model can help cloud computing and take it to new heights.


REFERENCES

[1] Peter Mell and Tim Grance, "The NIST Definition of Cloud Computing," http://csrc.nist.gov/groups/SNS/cloud-computing/
[2] Brian J. Dooley, "Architectural Requirements of the Hybrid Cloud," Information Management Online, February 10, 2010.
[3] Steve Lesem, "Cloud Storage for the Enterprise - Part 2: The Hybrid Cloud," http://cloudstoragestrategy.com/2010/01/cloud-storage-for-the-enterprise---part-2-the-hybrid-cloud.html, January 25, 2010.
[4] R. Nicole, "Title of paper with only first word capitalized," J. Name Stand. Abbrev., in press.
[5] http://www.wikinvest.com/concept/Software_as_a_Service
[6] Tim Mather, Subra Kumaraswamy, and Shahed Latif, Cloud Security and Privacy, pp. 529-551, September 2009, First Edition.
[7] "IBM Point of View: Security and Cloud Computing," IBM Cloud Computing White Paper, November 2009.
[8] "Whitepaper on Cloud Computing," blog.unsri.ac.id/userfiles/26156855-Cloud-Computing.doc


29. ANALYSIS AND IMPLEMENTATION OF A SECURE MOBILE INSTANT MESSENGER (JARGON) USING JABBER SERVER OVER

EXISTING PROTOCOLS

Padmini Priya Chandra.S

PG Scholar, VIT University

[email protected]

ABSTRACT

This paper analyses previously existing instant messengers and implements a secure mobile instant messenger for mobile environments, which face problems and possibilities different from applications targeted at computers. Interface scalability, changing and querying state, and security of data transmitted from a mobile device to any kind of computer and vice versa are some of the major issues for a mobile instant messaging system designed for mobile environments. This system overcomes these issues through XML security enhancements on both the server and the client side. Location awareness, end-to-end encryption and access control enhancement mechanisms all work together to provide greater security throughout the data transmission from client 1 to server to client 2. The concept is implemented using Jabber on the server side, which surpasses other protocols through its unique properties. An attempt to connect with the Timber XML database is made to provide XML security features at the database level. Finally, fine-grained mobility and security are obtained when Jabber and Timber work together.

KEYWORDS: Jabber, Timber, access control, encryption and instant messaging.

I. INTRODUCTION

Millions of people use instant messengers these days. These instant messenger services present two major problems for organizations. The first is that adoption is driven by the end user and not by management, and the other is that the client services are mainly targeted at home users rather than the business environment. As a result, functionality is emphasized over security. The various users of instant messengers (IM) are exposed to a number of security risks. For example, sending attachments through IM fosters a richer experience, but the files are subject to viruses and other malware, and IM attachments often bypass antivirus software. Some IM networks are susceptible to eavesdropping. The problem looming on the horizon is "spim", i.e., the IM version of spam. Another reason to develop a secure IM environment is that it is hard to verify an instant messaging source that sends unwanted messages, bogus advertisements and solicitations for personal information to IM clients.

II. WORMS AND VULNERABILITIES IN THE IM ENVIRONMENT

"The risk is comprised of assets, threats, vulnerabilities and counter measures; thus it is a function of threats exploiting vulnerabilities


against threats". Worms in an IM environment can reduce the connectivity of the environment, delay the time needed to generate an instant hit list, increase the user interaction time and decrease IM integration in applications. These worms use the most popular mechanisms for propagation. A few require user action, but some do not; they are downloaded automatically. Malicious file transfers and malicious URLs in a message are examples of worms that need user action. Exploitation of implementation bugs in IM clients and exploitation of vulnerabilities in operating systems or commonly used software applications are examples of worms that may or may not need user action [1]. A scenario motivating the need for security in a mobile environment is discussed here. Suppose you want to communicate a secret to a friend. You will certainly want your message to reach your friend safely, without any interference or changes, as it passes through the communication medium. You should be assured that your friend received your message exactly as sent and that both of you are safe within the environment. The same scenario can be extended to the military and to submarines, where secure message passing is the main focus. The security threats derived from these scenarios include eavesdropping, i.e., overhearing the conversation using attacks such as man-in-the-middle, and hosts that pass on worms, which are common nowadays. So here, location and context awareness and access control with greater security are provided through the

Jargon instant messenger, which is developed using the Jabber server and the Timber database.

III. SECURITY MECHANISMS IN IM

A. Jabber properties: The Jabber XMPP protocol overcomes the drawbacks of the existing protocols, with support for asynchronous message relaying, transport layer security, an unlimited number of contacts, bulletins to all contacts, standardized spam protection, and audio and video. Support for groups and access for non-members are made optional, which enhances the privacy of communication between two members. The point here is therefore that Jabber is an excellent protocol for an instant messenger, one that no other protocol can replace for secured data transfer between various computers, including audio- and video-based data transfer [2].
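For illustration only, the sketch below shows what a minimal XMPP (Jabber) client of the kind discussed here might look like using the SleekXMPP library; the JID, password and recipient are placeholders, and the library choice is an assumption rather than this paper's implementation.

```python
# Minimal sketch: an XMPP (Jabber) client that logs in over a TLS-protected
# stream and sends one message. Account details are placeholders.
import sleekxmpp

class JargonClient(sleekxmpp.ClientXMPP):
    def __init__(self, jid, password, recipient, body):
        super(JargonClient, self).__init__(jid, password)
        self.recipient = recipient
        self.body = body
        self.add_event_handler("session_start", self.start)

    def start(self, event):
        self.send_presence()
        self.get_roster()
        # The message travels over the encrypted stream to the Jabber server.
        self.send_message(mto=self.recipient, mbody=self.body, mtype="chat")
        self.disconnect(wait=True)

client = JargonClient("alice@jabber.example", "secret",
                      "bob@jabber.example", "hello from Jargon")
if client.connect():
    client.process(block=True)
```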

B. XML security in Jabber: Instant messenger traffic cannot simply be blocked at the firewall by closing the native IM port. Many instant messengers, such as AIM, Yahoo and MSN, are "port-agile": when their native port is closed, they can open another port instead. In this paper, we focus on some of the solutions offered by Jabber/XMPP to define specific services, block specific features, log IM access and communication, and block by categories [3].

C. Jabber security in IM:

The Jabber XMPP protocol, which was standardized by the IETF, is an open standard that can be accessed using a unique Jabber ID. There are other open-standard instant messaging protocols, such as Gale, IRC and PSYC, which are compared below to highlight the strengths of the Jabber protocol.

• The GALE protocol can only support transport layer security using public/private keys and can form groups.

• IRC can support asynchronous message relaying, but only via a memo system that differs from the main system. IRC had transport layer security depending on individual support, with simplistic routing of one-to-many multicast, a medium level of spam protection and support for groups.

• PSYC paved the way for all kinds of technology support, including asynchronous message relaying, transport layer security, an unlimited number of contacts, bulletins to all contacts, customized multicast, and a higher level of spam protection. But the major drawbacks of PSYC were that it allowed channels for non-members to access the instant messenger and that it had no support for audio and webcam/video.

Table I: Comparison of properties of Jabber with other protocols

Protocol | Transport layer security | Groups and non-members          | Spam protection | Audio/Video
Gale     | Yes                      | Yes                             | No              | No
IRC      | Yes                      | Yes                             | Medium level    | No
PSYC     | Yes                      | Yes, but allows non-members     | Higher level    | No
Jabber   | Yes                      | Yes, does not allow non-members | Higher level    | Yes

Figure I: Comparative study of Jabber with other open-sourced protocols.

D. Timber database properties: Timber is an XML database system used to store information and to process queries over semi-structured data. An algebraic underpinning is the basic principle of this XML database. Data exchange and integration are all expressed in XML documents and XML DTDs. Our instant messenger uses this database system for experimental evaluation over several XML datasets with a variety of queries [4][5].

E. XML security in Timber: The integration of database resources in a pervasive environment creates the need for database-level security enhancement. The security mechanisms in this project are implemented using the XML and web services provided by the Timber 2.1 database, so that the security problems are solved for various platforms in the pervasive environment [6].

F. XML security in access control: Access control is critical for protecting against security threats in computer environments. Authorizing, guaranteeing and managing access are the targets of access control strategies [7]. For example, consider an organization's access control strategies using the administration models in the following diagram. Figure II: Comparing administration models with access control strategies.

XACML 2.0, the access control markup language, defines schemas and namespaces for access control strategies in XML, with the protected objects themselves identified in XML. This language has features for digital signatures, multiple resources, hierarchical resources, Role Based Access Control, the Security Assertion Markup Language and other privacy enhancement mechanisms.
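As a simplified illustration of the role-based access control idea that such policies express (not XACML itself, and not this paper's implementation), the sketch below checks a user's roles against the permissions a resource requires; all names and roles are hypothetical.

```python
# Minimal sketch: a role-based access control check of the kind an XACML
# policy would express declaratively. Roles, users and actions are made up.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "member": {"read", "write"},
    "guest":  {"read"},
}

USER_ROLES = {
    "alice": {"admin"},
    "bob":   {"member"},
}

def is_allowed(user: str, action: str) -> bool:
    """Grant the action if any of the user's roles carries that permission."""
    roles = USER_ROLES.get(user, set())
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in roles)

assert is_allowed("alice", "delete")      # admin may delete
assert not is_allowed("bob", "delete")    # member may not
assert not is_allowed("mallory", "read")  # unknown users get nothing
```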

G. End-to-end encryption: End to end means that the data is protected from the point at which it is captured, through the intermediary processes it passes, until it reaches the destination point. Encryption transforms information using an algorithm that makes the information unreadable without the keys [8]. Consider the encryption zones applied to our Jargon instant messenger. Figure III: Encryption zones.
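To illustrate end-to-end protection in which the relaying server never sees plaintext, the sketch below uses public-key authenticated encryption from the PyNaCl library; this library and the message are illustrative assumptions, not the encryption scheme specified in the paper.

```python
# Minimal sketch: client 1 encrypts directly to client 2's public key, so the
# Jabber server in the middle only ever relays ciphertext. Uses PyNaCl.
from nacl.public import PrivateKey, Box

# Each client generates its own key pair and publishes only the public key.
client1_key = PrivateKey.generate()
client2_key = PrivateKey.generate()

# Client 1 encrypts for client 2 (authenticated with client 1's private key).
sending_box = Box(client1_key, client2_key.public_key)
ciphertext = sending_box.encrypt(b"meet at 0900")   # what the server relays

# Client 2 decrypts (and authenticates the sender) on the other side.
receiving_box = Box(client2_key, client1_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"meet at 0900"
```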


IV. JARGON INSTANT MESSENGER

The focus here is the security enhancement of the newly designed 'Jargon Instant Messenger'. The enhancements include support for the server-side database Timber 2.1 and for Jabber/XMPP servers, which act as gateways to other IM protocols. The list of users in this system is maintained in a buddy list. The attempt is to combine the many disparate protocols inside the IM server application, to provide secure transmission, and to maintain the state of users with the help of Timber, which has built-in features for XML-stream-based cost estimation and query optimization [9]. Jargon Instant Messenger is an all-new messenger for secured communication within an organization that controls data and security breaches with the security algorithms discussed above. The whole messenger is to be developed on XML technology, so that risk management is the main focus.

i. Server implementation:

The main function of the instant messenger is to connect the two clients who would like to chat to the server; once they are both connected, a path between them is established, and subsequent messages travel over a peer-to-peer connection. Security algorithms such as access control and location awareness are included at all levels to prevent security and data breaches. The diagram below describes the "Jargon instant messenger environment for a secure chat" [10]. Figure IV: Server implementation for Jargon.

ii. Client implementation:

End-to-end encryption is implemented across the various zones of a client communicating with another client elsewhere. The clients have the following user interfaces, designed on the J2ME platform using the Java Mobile Information Device Profile (MIDP) and the related Java specifications. The interfaces are browser based and can support any kind of mobile device, so the application is device independent [11][12]. Figure V: Client communicating through the Jabber server.

iii. Architecture of Jargon Instant Messenger:

The messenger's overall structure is represented here. The clients on different computers are connected through plug-ins, which establish connections from the browser to the server. The user profiles are recognized using Java MIDP applications, and the users' details are encrypted. The profiles are forwarded to the Jabber server, where various attempts are made to integrate the features of other existing protocols such as SIP, WAP and HTTP. The Timber XML database is the back-end storage for profile information, and gateways to connect to other networks are available [13][14].

Figure VI: Overall architecture

Communication in IM (diagram): Client 1 and Client 2 connect through Java Web Start browser plug-ins and middleware for encryption and access control to the Jabber server and the Timber XML database.

Architecture (diagram): Jabber clients (PDA, PC, mobile phone) with plug-ins, Jabber server, Timber XML database, SIP/XMPP/WAP gateway, and the mobile phone network.

iv. Evaluation of client and server:

The evaluation scheme for the client and the server shows how each works independently in the Jargon Instant Messenger. Figure VII: Client evaluation.

Client evaluation states (diagram): initial state; waiting for the server to respond; waiting for the server to verify username and password; ready; waiting for the server to list all users (displayed in a new pop-up window); waiting for the server to respond to the challenge; waiting for the peer to respond to the challenge; viewing message integrity; user A and user B authenticated and peer connection established.

Figure VIII: Server evaluation

Server evaluation states (diagram): server steady state; waiting for the client to respond to the challenge along with its login credentials; verifying user authentication online against the Timber XML database; successful peer-to-peer connection with data transmission allowed.

V. DISCUSSION

This messenger mainly targets the attempt to connect the Jabber server with the Timber XML database, which no other existing messenger has done before. The messenger implements the unique open-sourced Jabber protocol; this decision was made on the basis of the comparative study of the already available instant messaging protocols. The whole messenger is XML based, so that XML security using Jabber is its main focus. Extensions for file transfer and for video and audio communication are under study. This system will be helpful for military purposes, for message passing in submarines, and for profit or non-profit organizations where data communication must not suffer any data or security breaches.

REFERENCES

[1] Byeong-Ho Kang, "Ubiquitous Computing Environment Threats and Defensive Measures," School of Computing and Information Systems, University of Tasmania.
[2] Mohy Mahmoud, Sherif G. Aly, Ahmed Sabry and Mostafa Hamza, "A Jabber Based Framework for Building Communication Capable Java Mobile Applications," Department of Computer Science and Engineering, The American University in Cairo, Egypt.
[3] Linan Zheng, "Instant Messaging: Architectures and Concepts," Institute of Communication Networks and Computer Engineering, University of Stuttgart.
[4] Yuqing Wu, Stelios Paparizos and H. V. Jagadish, "Querying XML in TIMBER."
[5] Yuqing Wu, Jignesh M. Patel and H. V. Jagadish, "Estimating Answer Sizes for XML Queries," University of Michigan, MI, USA.
[6] Stelios Paparizos, Shurug Al-Khalifa, Srivastava and Yuqing Wu, "Grouping in XML," University of Michigan, USA.


[7] Yi Zheng, Yongming Chen and Patrick C. K. Hung, "Privacy Access Control Model with Location Constraints for XML Services."
[8] http://www.cgisecurity.com/ws/WestbridgeGuideToWebServicesSecurity.pdf
[9] The JavaSpaces Specification, Sun Microsystems, Inc.
[10] Mohd Hilmi Hasan, Zuraidah Sulaiman, Nazleeni Samiha Haron and Azlan Fazly Mustaza, "Enabling Interoperability between Mobile IM and Different IM Applications Using Jabber," CIS Department, University Teknologi, Malaysia.
[11] Andreas Ekelhart, Stefan Fenz, Geront Goluch, Markus Steinkellner and Edgar Weippl, "XML Security - A Comparative Literature Review."
[12] Mobile Information Device Profile Final Specification 1.0a, JSR-000037, Sun Microsystems, Inc.
[13] Qiao Xinxin, "The Instant Messaging Personalization Using Behaviour Pattern: A Review of Theories and Research Based on Instant Messaging," Department of Industrial Design, Zhejiang University of Technology, China.
[14] "Jabber/J2ME Based Instant Messaging Client for Presence Service," Master Thesis in Media Computer Science, University of Applied Science Stuttgart, Hochschule der Medien.

30. Cloud Computing – The Virtual World
¹ S. Sabari Rajan, Dept. of MCA
² S. Anand, Asst. Prof., Dept. of MCA

Department of MCA, JJ College of Engineering and Technology,

Tiruchirappalli, Tamil Nadu, India

[email protected]

Abstract — This paper describes how the real effectiveness of the cloud comes from its core, called 'virtualization', which provides abstraction for the user. It explains clouds and their business benefits, and outlines the cloud architecture and its major components. It shows how a business can use cloud computing to foster innovation and reduce IT costs [1]. "Cloud is designed to be available everywhere, all the time. By using redundancy and geo-replication, the cloud is designed so that services remain available even during hardware failures, including full data centre failures." Cloud computing offers a new, better and more economical way of delivering services, and all stakeholders will have to embrace the dramatic changes to exploit the opportunities and avoid becoming irrelevant. Cloud-computing-based technologies will enable the borderless delivery of IT services based on actual demand, keeping costs competitive. It promises different things to different players in the IT ecosystem, and offers a radical way of collaborating and of delivering applications and content [1].

Keywords: Cloud computing, redundancy, geo-replication, IT services

I. INTRODUCTION

The cloud computing industry represents a large ecosystem of many models, vendors, and market niches. Today, organizations of all sizes are investigating cloud computing and the benefits it can bring to their company. Cloud Computing,[2] the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about over provisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or under provisioning for one that becomes wildly popular, thus missing potential customers and revenue. Cloud Computing refers to both the applications


delivered as services over the Internet and the hardware and systems software in the data centres that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The data centre hardware and software is called the Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, it is called a Public Cloud; the service being sold is Utility Computing. The term Private Cloud refers to the internal data centres of a business or other organization that are not made available to the general public. [2]

II. WHAT IS CLOUD?

“Cloud” is the aggregation of servers, low-end computers and storage hosting the programs and data. It is a metaphor for the internet, based on how it is depicted in computer network diagrams, and is an abstraction for the complex infrastructure it conceals.

A cloud can:
• Host a variety of different workloads, including batch-style back-end jobs and interactive, user-facing applications.
• Allow workloads to be deployed and scaled out quickly through the rapid provisioning of virtual or physical machines.
• Support redundant, self-recovering, highly scalable programming models that allow workloads to recover from many unavoidable hardware/software failures.

III. WHAT IS CLOUD COMPUTING?

[3]Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [3].

[4]Cloud computing is a term used to describe both a platform and type of application. Servers in the cloud can be physical machines or virtual machines. [4]

Cloud computing offers the ability to access software or information that can be delivered on-demand, over the internet, without the need to store it locally.

Advanced clouds typically include other computing resources such as SANs, network equipment, firewall and other security devices. Cloud computing also describes applications that are extended to be accessible through the Internet. These cloud applications use large data centers and powerful servers that host Web applications and Web services. Anyone with a suitable Internet connection and a standard browser can access a cloud application.

IV. EVOLUTION OF CLOUD COMPUTING

Figure 1. Latest Evolution of Hosting

V. ESSENTIAL CHARACTERISTICS

On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.

Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling: The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data centre). Examples of resources include storage, Processing, memory, network bandwidth, and virtual machines.

Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the


capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured Service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and the consumer of the utilized service.

VI. SERVICE MODELS/LAYERS OF CLOUD COMPUTING

Figure 2. Cloud Platform as a Service (PaaS).

Cloud platform services or "Platform as a Service (PaaS)" deliver a computing platform and/or solution stack as a service, often consuming cloud infrastructure and sustaining cloud applications. It facilitates deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers

Cloud Software as a Service (SaaS): SaaS is hosting applications on the Internet as a service (both consumer and enterprise). The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various

client devices through a thin client interface such as a web browser. The consumer does not manage or control the underlying cloud infrastructures. The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider.

Cloud Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems; storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

VII. HOW CLOUD COMPUTING WORKS

[5]In a cloud computing system, there's a significant workload shift. Local computers no longer have to do all the heavy lifting when it comes to running applications.

The network of computers that make up the cloud handles them instead. Hardware and software demands on the user's side decrease. The only thing the user's computer needs to be able to run is the cloud computing system's interface software, which can be as simple as a Web browser, and the cloud's network takes care of the rest.[5]

EC2 Site Architecture Diagrams – Cloud Computing Images

Depending on the computer resource requirements and budget, RightScale provides the user with the flexibility to create a custom architecture that provides the


performance and failover redundancy necessary to run the site in the Cloud.

Use one of the "All-in-one" Server Templates, such as the LAMP (Linux, Apache, MySQL, PHP) Server Template, to launch a single server that contains Apache as well as the application and the database.

Figure 4. All-in-one Single Server Setup

Figure 5. Basic 4-Server Setup with EBS

This is the most common architecture on the cloud. Each front-end server acts as a load balancer and application server. There is a master and a slave database for redundancy and failover, and backups of the database are saved as EBS snapshots.

Figure 6. Basic 4-Server Setup without EBS

If the persistent storage of EBS is not required, a standard MySQL-S3 setup can be used instead, which saves regular backups to the S3 bucket. Each front-end server acts as a load balancer and application server. Additionally, there is a master and a slave database for redundancy and failover, and backups of the database are saved as gzipped (*.gz) dump files. The same non-EBS, MySQL-S3 database setup can be used in other architectures as well.

Figure 7. Intermediate 6-Server Setup

In the intermediate architecture, the two front-end servers are used strictly as load balancers, so the number of application servers can be expanded.

VIII. ROLE ENTITIES: CLOUD COMPUTING

The Enablers: Enablers provide resources that drive and support the creation of solutions in terms of both hardware and software that the consumer utilizes. Following are the buzz words in the enabler’s arena:

Consolidation and Integration: With the markets changing rapidly, it is imperative for players to find new opportunities. Some of the recent acquisitions highlight the clear horizontal expansion across hardware and software towards services.

Ubiquity and Virtualization: The fact that the consumer demands seamless access to content impacts both the enablers and the delivery. This mandates higher investment in product development but does not necessarily allow a longer concept-to-market cycle. To support the increased demand and adoption of cloud computing, the enablers are aligning their resources to provide multi-tenanted architectures and virtualization technologies, along with support for highly scalable and elastic services. Virtualization technologies span platforms, resources and applications, and the likes of VMware's mobile virtualization platform are steps in that direction.


Environmental Sustainability and Data Centers: Environmental awareness will further drive enterprises towards cloud computing, as it allows a considerable reduction in energy costs. Gartner estimates that over the next five years, most enterprise data centers will spend as much on energy (power and cooling) as they do on hardware infrastructure. Cloud-enabling technologies like virtualization and server consolidation can help enterprises reduce energy costs by as much as 80%. Cloud computing is also driving the usage of netbooks and laptops that are enhanced for mobility, with reduced computing and storage capacity. Therefore, there will be an increased demand for transferring processing and storage to data centers.

Marginalization of Fringe Players: Desktop based utilities and tools like MS Office and Norton antivirus will see a reduction in their installed user base and will ultimately be marginalized, as the same services will be available online. The traditional fringe players will have to re-invent themselves to align with the new modes of delivery, warranted by the cloud. Adobe is already providing an online version of its graphics editing program called Photoshop. Appistry is one of the more innovative companies and has recently launched the Cloud IQ platform, offering enterprises the capability to port nearly any enterprise application to the cloud.

The Delivery Agents: Delivery agents are value added resellers of the capabilities offered by the enablers. Following are the key changes that we foresee in this domain:

Collaboration, Partner-Driven Work Environments: Industry alliances are being forged, and it is important for the delivery agents to weigh the pros and cons before investing in the platforms. In the retail space Microsoft and Google can emerge as dominant players due to the inertia keeping consumers tied to their suites of products. Supporting them will be hardware players and virtualization providers like Citrix and VMware.

Death of the System Integrators: System integrators, as we know them today, will have to take a second look at their model of operation. With the rising popularity of subscription-based applications like Siebel On-Demand and SalesForce.com, the demand for customized solutions will decrease, taking away with it the biggest market of the SIs. Once cloud computing technology reaches critical mass, there will be an increased demand from enterprises to migrate data, applications and content to the cloud.

Last Mile Connectivity: Internet service providers (ISPs) and last-mile access suppliers will have to ramp up their offerings rapidly to meet the increasing requirements of bandwidth-hungry content and applications, with fiber being the predominant technology for last-mile access.

New Pricing and Delivery Models: Sales channels will also have to evolve to provide ubiquitous delivery models and the revenues are going to be long-tailed as the sales model will shift to a subscription based service, which will imply that customer retention and loyalty becomes all the more important.

Piracy: With the onset of the cloud, the users will no longer be required to download or install applications in the traditional sense. In the online world, controlled access implies that piracy will become increasingly difficult, if not impossible. Likewise with online gaming, the problem of pirated copies of the games being spread around, resulting in millions of dollars worth of revenue loss can be curbed.

The Consumers: Consumers are the demand side of the cloud equation, and the following are the trends for them:

Convergence, On-Demand: The retail customer will now, more than ever, come to expect on demand everything - be it multimedia content, applications, gaming or storage. AMD’s new campaign ‘The Future is Fusion’ is again reflective of the changing times. For the retail user, it is all about bringing together convergent multimedia solutions on any screen supported with advanced graphics capabilities; for the enterprise user it is delivering enhanced server and powerful virtualization capabilities.

Collaboration and Social Networking: Cloud based platforms like Facebook and Twitter will become destinations for collaboration, e-commerce and marketing. Enterprises are already


planning to listen to the voice of the customer using such tools. Collaboration and virtual workspace solutions will see increased investments. A key player in this space is WebEx, acquired by Cisco in 2007 for $3.2 billion – again an example of a hardware player moving to the software cloud domain. Another promising technology is IBM’s Bluehouse, based on Lotus Notes.

Back to Core Competencies: The cloud enables businesses to focus on their core competency and to cloud-source the IT estate, allowing consumers to transfer risk; 'my problem' effectively becomes the provider's problem. A look at an IDC study makes it clear that businesses want the cloud because of the cost benefit.

Decentralization of Management: The traditional view of management and governance of IT resources through standards and frameworks like ITIL, Sarbanes-Oxley, HIPAA, etc., will change. As much as the technological impacts, the challenge for enterprises will also be to manage the expectations of employees working in a decentralized and distributed manner. Enterprises need to clearly understand and estimate the risks of losing visibility and control over critical data.

IX. DEPLOYMENT MODELS

Private cloud: The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

Community cloud: The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Public cloud: The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud: The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.

X. KEY AREAS OF CLOUD COMPUTING PAYBACK

Hardware Payback: The hardware savings come from improving server utilization and decreasing the number of servers. The typical server in a datacenter is running a single application and is being utilized on average from 5% to 25% of its capacity. As systems are consolidated and virtualized in a cloud environment the number of servers required can drop significantly and the utilization of each server can be greatly increased resulting in significant savings in hardware costs today and the avoidance of future capital investment.
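As a rough illustration of the consolidation arithmetic described above, the sketch below estimates how many virtualized hosts could absorb a lightly used server estate; only the 5-25% utilization range comes from the text, while the 10% current and 60% target utilizations and the 100-server estate are assumed example values.

```python
# Rough sketch of server consolidation savings. The 10% and 60% utilization
# figures and the 100-server estate are assumptions for illustration only.
import math

physical_servers = 100
avg_utilization = 0.10        # within the 5-25% range quoted above
target_utilization = 0.60     # assumed ceiling for consolidated hosts

total_load = physical_servers * avg_utilization      # "server-equivalents" of work
hosts_needed = math.ceil(total_load / target_utilization)

print(f"{physical_servers} servers -> about {hosts_needed} consolidated hosts")
# -> 100 servers -> about 17 consolidated hosts
```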

Software payback: The two major components of software costs are virtualization software that enables consolidation and more importantly the service management software that enables visibility, control and automation of your environment to ensure efficient service delivery.

Automated provisioning: Automated provisioning provides the ability to provision systems without the long and error prone manual process used in non-automated environments. Automated provisioning tools allow a skilled administrator to record and test provisioning scripts for deploying cloud services. These scripts can then be used by automated tools to consistently provision systems without the human error introduced through the use of manual provisioning processes. This greatly reduces the amount of time required to provision systems and allows skilled resource to be leveraged over many more systems, while improving the overall quality of the provisioning process.

Productivity improvements: For achieving productivity improvements two projects should be implemented; a user friendly self-service interface that accelerates time to value and a service catalog which enables standards that drive consistent service delivery. These two projects will enable end users to request the services they need from a service catalog and provide an automated


approval process to enable the rapid delivery of new services.

System administration: In a cloud environment there are fewer physical servers to manage, but the number of virtual servers increases. Virtual systems are more complex to administer, and this can lead to higher administrative costs. That is one of the reasons a strong service management system is so critical to an efficient and effective cloud environment. With the proper service management tools for a highly virtualized environment, the administration savings can range from 36% to 45% [6].

Table 1. Summary of savings and costs

Hardware
Saving metrics: Reduction in the number of servers, which drives reductions in server depreciation cost, energy usage and facility costs.

Software
Saving metrics: Reduction in the number of OS licenses.
Cost metrics: Cost of virtualization software; cost of cloud management software.

Automated Provisioning
Saving metrics: Reduction in the number of hours per provisioning task.
Cost metrics: Training, deployment, administration and maintenance cost for the automation software.

Productivity
Saving metrics: Reduction in the number of hours waiting for images per project.

System Administration
Saving metrics: Improved productivity of administration and support staff (more systems supported per administrator).

XI. LEADING COMPANIES

[7] Google is the perceived leader for setup, infrastructure and application management in a public cloud, and IBM is the top choice of developers for private clouds, according to Evans Data's recently released Cloud Development Survey 2010. Over 40% of developers cited Google as the public cloud leader and almost 30% identified IBM as the top private cloud provider.

Google already owns search and is working on owning the cloud [7]. With Gmail, Google Docs, Google Calendar, and Picasa in its line-up, Google offers some of the best known cloud computing services available [9]. It also offers some lesser known cloud services targeted primarily at enterprises, such as Google Sites, Google Gadgets, Google Video, and most notably, the Google Apps Engine. The Apps Engine allows developers to write applications to run on Google's servers while accessing data that resides in the Google cloud as well as data that resides behind the corporate firewall. While it has been criticized for limited programming language support, the Apps Engine debuted Java and Ajax support in April, which may make it more appealing to developers.

Although it was somewhat late to the cloud computing party, IBM launched its “Smart Business” lineup of cloud-based products and services in June. The company is focusing on two key areas: software development and testing, and virtual desktops. But the company makes it clear that the cloud model has much wider-reaching implications, noting that “cloud computing represents a true paradigm shift in the way IT and IT-enabled services are delivered and consumed by businesses.”

Leading cloud pioneer Amazon offers several different in-the-cloud services. The best known is Amazon Elastic Compute Cloud, or Amazon EC2, which allows customers to set up and access virtual servers via a simple Web


interface. Fees are assessed hourly based on the number and size of virtual machines you have ($.10 -$.80 per hour), with an additional fee for data transfer. EC2 is designed to work in conjunction with Amazon’s other cloud services, which include Amazon Simple Storage Service (S3), Simple DB, Cloudfront, Simple Queue Service (SQS), and Elastic MapReduce.

The software giant’s ambitious Azure initiative has a solution for every Microsoft constituency, from ISVs to Web developers to enterprise clients to consumers. Microsoft is investing a king’s fortune in a network of $500 million, 500,000-square-feet data centres around the country. The facilities will presumably form the physical backbone of the cloud network. If all goes according to plan, Microsoft will not only control the software but also the physical infrastructure that delivers that software.

More than 59,000 companies use Salesforce.com’s Sales Cloud and Service Cloud solutions for customer relationship management, which has helped make it one of the most well-known and most successful cloud computing companies. In addition, through Force.com, it allows developers to use the Salesforce.com platform to develop their own applications. Users can also purchase access to the Force.com cloud infrastructure to deploy their applications.

One provider has been optimizing the cloud for over ten years, building a global computing platform that helps make cloud computing a reality.

Nimbula, the Cloud Operating System Company, was founded by the team that developed the industry-leading Amazon EC2 service. Nimbula delivers a new class of cloud infrastructure and services system that uniquely combines the flexibility, scalability and operational efficiencies of the public cloud with the control, security and trust of today's most advanced data centres.

[10] Security and availability are the most critical elements in any system that contains your business data. NetSuite has been developed

and implemented with multiple layers of data redundancy for comprehensive security and business continuity. [10]

CloudShare can be used to create, deploy, manage, and share individual copies of IT with end users on-demand, in seconds. There's no install or recoding required. Beyond basic remote access, hosted machines, and "virtual labs," CloudShare supports private label branding, user hierarchy and permissions, analytics, management features, billing, and other line-of-business needs.

XII. NASA AND JAPAN ANNOUNCE CLOUD COMPUTING COLLABORATION

[10] NASA and Japan’s National Institute of Informatics (NII) have announced plans to explore interoperability opportunities between NASA’s Nebula Cloud Computing Platform and Japan’s NII Cloud Computing Platform.

NASA Nebula and NII’s Cloud are built entirely of open-source components and both employ open-data application programming interfaces. NASA and NII will collaborate on open source reference implementations of interoperable cloud services. [10]

XIII. CLOUD COMPUTING PROS

The great advantage of cloud computing is “elasticity”: the ability to add capacity or applications almost at a moment’s notice. Companies buy exactly the amount of storage, computing power, security and other IT functions that they need from specialists in data-centre computing. The metered cost, pay-as-you-go approach appeals to small- and medium-sized enterprises; little or no capital investment and maintenance cost is needed. IT is remotely managed and maintained, typically for a monthly fee, and the company can let go of “plumbing concerns”. Since the vendor has many customers, it can lower the per-unit cost to each customer. Larger companies may find it easier to manage collaborations in the cloud, rather than having to make holes in their firewalls for contract research organizations. SaaS deployments usually take less time than in-house ones, upgrades are easier, and users are always using the most recent version of the application. There may be fewer bugs because


having only one version of the software reduces complexity.

Dependability / Redundancy: Cloud computing offers cost effective solutions that are dependable, and reliable.

Flexibility / Scalability: Cloud computing solutions give clients the ability to choose the resources they need in a way that can grow over time or instantaneously as needs change.

Levelled Playing Field – Democratization: Small corporations / organizations now have the same capabilities for computing and information access, as larger ones.

Meets Client Expectations for Innovation and Options: Users no longer have any adherence or loyalty to traditional technologies or approaches. Users will embrace cloud computing if it provides the needed functionality at the right price.

Security: Often the primary concern of a client is whether or not their systems will be secure in the cloud. Cloud computing providers recognize that security is paramount and focus a lot of attention on ensuring security.

Identity Management and Access Control: Cloud computing solutions must and do provide advanced solutions for access control and identity management measures that ensure users and uses of the data are legitimate.

XVIII. SECURITY BENEFITS OF CLOUD COMPUTING

Cloud computing offers many advantages and some disadvantages while offering its servers to be used by customers. Security is both an advantage and a disadvantage of the cloud-based environment. Because of their immense size and their skilled, dedicated professionals, cloud providers can offer improved security, privacy and confidentiality of your data.

Security Because of Large Scale: Large-scale implementation of anything is much cheaper than small-scale implementation. Cloud computing servers provide their services to a large number of businesses and companies. It is easier and more economical for them to make sure that their systems are secure from hackers, accidents, and bugs. They can easily afford all types of defensive measures like filtering, patch management, and cryptography techniques.

Security as Market Demand: Today everyone is searching for foolproof security measures to ensure the safety of important data. Security has ultimately become one of the core factors determining which cloud provider to choose. This has persuaded cloud computing vendors to put special emphasis on security from the very beginning of this technology.

More Chances of Standardization and Collaboration: The majority of the servers will be owned and kept by service providers rather than by individual companies. With this there are more chances for standardization and collaboration for improving security services. This leads to more uniform, open and readily available security services market.

Improved Scaling of Resources: Cloud computing service providers can easily relocate resources and data for filtering, traffic controlling, verification, encryption and other security measures. This ability provides more resilience against security threats.

Advantage of Concentrated Resources: The concentration of resources is potentially dangerous for security; at the same time, however, it can be used to improve security in other ways. It allows service providers cheaper physical access and security control. The resources saved in terms of money, time and location can be reallocated to improve security.

Evidence-gathering and Investigation: Cloud computing services offer quick evidence gathering for forensic and investigation purpose. In traditional systems this is attained by turning your server offline, but cloud based servers don’t need to be turned down. The customers can now store logs more cost-effectively, allowing comprehensive logging and increasing performance.

XIX. CLOUD COMPUTING CONS


In the cloud the user may not have the kind of control over the data or the performance of the applications that is needed, or the ability to audit or change the processes and policies under which users must work. Different parts of an application might be in many places in the cloud. Complying with federal regulations such as Sarbanes-Oxley, or with an FDA audit, is extremely difficult. Monitoring and maintenance tools are immature. It is hard to get metrics out of the cloud and general management of the work is not simple. There are systems management tools for the cloud environment, but they may not integrate with existing system management tools, so you are likely to need two systems.

Cloud customers may risk losing data by having them locked into proprietary formats and may lose control of data because tools to see who is using them or who can view them are inadequate. Data loss is a real risk. Cloud computing is not risky for every system. Potential users need to evaluate security measures such as firewalls, and encryption techniques and make sure that they will have access to data and the software or source code if the service provider goes out of business.

It may not be easy to tailor service-level agreements (SLAs) to the specific needs of a business. Compensation for downtime may be inadequate and SLAs are unlikely to cover concomitant damages, but not all applications have stringent uptime requirements. It is sensible to balance the cost of guaranteeing internal uptime against the advantages of opting for the cloud. It could be that your own IT organization is not as sophisticated as it might seem.

Calculating cost savings is also not straightforward. Having little or no capital investment may actually have tax disadvantages. SaaS deployments are cheaper initially than in-house installations and future costs are predictable; after 3-5 years of monthly fees, however, SaaS may prove more expensive overall. Large instances of EC2 are fairly expensive, but it is important to do the mathematics correctly and make a fair estimate of the cost of an “on-premises” operation.

Standards are immature and things change very rapidly in the cloud. All IaaS and SaaS providers

use different technologies and different standards. The storage infrastructure behind Amazon is different from that of the typical data centre (e.g., big Unix file systems). The Azure storage engine does not use a standard relational database; Google’s App Engine does not support an SQL database. So you cannot just move applications to the cloud and expect them to run. At least as much work is involved in moving an application to the cloud as is involved in moving it from an existing server to a new one. There is also the issue of employee skills: staff may need retraining and they may resent a change to the cloud and fear job losses. Last but not least, there are latency and performance issues.

The Internet connection may add to latency or limit bandwidth. In future, programming models exploiting multithreading may hide latency [5]. Nevertheless, the service provider, not the scientist, controls the hardware, so unanticipated sharing and reallocation of machines may affect run times. Interoperability is limited. In general, SaaS solutions work best for non-strategic, non-mission-critical processes that are simple and standard and not highly integrated with other business systems. Customized applications may demand an in-house solution, but SaaS makes sense for applications that have become commoditized, such as reservation systems in the travel industry.

XX. BENEFITS AND RISKS

The shift of the computing stack provides an opportunity to eliminate complexities, cost and capital expenditure. The main benefits of cloud computing are therefore economies of scale through volume operations, pay-per-use through utility charging, a faster speed to market through componentisation and the ability to focus on core activities through the outsourcing of that which is not core (including scalability and capacity planning). Providing self-service IT while simultaneously reducing costs is highly attractive, and doing nothing carries a competitive risk. However, it should be remembered that we are in a disruptive transition at this moment. This transition creates commonly repeated concerns covering management, legal, compliance


and trust of service providers. A Canonical survey of 7,000 companies and individuals found that, while over 60 per cent of respondents thought that the cloud was ready for mission-critical workloads, less than 30 per cent were planning to deploy any kind of workload to the cloud. In addition to these transitional risks, there also exist the normal considerations for outsourcing any common activity, including whether the activity is suitable for outsourcing, what the second-sourcing options are and whether switching between providers is easy. Though service level agreements (SLAs) can help alleviate some concerns, any fears can only be truly overcome once customers can easily switch between providers through a marketplace. Until such markets appear, it is probable that many corporations will continue to use home-grown solutions, particularly in industries that have already expended capital to remove themselves from lock-in, which is widespread in the product world. While the cloud lacks any functioning marketplaces today, it is entirely possible that ecosystems of providers will emerge based on easy switching and competition through price and quality of service. In such circumstances, many of the common concerns regarding the uncertainty of supply will be overcome. The benefits of cloud computing are many and obvious, but there also exist the normal concerns associated with the outsourcing of any activity, combined with additional risks due to the transitional nature of this change.

XXI. FUTURE

Cloud is in the infancy stage: many companies are using cloud computing for small projects, but trust has not yet been established. Details such as licensing, privacy, security, compliance, and network monitoring need to be finalized before that trust can be realized.

Future educational uses: an expansion of Microsoft live@edu, more useful spending of technology budgets, classroom collaboration, offline web applications, Google Docs.

Future personal uses: no more backing up files to thumb drives or syncing computers together; services replace devices; a single hard drive for the rest of a person’s life, accessible anywhere with internet.

Expansion: resources are expected to triple by 2012, from $16 billion to $42 billion, and cloud computing is said to be the foundation of the next 20 years of IT technology.

XXII. CONCLUSION

Cloud computing is the fastest growing part of IT and provides tremendous benefits to customers of all sizes. Cloud computing promises different things to different players in the IT ecosystem. It offers a radical way of collaborating and of delivering applications and content. Because cloud services are simpler to acquire and to scale up or down, they provide a key opportunity for application and infrastructure vendors. Public clouds work well for some but not all applications, private clouds offer many benefits for internal applications, and public and private clouds can be used in combination. The economic environment is accelerating adoption of cloud solutions. It is important to realize that a complete shift to the cloud is not imminent, but enterprises will be better off with a long-term vision for technology, people, information, legality and security to leverage the capabilities offered by cloud computing.

XXIII. REFERENCES


[1] SETLabs Briefings, Vol. 7, No. 7, Gaurav Rastogi, Associate Vice President, and Amitabh Srivastava, Senior Vice President, Microsoft Technologies.

[2] Above the Clouds: A Berkeley View of Cloud Computing, Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy Katz, Andy Konwinski, Gunho Lee, David Patterson, Ariel Rabkin, Ion Stoica, and Matei Zaharia, February 10, 2009.

[3] The NIST Definition of Cloud Computing, Peter Mell and Tim Grance, Version 15, 10-7-09.

[4] International Supercomputing Conference, Hamburg, Germany, June 2009.

[5] http://communication.howstuffworks.com/cloud-computing.htm by Jonathan Strickland

[6] Cloud Computing Payback, Richard Mayo, Senior Market Manager, Cloud Computing, [email protected], and Charles Perng, IBM T.J. Watson Research Center, [email protected], November 2009.

[7] http://www.saasbuzz.com/2010/05/top-cloud-computing-vendors/, by saas-buzz on May 19, 2010 in Cloud Computing.

[8] http://blog.karrox.com/cloud-computing/

[9] http://www.nasa.gov/offices/ocio/ittalk/07-2010_open_source.html

[10] http://www.netsuite.com/portal/infrastructure/main.shtml

31. Conjugate Gradient Algorithms in Solving Integer Linear Programming Optimization Problems

Dr. G. M. Nasira

Assistant Professor in CA

Department of Computer Science,

Government Arts College, Salem,

[email protected]

S. Ashok Kumar

Assistant Professor, Dept. of CA,

Sasurie College of Engineering,

Vijayamangalam Tirupur,

[email protected]

M. Siva Kumar

Lecturer, Dept. of CA,

Sasurie College of Engineering,

Vijayamangalam, Tirupur,

[email protected]

Abstract – An Integer Linear Programming optimization problem is one in which some or all of the decision variables are required to be non-negative integers. Integer Linear Programming Problems are usually solved with the Gomory cutting-plane method or the branch-and-bound method. In this work we consider the implementation of Integer Linear Programming problems with conjugate gradient Artificial Neural Network algorithms, together with a newly proposed method of solving the Integer Linear Programming problem. The literature shows that Integer Linear Programming is widely used in solving optimization problems such as the transportation problem, assignment problem, travelling salesman problem and sequencing problem. Solving these problems analytically using traditional methods consumes more time. Applying artificial intelligence to this method allows the answers to be predicted at an acceptable rate.

Keywords - Artificial Neural Network, Conjugate Gradient, Integer Linear Programming, Optimization problems


I. INTRODUCTION

The Linear Programming Problem (LPP) is an optimization method appropriate for solving problems in which the objective function


and the constraints are linear functions of the decision variables. The constraints in a linear programming problem will be in the form of equalities or inequalities [2]. The Integer Linear Programming problem (ILPP) is an extension of linear programming where some or all of the variables in the optimization problem are restricted to take only integer values. The most popular methods for solving all-integer linear programs are the Gomory cutting-plane method and the branch-and-bound method [1].

The integer programming literature review shows that there exist many algorithms for solving integer programming problems. ILPP occurs most frequently in problems like transportation problems, assignment problems, travelling salesman problems and sequencing problems [5][7]. A new technique for solving the integer linear programming problem, the primal-dual method designed by Karthikeyan et al., is considered in this work [2]. The algorithm taken is a hybrid (i.e., primal-dual) cutting-plane method for solving all-integer Linear Programming Problems.

II. INTEGER LINEAR PROGRAMMING

Consider the following type of integer linear programming problem: maximize Z = CX subject to AX ≤ B, where A is the n×2 matrix of constraint coefficients

A = [a_11 a_12; a_21 a_22; … ; a_n1 a_n2],

B = (b_1, b_2, …, b_n)^T is the vector of right-hand sides, C = (c_1, c_2) and X = (x_1, x_2)^T, with a_i1, a_i2, b_i, x_1, x_2, c_1, c_2 ≥ 0 for i = 1, 2, …, n, and x_1, x_2 integers. The algorithm presented by Karthikeyan et al. [2] is found to have many advantages: it does not need to apply the simplex algorithm repeatedly, it requires no additional constraints as in the Gomory cutting-plane method, no branching of the ILPP into sub-problems is required, and the method overcomes the problem of degeneracy. Because of these advantages this primal-dual algorithm was taken for neural network implementation, which can provide faster solutions, handle huge volumes of data and save time compared with conventional systems.
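For very small instances of this form, the feasible integer points can simply be enumerated. The sketch below is only an illustration of the problem statement, not of the primal-dual algorithm; the coefficient values and the search bound are hypothetical.

```python
# Minimal sketch: brute-force solution of a tiny ILPP of the form
# maximize Z = c1*x1 + c2*x2  subject to  A x <= B,  x1, x2 >= 0 integers.
# The coefficients and the search bound are illustrative assumptions.

A = [(2, 3), (4, 1)]      # constraint rows (a_i1, a_i2)
B = [12, 14]              # right-hand sides b_i
C = (3, 2)                # objective coefficients c1, c2

best_z, best_x = None, None
for x1 in range(0, 21):                 # bound chosen large enough for this toy instance
    for x2 in range(0, 21):
        if all(a1 * x1 + a2 * x2 <= b for (a1, a2), b in zip(A, B)):
            z = C[0] * x1 + C[1] * x2
            if best_z is None or z > best_z:
                best_z, best_x = z, (x1, x2)

print("optimal Z =", best_z, "at x =", best_x)
```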


III. ARTIFICIAL NEURAL NETWORKS

Artificial Neural Networks (ANN) process information in a way similar to the human brain. The network is composed of a large number of highly interconnected processing elements called neurons, which work in parallel to solve a specific problem. Neural networks take a different approach to solving a problem than conventional methods do.

Conventional methods use an algorithmic approach, where the method follows a set of instructions in order to solve the problem. The computer can solve the problem only if the specific steps that the method needs to follow are known in advance.

This restricts the problem-solving capability of conventional methods. A method would be much more useful if it could do things that we do not explicitly tell it to do, i.e. if it could be trained [13]. A detailed literature


survey was made in the area of neural networks, which motivated us to apply this technique to solve this problem [7].

A. BACKPROPAGATION ALGORITHM

A Back Propagation network learns by example. You give the algorithm examples of what you want the network to do and it changes the network’s weights so that, when training is finished, it will give you the required output for a particular input. The backpropagation algorithm works as follows:

1. First apply the inputs to the network and work out the output; initially the weights are random numbers.

2.Next work out the error for neuron B. The error is

ErrorB = OutputB (1-OutputB) x (TargetB – OutputB)

3.Change the weight. Let W+AB be the new (trained) weight and WAB be the initial weight.

W+AB = WAB + (ErrorB x OutputA)

4.Calculate the Errors for the hidden layer neurons. Unlike the output layer we can’t calculate these directly (because we don’t have a Target), so we back propagate them from the output layer. This is done by taking the Errors from the output neurons and running them back through the weights to get the hidden layer errors.

ErrorA = OutputA (1 - OutputA) (ErrorB WAB + ErrorC WAC)

5. Having obtained the Error for the hidden layer neurons now proceed as in stage 3 to change the hidden layer weights. By repeating this method we can train a network of any number of layers.
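A minimal numerical sketch of the steps above for a single hidden neuron A feeding one output neuron B (so the ErrorC term drops out). The input, target and initial weights are assumed values, the sigmoid activation is an assumption, and, as in the text, no explicit learning rate appears:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One training step for a single hidden neuron A feeding one output neuron B.
# Input, target and initial weights are assumed values chosen for illustration.
x_in     = 0.5    # network input feeding neuron A
w_in_A   = 0.4    # weight from the input to A (assumed)
w_AB     = 0.3    # weight from A to B (assumed)
target_B = 1.0

# Step 1: forward pass with the current (initially random) weights
out_A = sigmoid(x_in * w_in_A)
out_B = sigmoid(out_A * w_AB)

# Step 2: output-layer error  ErrorB = OutputB (1 - OutputB) (TargetB - OutputB)
err_B = out_B * (1.0 - out_B) * (target_B - out_B)

# Step 3: weight change  W+AB = WAB + ErrorB * OutputA
w_AB = w_AB + err_B * out_A

# Step 4: hidden-layer error, back-propagated through W_AB (with one output
# neuron the ErrorC*WAC term vanishes; many implementations use the pre-update
# weight here, but the text's ordering is followed for illustration)
err_A = out_A * (1.0 - out_A) * (err_B * w_AB)

# Step 5: change the weight feeding the hidden neuron, as in stage 3
w_in_A = w_in_A + err_A * x_in

print("ErrorB=%.4f  new W_AB=%.4f  ErrorA=%.5f  new w_in_A=%.4f"
      % (err_B, w_AB, err_A, w_in_A))
```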

IV. CONJUGATE GRADIENT ALGORITHMS

The basic backpropagation algorithm

adjusts the weights in the steepest descent direction (negative of the gradient), the direction in which the performance function is decreasing most rapidly. It turns out that, although the function decreases most rapidly along the negative of the gradient, this does not necessarily produce the fastest convergence. In the conjugate gradient algorithms a search is performed along conjugate directions, which produces generally faster convergence than steepest descent directions [15].

A search is made along the conjugate gradient direction to determine the step size, which minimizes the performance function along that line. The conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations namely those whose matrix is symmetric and positive-definite. The conjugate gradient method is an iterative method [15].

Conjugate gradient is a method of accelerating gradient descent. In ordinary gradient descent, one uses the gradient of energy to find the steepest downhill direction, and then moves along that line to the minimum energy in that direction. Hence successive steps are at right angles. However, this can be very inefficient, as you can spend a lot of time zigzagging across an energy valley without making much progress downstream. With conjugate gradient, the search direction is chosen to be in a conjugate direction to the previous direction.

A. Powell - Beale Conjugate Gradient

In conjugate gradient algorithms the search direction is periodically reset to the negative of the gradient. The standard reset point occurs when the number of iterations is equal to the number of network parameters (weights and biases). One such


reset method was proposed by Powell and Beale. This technique restarts if there is very little orthogonality left between the current gradient and the previous gradient. This is tested with the following inequality

|g_{k−1}^T g_k| ≥ 0.2 ||g_k||^2

If this condition is satisfied, the search direction is reset to the negative of the gradient [7][8].

B. Fletcher - Reeves Conjugate Gradient

All the conjugate gradient algorithms start out by searching in the steepest descent direction that is in the negative of the gradient on the first iteration.

p_0 = −g_0

A line search is then performed to determine the optimal distance to move along the current search direction

x_{k+1} = x_k + α_k p_k

Then the next search direction is determined so that it is conjugate to previous search directions. The general procedure for determining the new search direction is to combine the new steepest descent direction with the previous search direction [15][16]

p_k = −g_k + β_k p_{k−1}

The various versions of the conjugate gradient algorithm are distinguished by the manner in which the constant β_k is computed. For the Fletcher-Reeves update the procedure is

β_k = (g_k^T g_k) / (g_{k−1}^T g_{k−1})

C. Polak-Ribière Conjugate Gradient

Another version of the conjugate gradient algorithm was proposed by Polak and Ribière [15][16]. As with the Fletcher-Reeves algorithm, the search direction at each iteration is determined by

p_k = −g_k + β_k p_{k−1}

For the Polak-Ribière update, the constant β_k is computed by

β_k = (Δg_{k−1}^T g_k) / (g_{k−1}^T g_{k−1})

This is the inner product of the previous change in the gradient with the current gradient divided by the norm squared of the previous gradient.
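The following sketch contrasts the three update rules on a pair of arbitrary example gradient vectors (the vectors themselves are assumptions; in the application described here the gradients would come from the network's error surface):

```python
import numpy as np

def beta_fletcher_reeves(g_new, g_old):
    # beta_k = (g_k^T g_k) / (g_{k-1}^T g_{k-1})
    return float(g_new @ g_new) / float(g_old @ g_old)

def beta_polak_ribiere(g_new, g_old):
    # beta_k = (delta g_{k-1}^T g_k) / (g_{k-1}^T g_{k-1}),  delta g_{k-1} = g_k - g_{k-1}
    return float((g_new - g_old) @ g_new) / float(g_old @ g_old)

def powell_beale_restart(g_new, g_old, threshold=0.2):
    # Restart (reset p_k to -g_k) when little orthogonality remains:
    # |g_{k-1}^T g_k| >= 0.2 * ||g_k||^2
    return abs(float(g_old @ g_new)) >= threshold * float(g_new @ g_new)

# Illustrative gradients at two successive iterations (assumed values)
g_prev = np.array([0.8, -0.3, 0.5])
g_curr = np.array([0.2, 0.1, -0.4])
p_prev = -g_prev                      # previous search direction

beta_fr = beta_fletcher_reeves(g_curr, g_prev)
beta_pr = beta_polak_ribiere(g_curr, g_prev)
p_fr = -g_curr + beta_fr * p_prev     # p_k = -g_k + beta_k p_{k-1}
p_pr = -g_curr + beta_pr * p_prev

print("beta_FR =", round(beta_fr, 4), " beta_PR =", round(beta_pr, 4),
      " restart =", powell_beale_restart(g_curr, g_prev))
```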

V. METHODOLOGY AND EFFORT

A. Simulation Environment

The primal dual algorithm was converted into a program. The language used was Turbo C. The data required for simulation and testing was collected and segregated. The data set was formed for testing and simulation separately. The data was simulated with MATLAB 7.0.4 on a computer with Intel Core 2 Duo processor with 2.53GHz speed, 1 GB of RAM and 160 GB of Hard disk capacity.

B. PROPOSED WORK

We solved Integer Linear Programming Problems analytically and obtained 110 data sets. Out of these 110 data sets we took 100 for training purposes and the remaining 10 were used for simulating the network with new inputs to predict the output. Each data set consists of 8 input variables, namely a11, a12, b1, a21, a22, b2, c1, c2, and 3 output variables, namely x1, x2 and Z.


The designed neural network was trained with the 100 data sets formed. After the training process the network was simulated with the 10 testing data sets that were kept reserved.

C. Proposed Work

The size of the hidden layer in a neural network can affect how effectively it learns. The size of the hidden layer was varied from 10 to 80 nodes for all three conjugate gradient algorithms. Convergence was achieved for all three algorithms with different numbers of hidden nodes, different epoch values and different goal values.

The networks designed for the three algorithms were then simulated with the 10 data sets that had been reserved for prediction. The neural network predicted the solutions with an acceptable error percentage.

The prediction of ILPP results was thus implemented with the three conjugate gradient backpropagation algorithms. The solutions to the Integer Linear Programming Problem obtained by neural simulation of these algorithms were within acceptable limits.

VI. CONCLUSION

The analysis of the results obtained through the three conjugate gradient methods showed that all of them produce results within acceptable limits. The convergence behaviour showed that the Fletcher-Reeves conjugate gradient algorithm took more epochs to converge than the Polak-Ribière and Powell-Beale conjugate gradient algorithms. Thus the implementation of the proposed mathematical method for solving ILPP with a neural network produces acceptable results compared with the regular mathematical solutions.


REFERENCES

[1] Narendra, P.M. and Fukunaga, K., “A Branch and Bound Algorithm for Feature Subset Selection”, IEEE Transactions on Computers, Volume C-26, Issue 9, 1977, pp. 917-922.

[2] K. Karthikeyan, V.S. Sampath kumar. “A Study on a Class of Linear Integer Programming Problems”, ICOREM 2009. pp. 1196-1210.

[3] Gupta A. K., Sharma. J.K. “Integer Quadratic Programming. Journal of the Institution of Engineers”, Production Engineering Division Vol. 70, (2). pp. 43 – 47.

[4] M. Ida, K. Kobuchi, and R. ChangYun Lee. “Knowledge-Based Intelligent Electronic Systems”, 1998 Proceedings KES Second International Conference on Vol. 2. Issue, 21-23.

[5] S. Cavalieri. “Solving linear integer programming problems by a novel neural model”. Institute of Informatics and Tele- communications, Italy

[6] Meera Gandhi, S.K. Srivatsa. “Improving IDS Performance Using Neural Network For Anomalous Behavior Detection”, Karpagam JCS. Vol. 3. Issue 2. Jan-Feb 2009.

[7] Foo Yoon-Pin Simon, Takefuji, T., “Integer linear programming neural networks for job-shop scheduling”, IEEE International Conference on Neural Networks, 1988, pp. 341-348, Vol. 2.

[8] Haixiang Shi, “A mixed branch-and-bound and neural network approach for the broadcast scheduling problem”, 2003, ISBN: 1-58603-394-8, pp. 42-49.

[9] Dan Roth, “Branch and bound algorithm for a facility location problem with concave site dependent costs”, International Journal of Production Economics, Volume 112, Issue 1, March 2008, Pages 245-254.

[10] H. J. Zimmermann, Angelo Monfroglio. “Linear programs for constraint satisfaction problems”. European Journal of Operational Research, Vol. 97, Issue 1, pp. 105-123.

[11] Mohamed El-Amine Chergui, et. al, "Solving the Multiple Objective Integer Linear Programming Problem”, Modeling Computation and Optimization in Information Systems and Management Sciences, 2008, Volume 14.

[12] Ming-Wei Chang, Nicholas Rizzolo, “Integer Linear Programming in NLP, Constrained Conditional Models”, NAACL HLT 2010.

[13] Kartalopoulous, Stamatios V. “Understanding neural networks and fuzzy logic”, Prentice hall 2003.

[14] A. Van Ooyen and B. Nienhuis, “Improving the convergence of the backpropagation algorithm in Neural Networks”, 1992, pp. 450-475.

[15] Yuhong Dai , Jiye Han , Guanghui Liu , Defeng Sun , Hongxia Yin , Ya-xiang Yuan, “Convergence Properties Of Nonlinear Conjugate Gradient Methods”, “ftp://lsec.cc.ac.cn/pub/home/yyx/papers/p984-net.p”.

[16] Y. H. Dai, Y. Yuan, “A Three-Parameter Family of Nonlinear Conjugate Gradient Methods”, “http://www.ams.org/mcom/2001-70-235/S0025-5718-00“, DBLP.


32. A Survey on Meta-Heuristics Algorithms

Ms. S. Jayasankari1, Dr. G.M. Nasira2, Mr. P. Dhakshinamoorthy3, Mr. M. Sivakumar4

1,3,4 Lecturers, Dept. of Computer Applications, Sasurie College of Engineering, Vijayamangalam, Tirupur (Dt), Tamil Nadu, India.

2 Asst. Prof./Dept. of Computer Science, Govt. Arts College(Autonomous), Salem-7. Tamil Nadu, India

Abstract — The scheduling problem is one of the difficult combinatorial problems and is classified as NP-hard. The objective is to find a sequence of operations that minimizes the completion time, i.e. an optimized schedule. These types of problems can be solved efficiently by meta-heuristic algorithms or by combinations of meta-heuristic algorithms. This paper outlines the algorithms and particulars of meta-heuristics and of combinations of meta-heuristics used to achieve an optimal solution.

Index Terms — Evolutionary algorithms, Genetic Algorithms, Meta-heuristics, Optimization, PSO, Tabu search, Simulated Annealing.

1. Introduction


According to Moore's Law, computer processing capacity doubles approximately every 18 months. This idea has more or less held since its introduction in the early 1970s. Still, there are computational problems so complex that it will not be possible to solve them optimally in the foreseeable future. Most challenging problems are of a combinatorial nature. Because of their complexity, deterministic search cannot be used to solve such problems to optimality, but it is possible to find solutions that are good enough within computational limits.

2. Different types of Algorithms

2.1 Evolutionary Algorithms

In our daily life we encounter problems that require an optimal arrangement of elements to be found. Examples are the order in which a salesman visits his customers or the order in which jobs are processed by a machine. In each case, costs and duration have to be minimized and certain constraints have to be satisfied. For these problems, the number of possible solutions grows exponentially with the number of elements (customers or jobs). More formally, we talk about combinatorial optimization problems. Finding solutions for this class of instances is expensive in most cases. However, combinatorial problems occur often in many areas of application, and finding (suboptimal) solutions is necessary to increase economic efficiency.

2.2 Heuristics for Combinatorial Optimization Problems

Heuristics can be used to increase the quality of the solutions that can be found. There exist many different types of heuristic methods, among them metaheuristics, which are algorithmic approaches for approximating optimal solutions to combinatorial problems. Some recent metaheuristics are special in that they draw inspiration from the natural world.

Heuristic refers to experience-based techniques for problem solving, learning, and discovery. Heuristic methods are used to speed up the process of finding a good enough solution where an exhaustive search is impractical. Examples of this method include using a "rule of thumb", an educated guess, an intuitive judgment, or common sense. In more precise terms, heuristics are strategies using readily accessible, though loosely applicable, information to control problem solving in human beings and machines.

The term heuristic is used for algorithms which find solutions among all possible ones but do not guarantee that the best will be found; they may therefore be considered approximate rather than exact algorithms. These algorithms usually find a solution close to the best one, and they find it quickly and easily. Sometimes these algorithms can be accurate, that is they actually find the best solution, but the algorithm is still called heuristic until this best solution is proven to be the best. The method used by a heuristic algorithm is one of the known methods, such as greediness, but in order to be easy and fast the algorithm ignores or even suppresses some of the problem's demands. Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics that can provide approximate solutions to some optimization problems (a hill-climbing sketch follows the list below):

• Differential evolution
• Dynamic relaxation
• Genetic algorithms
• Hill climbing
• Nelder-Mead simplicial heuristic: a popular heuristic for approximate minimization (without calling gradients)
• Particle swarm optimization
• Simulated annealing
• Tabu search
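As a concrete illustration of the simplest entry above, the sketch below applies hill climbing to a toy objective (maximizing a weighted sum of bits); the problem data and the one-bit-flip neighbourhood are assumptions chosen only for demonstration.

```python
import random

# Minimal hill-climbing sketch: maximize a weighted sum of bits by flipping
# one bit at a time and accepting only improvements (a local search heuristic).
# The weights and string length are arbitrary illustrative assumptions.
random.seed(1)
weights = [random.uniform(-1, 1) for _ in range(20)]

def value(bits):
    return sum(w for w, b in zip(weights, bits) if b)

current = [random.randint(0, 1) for _ in weights]
improved = True
while improved:
    improved = False
    for i in range(len(current)):          # examine the 1-bit-flip neighbourhood
        neighbour = current[:]
        neighbour[i] ^= 1
        if value(neighbour) > value(current):
            current, improved = neighbour, True

print("local optimum value:", round(value(current), 3))
```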

To solve NP-complete problems, two different types of algorithms have been developed. Exact algorithms guarantee to find an optimal solution for an optimization problem by using techniques such as cutting planes, branch & bound or branch & cut. These algorithms try to avoid exponential runtime complexity by cutting subtrees that cannot contain an optimal solution while traversing the search tree. Still, a large amount of time is required: to prove the optimality of a Travelling Salesman Problem (TSP) instance with 24,978 cities, nearly 85 CPU years were necessary.

Alternatively, there are problem-specific heuristic algorithms (e.g. 2-opt for the TSP) that, unlike exact algorithms, do not necessarily find an optimal solution. In exchange, heuristics can find good suboptimal solutions in a fraction of the time that would be required to find an optimal solution. Heuristics utilize problem-instance-specific structure (like neighborhood structures) to construct (construction heuristics) or to improve (improvement heuristics) solutions. Typically, both types are used in combination. Improvement heuristics terminate when no local search step in the neighborhood structure results in a better solution (a local optimum).

2.3 Meta-heuristics

Metaheuristics embed simple heuristics (e.g. local search) and guide them from local optima to distant and more promising areas of the search space. The simplest metaheuristic restarts an improvement heuristic with different start solutions and returns the best local optimum as its result (see the sketch below). The most prominent metaheuristics include simulated annealing, tabu search, Ant Colony Optimization (ACO), genetic and evolutionary algorithms and various hybrid variants (e.g. memetic algorithms [8]). Whereas heuristics are problem-specific, metaheuristics are unspecific and solve different problems depending on the embedded heuristic. By using metaheuristics, the solution quality of heuristics can be improved significantly. NP-complete problems are an interesting area of research, as no efficient solving algorithms are known. Still, they have high relevance for practical applications such as scheduling in commercial environments.
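A minimal sketch of that restart idea (random-restart local search) on an assumed one-dimensional multimodal objective; the function, step size and number of restarts are illustrative choices, not part of any surveyed algorithm:

```python
import math
import random

def f(x):
    # Multimodal toy objective with several local optima (illustrative assumption)
    return math.sin(3.0 * x) - 0.1 * (x - 2.0) ** 2

def hill_climb(x, step=0.01):
    # Embedded improvement heuristic: move left/right while it improves f
    while True:
        left, right = f(x - step), f(x + step)
        if max(left, right) <= f(x):
            return x, f(x)                 # local optimum reached
        x = x - step if left > right else x + step

random.seed(0)
best_x, best_val = None, float("-inf")
for _ in range(20):                        # restart from 20 random start solutions
    x0 = random.uniform(-5.0, 5.0)
    x_loc, val = hill_climb(x0)
    if val > best_val:
        best_x, best_val = x_loc, val

print("best local optimum found: f(%.3f) = %.3f" % (best_x, best_val))
```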

3. Different types of Meta-heuristics Algorithms

Simulated annealing, tabu search, evolutionary algorithms like genetic algorithms and evolution strategies, ant colony optimization, estimation of distribution algorithms, scatter search, path relinking, the greedy randomized adaptive search procedure (GRASP), multi-start and iterated local search, guided local search, and variable neighborhood search are – among others – often listed as examples of classical metaheuristics.

3.1 Hybridization of meta-heuristics algorithms

The motivation behind such hybridizations of different algorithmic concepts is usually to obtain better performing systems that exploit and unite the advantages of the individual pure strategies, i.e. such hybrids are believed to benefit from synergy. The vastly increasing number of reported applications of hybrid metaheuristics and of dedicated scientific events documents the popularity, success, and importance of this specific line of research. In fact, today it seems that choosing an adequate hybrid approach is decisive for achieving top performance in solving the most difficult problems.

Actually, the idea of hybridizing metaheuristics is not new but dates back to the origins of metaheuristics themselves. At the beginning, however, such hybrids were not so popular since several relatively strongly separated and even competing communities of researchers existed who considered “their” favorite class of metaheuristics “generally best” and followed the specific philosophies in very dogmatic ways. For example, the evolutionary computation community grew up in relative isolation and followed relatively strictly the biologically oriented thinking. It is mostly due to the no free lunch theorems that this situation fortunately changed and people recognized that there cannot exist a general optimization strategy which is globally better than any other. In fact, to solve a problem at hand most effectively, it almost always requires a specialized algorithm that needs to be compiled of adequate parts.

4. Conclusion


Multiple possibilities exist for hybridizing individual metaheuristics with each other and/or with algorithms from other fields. A large number of publications document the great success and benefits of such hybrids. Based on several previously recommended taxonomies, a unified classification and characterization of meaningful hybridization approaches has been presented. It helps in making the different key components of existing metaheuristics explicit. We can then consider these key components and build an effective (hybrid) metaheuristic for a problem at hand by selecting and combining the most appropriate components.

References

[1]. Combinatorial optimization. From Wikipedia, the free encyclopedia.

[2]. Metaheuristic. From Wikipedia, the free encyclopedia.

[3]. Steven Alter. Information Systems, chapter 1, pages 23-24. Prentice Hall, 2002.

[4]. E. Bonabeau, M. Dorigo, and G. Theraulaz. Inspiration for optimization from social insect behaviour. Nature, 406(6791):39-42, July 2000.

[5]. Antonella Carbonaro and Vittorio Maniezzo. Ant colony optimization: An overview, June 1999.

[6]. Alberto Colorni, Marco Dorigo, and Vittorio Maniezzo. The ant system: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man and Cybernetics, 26(1):1-13, 1996.

[7]. Peter Tarasewich and Patrick R. McMullen. Swarm intelligence: power in numbers. Commun. ACM, 45(8):62-67, 2002.

[8]. Blum, C., Roli, A.: Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Computing Surveys 35(3) (2003) 268–308.

[9]. Glover, F., Kochenberger, G.A.: Handbook of Metaheuristics. Kluwer (2003)

[10]. Hoos, H.H., Stützle, T.: Stochastic Local Search. Morgan Kaufmann (2005).

[11]. Glover, F.: Future paths for integer programming and links to artificial intelligence. Decision Sciences 8 (1977) 156–166.

[12]. Voß, S., Martello, S., Osman, I.H., Roucairol, C.: Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization. Kluwer, Boston (1999).

[13]. Blum, C., Roli, A., Sampels, M., eds.: Proceedings of the First International Workshop on Hybrid Metaheuristics, Valencia, Spain (2004).

[14]. Blesa, M.J., Blum, C., Roli, A., Sampels, M., eds.: Hybrid Metaheuristics: Second International Workshop. Volume 3636 of LNCS. (2005).

[15]. Wolpert, D., Macready, W.: No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1(1) (1997) 67–82.

[16]. Cotta, C.: A study of hybridisation techniques and their application to the design of evolutionary algorithms. AI Communications 11(3–4) (1998) 223–224.


[17]. Talbi, E.G.: A taxonomy of hybrid metaheuristics. Journal of Heuristics 8(5) (2002) 541–565

[18]. Blum, C., Roli, A., Alba, E.: An introduction to metaheuristic techniques. In: Parallel Metaheuristics, a New Class of Algorithms. John Wiley (2005) 3–42.

[19]. Cotta, C., Talbi, E.G., Alba, E.: Parallel hybrid metaheuristics. In Alba, E., ed.: Parallel Metaheuristics, a New Class of Algorithms. John Wiley (2005) 347–370.

[20]. Puchinger, J., Raidl, G.R.: Combining metaheuristics and exact algorithms in combinatorial optimization: A survey and classification. In: Proceedings of the First International Work-Conference on the Interplay Between Natural and Artificial Computation, Part II. Volume 3562 of LNCS., Springer (2005) 41–53.

[21]. El-Abd, M., Kamel, M.: A taxonomy of cooperative search algorithms. In Blesa, M.J., Blum, C., Roli, A., Sampels, M., eds.: Hybrid Metaheuristics: Second International Workshop. Volume 3636 of LNCS., Springer (2005) 32–41.

[22]. Alba, E., ed.: Parallel Metaheuristics, a New Class of Algorithms. John Wiley, New Jersey (2005).

[23]. Moscato, P.: Memetic algorithms: A short introduction. In Corne, D., et al., eds.: New Ideas in Optimization. McGraw Hill (1999) 219–234.

[24]. Ahuja, R.K., Ergun, Ö., Orlin, J.B., Punnen, A.P.: A survey of very large-scale neighborhood search techniques. Discrete Applied Mathematics 123(1-3) (2002) 75–102.

[25]. Puchinger, J., Raidl, G.R.: Models and algorithms for three-stage two-dimensional bin packing.


33. Distributed video encoding for wireless low-power surveillance network

Bhargav.G

*P.G.Student Researcher, Dept of Industrial Engineering & Management, DSCE, Bengaluru-78, India

Email: [email protected], Mobile: 09986235909

Abstract— Transmission of real-time video over a network is a very bandwidth-intensive process, placing large demands both on the network resources and on the end-terminal computing points. To improve throughput rates and reduce bandwidth requirements, video encoding is used. This project addresses the video encoding process for wireless low-power surveillance networks. Real-time video encoding is a CPU-intensive task. A number of algorithms exist to share the encoding process over multiple computer platforms; this is called parallel processing. In this project I have attempted to split a video into multiple chunks, encode them separately on different computers and then merge them into a compressed video retaining all the major details but with reduced bandwidth occupancy. This project demonstrates setting up a distributed video encoder in a LAN network. To date this concept is prevalent only on high-end grid computing platforms. Furthermore, and of particular interest to distributed processing, input video files can be easily broken down into work-units. These factors make the distribution of video encoding processes viable. The majority of research on distributed processing has focused on Grid technologies; however, in practice, Grid services tend to be inflexible and offer poor support for ad-hoc interaction. Consequently it can be difficult for such technologies to efficiently exploit the growing pool of resources available around the edge of the Internet on home PCs. This project attempts to overcome these deficiencies.

Keywords: - Real time video encoding process, parallel processing, and high end grid computing platforms.

1. INTRODUCTION

The Distributed Video Encoder is a utility application tool used to encode conventional video with the assistance of a distributed network of desktop workstations.

Our main objective in this project is to implement an encoder that can make use of the processors of the computers connected in the network to do the video compression. This will increase the speed of the encoding process by up to about N times (if there are N computers in the network). We will implement this encoder for a specific video format to demonstrate the feasibility of the application and to derive results that will later be used for a wider range of video formats [1].

1.1 Introduction to the Problem Area

Video encoding is a lengthy, CPU-intensive task, involving the conversion of video media from one format to another. Furthermore, and of particular interest to distributed processing, input video files can be easily broken down into work-units. These factors


make the distribution of video encoding processes viable. The majority of research on distributed processing has focused on Grid technologies; however, in practice, Grid services tend to be inflexible and offer poor support for ad-hoc interaction. Consequently it can be difficult for such technologies to efficiently exploit the growing pool of resources available around the edge of the Internet on home PCs. It is these resources that the DVE uses to provide a distributed video encoding service [2].

1.2 Overview of Video Encoding

A Video File contains both useful and redundant information about the video that it contains. Video files are quite large in size compared to other kinds of files like text files or music files. Thus it would be advantageous to compress it for the purpose of efficient storage or for transmitting it faster over the network. Video Encoding works by removing the redundant part of the information from the video file. If a very high degree of compression is required then the quality of the video file can be tweaked to achieve higher compression.

A compression encoder works by identifying the useful part of a signal which is called the entropy and sending this to the decoder. The remainder of the signal is called the redundancy because it can be worked out at the decoder from what is sent. Video compression relies on two basic assumptions. The first is that human sensitivity to noise in the picture is highly dependent on the frequency of the noise [3].

The second is that even in moving pictures there is a great deal of commonality between one picture and the next. Data can be conserved both by raising the noise level where it is less visible and by sending only the difference between one picture and the next.

In a typical picture, large objects result in low spatial frequencies whereas small objects result in high spatial frequencies. Human vision detects noise at low spatial frequencies much more readily than at high frequencies. The phenomenon of large-area flicker is an example of this. Spatial frequency analysis also reveals that in many areas of the picture, only a few frequencies dominate and the remainder is largely absent.

For example if the picture contains a large, plain object, high frequencies will only be present at the

edges. In the body of a plain object, high spatial frequencies are absent and need not be transmitted at all.

1.3 Overview of Distributed Computing

Over the past two decades, advancements in microelectronic technology have resulted in the availability of fast, inexpensive processors, and advancements in communication technology have resulted in the availability of cost-effective and highly efficient computer networks. The net result of the advancements in these two technologies is that the price-performance ratio has now changed to favor the use of interconnected, multiple processors in place of a single high-speed processor.

Computer architectures consist of two types:

1.3.1. Tightly Coupled Systems

In these systems, there is a single system wide primary memory (address space) that is shared by all the processors. Any communication between the processors usually takes place through the shared memory.

1.3.2 Loosely Coupled Systems

In these systems, the processors do not share memory, and each processor has its own local memory. All physical communication between the processor is done by passing messages across the network that interconnects the processors.

Tightly coupled systems are referred to as parallel processing systems and loosely coupled systems are referred to as Distributed Computing Systems.

1.4 Problem Statement

To create a parallel video encoder which utilizes the processing power of the computers in a Local Area Network to complete the encoding process in less time than a single encoder running on a single computer.

2. REQUIREMENT ANALYSIS

The basic purpose of a software requirements specification (SRS) is to bridge the communication gap between the parties involved in the development project. The SRS is the medium through which the users’ needs are accurately specified; indeed, the SRS forms the basis of software development. Another important purpose of developing an SRS is to help the users understand their own needs.

2.1. System Requirements Specification

2.1.1 Introduction

2.1.1.1 Purpose

This Software Requirements Specification provides a complete description of all the functions and constraints of the Distributed Video Encoder, Version 1, developed for commercial and industrial use.

2.1.1.2 Intended Audience

The document is intended for the developers and testers working on the project and other independent developers who are interested in plug-in development for DVE.

2.1.1.3 Project Scope

The DVE is used to encode uncompressed video data into a compressed format in the shortest time possible by harnessing available distributed computing power in a network. It facilitates a hassle-free environment to encode large volumes of video data quickly and efficiently. Similar software technology exists for large server farms that contain hardware dedicated specifically to video processing, which proves to be expensive for individuals and small organizations that prefer to use only a part of such technology on a short-term basis. Clearly they would opt for a cost-effective technology that can be used with commercially available hardware and minimal technical know-how on video compression.

2.1.2 Overall Description

2.1.2.1 Product Perspective

The DVE is a technology bridge between standalone encoders on workstations and massively distributed parallel hardware encoders. It combines the requirements of the former and the computing power of the latter with minimal compromise on productivity.

2.1.2.2 Product Features

The main features of the DVE are

• It can encode video files in parallel and hence produce results in a shorter time than a standalone encoder is able to provide.

• It will consist of a GUI-based interface for easy user interaction with the application. The GUI will be based on Windows Forms and can be run directly on the Windows platform.

• It will provide automated network connectivity which will display connections to systems that are ready to contribute resources to the encoding process.

• It will feature a software interface for external programs to run and perform the tasks asynchronously and provide means to retrieve results from the above processes.

2.1.2.3 Operating Environment

The project will be designed for the Intel 32 bit architecture range of workstations that requires Microsoft Windows NT based operating system coupled with the Microsoft .NET runtime environment.

2.1.2.4 Assumptions and Dependencies

The project requires a fully working and scalable network of user workstations devoid of any issues regarding connectivity and usability. Tools supporting multimedia on the Windows platform must be present on the systems used by the product. An up-to-date version of the .NET 2.0 runtime environment or higher must also be present. The project will utilize the features of the FFMPEG encoder and the MPGTX splitter, which will be provided along with the above-mentioned components.

The network on which the application will run is a LAN network which is completely connected over an Ethernet switch.

Limited resources mean we will develop a distributed encoder for only one specific video file format, MPEG-1. The encoder will convert this file format to the MPEG-4 standard format only.

2.2 System Features


System Feature 1: Network Information

Description: Contains the information regarding the network id of a participating workstation connected to the distributed network. It is used to map network address to a user given host name for a workstation. This feature is critical in nature and is required for the establishment of the network.

Functional Requirements: IP Addressing scheme, DNS naming scheme. If these schemes are not present or have not been enabled or are inaccessible to the user then this feature should intimate the user as an error and suggest necessary action to be performed. Since this is a critical part in establishing the network, the application must halt and not proceed any further.

System Feature 2: Execute Command

Description: Will execute an external program that is required by the application for completing tasks such as splitting, joining and encoding of video. This feature is critical in nature and is necessary for a software interface to third party programs.

Functional Requirements: process name, process arguments. Error or failure of execution of the command must be checked and the cause of the issue must be identified and informed to the user.

System Feature 3: Send File

Description: A feature that utilizes the TCP protocol to send a file stored on a workstation across to any remote workstation accessible within the network directly.

Functional Requirements: Destination workstation, File to be transferred.

System Feature 4: Receive File

Description: A feature that utilizes the TCP protocol to receive an incoming file from a remote workstation in the network directly and store it on the specified location in the workstation.

Functional Requirements: Source workstation, File store location.

System Feature 5: File Information

Description: To store all the information regarding the details of a file

Functional Requirements: File Name, Directory Stored, Number of parts (if it is split)

System Feature 6: Split File

Description: To generate arguments to an external program to split pieces of files according to the given constraints.

Functional Requirements: File Information, Number of Workers connected mpgtx.

System Feature 7: Join File

Description: To generate arguments for an external program to combine multiple pieces of a single file which has been split and processed previously.

Functional Requirements: File Information, All parts of a single file entity, ffmpeg.

System Feature 8: Encode File

Description: To generate parameter arguments necessary for an external program to encode a given video file successfully.

Functional Requirements: Input file, Output file, input video type, output video type

System Feature 9: Helper Program Information

Description: This utility feature can be used to set and access the stored location of external programs necessary for the working of the application.

Functional Requirements: Directory access to external programs.

2.3 External Interface Requirements


2.3.1 User Interface:

2.3.1.1 Worker UI

Description: It is used to provide a user interface to a workstation. It contains all the details regarding on connecting to an existing server in the network. It must be implemented as a Windows Application Form

Fig 1 Worker User Interface

2.3.1.2 Server UI

Description: It is used to provide a user interface to a server. This will contain all the details regarding connection initialization, video file encoding and preview, and also the management of the external software (mentioned in the software interface) that is required by the application.

Fig 2 Server User Interface

2.3.1.3 Interface for selecting the video file

Description: A file dialog has been provided for selecting the video file for encoding. This dialog enables the user to choose a file in a manner convenient to the user as this dialog is a standard for all windows systems


Fig 3 Video File Selection Dialog

2.3.1.4 Interface for selecting the location of the utility programs

Description: A form with open dialog boxes has been provided to locate the utility programs. A filter has also been provided so that only executable files are selected.

Fig 4 Utility Program Location

2.3.1.5 Hardware Interface: None

2.3.1.6 Software Interface: MPGTX

Description: It is a command-line MPEG audio/video/system file toolbox that slices and joins audio and video files, including MPEG1, MPEG2 and MP3. It is used as a single independent executable that can be run as a process. It is an Open Source application released under the GPL. Its current version is 2.0.

2.3.1.7 FFMPEG

Description: FFmpeg is a complete solution to record, convert and stream audio and video. It includes libavcodec, the leading audio/video codec library. FFmpeg is developed under Linux, but it can compile under most operating systems, including Windows. It is used as a single executable that uses the POSIX threads for Windows library. It is an Open Source application released under the GPL. Its current version is 0.4.9.

FFPlay

Description: FFplay is a player that can be used to play MPEG video files in the absence of Windows Media codecs. This application is an extra utility for the project and is not necessary for the functioning of the application. This player is bundled along with the FFMPEG encoder.

2.3.1.8 Communication Interface

TCP

The network that the application will run on is the IEEE 802.3 LAN Technology. All systems in the network will be connected over a single switch. The communication will occur as transmission of raw bytes over the TCP Send/Receive protocols. It is a connection oriented protocol and can guarantee the transmission of data from source to destination.
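A minimal sketch of such a raw-byte TCP transfer, written here with Python sockets rather than the project's .NET implementation; host, port and file names are placeholders:

```python
import socket

CHUNK = 2048  # buffer size, matching the 2048-byte receive buffer described later

def send_file(host, port, path):
    # Sending side: stream a file's raw bytes over a TCP connection
    with socket.create_connection((host, port)) as conn, open(path, "rb") as src:
        while True:
            data = src.read(CHUNK)
            if not data:
                break
            conn.sendall(data)

def receive_file(port, dest_path):
    # Receiving side: accept one connection and write incoming bytes to a file
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn, open(dest_path, "wb") as dst:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                dst.write(data)

# Example (assumed addresses): run receive_file(9000, "part1.mpg") on one node,
# then send_file("192.168.0.10", 9000, "part1.mpg") on another.
```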

2.4 Hardware Requirements
• Pentium 4 2.0 GHz or above processor nodes
• 10/100 Mbps network
• Network card in the nodes
• Graphics processors in nodes (separate or integrated)
• Sound card


2.5 Software Requirements
• Windows XP with SP2
• Microsoft .NET Framework 2.0
• Ffmpeg encoder
• Mpgtx splitter
• Network drivers and support in Windows XP
• Microsoft Visual Studio .NET 2005

3. IMPLEMENTATION

The purpose of this chapter is to give a brief description about the various implementation tools like the programming language, the modules created and the details of each module implementation. The goal of the implementation phase is to translate the design of the system produced during the design phase into code in a given programming language. This involves the details of the methods and interfaces used and their integration.

3.1 Modules

Module Name: Execute Command

This module is used for running an external program or a command. Our project consists of three helper programs which are used for the purpose of splitting the files, encoding the video files and for playing or previewing the file. This module is used for executing these programs. The module takes in the process name and arguments as input. Options for displaying the executing process have also been provided for monitoring its progress [4].

Algorithm 1: Execute Command

Input: Command, Arguments

Step1: Check if arguments present

Step 2: If present GOTO Step 2.1 else GOTO Step 3

Step 2.1: Start new process with command and arguments

Step 2.2: Execute Process and wait till end of process

Step 2.3: Complete execution and GOTO Step 3

Step 3: End
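As an illustration of this module, the following minimal sketch shows how a command and its arguments can be launched as an external process and monitored until it exits. It is written in Java for illustration (the project itself targets the .NET Framework), and the class and method names are ours.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the Execute Command module: start an external helper
// program (such as the splitter or encoder) with arguments and wait for it to end.
public class ExecuteCommand {

    // Runs the command with optional arguments and returns its exit code.
    public static int run(String command, String... arguments) throws IOException, InterruptedException {
        List<String> cmdLine = new ArrayList<String>();
        cmdLine.add(command);
        if (arguments != null) {                  // Step 1: check if arguments are present
            for (String a : arguments) {
                cmdLine.add(a);
            }
        }
        ProcessBuilder pb = new ProcessBuilder(cmdLine);
        pb.redirectErrorStream(true);             // merge stderr into stdout so progress can be shown
        Process p = pb.start();                   // Step 2.1: start the new process
        BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = r.readLine()) != null) {   // display the output of the executing process
            System.out.println(line);
        }
        return p.waitFor();                       // Step 2.2: wait till the end of the process
    }
}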

Module Name: FileEncoder

This module acts as a helper module for the Execute Command module. Its function is to generate arguments for the file encoder (ffmpeg) and then pass this argument to the Execute Command process for execution [5].

Algorithm 1: FileEncoder

Input: Filename

Step 1: Copy Filename to InputFileName

Step 2: End

Algorithm2: Generate Argument

Step 1: Concatenate the filename; format required (“.avi”) and the output file name

Step 2: Return the above file as an argument.

Step 3: End.
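For example, the argument string handed to the Execute Command module can be assembled as in the sketch below. The exact ffmpeg options used by the project are not listed in the paper, so only the basic "-i <input> <output>.avi" form is shown, and the output naming scheme is a hypothetical choice.

// Illustrative sketch of the FileEncoder argument generation.
public class FileEncoder {
    private final String inputFileName;

    public FileEncoder(String fileName) {
        this.inputFileName = fileName;                           // Step 1: copy Filename to InputFileName
    }

    // Concatenates the input file name, the required format (".avi") and the output file name.
    public String generateArgument() {
        String outputFileName = inputFileName + "_encoded.avi";  // hypothetical output name
        return "-i " + inputFileName + " " + outputFileName;
    }
}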

Module Name: FileJoiner

This module also acts as a helper module for the Execute Command module. Its first function is to generate arguments for the copy command, which is used for concatenating all the split pieces. Its second function is to generate the transcoding arguments for ffmpeg to produce the output file.

Algorithm1: Generate Argument

Input: NoofFiles, Filename

Step 1: Generate the filename; format required (“.avi”)

Step2: Do Step 1 for every piece of the file. If all pieces are done, GOTO Step 3

Step3: Return the above generated text as an argument.

Step 4: End.

Module Name: FileSplitter

This module is used for generating arguments for the file splitter (mpgtx). It takes the number of pieces as input. This data is obtained from the number of workers connected to the server.

Algorithm1: Generate Argument

Input: Filename


Step 1: Generate the filenames for each piece; format required (“.avi”)

Step2: Do Step 1 for the whole file. If EOF is reached, GOTO Step 3

Step3: Return the above generated text as an argument and the number of pieces.

Step 4: End.

Module Name: NetworkInfo

This module is responsible for providing network-related information such as the IP address and the host name of the computer.

Algorithm 1: Get Network Information

Input: None

Step1: Get IP Address of the system and store as List

Step2: Get the host name of the system and store as text

Step 3: Done
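A minimal Java sketch of this module using the standard InetAddress API (the project itself uses the corresponding .NET classes) could look as follows.

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the NetworkInfo module.
public class NetworkInfo {

    // Step 1: get the IP address(es) of the system and store them as a list.
    public static List<String> getIpAddresses() throws UnknownHostException {
        List<String> addresses = new ArrayList<String>();
        for (InetAddress addr : InetAddress.getAllByName(InetAddress.getLocalHost().getHostName())) {
            addresses.add(addr.getHostAddress());
        }
        return addresses;
    }

    // Step 2: get the host name of the system and store it as text.
    public static String getHostName() throws UnknownHostException {
        return InetAddress.getLocalHost().getHostName();
    }
}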

Module Name: NodeList

This module is used to manage the worker nodes. Functions for adding, deleting nodes are provided. Each node stores information about a specific worker. This information is then used by the server to communicate with the workers.

Algorithm 1: Add Node

Input: Node

Step 1: Add node to an array of Nodes

Step 2: Done

Algorithm2: Delete Node

Input: Node

Step 1: Search for the given node in the array of Nodes and delete it.

Step 2: Done.

Algorithm 3: Count

Input: None

Step 1: Count the number of nodes in the array

Step 2: Return the count as an integer

Step 3: Done.

Algorithm 4: Pop Node

Input: None

Step 1: Remove the topmost node in the array of Nodes

Step 2: Return the node from the array and delete the node

Step 3: Done.
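Because the module is essentially a small container of worker descriptions, it can be sketched with a simple list, as below. The fields of Node are assumptions; the paper only says that a node stores the information the server needs to communicate with a worker.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the NodeList module used by the server to track workers.
public class NodeList {

    // Hypothetical worker description.
    public static class Node {
        final String address;
        final int port;
        Node(String address, int port) { this.address = address; this.port = port; }
    }

    private final List<Node> nodes = new ArrayList<Node>();

    public void add(Node n)       { nodes.add(n); }              // Algorithm 1: Add Node
    public boolean delete(Node n) { return nodes.remove(n); }    // Algorithm 2: Delete Node
    public int count()            { return nodes.size(); }       // Algorithm 3: Count
    public Node pop() {                                          // Algorithm 4: Pop Node
        return nodes.isEmpty() ? null : nodes.remove(0);         // remove and return the topmost node
    }
}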

Module Name: ReceiveFile

This module is used for receiving a file sent from a remote node. A connection is first established with the remote node. Then information about the file is received; this is used to create a new file. The file contents are then received.

Algorithm 1: Receive File

Input: port, noofFiles

Step 1: Start a TCP Client to receive files. Initialize count to 0

Step 2: If count < noofFiles, then continue else GOTO Step 3

Step 2.1: Accept the incoming Node information as fname

Step 2.2: Accept the incoming data with a buffer of 2048 bytes.

Step 2.3: Write the data to a file as mentioned in the fname object

Step 2.4: Increment count and GOTO Step 2.

Step 3: Close the TCP client.

Step 4: Done.
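A sketch of the receive loop with Java sockets and the 2048-byte buffer from the algorithm is given below. Sending the file name with writeUTF/readUTF before the raw bytes is an assumed framing for illustration; the paper only states that file information is received first.

import java.io.DataInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative sketch of the ReceiveFile module.
public class ReceiveFile {

    public static void receive(int port, int noOfFiles) throws IOException {
        ServerSocket server = new ServerSocket(port);                  // Step 1: start listening
        try {
            for (int count = 0; count < noOfFiles; count++) {          // Step 2: repeat for each file
                Socket client = server.accept();
                DataInputStream in = new DataInputStream(client.getInputStream());
                String fname = in.readUTF();                           // Step 2.1: incoming file information (assumed framing)
                FileOutputStream out = new FileOutputStream(fname);
                byte[] buffer = new byte[2048];                        // Step 2.2: 2048-byte buffer
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);                        // Step 2.3: write the data to the file
                }
                out.close();
                client.close();
            }
        } finally {
            server.close();                                            // Step 3: close the listener
        }
    }
}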

Module Name: SendFile

This module is used for sending a file to a remote computer. This module is used for sending file chunks to the worker and for sending the compressed files back to the server.

Algorithm 1: Send

Input: FileName, Node

Step 1: Get the file name and the node information


Step 2: Start a TCP Client and connect to the system that has to receive the file

Step 3: Transmit the Node directly over the network stream

Step 4: Do the steps till the EOF is reached else GOTO Step 5

Step 4.1: Copy contents of Filename to a 2048 byte size buffer

Step 4.2: Transmit buffer data as a byte stream to the system over a socket.

Step 4.3: Confirm transfer and GOTO Step 4.

Step 5: Close the File and Disconnect from the system

Step 6: Done.
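The matching send side can be sketched as follows, again with the assumed writeUTF framing for the file information and a 2048-byte buffer.

import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.net.Socket;

// Illustrative sketch of the SendFile module.
public class SendFile {

    public static void send(String fileName, String host, int port) throws IOException {
        Socket socket = new Socket(host, port);                        // Step 2: connect to the receiving system
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        FileInputStream in = new FileInputStream(fileName);
        try {
            out.writeUTF(fileName);                                    // Step 3: transmit the file/node information
            byte[] buffer = new byte[2048];                            // Step 4.1: 2048-byte buffer
            int read;
            while ((read = in.read(buffer)) != -1) {                   // Step 4: repeat until EOF
                out.write(buffer, 0, read);                            // Step 4.2: transmit as a byte stream
            }
            out.flush();
        } finally {
            in.close();                                                // Step 5: close the file and disconnect
            socket.close();
        }
    }
}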

4. STRUCTURAL TESTING

Structural testing is based on knowledge of the internal logic of an application’s code and is also known as glass-box testing. The internal workings of the software and its code must be known for this type of testing. Tests are based on coverage of code statements, branches, paths and conditions [6].

4.1 Test Cases

Test Case 1

Module Name Network

Function Set Port Number

Input Input entered as a number greater than 65000

Expected Results Do not accept the number as a valid port number

Actual Results An error message is displayed and the port field is reset to the default value

Remarks Only valid port numbers are accepted

Test Case 2

Module Name Network

Function Set Port Number

Input Input entered consisted of a string of characters

Expected Results Do not accept the string as a port number

Actual Results Show an error message and reset the field

Remarks Only valid port numbers are accepted

Test Case 3

Module Name Network

Function Connect to Server

Input Provide Incorrect server port and then attempt to connect.

Expected Results Show a user friendly message about the error

Actual Results A Message is displayed that the server is not listening at that specified port.

Remarks Only those ports that the server is listening at can be used for connecting.

Test Case 4

Module Name Video

Function Select Video File

Input Choose a file that is not a MPEG 1/2 video file

Expected Results Inform the user that a valid file has not been selected and provide an option to choose the video file again.

Actual Results An error message is displayed after which the open dialog prompts to select a new file


Remarks Only MPEG files can be selected as the input file

Test Case 5

Module Name Video

Function Preview

Input Choose a file that is not a MPEG 1/2 video file

Expected Results Inform the user that a valid file has not been selected and hence do not execute FFPLAY

Actual Results The file is not accepted and an error message is displayed

Remarks Only valid video files can be previewed

Test Case 6

Module Name Video

Function Encode

Input Encode when only 1 worker is present

Expected Results The splitting and joining phases are skipped. The video file is encoded directly

Actual Results The file is encoded in the single system

Remarks Encoding works even if there is only one worker

Test Case 7

Module Name Video

Function Encode

Input Encode when no workers are present

Expected Results The Encoding process will not occur

Actual Results The system shows a message informing the user that at least 1 worker must be present for the encoding to take place.

Remarks A minimum of 1 worker is needed for execution

Test Case 8

Module Name DVE

Function Encode

Input MPEG 1 video file

Expected Results Compressed MPEG 4 Video file

Actual Results Compressed MPEG 4 Video file

Remarks The encoder can compress MPEG 1 files

Test Case 9

Module Name DVE

Function Encode

Input MPEG 2 video file

Expected Results Compressed MPEG 4 Video file

Actual Results Compressed MPEG 4 Video file

Remarks The encoder can compress MPEG 2 files

Test Case 10

Module Name DVE

Function Encode


Input AVI compressed File

Expected Results Error Message

Actual Results DVE does not accept the video file

Remarks Only MPEG files are currently supported

5. RESULTS

Fig 5 Server Main Screen

Fig 6 Server Utility Program Selection

Fig 7 Server Accepting 2 workers on ports 9000 and 9002

Fig 8 setting the server register, data ports and the worker port.

Fig 9 MPGTX splitting the video file into 3 pieces


Fig 10 FFMPEG : Encoding the video chunk

Fig 11 FFMPEG : Used for Transcoding the Joined Chunk

Fig 12 Server Info Box Showing Status after the conversion is done

Fig 13 Worker after finishing encoding 3 pieces of video

Fig 14 FFPLAY - previewing video files before encoding

6. FUTURE ENHANCEMENTS

• The DVE can accommodate additional features that improve the quality of encoding and provide a wider spectrum of options for the application. The possible enhancements are listed below [7].

• The application can be made compatible for Real Media, QuickTime Video and H.264 video codecs which also can support parallelization.

• File formats such as Matroska (.mkv), Real format (.rm) and Vorbis (.ogg) can be used as a more efficient alternative to the AVI file structure.

• Features such as HTTP, FTP and various application level protocols can be used for control and data transfer of video.


• Custom made plug-ins for splitting video files and joining them can be written for various file formats that can be supported as mentioned above.

• The application can be mapped and re-implemented on to a Linux operating system and thus can be made to run in the UNIX environment giving more flexibility and freedom for development [8].

• System and Network Diagnostics can be implemented to automate the workload distribution across the entire system and can provide tracking results and work progress back to the operator.

• Decentralization of the entire model can be implemented on larger scaled applications with higher grade network management features such as session handling and metadata in feed-forward cycles across the network.

• The whole application can be scaled beyond the LAN network to the WAN and other networks to cover a larger region of users who need the services of the above application [9].

• The application can be plugged into a Cloud computing architecture that can enable a user to utilize the feature from a remote location. This also opens business prospects for large software and multimedia based companies.

VII. CONCLUSION

The results above show that we have accomplished the objectives we initially laid out for the Distributed Video Encoder application. Academically, we have also come to understand the latest technology available and have successfully used it to design and develop the application.

We have also discussed how the product can be enhanced further, and this project should lay a cornerstone for other developers and designers who are interested in working in this field. We hope that this paper serves as a useful starting point for those who want to venture along this path of technological development.

VIII. APPENDIX

1 Glossary

AVI Audio Video Interleaved. Microsoft format for digital audio and video playback from Windows 3.1. Somewhat cross-platform, but mostly a Windows format. Has been replaced by the ASF format but is still used by some multimedia developers.

CODEC A software or hardware device to encode and decode video, audio or other media. The encoder and decoder for a particular compression format. The word is contracted from Coder- Decoder. The codec refers to both ends of the process of squeezing video down and expanding it on playback. Compatible coders and decoders must be used and so they tend to be paired up when they are delivered.

DCT Discrete Cosine Transform algorithms are used in MPEG encoders to compress the video.

DIVX A popular standard for compressing video to a reasonably good quality at low bit rates. A 2-hour movie compresses to a size that comfortably fits on a CD-ROM. This is derived from the MPEG- 4 standard with some additional enhancements that make it noncompliant in some minor respects.

JPEG Joint Photographic Experts Group.


LAN Local Area Network. Usually Ethernet running at 10 Megabits or 100 Megabits. Sometimes 1000 Mbit links are deployed and referred to as Gigabit Ethernet.

Macroblock Video pictures are divided into blocks of 16 × 16 or 8 × 8 pixels so that similarities can be found.

MPEG Motion Picture Experts Group. A working group within the ISO/IEC standards body. This group is concerned with the standardization of moving-picture representations in the digital environment.

MPEG 1 The original MPEG video standard designed for CD-ROM applications and other similar levels of compression. This is a legacy format. Still useful for some solutions but new projects should use a later codec.

MPEG 2 Widely used for digital TV and DVD products. A much improved codec that supersedes MPEG-1 in many respects. It has been popularized by DVD content encoding and DVB transmission.

MPEG 4 A huge leap forward in the description of multimedia content. MPEG-4 is the essence container.

QuickTime A platform for developing and playing rich multimedia, video, and interactive content. It is not a codec but a platform. It has some unique codecs that are not available anywhere else, but you should be using open-standard codecs rather than these. When QuickTime is referred to as a video format, it usually means a codec jointly developed by Apple and Sorenson, which is improved from time to time as successive versions of QuickTime are released. This is a popular name for the Sorenson codecs.

TCP / IP Transmission Control Protocol/Internetworking Protocol describes the message format and connection mechanisms that make the Internet possible. TCP is a connection-oriented transport that runs on top of IP, alongside UDP. It contains ack/nak (acknowledge/negative acknowledge) mechanisms to ensure that all packets have arrived; it does this at the expense of timeliness.

References

[1] A Practical Guide to Audio and Video Compression – Cliff Wooton

[2] Video Demystified – Keith Jack

[3] MPEG Encoding Basics – Snell and Wilcox

[4] C# and the .NET Framework – Andrew Tolstein

[5] Microsoft Developer Network (MSDN)

[6] Software Engineering – Ian Sommerville

[7] Object Oriented Analysis and Design – Ali Bahrami

[8] http://www.setiathome.com

[9] http://folding.stanford.edu


34. J2ME-Based Wireless Intelligent Video Surveillance

System using Moving Object Recognition Technology.

*Bhargav.G, ** R.V.Praveena Gowda,

*P.G.Student, Dept of Industrial Engineering & Management, DSCE, Bengaluru-78, India

Email:[email protected],mob.no:09986235909

**Assistant Professor, DSCE, Bengaluru-78,E-mail:[email protected]

Abstract— A low-cost intelligent mobile phone-based wireless video surveillance solution using moving object recognition technology is proposed in this paper. The proposed solution can be applied not only to various security systems, but also to environmental surveillance. Firstly, the basic principle of moving object detection is given. Limited by the memory and computing capacity of a mobile phone, a background subtraction algorithm is presented for adaptation. Then, a self-adaptive background model that updates automatically and in a timely manner to adapt to the slow and slight changes of the natural environment is detailed. When the difference between the current captured image and the background reaches a certain threshold, a moving object is considered to be in the current view, and the mobile phone will automatically notify the central control unit or the user through a phone call, SMS (Short Message Service) or other means. The proposed algorithm can be implemented in an embedded system with little memory consumption and storage space, so it is feasible for mobile phones and other embedded platforms, and the proposed solution can be used to construct a mobile security monitoring system with low-cost hardware and equipment. Based on J2ME (Java 2 Micro Edition) technology, a prototype system was developed using JSR135 (Java Specification Requests 135: Mobile Media API) and JSR120 (Java Specification Requests 120: Wireless Messaging API), and the test results show the effectiveness of the proposed solution.

The moving object recognition technology leads to the development of autonomous systems, which also minimize network traffic. With good mobility, the system can be deployed rapidly in an emergency and can be a useful supplement to traditional monitoring systems. With the help of J2ME technology, the differences between various hardware platforms are minimized: any embedded platform with a camera and JSR135/JSR120 support can run this system without any changes to the application.

Also, the system can be extended to a distributed wireless network system. Many terminals work together, reporting to a control center and receiving commands from the center. Thus, a low-cost wide-area intelligent video surveillance system can be built. Furthermore, with the development of embedded hardware, more complex digital image processing algorithms can be used to enable more kinds of applications in the future.

Keywords: - Mobile phone-based, moving object

detecting, background subtraction algorithm, Java

to Micro Edition technology, JSR135/JSR120,

distributed wireless network system

1. INTRODUCTION

The increasing need for intelligent video surveillance in public, commercial and family applications makes automated video surveillance systems one of the main current application domains in computer vision. Intelligent Video Surveillance Systems deal with the real-time monitoring of persistent and transient objects within a specific environment.


Intelligent surveillance systems have evolved to the third generation, known as automated wide-area video surveillance systems. Combined with computer vision technology, the distributed system is autonomous and can also be controlled by remote terminals. A low-cost intelligent wireless security and monitoring solution using moving object recognition technology is presented in this paper. The system has good mobility, which makes it a useful supplement to traditional monitoring systems. It can also perform independent surveillance missions and can be extended to a distributed surveillance system. Limited by the memory and computing capacity of a mobile phone, a background subtraction algorithm is presented and adapted to mobile phones. In order to adapt to the slow and slight changes of the natural environment, a self-adaptive background model that is updated automatically and in a timely manner is detailed. When the difference between the current captured image and the background reaches a certain threshold, a moving object is considered to be in the current view, and the mobile phone will automatically notify the central control unit or the user through a phone call, SMS, or other means. Based on J2ME technology, we use JSR135 and JSR120 to implement a prototype [1].

1.1. Limitations

• Depends on the mobile phone camera used by the user (i.e., the higher the camera resolution of the particular mobile phone, the better the clarity of the picture).

• Can be used only with mobile phones that support J2ME applications, such as handsets from the major phone vendors (Nokia, Motorola, Sony Ericsson, etc.) running operating systems like Symbian.

• On the server side a static IP address is required to receive pictures from the mobile phone (client) and also to send messages to the mobile phone (administrator).

1.2 Problem Definition

To develop a J2ME-Based Wireless Intelligent Video Surveillance System using Moving Object Recognition Technology.

The purpose of this document is to demonstrate the requirements of the project “Motion Detector”. The

document gives the detailed explanation of both functional and non-functional requirements. With this application the mobile phone user should be able to do the following things:

• Can process the consecutive snapshots to find the motion detection.

• Can transmit the captured image to the server.

• Can reset or disconnect the mobile application from the server.

The application should have user-friendly GUI. The communication between the phone and the computer should take place through GPRS. This means that the mobile can transmit the information to a remote user.

The mobile part of the application should be written in J2ME as it provides a very good GUI.

1.3 Requirement Specifications

The basic purpose of software requirement specification (SRS) is to bridge the communication between the parties involved in the development project. SRS is the medium through which the users’ needs are accurately specified; indeed SRS forms the basis of software development. Another important purpose of developing an SRS is helping the users understand their own needs.

In this chapter the requirements for developing the program, namely the hardware requirements, software requirements, functional requirements and non-functional requirements, are specified so that the user can deploy the program with ease.

1.4 Functional Requirements

• To protect a room from intruder attack.
• To respond when the room is under attack.
• To prevent most attacks from succeeding.

1.5 Non Functional Requirements

1.5.1 Hardware Interfaces

• Computer: PC installed with Internet connection with unique IP address. The application should be running to accept the request and to send response to mobile based on the option.

• Mobile: GPRS enabled mobile with CLDC 1.1 and MIDP 2.0 or higher version.


1.5.2 Software Interfaces

• Network: Internet.
• Application: Java, J2ME, NetBeans 1.6.
• Operating System: Windows XP.
• J2ME Wireless Toolkit 2.5.1

1.5.3 Communication Interfaces

• GPRS will be used as Communication interface between server and mobile.

2. DETAILED DESIGN

2.1 Sequence Diagram

Fig 1: Sequence Diagram

2.2 Activity Diagram

Fig 2: Activity Diagram

2.3 Deployment Diagram

Fig 3: Deployment Diagram

2.4 Piximity Algorithm

This algorithm is used to calculate the difference between the first snap and consecutive snaps. We take the RGB value of each pixel of an image and store each distinct combination in a colour array; we then calculate the count of each of these values and store it in a colour-count array. This is illustrated in the figure below.


Fig 4: Piximity Algorithm

3. Modules

3.1 Server Side

3.1.1 Module Name: MotionDetectorWelcome

This module is used for displaying the welcome screen when the application is launched on the server side.

Input: None

Step1: Obtain the image from the disk

Step 2: Set boundaries for the image

Step 3: Set time duration for which the image must be displayed

Step 4: End

3.1.2 Module Name: MotionDetectionProcessor

This module is used for receiving the image, storing the image, displaying it on the canvas and calling the SendMail module, which sends a message to a particular number.

Input: Command, Arguments

Step1: Obtain the inputstream from the client side

Step 2: Check the command received in the stream

Step 2.1: If command is to receive image then call getImage() function which receives the image

Step 2.2: If the command is a motion detection alarm, call the MotionDetectionSuit function, which raises an alarm on the server side and sends a message to the specified user

Step 3: End

3.1.3 getImage

Step 1: Display a transmitting bar in the canvas on server side

Step 2: Read the image in form of bytes from the input stream

Step 3: Call the StoreImage module which stores the byte array in the form of JPEG in the disk

Step 4: End
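A minimal sketch of this receive path with plain Java I/O is shown below (the transmitting bar drawn on the canvas is omitted, and the buffer size and end-of-stream convention are assumptions for illustration).

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch of getImage: read the JPEG bytes sent by the phone and
// return them so that the StoreImage step can write them to disk.
public class GetImage {

    public static byte[] readImage(InputStream in) throws IOException {
        ByteArrayOutputStream image = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];              // assumed buffer size
        int read;
        while ((read = in.read(buffer)) != -1) {     // Step 2: read the image in the form of bytes
            image.write(buffer, 0, read);
        }
        return image.toByteArray();                  // Step 3: hand the bytes to StoreImage
    }
}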

3.1.4 Module Name: MotionDetectionSuit

This module is a supporting module for Motion Detection Processor which is used to call the audio file placed in the disk, create the directory name for storing image.

Step 1: Call the audio file and play it thrice with a gap of 1 min between them

Step 2: Get the current date time and give that as the name to the directory

Step 3: Call the inbuilt File module to create the directory

Step 4: End

3.1.5 Module Name: StoreImage

This module is a supporting module for the Motion Detection Processor. It is used to store the image, in JPEG format, in the directory created, and to print to the command prompt whether or not the image was created.

Step 1: Print to the command prompt the directory in which the image is being stored

Step 2: Use the FileOutputStream inbuilt module to store the jpeg image in the directory

Step 3: Using the write function, store the bytes into the JPEG file

Step 4: End


3.1.6 Module Name: MotionDetectorNavigator

This module implements the server-side GUI, which gives options to disconnect, reset the connection, play the stored video at three different speeds, and even run it in a loop.

Step 1: Create the GUI by extending the module to JFrame

Step 2: Create the Checkbox, Create buttons for play video, change the connection

Step 3: End

3.1.7 Module Name: PlayVideoChooser

This is a supporting module for MotionDetectionServer which gives the user option to play from a set of videos stored

Step 1: This module extends JFrame, which is used to display the list.

Step 2: When the user clicks on a video file, that file is played using the PlayVideo module

Step 3: End

3.1.8 Module Name: PlayVideo

This is a supporting module for PlayVideoChooser which obtains the video and plays it.

Step 1: Obtain the video from the disk

Step 2: Play the video by using DrawImage inbuilt function

Step 3: End

3.1.9 Module Name: SendMail

This module is used to send an SMS to an Airtel number using a Gmail ID

Step 1: Establish an SMTP connection with the Gmail server

Step 2: Mention the user name and password of the Gmail account

Step 3: Mention the mobile number

Step 4: Mention the message to be sent

Step 5: End
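A hedged sketch of this module using the JavaMail API is shown below. The SMTP properties, the carrier's email-to-SMS gateway address and the credentials are placeholders, not values taken from the paper.

import java.util.Properties;
import javax.mail.Authenticator;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

// Illustrative sketch of the SendMail module: send the alert text through
// Gmail's SMTP server to a carrier email-to-SMS gateway address (placeholder).
public class SendMail {

    public static void sendAlert(final String gmailUser, final String gmailPassword,
                                 String smsGatewayAddress, String text) throws MessagingException {
        Properties props = new Properties();                    // Step 1: SMTP connection settings for Gmail
        props.put("mail.smtp.host", "smtp.gmail.com");
        props.put("mail.smtp.port", "587");
        props.put("mail.smtp.auth", "true");
        props.put("mail.smtp.starttls.enable", "true");

        Session session = Session.getInstance(props, new Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(gmailUser, gmailPassword);   // Step 2: account credentials
            }
        });

        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress(gmailUser));
        msg.setRecipient(Message.RecipientType.TO, new InternetAddress(smsGatewayAddress)); // Step 3: target address
        msg.setSubject("Motion detected");
        msg.setText(text);                                      // Step 4: message to be sent
        Transport.send(msg);
    }
}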

3.2 MOBILE SIDE

3.2.1 Module Name: WelcomeScreen

This module is used to display an image on the mobile before the launch of the application, with OK and EXIT as the two options

Step 1: Obtain the image from the disk

Step 2: Align it to the center of the screen

Step 3: Add two commands OK and EXIT

Step 4: End

3.2.2 Module Name: RecordStorage

This module is used to store the values of the time, IP address and threshold

Step 1: If records are already stored, the values are taken from the record store; otherwise they are taken from the user

Step 2: There are three functions to receive the time, threshold and IP address

Step 3: End

3.2.3 Module Name: MotionDetectionScheduler

This module is used to schedule the application so that it will capture the motion after some time as scheduled

Step 1: Here we use the sleep function to delay for the specified time

Step 2: After the scheduled time we call the capture module

Step 3: End


3.2.4 Module Name: MotionCapture

This module is used to calculate the difference between the first image and each consecutive image; if the difference between them is greater than the threshold, an alarm signal is sent. It is a supporting module for the two modules above and uses the Piximity algorithm.

Step 1: If the snap received is the first one, we take it as the test snap

Step 2: From then on we take the first image as the background image and calculate the difference of each consecutive image using the Piximity algorithm and isMotionDetected()

Step 3: End

3.2.5 Piximity Algorithm

int piximityAlg(int image[]) {
    int LENGTH = image.length;
    int color_array[] = new int[LENGTH];   // distinct RGB values found so far
    int color_count[] = new int[LENGTH];   // occurrence count for each distinct value
    int array_count = 1;                   // number of distinct values stored
    boolean color_found;
    color_array[0] = image[0];
    for (int i = 0; i < LENGTH; i++) {
        color_found = false;
        for (int j = 0; j < array_count; j++) {
            if (color_array[j] == image[i]) {
                color_count[j] = color_count[j] + 1;
                color_found = true;
                break;
            }
        }
        if (!color_found) {                // a new colour: store it and start its count
            color_array[array_count] = image[i];
            color_count[array_count] = 1;
            array_count = array_count + 1;
        }
    }
    return array_count;                    // number of distinct colours in the snapshot
}

3.2.6 isMotionDetected().

Step 1: Compare the difference computed by the Piximity algorithm between the background snap and the current snap with the threshold given by the user

Step 2: If the difference is greater than the threshold, motion is detected and the alarm signal is sent

Step 3: End
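A minimal sketch of this check is given below, under the assumption that the "difference" is the gap between the distinct-colour counts returned by the Piximity algorithm for the background snap and the current snap; the paper does not spell out the exact comparison.

// Illustrative sketch of isMotionDetected().
public class MotionCheck {

    // background and current are the pixel arrays of the two snapshots;
    // threshold is the value entered by the user on the mobile.
    public static boolean isMotionDetected(int[] background, int[] current, int threshold) {
        int difference = Math.abs(piximityAlg(current) - piximityAlg(background));
        return difference > threshold;     // motion is reported when the difference exceeds the threshold
    }

    // piximityAlg as listed in Section 3.2.5 (omitted here).
    private static int piximityAlg(int[] image) {
        throw new UnsupportedOperationException("see Section 3.2.5");
    }
}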

3.2.7 Module Name: AlertMessage

This is a supporting module for MotionCapture used to send the AlertMessage

Step 1: We create a socket connection with the server using the IP address given by the user

Step 2: We use output data stream to write on the server side

Step 3: End
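On the MIDP side this can be sketched with the Generic Connection Framework as below; the assumption that only a short alert byte array is written, and the placeholder port, are ours.

import java.io.IOException;
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.SocketConnection;

// Illustrative J2ME sketch of the AlertMessage module.
public class AlertMessage {

    public static void send(String serverIp, int port, byte[] alertData) throws IOException {
        SocketConnection conn = (SocketConnection)
                Connector.open("socket://" + serverIp + ":" + port);   // Step 1: socket connection to the server
        OutputStream out = conn.openOutputStream();                    // Step 2: output stream to the server side
        try {
            out.write(alertData);
            out.flush();
        } finally {
            out.close();
            conn.close();
        }
    }
}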



4. Results

4.1 Server Side

Fig 5: Start up screen

Fig 6: Motion detector viewer


Fig 7: The image shows that a message was sent; the directory where the image is being stored is mentioned

4.2 Mobile Side

Fig 8: The user has to launch the application

Fig 9: The welcome screen is being launched


Fig 10: User has to choose between two modes

Fig 11: User enters the three options Capture After, Threshold, and IP Address


Fig 12: The Difference between images is being calculated

Fig 13: SMS received by the user

5. FUTURE ENHANCEMENTS

The current implementation of security in Wireless Vigilance can be enhanced with the following features.

Some of the many possible future enhancements are:

• Storing images in formats other than JPEG
• The quality of the captured image can be improved
• Messages can be sent to a group of users instead of one

There may be further scope for improvement that has not come to our notice. We will keep track of user feedback and try to implement the improvements necessary to appeal to others.

VII. CONCLUSION

The moving object recognition technology leads to the development of autonomous systems, which also minimize network traffic. With good mobility, the system can be deployed rapidly in an emergency and can be a useful supplement to traditional monitoring systems. With the help of J2ME technology, the differences between various hardware platforms are minimized: any embedded platform with a camera and JSR135/JSR120 support can run this system without any changes to the application. Also, the system can be extended to a distributed wireless network system: many terminals work together, reporting to a control center and receiving commands from the center. Thus, a low-cost wide-area intelligent video surveillance system can be built. Furthermore, with the development of embedded hardware, more complex digital image processing algorithms can be used to enable more kinds of applications in the future.


References

[1] M Valera, SA Velastin, Intelligent distributed surveillance systems: a review. IEE Proceedings on Visual Image Signal Processing, April. 2005, vol. 152, vo.2, pp.192-204.

[2] M. Picardy, Background subtraction techniques: a review, IEEE International Conference on Systems, Man and Cybernetics, Oct. 2004, vol. 4, pp. 3099–3104.

[3] Alan M. McIvor, Background Subtraction Techniques, Proceedings of Image & Vision Computing New Zealand 2000 (IVCNZ'00), Reveal Limited, Auckland, New Zealand, 2000.

[4] C. Enrique Ortiz, The Wireless Messaging API, developers.sun.com, 2002.

[5] Sun Microsystems, Inc., JSR 120 Wireless Messaging API (WMA) specification, 2002.

[6] Herbert Schildt, Java 2: The Complete Reference, Tata McGraw-Hill, Fifth edition, 2002.

[7] Patrick Naughton, The Java Handbook, Tata McGraw-Hill Edition, 1997.

[8] http://computer.howstuffworks.com/program.htm

[9] ieeexplore.ieee.org/iel5/4566097/4566248/04566311.pdf?arnumber=4566311

[10] http://ieeexplore.ieee.org/iel5/90/4542818/04542819.pdf?arnumber=4542819

[11] http://www.geom.uiuc.edu/~daeron/docs/apidocs/java.io.html

[12] http://www.geom.uiuc.edu/~daeron/docs/apidocs/packages.html

[13] http://www.kbs.twi.tudelft.nl/Documentation/Programming/Java/jdk1.4/api/javax/swing/JComponent.html

[14] http://www.kbs.twi.tudelft.nl/Documentation/Programming/Java/jdk1.4/api/javax/swing/package-summary.html


35. The NS-2 Simulator Based Implementation and Performance Analyses of Manycast QoS Routing Algorithm

J. Rajeswara Rao, M.Tech 2nd Year ECE, V.R. Siddhartha Engineering College, Vijayawada, A.P. Email: [email protected]

S. Chittibabu, Asst. Professor of ECE, V.R. Siddhartha Engineering College, Vijayawada, A.P. Email: [email protected]

Dr. K. Sri Rama Krishna, Professor & Head of ECE, V.R. Siddhartha Engineering College, Vijayawada, A.P. Email: srk_kalva @ yahoo.com

Abstract: - In network communications, there are many methods to exchange packets or to deliver data from sources to destinations. During the past few years, we have observed the emergence of new applications that use multicast transmission. Many distributed applications require a group of destinations to be coordinated with a single source, and multicasting is a communication paradigm to implement these distributed applications. However, in multicasting, if at least one of the members in the group cannot satisfy the service requirement of the application, the multicast request is said to be blocked. On the contrary, in manycasting, destinations can join or leave the group depending on whether they satisfy the service requirement or not. This dynamic, membership-based destination group decreases request blocking. We consider manycasting over optical burst-switched (OBS) networks based on multiple quality of service (QoS) constraints. These constraints can be in the form of physical-layer impairments, transmission delay, and reliability of the link. Each application requires its own QoS threshold attributes, and destinations qualify only if they satisfy the required QoS constraints set by the application. Due to multiple constraints, burst blocking could be high. We propose an algorithm to minimize request blocking for the manycast problem. Using NS2-based simulation results, we analyse these constraints for the manycast algorithm, and then focus on introducing how it is simulated in NS2 and analysing the algorithm's performance.

Key-words: - NS2 (Network Simulator-2), manycast, optical burst-switched networks (OBS), quality of service (QoS), simulation.

1. Introduction

With the advent of many Internet-based distributed applications, there is a continued demand for a high-capacity transport network. Networks based on wavelength division multiplexing (WDM) are deployed to tackle the exponential growth in present Internet traffic. WDM networks include optical circuit switching (OCS), optical packet switching (OPS), and optical burst switching (OBS).

In OBS the user data is transmitted all-optically as bursts with the help of an electronic control plane. One of the primary issues with OCS is that the link bandwidth is not utilized efficiently in the presence of bursty traffic. On the other hand, many technological limitations have to be overcome for OPS to be commercially viable. OBS networks overcome the technological constraints imposed by OPS and the bandwidth inefficiency of OCS networks. In this paper, we focus on OBS as the optical transport network; most of the discussed algorithm can easily be modified to work for OCS and OPS networks. There has been a recent emergence of many distributed applications that require high bandwidth, such as video conferencing, telemedicine, distributed interactive simulations (DIS), grid computing, storage area networks (SANs), and distributed content distribution networks (CDNs). The delay constraints of these applications can be met effectively by using OBS as the transport paradigm. These distributed applications require a single source to communicate with a group of destinations. Traditionally, such applications are implemented using multicast communication. A typical multicast session requires creating the shortest-path tree to a fixed number of destinations. The fundamental issue in multicasting data to a fixed set of destinations is receiver blocking: if one of the destinations is not reachable, the entire multicast request (say, a grid task request) may fail. A useful variation is to dynamically select destinations depending on the status of the network. Hence, in distributed applications, the first step is to identify potential candidate destinations and then select the required number. This dynamic approach is called manycasting. Manycasting has caught the attention of several researchers during the recent past, due to the emergence of many of the distributed applications described above. The rest of the paper is organized as follows: we first discuss the problem formulation for service attributes in Section 2. In Section 3, the implementation of the manycast algorithm is explained. Section 4 discusses the analyses of the simulation results. Finally, Section 5 concludes the paper.

2. Problem Formulation

In this section, we explain the mathematical framework for manycasting. Our work focuses on selecting the best possible destinations that can meet the service demands effectively. Destinations chosen must be able to provide the required quality of service attributes. A destination is said to qualify as a member of the quorum pool if it satisfies the service requirements of the application. The proposed methods are based on distributed routing, where each node individually maintains the network state information and executes the algorithm. Algorithms implemented in a centralized way may fail due to a single failure, resulting in poor performance. Our proposed algorithm has the following functionality:

1) Handle multiple constraints with the help of link state information available locally.

2) Service-differentiated provisioning of manycast sessions.

3) Find the best possible destinations in terms of service requirements for the manycast sessions.

A. Notations

The description of the simulated manycast QoS routing algorithm is as follows.

Input: network topological graph G = (V, E); Gs(A) = S1, …, Sq (q < n); G(A) = d1, …, dk (k < n); crossover rate Pc; mutation rate Pm; maximum evolution generation maxgen
Output: optimal path that satisfies the constraint conditions

gen = 0;
Initializing population;
Calculating the fitness value of population individuals;
while (dissatisfying pausing condition or gen < maxgen)
    gen++;
    Selection operation();
    Crossover operation();
    Repair operation();
    Mutation operation();
    Repair operation();
    Fitness calculation();
    Retaining the optimal population individual;
Output of the path of the optimal individual

B. Service Attributes

We define n_j, r_j and t_j as the noise factor, reliability factor, and end-to-end propagation delay of link j, respectively. The noise factor is defined as the ratio of the input optical signal-to-noise ratio, OSNR_i/p = OSNR_i, to the output optical signal-to-noise ratio, OSNR_o/p = OSNR_(i+1); thus we have

n_j = OSNR_i/p / OSNR_o/p = OSNR_i / OSNR_(i+1) ----- (1)

where OSNR is defined as the ratio of the average signal power received at a node to the average amplified spontaneous emission noise power at that node. The OSNR of the link and the q-factor are related as

q = 2 √(B_o / B_e) · OSNR / (1 + √(1 + 4·OSNR)) ------ (2)

where B_o and B_e are the optical and electrical bandwidths, respectively. The bit-error rate is related to the q-factor as follows:

BER = (1/2) erfc(q / √2) ----- (3)

We choose a route that has a minimum noise factor; the overall noise factor of a route is the product of the per-link factors,

n = ∏_j n_j ------- (4)

The other two parameters considered in our approach are the reliability factor and the propagation delay of the burst along the link. The reliability factor of link j is denoted r_j; it indicates the reliability of the link and lies in the interval (0, 1]. The overall reliability of the route is calculated as a multiplicative constraint and is given by

r = ∏_j r_j ------- (5)

The propagation delay on link j is denoted t_j, and the overall propagation delay of the route is given by

t = ∑_j t_j -------- (6)

3. Implementation of Manycast Algorithm

There are two levels at which simulations are built in NS2. One is based on configuration and construction in OTcl, which can use existing network elements to realize the simulation by writing OTcl scripts without modifying NS2; the other is based on C++ and OTcl. When the required module resources do not exist, NS2 must be upgraded or modified to add the needed network elements. Under these circumstances, the split object model of NS2 is used to add a new C++ class and an OTcl class, and the OTcl scripts are then programmed to implement the simulation. The process of network simulation in NS2 is shown in Fig. 1.

A. Extension of C++ level

Since manycast is a new network model and there is little existing simulation support in NS2, our simulation experiment was conducted on both the C++ and OTcl levels. NS2 first needs to be extended to support the simulation of the manycast QoS routing algorithm. The main steps are as follows:

1) Declare relevant variables. The relevant variables of the routing algorithm must first be declared; this can be done by extending /ns2.34/routing/route.h.

2) Add a compute-route function. When the compute_routes() function in /ns2.34/routing/route.cc is extended, we add a compute_routes_delay() function, and the manycast QoS routing algorithm is embedded into this compute_routes() function.

3) Modify the command() function. In accordance with the separation of C++'s compiled class and OTcl's interpreted class in NS2, the configuration of nodes and the structure of the network topological graph are implemented on the OTcl level, and related parameters are passed to the C++ level through the TclCL mechanism to assist the C++ implementation. When the manycast QoS routing algorithm is implemented in C++, we need to know the numbers of request nodes and service nodes, and the command() function is used to transmit the contents of the relevant parameters argv[2] and argv[3] from OTcl to C++. Since the data on the OTcl level are of character type, the data types need to be converted when they are transmitted. After the C++ code is modified, the work on the compiled level is basically finished.


Fig. 1: The simulation process of NS2

B. Program OTcl test scripts

1) Create the NS simulation object and define the routing simulation module.
set ns [new Simulator]

2) Define the way of routing for the simulation object.
$ns multicast
$ns mrtproto manycast

3) Create node objects. In this experiment, we use a Tcl for loop to create node_n simple nodes.
for {set i 0} {$i < $node_n} {incr i} { set n($i) [$ns node] }

4) Create connections between different nodes.
$ns simplex-link <node0> <node1> <bandwidth> <delay> <queue_type>

5) Create tracking objects.
set f [open out.tr w]
$ns trace-all $f
$ns namtrace-all [open out.nam w]
Here an out.tr file is created to record the trace data during the simulation, with the variable f pointing at this file; an out.nam file is created to record the trace data for nam.

6) Create the receiving sources.
set group [Node allocaddr]
set rcvr1 [new Agent/LossMonitor]
set rcvr2 [new Agent/LossMonitor]

7) Create the sending and finishing times of packets.
$ns at 0.2 "$n(1) color blue"
$ns at 0.2 "$n(1) label Request"
$ns at 0.2 "$n(1) request-anycast $rcvr1 $group"
$ns at 1.2 "finish"

8) Define the finish procedure for nam.
proc finish {} {
    global ns
    $ns flush-trace
    puts "running nam..."
    exec nam out.nam &
    exit 0
}
The finish procedure is called at 1.2 seconds to end the whole program and to close the out.tr and out.nam files.

9) Run the Tcl script.
$ns run

4. Analyses of Simulation Results

The OTcl script ex-manycast.tcl uses a simple, randomly generated network topology with 30 nodes as its basic experimental environment, in which the link delay and cost are randomly generated. After the program runs, we can use the Nam tool and the Xgraph tool to obtain clearer experimental results. The following experimental results are obtained based on this basic experimental environment.

4.1 Analyses of Nam animated results

Fig. 2: Animated demonstration of 30 nodes (Before running)


Fig. 2 shows the randomly generated network topological graph; a regular network check is necessary before running the manycast routing algorithm program. As demonstrated in the figure, the data transmission between node 1 and node 3 shows that the network works well. Under this circumstance, node 1 requests a service, nodes 3 and 7 are service nodes, and the simulated algorithm is to select an optimal node from service nodes 3 and 7 to communicate with node 1.

Fig. 3: Animated demonstration of 30 nodes (during running)

Fig. 3 shows that the simulated manycast routing program finally chooses node 7, which can provide the optimal service, to communicate with the request node 1.

4.2 Xgraph curves

1) Average end-to-end propagation delay of data message

Fig. 4: End-to-end propagation delay of data message

Fig. 4 shows the average delay of packet transmission between nodes 1 and 7. As shown in Fig. 4, a routing path is obtained from the simulation of the manycast routing algorithm on which the average delay of packet transmission remains basically stable, which shows the performance stability of this algorithm.

2) Delay jitter

Fig. 5: Delay jitter. The results show that, under conditions of network stability, the delay jitter on the routing path obtained from the simulation of the manycast routing algorithm tends to 0. Consequently, we can conclude that the manycast path produced by this algorithm is a path with relatively good performance.

3) Throughput

Fig. 6: Throughput of the neighbor nodes

The results show that the throughput of all nodes on the routing path, except for the source node and the destination node, remains at a stable value. Once the route is formed, there is little packet loss among the different nodes: what is transmitted in equals what is sent out.

4) Comparison with a traditional routing algorithm

Fig. 7: Comparison of path-finding time between Dijkstra and manycast

Fig. 7 shows that, for the same manycast transmission and the same network environment, when the number of nodes is small the path-finding time of the manycast algorithm is somewhat slower than that of Dijkstra. However, as the number of nodes increases, manycast shows more advantages: it not only uses less time than Dijkstra but also finds an optimal QoS path to provide services. When path-finding time and path fitness are considered together, the manycast QoS routing algorithm has good application prospects for manycast communication models.

5. Conclusion

Since manycast is a new network model, many researchers are investigating various routing protocols and routing methods to satisfy QoS demands in the application layer and the network layer. It is important for the development of manycast routing software to simulate various QoS manycast routing algorithms in NS2 so as to evaluate and analyse algorithm performance. This will remain an important direction in the research of manycast routing algorithms.

6. Acknowledgements

We sincerely thank Dr. Balagangadhar G. Bathula for his valuable guidance, suggestions and encouragement.


36. Study of PSO in Cryptographic Security Issues

Ahalya.R.

M.Tech., Bharathidasan University, Tiruchirappalli.

Abstract:

Particle Swarm Optimization (PSO) is a stochastic search technique which has exhibited good performance across a wide range of applications. Encryption is an important issue in today's communication, since much of it is carried out over the air interface and is therefore more vulnerable to fraud and eavesdropping. In cryptographic encryption based on PSO, the number of keys to be stored and distributed is reduced, and it is difficult for a cryptanalyst to trace the original message; the time taken to encrypt a message is also small. PSO has been successfully applied in many areas, such as function optimization, artificial neural network training, and fuzzy system control.


Keywords: Cryptography, PSO, Security.

1. INTRODUCTION

Cryptography is a fundamental building block for building information systems, and as we enter the so-called “information age” of global networks, ubiquitous computing devices, and electronic commerce, we can expect that cryptography will become more and more important with time. Cryptography is the science of building new, powerful and efficient encryption and decryption methods. It deals with the techniques for conveying information securely. The basic aim of cryptography is to allow the intended recipients of a message to receive the message properly while preventing eavesdroppers from understanding the message.

A stream cipher encrypts one individual character in a plaintext message at a time, using an encryption transformation which varies with time [11]. Stream cipher is a symmetric key encryption where each bit of data is encrypted with each bit of key. The Crypto key used for encryption is changed randomly so that the cipher text produced is mathematically impossible to break. The change of random key will not allow any pattern to be repeated which would give a clue to the cracker to break the cipher text.
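As a generic illustration of this principle (not the specific key-generation scheme proposed later in this paper), a stream cipher can be sketched as an XOR of each data byte with the corresponding key-stream byte; applying the same key stream again recovers the plaintext.

// Generic stream-cipher illustration: XOR each data byte with the matching key-stream byte.
public class StreamCipherSketch {

    public static byte[] apply(byte[] data, byte[] keyStream) {
        if (keyStream.length < data.length) {
            throw new IllegalArgumentException("key stream must cover the whole message");
        }
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ keyStream[i]);   // each bit of data combined with each bit of key
        }
        return out;
    }
}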

2. PARTICLE SWARM OPTIMIZATION

James Kennedy and Russell C. Eberhart [1] originally proposed the PSO algorithm for optimization. PSO is a population-based search algorithm. The particle swarm concept originated as a simulation of a simplified social system; the original intent was to graphically simulate the choreography of a bird flock or fish school. However, it was found that the particle swarm model can be used as an optimizer. Although originally adopted for balancing weights in neural networks [2], PSO soon became a very popular global optimizer.

Reynolds [12] proposed a behavioral model in which each agent follows three rules:

Separation: Each agent tries to move away from its neighbors if they are too close.

Alignment: Each agent steers towards the average heading of its neighbors.

Cohesion: Each agent tries to go towards the average position of its neighbors.

Kennedy and Eberhart [6] included a ‘roost’ in a simplified Reynolds-like simulation so that:

• Each agent was attracted towards the location of the roost.

• Each agent ‘remembered’ where it was closer to the roost

• Each agent shared information with its neighbors (originally, all other agents) about its closest location to the roost.

There are two key aspects by which we believe PSO has become so popular:

1. The main algorithm of PSO is relatively simple (since in its original version, it only adopts one operator for creating new solutions, unlike most evolutionary algorithms) and its implementation is, therefore, straightforward. Additionally, there is plenty of source code of PSO available in the public domain.

2. PSO has been found to be very effective in a wide variety of applications, being able to produce very good results at a very low computational cost [3, 4].

In order to establish a common terminology, in the following we provide some definitions of several technical terms commonly used.

• Swarm: Population of the algorithm.


• Particle: Member (individual) of the swarm. Each particle represents a potential solution to the problem being solved. The position of a particle is determined by the solution it currently represents.

• Pbest (personal best): Personal best position of a given particle, so far. That is, the position of the particle that has provided the greatest success (measured in terms of a scalar value analogous to the fitness adopted in evolutionary algorithms).

• lbest (local best): Position of the best particle member of the neighborhood of a given particle.

• gbest (global best): Position of the best particle of the entire swarm.

• Leader: Particle that is used to guide another particle towards better regions of the search space.

• Velocity (vector): This vector drives the optimization process, that is, it determines the direction in which a particle needs to “fly” (move) in order to improve its current position.

3. ENCRYPTION AND DECRYPTION

SYSTEM

Figure 1 shows the encryption and decryption process. The sender sends the plain text, which is encrypted using a set of keys by the encryption system. The cipher text is given to the decryption system, which decrypts it using the same set of keys, and the plain text is given to the receiver.

4. ENCRYPTION TECHNIQUE USING

PARTICLE KEY GENERATION

The particle key generation is used to generate the key stream for encryption based on the character distribution in the plain text. The characters in the key stream denote the keys to be used for encryption. To ensure security, the same sequence of keys is never used more than once [5]. The plain text, and the characters in the key stream that are present in the plain text, are replaced by the values that we evaluated.

The use of a character code table for the plain text enhances the security of the system: even if the characters in the key stream are found, their values will not be known to an attacker, since an encoding operation is performed. The plain text is divided into blocks whose size is the key stream length, and the values of the characters in the key stream form the keys for the first block. This reduces the computational load and the key storage requirement.

5. PARTICLE SWARM OPTIMIZATION

ALGORITHM OPERATION

PSO optimizes an objective function by undertaking a population-based search that exploits the collective behavior of simple individuals interacting with their environment and with each other; this is sometimes called swarm intelligence. The population consists of potential solutions, named particles, which are a metaphor for birds in flocks. These particles are randomly initialized and freely

Page 284: Keynote 2011

284

fly across the multi-dimensional search space. During flight, each particle updates its own velocity and position based on the best experience of its own and the entire population. The updating policy drives the particle swarm to move toward the region with the higher objective function value, and eventually all particles will gather around the point with the highest objective value. When PSO is applied to real-life problems, the function evaluations themselves are the most expensive part of the algorithm.

The detailed operation of Particle Swarm Optimization is given below:

Initialization:

The velocity and position of all particles are randomly set.

Velocity Updating:

At each iteration, the velocities of all particles are updated according to:

Vi = w Vi + C1 R1 (Pi,best − Pi) + C2 R2 (gi,best − Pi)

where Pi and Vi are the position and velocity of particle i, respectively; Pi,best and gi,best are the positions with the ‘best’ objective value found so far by particle i and by the entire population, respectively; w is a parameter controlling the flying dynamics; R1 and R2 are random variables in the range [0,1]; and C1 and C2 are factors controlling the relative weighting of the corresponding terms.

The inclusion of random variables endows PSO with a stochastic searching ability. The weighting factors C1 and C2 balance the inevitable tradeoff between exploration and exploitation. After updating, Vi should be checked and kept within a pre-specified range to avoid violent random walking.

Position Updating:

Assuming a unit time interval between successive iterations, the positions of all particles are updated according to:

Pi = Pi + Vi

After updating, Pi should be checked and limited to the allowed range.

Memory Updating:

Update Pi,best and gi,best when the condition is met:

Pi,best = Pi if f (Pi) > f (Pi,best)

gi,best = Pi if f (Pi) > f (gi,best)

where f(x) is the objective function subject to maximization.

After finding the two best values, the particle updates its velocity and position with the following equations (a) and (b).

v[] = v[] + c1 * rand() * (pbest[] − present[]) + c2 * rand() * (gbest[] − present[])    (a)

present[] = present[] + v[]    (b)

where v[] is the particle velocity and present[] is the current particle (solution); pbest[] and gbest[] are defined as stated before; rand() is a random number between (0,1); and c1, c2 are learning factors, usually c1 = c2 = 2.
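As a small illustration (not taken from the paper), one update of a one-dimensional particle under equations (a) and (b) might look as follows in Python; all numeric values are arbitrary.

import random
random.seed(1)                      # arbitrary seed so the run is repeatable

c1 = c2 = 2.0                       # learning factors, as stated above
present, v = 3.0, 0.5               # current position and velocity (arbitrary)
pbest, gbest = 2.0, 1.0             # assumed personal and global best positions

r1, r2 = random.random(), random.random()
v = v + c1 * r1 * (pbest - present) + c2 * r2 * (gbest - present)   # equation (a)
present = present + v                                               # equation (b)
print(v, present)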

Particles' velocities on each dimension are clamped to a maximum velocity Vmax, a parameter specified by the user. If the sum of accelerations would cause the velocity on a dimension to exceed Vmax, the velocity on that dimension is limited to Vmax.

The pseudo code of the procedure is as follows:

For each particle
    Initialize particle
End

Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the best fitness value (pBest) in history
            Set current value as the new pBest
    End
    Choose the particle with the best fitness value of all the particles as the gBest
    For each particle
        Calculate particle velocity according to equation (a)
        Update particle position according to equation (b)
    End
While maximum iterations or minimum error criteria is not attained


Termination Checking:

The algorithm repeats Velocity Updating, Position Updating, and Memory Updating until certain termination conditions are met, such as a pre-defined number of iterations or a failure to make progress for a certain number of iterations. Once terminated, the algorithm reports the values of gbest and f (gbest) as its solution.
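Pulling the pieces above together, a compact Python sketch of the whole procedure (initialization, velocity and position updating with Vmax clamping, memory updating and a fixed-iteration termination check) is given below. The objective function, parameter values and swarm size are illustrative choices rather than values from the paper, and the problem is framed as maximization to match the text.

import random

def f(x):
    # Illustrative objective to maximize: a single peak at x = (1, 1).
    return -sum((xi - 1.0) ** 2 for xi in x)

DIM, SWARM_SIZE, ITERATIONS = 2, 20, 100
W, C1, C2, VMAX = 0.7, 2.0, 2.0, 1.0
LO, HI = -5.0, 5.0

# Initialization: random positions and velocities.
pos = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(SWARM_SIZE)]
vel = [[random.uniform(-VMAX, VMAX) for _ in range(DIM)] for _ in range(SWARM_SIZE)]
pbest = [list(p) for p in pos]
pbest_val = [f(p) for p in pos]
gbest_val = max(pbest_val)
gbest = list(pbest[pbest_val.index(gbest_val)])

for _ in range(ITERATIONS):
    for i in range(SWARM_SIZE):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Velocity updating (equation (a)) with Vmax clamping.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            vel[i][d] = max(-VMAX, min(VMAX, vel[i][d]))
            # Position updating (equation (b)), limited to the allowed range.
            pos[i][d] = max(LO, min(HI, pos[i][d] + vel[i][d]))
        # Memory updating: refresh pbest and gbest when improved.
        val = f(pos[i])
        if val > pbest_val[i]:
            pbest[i], pbest_val[i] = list(pos[i]), val
            if val > gbest_val:
                gbest, gbest_val = list(pos[i]), val

print(gbest, gbest_val)   # gbest should approach (1, 1) with value near 0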

6. RELATED WORK

There have been several proposals to extend PSO to handle multiple objectives. We will review next the most representative of them:

Ray and Liew[6]: This algorithm uses Pareto dominance and combines concepts of evolutionary techniques with the particle swarm. The approach uses crowding to maintain diversity and a multilevel sieve to handle constraints.

Hu and Eberhart[7]: In this algorithm, only one objective is optimized at a time using a scheme similar to lexicographic ordering. In further work, Hu et al.[8] adopted a secondary population (called “extended memory”) and introduced some further improvements to their dynamic neighborhood PSO approach.

Fieldsend and Singh[9]: This approach uses an unconstrained elite archive (in which a special data structure called “dominated tree” is adopted) to store the nondominated individuals found along the search process. The archive interacts with the primary population in order to define local guides. This approach also uses a “turbulence” (or mutation) operator.

Coello et al. [10]: This approach uses a global repository in which every particle deposits its flight experiences. Additionally, the updates to the repository are performed considering a geographically-based system defined in terms of the objective function values of each individual; this repository is used by the particles to identify a leader that will guide the search. It also uses a mutation operator that acts both on the particles of the swarm and on the range of each design variable of the problem to be solved. In more recent work, Toscano and Coello [11] adopted clustering techniques to divide the population of particles into several swarms and obtain a better distribution of solutions in decision variable space. In each sub-swarm, a PSO algorithm is executed and, at some point, the different sub-swarms exchange information: the leaders of each swarm are migrated to a different swarm in order to vary the selection pressure.

Zhiwen Zeng, Ya Gao, Zhigang Chen and Xiaoheng Deng [12]: As there are many malicious nodes spreading false information in P2P networks, it is of great importance to build a sound mechanism of trust in P2P environments. To avoid the shortcomings of existing trust models, this paper provides a Trust Path-Searching Algorithm based on PSO. In this algorithm, after initializing the particle swarm, each particle updates its speed and location according to its information and then produces a new particle with a better value. Repeating that process continuously implements a global search of the space and finally yields a better overall value, which corresponds to the better trust path in the network. The simulation results show that the algorithm can reach the overall optimum solution in a relatively short time after many iterations, and the more it iterates, the better the trust path becomes. It can be shown that, to a certain extent, the algorithm can prevent fraud.

A. M. Chopde, S. Khandelwal, R. A. Thakker, M. B. Patil and K. G. Anil [15]: An efficient DC parameter extraction technique for MOS Model 11, level 1100 (MM11) is outlined. The parameters are extracted step by step, depending on the characteristics where they play a major role. Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA) are used to extract parameters for an NMOS device in 65 nm technology. To the best of the authors' knowledge, this is the first application of the PSO algorithm to MOSFET parameter extraction. It has been observed that the PSO algorithm performs much better than GA in terms of accuracy and consistency. The proposed extraction strategy has been verified for the same technology for 150 nm and 90 nm devices.

Heinz Muhlenbein and Thilo Mahnig [16]: For discrete optimization, the two prevailing basic search principles are stochastic local search and population-based search. Local search has difficulty getting out of local optima; here, variable neighborhood search outperforms stochastic local search methods which accept worse points with a certain probability. Population-based search performs best on problems with sharp gaps, and is outperformed by stochastic local search only when there are many paths to good local optima.

7. CONCLUSION

In this paper, the study of PSO and its cryptographic security issues has been considered. At some point in the near future, we believe that there will be an important growth in the number of applications that adopt PSO as their search engine.

8. REFERENCES

[1] James Kennedy and Russell C. Eberhart. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, pages 1942-1948, Piscataway, New Jersey, 1995. IEEE Service Center.

[2] Russell C. Eberhart, Roy Dobbins, and Patrick K. Simpson. Computational Intelligence PC Tools. Morgan Kaufmann Publishers, 1996.

[3] Andries P. Engelbrecht. Fundamentals of Computational Swarm Intelligence. John Wiley & Sons, 2005.

[4] James Kennedy and Russell C. Eberhart. Swarm Intelligence. Morgan Kaufmann Publishers, San Francisco, California, 2001.

[5] B. Schneier. Applied Cryptography, Second Edition: protocols, algorithms and source code in C, pp. 15-16. John Wiley and Sons, 1996.

[6] Ray, T., Liew, K.: A Swarm Metaphor for Multiobjective Design Optimization. Engineering Optimization 34 (2002) 141-153.

[7] Hu, X., Eberhart, R.: Multi-objective Optimization Using Dynamic Neighborhood Particle Swarm Optimization. In: Congress on Evolutionary Computation (CEC'2002), Volume 2, Piscataway, New Jersey, IEEE Service Center (2002) 1677-1681.

[8] Hu, X., Eberhart, R. C., Shi, Y.: Particle Swarm with Extended Memory for Multi-objective Optimization. In: 2003 IEEE Swarm Intelligence Symposium Proceedings, Indianapolis, Indiana, USA, IEEE Service Center (2003) 193-197.

[9] Fieldsend, J. E., Singh, S.: A Multi-objective Algorithm based upon Particle Swarm Optimisation, an Efficient Data Structure and Turbulence. In: Proceedings of the 2002 U.K. Workshop on Computational Intelligence, Birmingham, UK (2002) 37-44.

[10] Coello Coello, C. A., Toscano Pulido, G., Salazar Lechuga, M.: Handling Multiple Objectives With Particle Swarm Optimization. IEEE Transactions on Evolutionary Computation 8 (2004) 256-279.

[11] Biham, E. and Seberry, J.: "Py (Roo): A Fast and Secure Stream Cipher". EUROCRYPT'05 Rump Session, at the Symmetric Key Encryption Workshop (SKEW 2005), 26-27 May 2005.

[12] Zhiwen Zeng, Ya Gao, Zhigang Chen, Xiaoheng Deng, "Trust Path-Searching Algorithm Based on PSO," in Proceedings of the 9th International Conference for Young Computer Scientists, pp. 1975-1979, 2008.

[13] Kennedy, J., and Eberhart, R. C. (1995). Particle swarm optimization. Proceedings of the 1995 IEEE International Conference on Neural Networks (Perth, Australia), IEEE Service Center, Piscataway, NJ, IV: 1942-1948.

[14] Ozcan, E., and Mohan, C. K. (1998). The Behavior of Particles in a Simple PSO System. ANNIE 1998 (in review).

[15] R. van Langevelde, A. J. Scholten and D. B. M. Klaassen, "Physical Background of MOS Model 11," Nat. Lab. Unclassified Report, April 2003.

[16] E. H. Aarts, J. H. M. Korst, and P. J. van Laarhoven, Simulated annealing, in E. Aarts and J. K. Lenstra, editors, Local Search in Combinatorial Optimization, pages 121-136, Chichester, 1997, Wiley.

[17] Peter J. Angeline. Using selection to improve particle swarm optimization. In Proceedings of the 1998 IEEE Congress on Evolutionary Computation, pages 84-89, Piscataway, NJ, USA, 1998. IEEE Press.

[18] Ioan C. Trelea. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Information Processing Letters, 85(6):317-325, 2003.

37. Enhancing Iris Biometric Recognition System with Cryptography and Error Correction Codes

J. Martin Sahayaraj, PG Scholar, Prist University, Thanjavur-613403, Tamilnadu.

[email protected], Ph. No: 9941165641

Abstract—A major challenge for iris and most other biometric identifiers is the intra-user variability in the acquired identifiers. Irises of the same person captured at different times may differ due to the signal noise of the environment or of the iris camera. In our proposed approach, an Error Correction Code (ECC) is therefore introduced to reduce the variability and noise of the iris data. In order to find the best system performance, the approach is tested using two different distance metric measurement functions for the iris pattern matching identification process, namely Hamming Distance and Weighted Euclidean Distance. An experiment with the CASIA version 1.0 iris database indicates that our results can assure a higher security with a low false rejection or false acceptance rate.

I. INTRODUCTION

In today's society, security has been a major concern and is becoming increasingly important. Cryptography has become one of the most effective ways and has been recognized as the most popular technology for security purposes. History shows that humans can remember only short passwords [1], and most users even tend to choose passwords that can be easily guessed using dictionary or brute-force attacks. This limitation has triggered the utilization of biometrics to produce strong cryptographic keys. A biometric is unique to each individual and it is reliable. For years, security has mostly been based on what one knows, such as a password, a PIN, or a security question such as the mother's maiden name. Such security features are easily forgotten, stolen, shared and cracked. A biometric is a measurement of the human body's biological or physical characteristics to determine the human identity (who you are). There are different types of biometric technologies available today, including fingerprints, face, iris/retina, hand geometry, signature, DNA, keystroke, tongue, etc. A biometric offers an inextricable link from the authenticator to its owner, which cannot be achieved by passwords or tokens, since it cannot be lent or stolen [2]. Biometrics is used to address the privacy and security weaknesses that exist in current security technology such as simple password or PIN authentication. Another advantage of biometrics is that it can detect and prevent multiple IDs. Although a biometric is unique among all individuals, it is not possible to use a biometric directly as a cryptographic key for the system, because different bits occur in the template during every authentication. Biometric images or templates are variable by nature, which means each new biometric sample is always different. Noise and errors may occur in the captured image due to burst or background errors, and hence the generated template is different during every authentication. There is also concern regarding the privacy and security of personal information due to the storage of the biometric templates. The loss or compromise of biometric templates may render the biometric unusable, and a human physiological biometric can hardly be changed or modified if the image template has been stolen or compromised.

This paper presents an ideal biometric authentication system which fulfils important properties such as non-repudiation, privacy, and security. The key idea to achieve these properties is to combine the iris biometric and a password. The intra-personal variation (noise and errors occurring in the genuine iris code) is addressed by introducing Error Correction Codes (ECC). The Error Correction Codes help to correct errors in genuine identification, making the threshold value smaller, which significantly improves the separation of genuine users from impostor verification. In this paper, two different distance metric functions used in the template matching process are compared. The next section provides the review of the literature, followed by the methodology and experimental results. Finally, the last section gives the concluding remarks and the future work of this paper.

II. LITERATURE REVIEW

A. Biometric Technologies

Biometrics has become a very popular topic and has gained high interest from researchers for around 25 years. Biometrics refers to the identification and verification of a human identity by measuring and analyzing the physiological or biological information of a human. The term "biometric" comes from the Greek and means "the measurement of life". In recent times, there has been a wide variety of identification and authentication options for security, namely something you have (key, smartcard, ID badges) and something you know (password, PIN, security questions). So why do most security professionals still choose biometrics as the "best" method for identification and authentication purposes? The answer is that a biometric comes from nature and determines "who you are". The uniqueness of a biometric is an undeniable reality known by all. Biometrics can be basically divided into two categories, namely structural (face, hand geometry or vein patterns, fingerprints, iris patterns, retina patterns, DNA) and behavioral (voice, signature, keystroke). The diagram is shown in Figure 1.


[Figure 1: taxonomy of biometrics, divided into structural types (face, fingerprint, palm, iris/retina, DNA) and behavioral types (voice, signature, keystroke).]

Fig. 1. Different types of biometric

Each biometric has its own strengths and weaknesses; Jain et al. have identified seven factors of a biometric [3]. The comparison of various biometrics according to these seven factors is shown in Table I.

TABLE I. COMPARISON OF VARIOUS BIOMETRIC TECHNOLOGIES [1]

The table rates face, fingerprint, hand geometry, keystroke, hand veins, iris, retinal scan, signature, voice, facial thermograph, odor, DNA, gait and ear canal against seven factors: universality, uniqueness, permanence, collectability, performance, acceptability and circumvention (H = High, M = Medium, L = Low). For example, face is rated H, L, M, H, L, H, L across the seven factors, retinal scan H, H, M, L, H, L, H, and gait M, L, L, H, L, H, M.

B. Error Correction Codes

Error Correction Codes are widely used in the field of communication channels. The role of Error Correction Codes is to correct errors that may occur during transmission over a noisy channel. Changing one bit in the ciphertext of a good cryptographic system may cause the plaintext to become unreadable; therefore, a way of detecting and correcting such errors is needed. To overcome the noisiness of biometric data, an Error Correction Code technique has therefore been suggested. Depending on the type of error that occurs in the biometric identifier, error correction codes are used to eliminate the differences between the enrollment and verification readings [4]. The Reed Solomon Code was introduced by Irving Reed and Gus Solomon on January 21, 1959 in their paper titled "Polynomial Codes over Certain Finite Fields" [5]. Reed Solomon Codes are block-based error correcting codes which are straightforward to decode and have an excellent burst correction capability. The block code is described by a tuple (n, k, t), where n is the length of the Reed Solomon code, k is the length of the message, and t is the number of errors that can be corrected.
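To make the role of the ECC concrete, the toy Python sketch below uses the simplest possible code, a 3-bit repetition code with majority-vote decoding, to show how small intra-user differences are absorbed. The bit pattern is made up, and the real system described later uses the far more powerful Reed Solomon (1023, 523, 250) code rather than this stand-in.

def encode(bits):
    # Repeat every bit three times (a toy stand-in for Reed Solomon encoding).
    return [b for b in bits for _ in range(3)]

def decode(coded):
    # Majority vote over each group of three bits corrects one flip per group.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]     # hypothetical enrolled iris-code fragment
stored = encode(enrolled)

noisy = list(stored)
noisy[2] ^= 1                           # a few bit flips caused by sensor noise
noisy[9] ^= 1
noisy[16] ^= 1

print(decode(noisy) == enrolled)        # True: the variability is absorbed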

C. Template Matching

There are several matching methods for image biometric recognition. Examples of feature matching methods include Euclidean Distance [7], Absolute Distance [7], Standard Related coefficient [7][8][9] and Neural Networks [10]. In this research, two different matching distance metrics, namely Hamming Distance and Weighted Euclidean Distance, are applied to the system, and the performance of each against the corresponding threshold value is recorded and evaluated.

D. Past Works

The earliest work on iris biometrics is by Daugman, whose concept has become a standard reference model for researchers in this particular field [11]. There have been several pieces of research on integrating cryptography into biometrics in recent years. The major challenge that we identify in biometric cryptography is the intra-user variability in the acquired biometric identifiers. Traditional cryptography methods such as RSA, AES or DES are not ideal or suitable for storing the biometric templates in an encrypted form and performing the matching directly: minor changes in the bits of the feature set extracted from the biometric may lead to a huge difference in the encrypted feature. To solve the above problems, some techniques have been introduced by several researchers. These techniques can be basically categorized into three types, namely Cancellable biometrics, Key Binding Cryptosystems, and Key Generating Cryptosystems.

Cancellable biometrics possesses the characteristic that if a biometric template is compromised, a new one can be issued using a different transform [12]. In this paper [12], Davida stores the error correction bits in the database, which might lead to some leakage of the user's biometric data. The assumed iris code bit change among different presentations of the iris of a person in this work is only 10%; however, in fact 30% of the iris code bits could differ between presentations of the same iris [21]. [13] and [14] also refer to cancellable biometrics, which uses a one-way transformation to convert the biometric signal into an irreversible form. Soutar et al. [15] proposed a key binding algorithm in a fingerprint matching system: the user's fingerprint image is bound with the cryptographic key at the time of enrolment, and the key is only retrieved upon successful authentication. Another well-known key binding cryptosystem is the Fuzzy Commitment Scheme proposed by Juels and Wattenberg [16]. Key generation from voice based on a user-spoken password is introduced in [17], and a similar key generation scheme based on palmprint features is introduced in [19]. Hao et al. [18] also proposed a key regeneration scheme on the iris biometric by combining the Hadamard Code and the Reed Solomon Error Correction Code; in this scheme, the biometric matching problem is transformed into an error correction issue, but the shortcoming of their system is its low security [20]. Hence, in order to improve the security and increase the total success rate of the system, we propose a secured iris biometric template using the iris biometric and passwords. In our approach, an Error Correction Code (ECC) is introduced and adopted in our system to reduce the variability and minimize the signal noise of the biometric data. Two different template matching distance metrics are also applied and experimented with. Details of the approach are discussed in Section III.

III. METHODOLOGY

This section discusses the overall process for iris biometric verification. It consists of two main phases, namely the enrollment and verification processes. The dataset and the template matching distance metrics used for the experimental results are described in detail in this section.

A. Datasets

The dataset used for the experimentation is the CASIA Iris Database Version 1.0. The Chinese Academy of Sciences – Institute of Automation (CASIA) eye image database version 1.0 contains 756 grayscale iris images taken from 108 candidates, with 7 different images for each candidate collected in two sessions. The images were captured especially for iris recognition research, using specialized digital optics developed by the National Laboratory of Pattern Recognition, China, and also for educational purposes. All iris images are 8-bit gray-level JPEG files, collected under near-infrared illumination.

B. Enrollment Process

Fig. 2 Enrollment Process Diagram

The flow of the enrollment process is shown in Figure 2.

The enrollment steps are described below:

Step 1: The iris is extracted through iris segmentation, iris normalization and feature extraction to generate the iris template and to produce the iris binary code.

Step 2: The binary code for the iris image will then undergo the Reed Solomon Code Encoding Process.

Step 3: A RS Code is then generated after the enrolment process.

Step 4: The RS Code is then encrypted with the enrolment password using Advanced Encryption Standard Cryptography Algorithm to generate a cipher text.

Step 5: The generated cipher text is then stored into the database.
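A hedged Python sketch of Steps 2–5 is given below. It assumes the widely used cryptography package for AES; the CBC mode, the PBKDF2 password-to-key derivation, the salt/IV handling and the toy rs_encode stand-in are illustrative assumptions, since the text only names the Reed Solomon encoding and the Advanced Encryption Standard.

import os, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def rs_encode(iris_code: bytes) -> bytes:
    # Toy stand-in for the Reed Solomon encoding step (the real system uses the
    # (1023, 523, 250) RS code described later); here a single XOR parity byte
    # is appended only so the sketch runs end to end.
    parity = 0
    for b in iris_code:
        parity ^= b
    return iris_code + bytes([parity])

def enroll(iris_code: bytes, password: str) -> bytes:
    rs_code = rs_encode(iris_code)                     # Steps 2-3: RS code from the iris binary code
    salt, iv = os.urandom(16), os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)   # 32-byte AES key
    pad = 16 - len(rs_code) % 16                       # simple padding so CBC sees full blocks
    padded = rs_code + bytes([pad]) * pad
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    cipher_text = encryptor.update(padded) + encryptor.finalize()           # Step 4: AES encryption
    return salt + iv + cipher_text                     # Step 5: record stored in the database

record = enroll(b"\x5a" * 64, "user-password")         # 64 illustrative iris-code bytes
print(len(record))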

C. Verification Process


Fig. 3 Verification Process Diagram


The flow of the verification process is shown in Figure 3, and the steps to verify are described below.

Step 1: A tested iris image is processed using iris segmentation, normalization, and feature extraction to obtain the iris binary code and the iris template of the testing iris image.

Step 2: A password is required to authenticate the user; this password is used to decrypt the cipher text obtained from the database, using the AES decryption process, to obtain the RS code.

Step 3: The RS code is then decoded against the testing iris template code to obtain the enrolled iris template code, using the Reed Solomon decoding process. The Reed Solomon decoding also corrects the errors of the testing iris template code in this step.

Step 4: Both iris template codes then undergo the template matching process using a distance metric function (Hamming Distance or Weighted Euclidean Distance).

Step 5: If the iris templates are found to match under the corresponding threshold value, the user is authenticated; otherwise the system exits.

D. Reed Solomon Code

Fig. 4 Diagram of Reed Solomon Error Correction Process on Iris Verification

Figure 4 shows the diagram of the Reed Solomon Error Correction Process on the iris verification. The difference bits of the two iris images are treated as the corrupted codeword (noisy code) and are corrected during the decoding process. There are several important parameters in the Reed Solomon coding algorithm, i.e. the number of bits per symbol, m; the length of the Reed Solomon code, n; the length of the message, k; and the number of errors that can be corrected, t. They are related by:

n = 2^m − 1
k = n − 2t

In our research, the length of the RS code is n = 2^10 − 1 = 1023, t is 250, and k = 523; that is, the (1023, 523, 250) RS coding algorithm is selected.

E. Template Matching Distance Metric

There are many different template matching methods that can be used for pattern recognition purposes. For this research, two distance metrics are applied for this iris recognition approach: Hamming Distance and Weighted Euclidean Distance.

1. Hamming Distance

Hamming Distance is a distance measurement metric which gives a measure of how many bits differ between two given bit patterns. Comparing bit patterns A and B, HD is calculated as the total number of differing bits over the total number of bits in the bit pattern, n. The equation is shown in (1).

HD = (1/n) * Σ (Aj XOR Bj), j = 1, ..., n    (1)

2. Weighted Euclidean Distance

Another distance measurement metric used in this research for the pattern matching is the Weighted Euclidean Distance. The minimum value obtained from the Weighted Euclidean Distance determines whether two iris patterns match. The equation of the Weighted Euclidean Distance is shown in (2), where fi is the i-th feature of the unknown iris, fi(k) is the i-th feature of iris template k, δi(k) is its standard deviation, and N is the number of features.

WED(k) = Σ (fi − fi(k))^2 / (δi(k))^2, i = 1, ..., N    (2)

F. Performance Measure

The False Rejection Rate (FRR) measures the probability of an enrolled individual not being identified by the system. The equation to determine the FRR is shown in (3).

FRR = (Number of false rejections / Number of genuine attempts) × 100%    (3)

The False Acceptance Rate (FAR) measures the probability of an individual being wrongly identified as another individual, using equation (4).

FAR = (Number of false acceptances / Number of imposter attempts) × 100%    (4)

There is another performance measure, called the Total Success Rate (TSR), which is obtained from the FAR and FRR. It can also represent the verification rate of the iris cryptography system and is calculated as follows [22]:

TSR = (1 − (Number of false acceptances + Number of false rejections) / Total number of attempts) × 100%
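As a small illustration of equations (1), (2) and the performance measures above, the Python sketch below evaluates them on made-up data; the bit patterns, feature vectors and error counts are illustrative and are not CASIA results.

def hamming_distance(a, b):
    # Equation (1): fraction of disagreeing bits between two iris codes.
    return sum(x != y for x, y in zip(a, b)) / len(a)

def weighted_euclidean_distance(f, template, sigma):
    # Equation (2): squared feature differences weighted by the template's
    # per-feature standard deviations.
    return sum((fi - ti) ** 2 / si ** 2 for fi, ti, si in zip(f, template, sigma))

print(hamming_distance([1, 0, 1, 1], [1, 1, 1, 0]))                   # 0.5
print(weighted_euclidean_distance([0.9, 0.2], [1.0, 0.1], [0.2, 0.1]))

# Equations (3), (4) and the TSR formula on illustrative counts.
genuine_attempts, false_rejections = 648, 19
imposter_attempts, false_acceptances = 80892, 0
FRR = false_rejections / genuine_attempts * 100
FAR = false_acceptances / imposter_attempts * 100
TSR = (1 - (false_acceptances + false_rejections)
           / (genuine_attempts + imposter_attempts)) * 100
print(round(FRR, 2), round(FAR, 2), round(TSR, 2))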

IV. EXPERIMENTAL RESULTS

The experiment has been performed by conducting altogether 648 (108 × 6) genuine identifications and 80892 (108 × 107 × 7) imposter verifications to show the False Rejection Rate (FRR) and False Acceptance Rate (FAR) with their corresponding threshold values. The approach has been implemented and the results were analyzed using MATLAB 7.0. The set of images used for testing is obtained from CASIA Database version 1.0.


TABLE II. EXPERIMENT RESULT FOR DIFFERENT THRESHOLD VALUES USING HAMMING DISTANCE

Threshold value   FAR %    FRR %    TSR %
0.00              0        100      0
0.05              0        93.17    6.83
0.10              0        77.19    22.81
0.15              0        55.55    44.45
0.20              0        33.52    66.48
0.25              0        15       85
0.30              0        6.43     93.57
0.35              0        2.92     97.08
0.40              0.028    0.39     99.58
0.45              10.2     0        89.8
0.50              99.4     0        0.6

Table II shows the FAR and FRR at different threshold values for the measurement using Hamming Distance. From the above results, 0.35 gives the best FAR and FRR and is hence chosen as the threshold value for this proposed iris cryptography system. Although a threshold value of 0.4 gives a higher total success rate, it is not chosen, because the system must not allow successful decryption by an imposter; therefore the FAR must be 0.
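The threshold choice just described (require FAR = 0 so that no imposter can decrypt, then take the highest TSR) can be reproduced directly from the Table II figures with a few lines of Python; the tuple layout is just one convenient representation.

# (threshold, FAR%, FRR%, TSR%) rows from Table II.
table2 = [
    (0.00, 0.0, 100.0, 0.0), (0.05, 0.0, 93.17, 6.83), (0.10, 0.0, 77.19, 22.81),
    (0.15, 0.0, 55.55, 44.45), (0.20, 0.0, 33.52, 66.48), (0.25, 0.0, 15.0, 85.0),
    (0.30, 0.0, 6.43, 93.57), (0.35, 0.0, 2.92, 97.08), (0.40, 0.028, 0.39, 99.58),
    (0.45, 10.2, 0.0, 89.8), (0.50, 99.4, 0.0, 0.6),
]
# Require FAR = 0 so no imposter can ever decrypt, then maximize TSR.
best = max((row for row in table2 if row[1] == 0.0), key=lambda row: row[3])
print(best[0])   # 0.35, the threshold adopted in the text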

TABLE III. EXPERIMENT RESULT FOR DIFFERENT THRESHOLD VALUES USING WEIGHTED EUCLIDEAN DISTANCE

Threshold value   FAR %    FRR %    TSR %
0                 0        98.6     1.4
10                0        49.85    50.15
15                0        23.3     76.7
20                0        15.28    84.72
25                0        7.72     92.28
30                0        4.01     96
35                0.12     2.16     97.72
40                4.24     1.24     94.52
45                19.66    1.08     79.26
50                40.85    0.77     58.38
55                59.39    0.31     40.3
60                73.43    0.15     26.42
65                82.13    0        17.87
70                88.13    0        11.87
75                92.22    0        7.78
80                94.9     0        5.1
85                96.89    0        3.11
90                97.86    0        2.14
95                98.52    0        1.48
100               98.96    0        1.04

On the other hand, Table III above shows the FAR and FRR at different threshold values for the measurement using Weighted Euclidean Distance. The results show that 30 gives the best FAR and FRR values. Comparing both of the results obtained, Hamming Distance, with a Total Success Rate of 97.08%, is better than Weighted Euclidean Distance, with a Total Success Rate of 96%. Figures 5 and 6 show the histograms of the Hamming Distance, while Figures 7 and 8 show the histograms of the Weighted Euclidean Distance between the iris codes for genuine and imposter testing. From the graphs, we can see that the gap between the threshold ranges for genuine and imposter testing is large, making it easy to differentiate between imposters and genuine users. This is due to the powerful correction capability of the Error Correction Codes in minimizing the intra variance of the iris.

Fig. 5 Hamming Distance Between Iris Code for Genuine Testing

Fig. 6 Hamming Distance Between Iris Code for Imposter Testing

Fig. 7 Weighted Euclidean Distance Between Iris Code for Genuine Testing


Fig. 8 Weighted Euclidean Distance Between Iris Code for Imposter Testing

V. CONCLUSION

The main aim of this research is to produce a robust and reliable iris recognition approach by minimizing the intra variance (FRR) and maximizing the inter variance of the iris. Conventional biometric systems store templates directly in a database; hence, if stolen, they are lost permanently and become unusable in that system and in other systems based on that biometric. In our approach, an iris biometric template is secured using the iris biometric and passwords. An Error Correction Code (ECC) is also introduced to reduce the variability and noise of the biometric data. The Error Correction Code used in this research is the Reed Solomon Code, which has proven to be a very powerful error correction code and very suitable for iris biometric cryptography. Using Reed Solomon Codes, the FRR is reduced from 26% in our previous work [23] to around 2.92%, which enhances the recognition performance of the iris. The performance is nearly the same between the two distance metric functions; among the two, Hamming Distance and Weighted Euclidean Distance, we found that Hamming Distance provides the best result. The Advanced Encryption Standard (AES) is utilized to assure more secure handling of the password, and with the combination of password usage, the security of the iris biometric authentication has also increased to a higher level. AES has been identified as a world standard algorithm that has been used to protect much sensitive information. Since this is preliminary work, only one dataset has been used for the experimental results; different iris datasets will be tested in the future.

ACKNOWLEDGEMENT

The authors would like to acknowledge the funding of MOSTI (Sciencefund) Malaysia, and Universiti Teknologi Malaysia under the project title Iris Biometric for Identity Document. Portions of the research in this paper use the CASIA iris image database.

REFERENCES

[1] Kuo, C., Romanosky, S., and Cranor, L. F., "Human selection of mnemonic phrase-based passwords". In Proceedings of the Second Symposium on Usable Privacy and Security (Pittsburgh, Pennsylvania, July 12-14, 2006). SOUPS '06, vol. 149. ACM, New York, NY, 67-78.

[2] Teoh, A.B.J., D.C.L. Ngo, and A. Goh, Personalised cryptographic key generation based on FaceHashing. Computers & Security, 2004. 23(7): p. 606-614.

[3] Jain A.K., Bolle R. and Pankanti S., “Biometrics: Personal Identification in Networked Society,” eds. Kluwer Academic, 1999.

[4] Sheikh Ziauddin, Matthew N. Dailey, Robust iris verification for key management, Pattern Recognition Letters, In Press, Corrected Proof, Available online 4 January 2010, ISSN 0167-8655

[5] Reed I. S. and Solomon G., “Polynomial Codes Over Certain Finite Fields” SIAM Journal of Applied Mathematics, Volume 8, pp. 300-304, 1960

[6] Bowyer, K.W., K. Hollingsworth, and P.J. Flynn, Image understanding for iris biometrics: A survey. Computer Vision and Image Understanding, 2008. 110(2): p. 281-307.

[7] Pattraporn A., Nongluk C., “An Improvement of Iris Pattern Identification Using Radon Transform,” ECTI Transactions On Computer And Information Technology, vol. 3, No.1 May 2007, pp. 45–50.

[8] Wildes, R.P., Asmuth, J.C., Green, G.L., Hsu, S.C., Kolczynski, R.J., Matey, J.R., McBride, S.E.: A System for Automated Iris Recognition. In: Proc. of the 2nd IEEE Workshop on Applicant Computer Vision, pp. 121–128 (1994)

[9] Wildes, R.P.: Iris Recognition: An Emerging Biometric Technology. Proceedings of IEEE 85, 1348–1363 (1997)

[10] Chen, C., Chu, C.: Low Complexity Iris Recognition Based on Wavelet Probabilistic Neural Networks. In: Proceeding of International Joint Conference on Neural Networks, pp. 1930–1935 (2005)

[11] John Daugman, Biometric personal identification system based on iris analysis. U.S. Patent No. 5,291,560, March 1994.

[12] Davida G. I., Frankel Y., and Matt B. J., “On enabling secure applications through off-line biometric identification,” in Proc. 1998 IEEE Symp. Privacy and Security, pp. 148–157.

[13] Ratha N. K., Chikkerur S., Connell J. H., and Bolle R. M., “Generating cancelable fingerprint templates,“ IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.29, no. 2, pp. 561-572, April 2007.

[14] Savvides M., Kumar B. V., and Khosla P., “Cancelable biometric filters for face recognition,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR04), vol. 3, August 2004, pp. 922–925.

[15] Soutar C., Roberge D., Stojanov S. A., Gilroy R., and Vijaya B. V. K. Kumar, “Biometric encryption using image processing,” in Proc. SPIE, Optical Security and Counterfeit Deterrence Techniques II, vol. 3314, 1998, pp. 178–188.

[16] Juels A. and Wattenberg M., “A Fuzzy Commitment Scheme,” in Proceedings of Sixth ACM Conference on Computer and Communications Security, Singapore, November 1999, pp. 28–36.

[17] Monrose F., Reiter M., Li Q., and Wetzel S., “Cryptographic key generation from voice,” in Proceedings of the IEEE Symposium on Security and Privacy, May 2001, pp. 202–213.

[18] Hao F., Anderson R., and Daugman J., “Combining crypto with biometrics effectively,” IEEE Transactions on Computers, vol. 55, no. 9, pp.1081–1088, 2006.

[19] Xiangqian Wu, Kuanquan Wang, David Zhang, “A cryptosystem based on palmprint feature,” ICPR 2008, pp. 1-4.

[20] Wu, X., Qi, N., Wang, K., and Zhang, D., “ A Novel Cryptosystem Based on Iris Key Generation”. In Proceedings of the 2008 Fourth international Conference on Natural Computation - Volume 04 (October 18 - 20, 2008). ICNC. IEEE Computer Society, Washington, DC, 53-56.

[21] Daugman J. G., "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. Pattern Anal. Machine Intell., vol. 15, pp. 1148–1161, Nov. 1993.

[22] Tee C., Andrew T., Michael G., David N., “Palmprint Recognition with PCA and ICA”, Faculty of Information Sciences and Technology, Multimedia University, Melaka, Malaysia, Palmerston North, November 2003.

[23] Sim Hiew Moi, Nazeema Binti Abdul Rahim, Puteh Saad, Pang Li Sim, Zalmiyah Zakaria, Subariah Ibrahim, "Iris Biometric Cryptography for Identity Document," 2009 International Conference of Soft Computing and Pattern Recognition, IEEE Computer Society Conference on, pp. 736-741, Dec 2009.


38. The New Perspectives in Mobile Banking

P.Sathya1, R.Gobi2, Dr.E.Kirubakaran3

[email protected], [email protected], [email protected]

Lecturer1, Research Scholar2, Sr. Deputy General Manager3

Jayaram College of Engg. & Tech1, Bharathidasan University2, BHEL Tiruchirappalli3

Abstract:

Today the mobile is a way of life. Mobile phones have brought the world to your fingertips and promise a revolution in communication. The mobile offers ‘anytime, anywhere’ convenience not only for communication but increasingly for mobile financial services - even for parts of the world where traditional banking does not exist. The number of worldwide mobile phone users who perform mobile banking will double from 200 million this year to 400 million in 2013, according to Juniper Research. This paper will examine current mobile banking technology adoption and deployment trends, and provide a roadmap to achieve strategic value from the new perspectives of mobile banking.

1. Introduction:

Demand for mobile business technology is growing at a tremendous rate. Using mobile phones as multi-functional computing, entertainment and communication devices is certainly the new trend in the wireless world. Today, mobile phones that integrate voice, video, music, gaming and a web browser in one device are common, and mobile phones increasingly resemble personal computers in terms of functionality. Such devices will undoubtedly provide users with unprecedented convenience, and could be disruptive to certain segments of business-oriented industries such as digital purchases from mobile devices, mobile banking, mobile ticketing, mobile advertising and mobile shopping. However, as an extension of existing technologies (computers and personal digital assistants), their services are more evolutionary than revolutionary, and more a supplement than a substitute to existing technologies and services.

The advantage of mobile penetration enables mobile operators to provide value added services like mobile banking, location based services, etc. The rapid pace of advancement in mobile commerce applications makes a revolutionary change in banking services by offering anytime, anywhere banking. Mobile banking is the service that allows a mobile client to request and receive information about a personal account, or to transfer funds between accounts, using a personal mobile phone.

As mobile services and applications have become a significant part of individual and organizational life in the last two decades, research on mobile business has also flourished. A wide spectrum of mobile/branchless banking models is evolving. However, no matter what the business model, if mobile banking is being used to attract low-income populations in often rural locations, the business model will depend on banking agents, i.e., retail or postal outlets that process financial transactions on behalf of telecoms or banks.


2. Mobile Banking:

Mobile banking is the service that allows a mobile client to request and receive information about a personal account, or to transfer funds between accounts, using the personal mobile phone. The earliest mobile banking services were offered via SMS. With the introduction of the first primitive smart phones with WAP support enabling the use of the mobile web in 1999, the first European banks started to offer mobile banking on this platform to their customers.

Today most banks offer application-based solutions, which are interactive. In m-banking, the customer not only carries out banking transactions but also interacts indirectly with the bank's databases, files, records, etc. to get relevant details. Since the data at the customer end and the databases at the server end are very sensitive, and mobile devices are vulnerable to threats and attacks, security is a major area of concern.

Due to wide customer acceptance, mobile banking is now widely used. It is with these devices that users can perform portable computing and gain instant access to their accounts. With the help of mobile devices and technologies, people are able to access services like website surfing, m-banking, m-shopping, voice-mail, etc. anywhere, anytime, so it is important to understand the factors that influence user adoption or acceptance of mobile services and applications. In recent years, third-generation and beyond-3G mobile telecommunications networks have been widely deployed or experimented with. These networks offer large bandwidths and high transmission speeds to support wireless data services besides traditional voice services. For circuit-switched voice services, mobile operators have provided security protection including authentication and encryption. On the other hand, services such as mobile banking are likely to be offered by third parties (e.g., banks) who cannot trust the security mechanisms of mobile operators.

Some of the reasons for preferring m-banking over e-banking are: no place restriction; high penetration coefficient; full personalization; and availability. In general, mobile banking has been well received, as it increases the convenience of the customers and reduces banking costs. The banking services are divided into two groups: mobile agency services and mobile banking services. Mobile banking services are the same as conventional banking services and are generally divided into the following four categories:

1. Notifications and alerts: These services are offered to inform the customer of the transactions done or to be done with his account.

2. Information: Information on the transactions and statements is sent at specific periods.

3. Applications: An application is sent by the customer to the service provider regarding his account or a special transaction.

4. Transfer: Transfer of money between different accounts of the customer or payment to third parties.

As mentioned earlier, there are two types of services offered in m-banking, i.e. A) notifications and alerts and B) information, in which the bank sends messages containing information or notifications needed by the customer.

In developing countries, the role of the mobile phone is more extensive than in developed countries, as it helps bridge the digital divide. The mobile phone is the one device that people already carry at all times, and services beyond voice and text messaging are booming all over the globe. Cost, however, is an important factor: new services will be widely adopted only if pricing is carefully considered, since Internet users have come to view such services as free.

3. State of the art of m-banking:

Figure 1 below reflects a typical mobile banking architecture. It requires integration with an MNO (Mobile Network Operator) to facilitate the usage of the network's bearer channels and in order to access the consumer's mobile phone.


Fig.1 Overall architecture of M-Banking

The mobile banking infrastructure thus sits in a technical environment similar to the bank's ATM, POS, branch and internet banking service offerings. The mobile banking channel can be delivered to the consumer through two bearer or application environments.

Client-side applications are applications that reside on the consumer's SIM card or on the actual mobile phone device. Client-side technologies include J2ME and SAT. Server-side applications are developed on a server away from the consumer's mobile phone or SIM card. Server-side technologies include USSD2, IVR, SSMS and WAP.

The bank would only need to select one of these access channels for the consumer's mobile phone (such as SSMS, WAP or USSD2), or bearer channel strategies, for implementation. However, in some markets it would be wise to implement more than one bearer channel in order to manage consumer take-up and the risk associated with non-take-up of a specific technology.

In client-side applications, the consumer data is typically stored on the application, or entered by the consumer, and encrypted by the application in the SIM or handset. In server-side applications, consumer data that enables the processing of transactions, such as account/card details, are typically stored in a secured environment, on a server at a bank or at their allocated service provider.

4. Hackers Gateway in m-banking:

The mobile bearer channels have been seen to be ‘in the clear’, largely due to the perception that data travelling across the air (literally) cannot possibly be safe. The concern is that a fraudster is able to tap in or listen in to a call, or data transmission, and record the data for malicious or fraudulent reasons.

The facts prove differently. GSM security is as strong as, if not stronger than, that of traditional fixed-line communications. It would take money, time and effort to actually penetrate the GSM network, steal data, and then use it fraudulently. Data carried across the mobile network is protected by the standard GSM security protocols at the communication layer. The subscriber identity is also protected across this chain.

The risk in transporting data across the GSM channel lies in the number of stops the data makes before reaching the bank. Unlike fixed-line communication, data being carried across the mobile network jumps from one base station to the next, which means that the chain of encrypted communication is broken. The data is also unencrypted when it hits the network operator. Thus, there is broken encryption between the consumer and the bank. It therefore seems infeasible for a fraudster to expend money and energy on attempting to break the communication layer. The various security solutions and their issues in mobile applications are discussed below.

Amir Herzberg has discussed some of the challenges and opportunities involved in secure payments for banking transactions. He proposed a modular architecture for secure transactions consisting of three independent processes. In the first process, the device identifies the user through physical possession, passwords or biometrics. The second process is authentication, in which the mobile provider authenticates the transaction request from the device via either subscriber identification or cryptographic mechanisms. The third process is secure performance, according to which the transaction is performed by the mobile transaction provider. These processes do not provide a secure implementation procedure for mobile banking transactions.

Buchanan et al. propose an approach combining the SET protocol with the TLS/WTLS protocols in order to enforce security services over WAP 1.X for payment in m-commerce.

Kungpisdan et al. propose a payment protocol for account-based mobile payment. It employs symmetric-key operations, which require lower computation at all engaging parties than existing payment protocols. In addition, the protocol also satisfies the transaction security properties provided by public-key based payment protocols such as SET.

The Mobile Banking solution provides end-to-end security and confidentiality of data by ciphering information in the SIM for secure transfer over the mobile phone, the GSM network, the operator’s infrastructure and the connection to the financial institution. The information entered by the user is collected and encrypted by the applet residing in the tamper-proof SIM card.

5. Regulatory Impact:

Some regulators appear to be pro-active in supporting mobile banking, as well as open to discussion on how relevant regulations can be implemented or existing regulation can be changed to support implementations, yet still protect the consumers.

The regulatory impact results from the actual application of the technology. When a bank decides to add the mobile option, it will need to consider the necessary KYC, AML, branchless banking (agency), electronic money, and data privacy regulations that are relevant to the implementation. There may also be a need to assess the potential risk of fraud and to make provision for it, especially in the light of the Basel II implementation.

As a first step towards building a mobile banking framework in India to ensure a smooth rollout of mobile banking in the country, the Telecom Regulatory Authority of India (TRAI) and the Reserve Bank of India (RBI) have reached an understanding on its regulation. An early rollout of mobile banking will speed up the government's plans for financial inclusion. These guidelines are meant only for banking customers - within the same bank and across banks. It is the responsibility of the banks offering mobile banking services to ensure compliance with these guidelines.

To ensure inter-operability between banks and mobile service providers, it is recommended that banks adopt the message formats being developed by the Mobile Payments Forum of India (MPFI). Message formats such as ISO 8583, which is already being used by banks for switching mobile transactions, may be suitably adapted for communication between switches where the source and destination are credit cards/debit cards/pre-paid cards.

6. Mobile Banking Vendor

The Mobile Banking Vendor plays the pivotal role of integrating the bank and the MNO and technically delivering the application to the consumer.

The Mobile Banking environment requires both a bank and a MNO to deliver a transactional or informational banking service to a consumer through the mobile phone. In this description, neither the bank, nor the MNO, can deliver the solution to the consumer in isolation.

In some cases, the bank has delivered a bank branded mobile banking application to the consumer. However the bank has had to make use of, or partner with the MNO for its infrastructure to provision the application and for ongoing financial transactions. Often times, the debate as to who owns the customer for a mobile financial service is the deal breaker between the two parties.


Fig.2 Role of the Mobile Banking Vendor

The MNO's role in the delivery of mobile banking applications varies from bearer channel provisioning, where the MNO's role is marginal, to full platform provisioning, where the MNO is central to the delivery of the banking application. The latter creates value for the MNO in the service offering, but it is not much of a competitive differentiator, as most MNOs would be able to offer similar solutions.

7. Conclusion

Mobile banking is moving up on the adoption curve, which is evident in the number of implementations known in the world and the level of interest and discussion around the technology and its implementation. It is also evident in the number of technology providers emerging in the mobile banking space. There are several choices when considering how to implement mobile banking. These choices include whether or not to develop the technology within the bank, use a shared infrastructure, or purchase the enabling technology from one of many vendors.

The selected implementation option, including bearer channel, vendor and value proposition, should be driven by consumer adoption of the technology, the technical capability of the handsets in the target market, the affordability of the bearer channel, and the consumer's ease of accessing the service.

The ideal vendor would be able to deliver an adequate transaction set on a broad number of bearer channels/applications that the consumer could adopt within the market. This vendor would have shown proof of a successful implementation and, if not in our market, would show proof of the ability to implement. The vendor would also have completed a bank and MNO integration and complied with industry standards for security and financial transaction processing.

Mobile banking is poised to be a big m-commerce feature, and it could well be the driving factor to increase revenues. Nevertheless, banks need to look deeply into the mobile usage patterns among their target customers and enable their mobile services on a technology that reaches out to the majority of their customers. Users then benefit from reduced integration, maintenance and deployment costs by leveraging common interface messages, and their trust in the technology is ensured.

REFERENCES

1) Xing Fang, Justin Zhan, "Online Banking Authentication Using Mobile Phones", 978-1-4244-6949-9/10, ©2010 IEEE.

2) C. Kaplan, ABA Survey: "Consumers Prefer Online Banking", American Bankers Association, September 2009.

3) C. Narendiran, S. Albert Rabara, N. Rajendran, "Public Key Infrastructure for mobile banking security", ISBN 978-1-4244-5302-3, Global Mobile Congress, ©2009 IEEE.

4) Gavin Troy Krugel, Troytyla mobile for development (FinMark Trust), "Mobile Banking Technology Options", Aug 2007.

5) GSMA Mobile Banking, www.gsmworld.com/.../mobile.../mobile_money_for_the_unbanked/.

6) RBI Operative Guidelines to Banks for Mobile Banking, http://www.rbi.org.in/Scripts/bs_viewcontent.aspx?Id=1365

7) http://en.wikipedia.org/wiki/Mobile_banking

8) Juniper Research, "Mobile Banking Strategies", http://juniperresearch.com/mobile_banking_strategies

9) http://www.marketingcharts.com/

10) Mobile Payments Forum of India (MPFI), Mobile Banking in India - Guidelines

39. Performance Evaluation of Destination Sequenced Distance Vector and Dynamic Source Routing Protocol in Ad-hoc Networks

Dr. G. M. Nasira, Assistant Professor in CA, Department of Computer Science, Government Arts College, Salem, [email protected]

S. Vijayakumar, Assistant Professor, Dept. of CA, Sasurie College of Engineering, Vijayamangalam, Tirupur, [email protected]

P. Dhakshinamoorthy, Lecturer, Dept. of CA, Sasurie College of Engineering, Vijayamangalam, Tirupur, [email protected]

Abstract—An ad-hoc network is an infrastructure-less network where the nodes themselves are responsible for routing the packets. The routing protocol sets an upper limit to security in any packet network: if routing can be misdirected, the entire network can be paralyzed. The main objective of this paper is to discuss the performance evaluation of ad-hoc routing protocols. We define some criteria that can be used to evaluate ad-hoc routing protocols.

Keywords—Ad-hoc network, Table driven routing protocol, on-demand routing protocol.

XVI. INTRODUCTION

An ad hoc network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of existing network infrastructure or centralized administration. Due to the limited transmission range of wireless network interfaces, multiple network hops may be needed for one node to exchange data with another across the network. In such a network, each mobile node operates not only as a host but also as a router, forwarding packets for other mobile nodes in the network that may not be within direct wireless transmission range of each other. Each node participates in an ad hoc routing protocol that allows it to discover multi-hop paths through the network to any other node. The idea of an ad hoc network is sometimes also called infrastructure-less networking, since the mobile nodes in the network dynamically establish routing among themselves to form their own network on the fly.

Some examples of the possible use of ad hoc networks include students using laptop computers to participate in an interactive lecture, business associates sharing information during a meeting, soldiers relaying information for situational awareness on the battlefield, and emergency disaster relief personnel coordinating efforts after a hurricane or earthquake.

XVII. ROUTING

Routing is the process of selecting paths in a network along which to send network traffic. Routing is performed for many kinds of networks, including the telephone network (circuit switching), electronic data networks (such as the Internet), and transportation networks. This paper is concerned primarily with routing in electronic data networks using packet switching technology.

In packet switching networks, routing directs packet forwarding, the transit of logically addressed packets from their source toward their ultimate destination through intermediate nodes, typically hardware devices called routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though they are not specialized hardware and may

suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables which maintain a record of the routes to various network destinations. Thus, constructing routing tables, which are held in the router's memory, is very important for efficient routing. Most routing algorithms use only one network path at a time, but multipath routing techniques enable the use of multiple alternative paths.

Routing, in a more narrow sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Because structured addresses allow a single routing table entry to represent the route to a group of devices, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging) in large networks, and has become the dominant form of addressing on the Internet, though bridging is still widely used within localized environments.

Routing is said to contain three elements:
a) Routing protocols, which allow routing information to be gathered and distributed;
b) Routing algorithms, which determine paths;
c) Routing databases, which store the information that the algorithm has discovered. The routing database sometimes corresponds directly to routing table entries, sometimes not.

A. Routing Protocols in Ad-hoc Networks

There are different criteria for designing and classifying routing protocols for wireless ad-hoc networks, for example: what routing information is exchanged, when and how it is exchanged, and when and how routes are computed.

In proactive vs. reactive routing, proactive schemes determine the routes to the various nodes in the network in advance, so that a route is already present whenever it is needed. Route discovery overheads are large in such schemes, as one has to discover all the routes. Examples of such schemes are the conventional routing schemes such as Destination Sequenced Distance Vector (DSDV). Reactive schemes determine the route only when needed and therefore have smaller route discovery overheads.

In Single path vs. multi path there are several criteria for comparing single-path routing and multi-path routing in ad hoc networks. First, the overhead of route discovery in multi-path routing is much more than that of single-path routing. On the other hand, the frequency of route discovery is much less in a network which uses multi-path routing, since the system can still operate even if one or a few of the multiple paths between a source and a destination fail. Second, it is commonly believed that using multi-path routing results in a higher throughput.

In table-driven vs. source-initiated routing, the table-driven routing protocols maintain, on each node of the network, up-to-date routing information from each node to every other node. Changes in network topology are then propagated through the entire network by means of updates. DSDV and the Wireless Routing Protocol (WRP) are two schemes classified under the table-driven routing protocols.

The routing protocols classified under source initiated on-demand routing create routes only when desired by the source node. When a node requires a route to a certain destination, it initiates what is called the route discovery process. Examples include DSR and AODV.

B. Challenges faced in Ad-hoc Networks

Despite these attractive applications, the features of ad-hoc networks introduce several challenges that must be studied carefully before a wide commercial deployment can be expected.

1. Internetworking: the coexistence of routing protocols needed to internetwork an ad-hoc network with a fixed network in a mobile device is a challenge for mobility management.

2. Security and Reliability: An ad-hoc network has its particular security problems due to, e.g., a malicious neighbor relaying packets. Further, wireless link characteristics also introduce reliability problems because of the limited wireless transmission range, the broadcast nature of the wireless medium (e.g., the hidden terminal problem), mobility-induced packet losses, and data transmission errors.

3. Routing: Since the topology of the network is constantly changing, the issue of routing packets between any pair of nodes becomes a challenging task. Most protocols should be based on reactive routing instead of proactive.

4. Quality of Service (QoS): Providing different quality of service levels in a constantly changing environment will be a challenge.

5. Power consumption: For most of the lightweight mobile terminals, the communication-related functions should be optimized for lower power consumption. A lot of work has already been done in the area of unicast routing in ad hoc networks. This paper considers only the following two categories:

1. Table driven
2. Source initiated (demand driven)

XVIII. TABLE DRIVEN ROUTING PROTOCOL

Table driven routing protocols attempt to maintain consistent, up to date routing information from each node to every other node in the network. These protocols require each node to maintain one or more tables to store routing information, and they respond to changes in network topology by propagating updates throughout the network in order to maintain a consistent network view. The areas in which they differ are the number of necessary routing related tables and the methods by which changes in network structure are broadcast.


Some such routing protocols are:
1. Destination Sequenced Distance Vector Routing
2. Cluster Head Gateway Switch Routing
3. Wireless Routing Protocol

Destination Sequence Distance Vector Routing

DSDV is a table-driven routing scheme for ad hoc mobile networks based on the Bellman-Ford algorithm. It was developed by C. Perkins and P. Bhagwat in 1994. The main contribution of the algorithm was to solve the routing loop problem. Each entry in the routing table contains a sequence number; the sequence numbers are generally even if a link is present, and odd otherwise. The number is generated by the destination, and the emitter needs to send out the next update with this number. Routing information is distributed between nodes by sending full dumps infrequently and smaller incremental updates more frequently.

Fig. 3.1 Routing information is distributed between nodes

For example, the routing table of Node A in this network is:

Destination   Next Hop   Number of Hops   Sequence Number   Install Time
A             A          0                A 46              001000
B             B          1                B 36              001200
C             B          2                C 28              001500

Naturally, the table contains a description of all possible paths reachable by node A, along with the next hop, number of hops, and sequence number.

A. Selection of Route

If a router receives new information, then it uses the latest sequence number. If the sequence number is the same as the one already in the table, the route with the better metric is used. Stale entries are those entries that have not been updated for a while. Such entries as well as the routes using those nodes as next hops are deleted.
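As an illustration of the selection rule just described, the following minimal Python sketch applies the two criteria (fresher sequence number first, then better metric on a tie). The dictionary-based table layout and field names are assumptions for the example, not part of DSDV's specification.

```python
# Minimal sketch of the DSDV route-selection rule described above (illustrative only).
# A route entry is assumed to hold: next_hop, hop_count (metric), and seq_no.

def update_route(table, dest, new_entry):
    """Accept new_entry for dest if it is fresher, or equally fresh but shorter."""
    current = table.get(dest)
    if current is None or new_entry["seq_no"] > current["seq_no"]:
        table[dest] = new_entry                      # fresher sequence number always wins
    elif (new_entry["seq_no"] == current["seq_no"]
          and new_entry["hop_count"] < current["hop_count"]):
        table[dest] = new_entry                      # same freshness: keep the better metric

# Example: node A learns a shorter route to C carrying the same sequence number.
routes = {"C": {"next_hop": "B", "hop_count": 2, "seq_no": 28}}
update_route(routes, "C", {"next_hop": "D", "hop_count": 1, "seq_no": 28})
print(routes["C"]["next_hop"])  # -> D
```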

B. Advantages

DSDV was one of the early algorithms available. It is quite suitable for creating ad hoc networks with a small number of nodes. Since no formal specification of the algorithm exists, there is no commercial implementation of it. DSDV guarantees loop-free paths.

C. Disadvantages

DSDV requires a regular update of its routing tables, which uses up battery power and a small amount of bandwidth even when the network is idle. Whenever the topology of the network changes, a new sequence number is necessary before the network re-converges; thus, DSDV is not suitable for highly dynamic networks. (As in all distance-vector protocols, this does not perturb traffic in regions of the network that are not concerned by the topology change.)


XIX. DEMAND DRIVEN ROUTING PROTOCOL

A different approach from table driven routing is source initiated on demand routing(Demand Driven Routing). This type of routing creates routes only when desired by the source node. When a node requires a route to a destination, it initiates a route discovery process within the network. This process is completed once a route is found or all possible route permutations have been examined. Once a route has been established, it is maintained by a route maintenance procedure until either the destination becomes inaccessible along every path from the source or until the route is no longer desired.

The following protocols fall in this category:
a) Dynamic Source Routing
b) Ad Hoc On Demand Routing Protocol
c) Associativity Based Routing

Dynamic Source Routing

Dynamic Source Routing (DSR) is a routing protocol for wireless mesh networks. It is similar to AODV in that it forms a route on-demand when a transmitting computer requests one. However, it uses source routing instead of relying on the routing table at each intermediate device. Many successive refinements have been made to DSR, including DSRFLOW.

Determining source routes requires accumulating the address of each device between the source and destination during route discovery. The accumulated path information is cached by nodes processing the route discovery packets. The learned paths are used to route packets. To accomplish source routing, the routed packets contain the address of each

device the packet will traverse. This may result in high overhead for long paths or large addresses, like IPv6. To avoid using source routing, DSR optionally defines a flow id option that allows packets to be forwarded on a hop-by-hop basis.

This protocol is truly based on source routing whereby all the routing information is maintained (continually updated) at mobile nodes. It has only two major phases, which are Route Discovery and Route Maintenance. Route Reply would only be generated if the message has reached the intended destination node (route record which is initially contained in Route Request would be inserted into the Route Reply).

To return the Route Reply, the destination node must have a route to the source node. If the route is in the Destination Node's route cache, the route would be used. Otherwise, the node will reverse the route based on the route record in the Route Reply message header (this requires that all links are symmetric). In the event of fatal transmission, the Route Maintenance Phase is initiated whereby the Route Error packets are generated at a node. The erroneous hop will be removed from the node's route cache; all routes containing the hop are truncated at that point. Again, the Route Discovery Phase is initiated to determine the most viable route.

DSR is an on-demand protocol designed to restrict the bandwidth consumed by control packets in ad hoc wireless networks by eliminating the periodic table-update messages required in the table-driven approach. The major difference between this and the other on-demand routing protocols is that it is beacon-less and hence does not require periodic hello packet (beacon) transmissions, which are used by a node to inform its neighbors of its presence. The basic approach of this protocol (and all other on-demand routing protocols) during the route construction phase is to establish a route by flooding RouteRequest packets in the network. The destination node, on


receiving a RouteRequest packet, responds by sending a RouteReply packet back to the source, which carries the route traversed by the RouteRequest packet received.

Consider a source node that does not have a

route to the destination. When it has data packets to be sent to that destination, it initiates a RouteRequest packet. This RouteRequest is flooded throughout the network. Each node, upon receiving a RouteRequest packet, rebroadcasts the packet to its neighbors if it has not forwarded it already, provided that the node is not the destination node and that the packet’s time to live (TTL) counter has not been exceeded. Each RouteRequest carries a sequence number generated by the source node and the path it has traversed. A node, upon receiving a RouteRequest packet, checks the sequence number on the packet before forwarding it. The packet is forwarded only if it is not a duplicate RouteRequest. The sequence number on the packet is used to prevent loop formations and to avoid multiple transmissions of the same RouteRequest by an intermediate node that receives it through multiple paths.
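The per-node handling of a RouteRequest described above can be outlined in a short sketch. This is illustrative only; the message fields and the "seen" cache are assumed names, not DSR's actual packet format.

```python
# Minimal sketch of the RouteRequest duplicate check and forwarding decision described
# above (illustrative only; field names are assumptions).

def handle_route_request(node_id, rreq, seen, is_destination):
    """Return 'reply', 'forward', or 'drop' for an incoming RouteRequest."""
    key = (rreq["source"], rreq["seq_no"])          # one entry per (source, request) pair
    if key in seen:
        return "drop"                               # duplicate received via another path
    seen.add(key)
    rreq["path"].append(node_id)                    # record this hop in the source route
    if is_destination:
        return "reply"                              # RouteReply carries rreq["path"]
    if rreq["ttl"] <= 0:
        return "drop"                               # TTL exceeded
    rreq["ttl"] -= 1
    return "forward"                                # rebroadcast to neighbours

seen = set()
rreq = {"source": "S", "seq_no": 7, "ttl": 5, "path": ["S"]}
print(handle_route_request("B", rreq, seen, is_destination=False))  # -> forward
print(handle_route_request("B", rreq, seen, is_destination=False))  # -> drop (duplicate)
```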

Thus, all nodes except the destination

forward a RouteRequest packet during the route construction phase. A destination node, after receiving the first RouteRequest packet, replies to the source node through the reverse path the RouteRequest packet had traversed. Nodes can also learn about the neighboring routes traversed by data packets if operated in the promiscuous mode (the mode of operation in which a node can receive the packets that are neither broadcast nor addressed to itself). This route cache is also used during the route construction phase. If an intermediate node receiving a RouteRequest has a route to the destination node in its route cache, then it replies to the source node by sending a RouteReply with the entire route information from the source node to the destination node.

A. Advantages

This protocol uses a reactive approach which eliminates the need to periodically flood the network with table update messages, as required in a table-driven approach. In a reactive (on-demand) approach such as this, a route is established only when it is required, and hence the need to find routes to all other nodes in the network, as required by the table-driven approach, is eliminated. The intermediate nodes also utilize the route cache information efficiently to reduce the control overhead.

B. Disadvantages

The disadvantage of this protocol is that the route maintenance mechanism does not locally repair a broken link. Stale route cache information could also result in inconsistencies during the route reconstruction phase. The connection setup delay is higher than in table-driven protocols. Even though the protocol performs well in static and low-mobility environments, the performance degrades rapidly with increasing mobility. Also, considerable routing overhead is involved due to the source-routing mechanism employed in DSR. This routing overhead is directly proportional to the path length.

XX. PERFORMANCE ANALYSIS OF DSDV AND DSR

In comparing the two protocols, the evaluation can be done using the following two metrics:

1) The packet delivery ratio defined as the number of received data packets divided by the number of generated data packets

2) The end to end delay is defined as the time a data packet is received by the destination minus the time the data packet is generated by the source.
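A minimal sketch of how these two metrics could be computed from a simulation trace is shown below; the trace format (packet id mapped to send and receive times) is an assumption for illustration, not tied to any particular simulator.

```python
# Minimal sketch of the two evaluation metrics defined above (illustrative only).

def packet_delivery_ratio(send_times, recv_times):
    """Received data packets divided by generated data packets."""
    return len(recv_times) / len(send_times) if send_times else 0.0

def average_end_to_end_delay(send_times, recv_times):
    """Mean of (receive time - generation time) over delivered packets."""
    delays = [recv_times[p] - send_times[p] for p in recv_times if p in send_times]
    return sum(delays) / len(delays) if delays else 0.0

send_times = {1: 0.00, 2: 0.05, 3: 0.10}          # packet id -> generation time (s)
recv_times = {1: 0.12, 3: 0.26}                   # packet id -> reception time (s)
print(packet_delivery_ratio(send_times, recv_times))     # -> 0.666...
print(average_end_to_end_delay(send_times, recv_times))  # -> 0.14
```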


A. Packet Delivery Ratio

Figure 5.1 shows the packet delivery ratio with pause time varying from 2 to 10 for the DSDV and DSR routing protocols. The red line shows the graph for DSDV and the green line the graph for DSR. The delivery ratio for both protocols is always greater than 90 percent, and the difference between the two protocols is very small. In general, the graph for DSR lies above that of DSDV in most cases, although in certain cases DSDV is better.

Figure 5.1 Packet delivery ratio versus pause time

It is more likely for the mobile nodes to have fresher and shorter routes to a gateway, thereby minimizing the risk of link breaks. Link breaks can result in lost data packets, since the source continues to send data packets until it receives a RERR message from the mobile node that has the broken link. The longer the route (in number of hops), the longer it can take before the source receives a RERR and hence the more data packets can be lost. When the pause time interval increases, a mobile node receives less gateway information and consequently does not update the route to the gateway as often as for short advertisement intervals. Therefore, the positive effect of periodic gateway information decreases as the advertisement interval increases.

Average End To End Delay

The average end-to-end delay is less for the DSDV approach than for the DSR approach. The reason is that the periodic gateway information sent by the gateways allows the mobile nodes to update their route entries for the gateways more often, resulting in fresher and shorter routes. With the DSR (reactive approach) a mobile node continues to use a route to a gateway until it is broken. In some cases this route can be pretty long (in number of hops) and even if the mobile node is much closer to another gateway it does not use this gateway, but continues to send the data packets along the long route to the gateway further away until the route is broken. Therefore, the end-to-end delay increases for these data packets, resulting in increased average end-to-end delay for all data packets

The average end-to-end delay decreases slightly for short pause time intervals when the advertisement interval is increased. At first thought this might seem unexpected; however, it can be explained by the fact that very short advertisement intervals result in a lot of control traffic, which leads to higher processing times for data packets at each node.


Figure 5.2 End-to-end delay versus pause time

XXI. CONCLUSION

This paper presents a realistic comparison of the two routing protocols DSDV and DSR. The significant observation is that the simulation results agree with the expected results based on theoretical analysis. As expected, the reactive routing protocol DSR performs best, given its ability to maintain connections without the periodic exchange of routing information. DSDV performs almost as well as DSR, but still requires the transmission of many routing overhead packets; at higher rates of node mobility it is actually more expensive than DSR.


References

[1] D. Kim, J. Garcia and K. Obraczka, "Routing Mechanisms for Mobile Ad Hoc Networks based on the Energy Drain Rate", IEEE Transactions on Mobile Computing, vol. 2, no. 2, 2003, pp. 161-173.
[2] C. E. Perkins and P. Bhagwat, "Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers", Computer Communication Review, vol. 24, no. 4, 1994, pp. 234-244.
[3] C. E. Perkins and E. M. Royer, "Ad-Hoc on-Demand Distance Vector Routing," Proc. Workshop on Mobile Computing Systems and Applications (WMCSA '99), Feb. 1999, pp. 90-100.
[4] David B. Johnson and David A. Maltz, "Dynamic source routing in ad hoc wireless networks", in Mobile Computing, Kluwer Academic Publishers, 1996, pp. 153-181.
[5] M. S. Corson, J. P. Maker and G. H. Cirincione, "Internet-Based Mobile Ad Hoc Networking," IEEE Internet Computing, vol. 3, no. 4, July-August 1999, pp. 63-70.


40. ONLINE VEHICLE TRACKING USING GIS & GPS

V. NAGA SUSHMITHA, S. MEENAKSHI

Students of III Year B.E. (ECE), M Kumarasamy College of Engineering, Karur

ABSTRACT: This paper presents a new charging scheme based on the distance a vehicle travels in different areas, determined by GPS positioning and a digital road network database in GIS. The system architecture is designed and the system modules are presented. The new system is not only more consistent with road pricing principles and the objectives of reducing traffic congestion and air pollution, but also more flexible for integrating the ERP system with other Intelligent Transportation Systems (ITS). Its capital investment and operating costs are lower than those of traditional systems, and it is relatively simple to implement and modify.

1. Introduction

Road pricing has been effective in managing congestion on roads in the metropolises of Asia and all over the world. Changes have been made to the road-pricing scheme since its implementation, from a manual scheme based on paper permits to an electronic version. Technologies like the Global Positioning System (GPS) and Geographical Information Systems (GIS) make the expansion and greater effectiveness of the road-pricing scheme possible. This paper presents a new charging scheme for the next phase of the Electronic Road Pricing (ERP) system based on GPS and GIS. The system architecture is designed and some basic modules are discussed. The new scheme will charge based on the distance a vehicle travels in different areas, determined by GPS positioning and the digital road network database in GIS. This makes the new system not only more consistent with road pricing principles and the objectives of reducing traffic congestion and air pollution, but also more flexible for integrating the ERP system with other Intelligent Transportation Systems (ITS) such as emergency assistance and dynamic traffic assignment. At the same time, it has lower capital investment and operating costs, and is relatively simple to implement and modify.

2 System Design

The newly designed system consists of the management center, the in-vehicle unit, the communication link, and some program modules.

2.1 The management center

The proposed management center of the next ERP system performs tracking, monitoring, charging, and guidance for all the vehicles traveling on the road. The charging and guidance scheme is based on the collected traffic information and the map database. Figure 1 shows the architecture of the management center. The communication server is used to exchange information through the Internet and is the interface between the in-vehicle unit transceiver and the


management center. It is used not only to send the vehicle position from the vehicle side to the management center, but also to transmit the road map to the vehicle when the vehicle needs to display it but has no map of its own. The guidance commands and the charging scheme are also transmitted to the vehicle from the communication server. Traffic information analysis provides the statistics of traffic flow on the road on the basis of the real-time vehicle positions, and produces the charging scheme for each road and the guidance scheme for the vehicles. Tracking and Monitoring provides an interface for the operator in the management center.

Fig. 1 The diagram of the next ERP system management Center

2.2 In-vehicle unit

The proposed in-vehicle unit of the next ERP system is shown in Figure 2. Integrated GPS/Dead Reckoning (DR) positioning provides continuous vehicle position output even in urban areas where dense high buildings or tunnels block the GPS signal. To further improve the positioning accuracy, Map Matching (MM) could be applied in the in-vehicle unit, if there is a map database and enough processing power, or implemented in the management center. The accurate positioning information can be used to assist the traffic flow analysis on the exact road. Data fusion is mainly for fusing all the sensors' data to obtain more accurate, robust positioning information. Status provides the vehicle or road status, such as alarms, breakdowns, or congestion. Display shows related information to the driver, including guidance, charging information, or the electronic map if available. The transceiver provides a two-way data link via the wireless network and IP-based packet data transmission. The data is broadcast onto the Internet so that any authorized user can make full use of it to provide value-added services. The control card is the crucial part that organizes all this information. The smart card interface is for charging via a smart card inserted into the in-vehicle unit.


Fig. 2 Design of the in-vehicle unit for ERP

2.3 Communication link

Wireless communication is used to transmit all the vehicle positions and status to the management center, and to broadcast the updates of the traffic information when required or periodically. For Singapore's next ERP system, current cellular mobile systems or third-generation wireless communication could be employed, and the mobile set could be embedded in the IU. As a 2.5-generation wireless technology, the General Packet Radio Service (GPRS) has quicker session setup, permanent connection, lower cost, and higher data rate performance [5]. Furthermore, it is IP-based data transmission, so a transceiver is unnecessary in the management center, which is connected to the Internet. It is a better choice for the next ERP system and can easily be upgraded to the next generation of wireless communication.

3 System Modules

Based on the proposed architecture of the in-vehicle unit and the management center, the main modules include the electronic map database, integrated positioning, and map matching.

3.1 Electronic map database

In the next ERP system in Singapore, the electronic road map could be used to assist positioning, the charging scheme, and route guidance. The contents and data structure of the electronic map database are described as follows.
1) Spatial database: this includes nodes, which may be junctions or crossing points of roads, and links, which represent road segments. The points between two adjacent nodes are also included in the spatial database.
2) Non-spatial database: this includes the address name, road name, attributes, etc.
3) Topological relationships: these include the relationships between node and node, node and link, and among links.

3.2 Positioning of vehicle

In the next ERP system, widespread installation of expensive gantries is not necessary, because the position of the vehicle will be used for charging. Many positioning systems are available for civil applications; among them, GPS can provide about 10-meter accuracy without SA and can work in all weather, at any time of the day, and under specified conditions of radio-frequency interference and signal availability. However, in an urban environment, satellite signals are blocked by high buildings or heavy foliage, and the urban canyon limits satellite visibility. To solve these problems, many aided positioning methods, such as dead reckoning (DR) and wireless communication, have been adopted [3]. In comparison to DR, the wireless-aided method is still dependent on the radio environment. DR uses the distance and direction changes to provide the vehicle position relative to the original point. As DR error accumulates over time and a reliable GPS solution is often not available in urban areas, multi-sensor integration systems are therefore required for the next ERP systems, as shown in the diagram of Figure 1. The GPS receiver provides the absolute positioning and sets up the origin point for DR, while the distance and direction sensors provide the distance and direction information for the DR algorithm. When GPS data is available, the fusion algorithm predicts the errors of the distance and direction sensors; the DR algorithm provides the positioning while GPS data is not available. The traffic flow statistics depend on the road on which a vehicle travels, and its position is transmitted to the management center. However, sometimes GPS and/or DR cannot give a position accurate enough to determine which road the vehicle is traveling on. Therefore, map matching needs to be integrated with GPS/DR in the next ERP system. Map matching is simply a method of using stored electronic map data about a region to improve the ability of a position determination system to handle errors [4]. Its algorithms fall into basically two categories: the search problem and the statistical estimation problem. As a search problem, map matching matches the measured vehicle position to the closest node or point within a predefined range in the road network. This approach is fast and fairly easy to implement, and it is suitable for vehicles that have a fixed track, such as buses, or that travel in accordance with a planned route. As a statistical estimation problem, map matching uses probabilistic approaches to make the match more accurate by modeling errors in an ellipse or rectangle of uncertainty that represents the area in which the vehicle is likely to be located. It is suitable for any vehicle that does not travel a fixed route.
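Returning to the GPS/DR integration described at the start of this section, a minimal sketch of the fallback behaviour is given below. A real system would use a proper sensor-fusion filter to estimate the sensor errors; the function and field names here are assumptions for illustration only.

```python
# Minimal sketch of GPS/dead-reckoning fallback (illustrative only; a full
# implementation would fuse the sensors, e.g. with a Kalman filter).
import math

def dr_step(x, y, distance, heading_deg):
    """Advance the dead-reckoned position by one odometer/heading sample."""
    h = math.radians(heading_deg)
    return x + distance * math.cos(h), y + distance * math.sin(h)

def update_position(state, gps_fix, distance, heading_deg):
    """Prefer the GPS fix when available; otherwise dead-reckon from the last position."""
    if gps_fix is not None:
        state["x"], state["y"] = gps_fix            # absolute fix resets DR drift
    else:
        state["x"], state["y"] = dr_step(state["x"], state["y"], distance, heading_deg)
    return state

state = {"x": 0.0, "y": 0.0}
update_position(state, gps_fix=(100.0, 200.0), distance=0.0, heading_deg=0.0)
update_position(state, gps_fix=None, distance=15.0, heading_deg=0.0)   # GPS blocked
print(state)   # -> {'x': 115.0, 'y': 200.0}
```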

3.3 Map matching

The map matching algorithm in the CCS processes the received vehicle positions and searches the map database to create the list of road links the vehicle traveled. The required payment is calculated according to the road link prices in the database, and the deduction is made from a prepaid card or stored to the vehicle ID account for monthly-based payment. Unlike in a vehicle navigation system [6], map matching for ERP does not require real-time calculation. This means that the search for the road link the vehicle is currently traveling on can use not only the past and present vehicle positions but also the following vehicle positions. This off-time matching makes it much more accurate and reliable. The matching processes (as shown in Figure 3) are: initialization (O), position between intersections (M-N), position near an intersection with no turn detected (M), position near an intersection with a turn detected (P), and re-initialization.

Fig. 3. Map matching process


A probabilistic algorithm is designed to match the initial position of the vehicle. Since the actual location of the vehicle is never precisely known, we determine an error ellipse, i.e., a confidence region, that the vehicle is likely to be within. From estimation theory, the input and output signals can be modeled as stochastic processes. Variables associated with the true and measured values can be modeled as random variables. Variance-covariance information is propagated through appropriate algorithms to derive the variances and covariances as functions of the original random variables or as functions of parameters estimated from the original observations. These variances and covariances are used to define the confidence region. The determination of the confidence region should also consider the map accuracy as well as the road width. The searching process proceeds until there are candidates within the region. A match is completed if there is only one road link crossing or within the region. If more than one candidate exists, the candidates are eliminated using the following criteria until the single correct link is matched: the direction difference between the road link and the vehicle's travel, traffic restrictions such as one-way roads, and the distance between the vehicle position and the candidate link. A matched link can also be used to verify the immediately preceding link through their topological relationship.
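A minimal sketch of this search-style candidate elimination is given below, keeping only two of the criteria (distance within the confidence radius and heading difference). The data layout, thresholds, and function names are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of candidate-link elimination within a confidence region (illustrative only;
# a real implementation would use an error ellipse and the road-network topology).
import math

def point_to_segment_distance(p, a, b):
    """Clamped perpendicular distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def match_link(position, heading_deg, links, radius, max_heading_diff=45.0):
    """Pick the nearest candidate link inside the confidence radius with a compatible bearing."""
    best, best_dist = None, None
    for link in links:
        d = point_to_segment_distance(position, link["a"], link["b"])
        if d > radius:
            continue                                              # outside the confidence region
        if abs((link["bearing_deg"] - heading_deg + 180) % 360 - 180) > max_heading_diff:
            continue                                              # direction difference too large
        if best is None or d < best_dist:
            best, best_dist = link, d
    return best

links = [{"id": "L1", "a": (0, 0),  "b": (100, 0), "bearing_deg": 90},
         {"id": "L2", "a": (0, 10), "b": (0, 110), "bearing_deg": 0}]
print(match_link((3, 2), heading_deg=88, links=links, radius=15)["id"])   # -> L1
```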

After a valid start point is known, only three situations need to be considered: the vehicle on a road link between intersections, the vehicle near an intersection with no turn detected, and the vehicle near an intersection with a turn detected. Suppose the route traveled is O-M-N-P-Q; the map matching processes are as follows (Fig. 3).

Match the start position to the initial location O on link L1, and record the distance OM using GPS and the DR distance sensor.

When the vehicle is near node M, three possible connections are considered, and the position is matched to link L2 since no turn is detected at M.

Among the three possible connections at N, when a turn is detected, link L3 is selected according to azimuth measurements or the angle turned.

Repeat the same processes above. The vehicle trace recorded is L1-L2-L3-L4, …

Suppose the recorded links in one month for a vehicle are Li (i = 1, 2, 3, …, m); then the monthly payment P for the vehicle can be calculated according to the price scheme in the database.
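The payment expression itself is not reproduced legibly in the source. Under the natural reading of the scheme, in which each recorded link has a per-use price in the database, it would take a form like the following sketch (the symbol p(L_i) for the link price is an assumption):

```latex
% Assumed form of the monthly charge: sum of the prices of the recorded links.
P = \sum_{i=1}^{m} p(L_i),
\qquad \text{where } p(L_i) \text{ is the price of road link } L_i \text{ in the database.}
```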

4 Conclusions

When the vehicle positions are collected via wireless communication, traffic flow analysis can be performed in the management center to provide dynamic route guidance for vehicles on the road and thus improve the quality of service of the transportation system. If MM is done in the management center, no additional cost is added to the in-vehicle unit. When MM is done in the in-vehicle unit, additional storage space is required for the map database, but storage media are so cheap, even for hundreds of megabytes of memory, that this cost can be neglected. How to design the charging scheme and accurately analyze the traffic flow to provide efficient dynamic route guidance needs to be studied further.


41. USAGE OF DRIVE TRAINS AS AUXILIARY ENGINES IN TRUCKS

P. Venkataramanan, B. Vasanth

Electrical and Electronics Department, Rajalakshmi Engineering College Thandalam, Chennai-602105.


ABSTRACT:

Fuel cost and pollution are major issues today. The fuel price hike is a major economic burden, and pollution has also led to global warming, so alternative fuels have come into existence to help save the world from global warming. Pollution is thus decreased, and managing the fuel price becomes easier. The method of employing drive trains in trucks helps in towing heavy tonnes of load, for which a normal induction machine would need considerable modification.

INTRODUCTION:

Drive trains here are the DC series motors employed in high-speed locomotives. These drive trains can be employed in long-distance trucks, helping them to tow tonnes of load. Roadways are considered to be the cheapest mode of transport, and these load-carrying trucks travel nearly 3000-4000 km. Travelling this many kilometres continuously leads to more fuel consumption and pollution because the engine runs on diesel. Diesel is most commonly used because it is more efficient than gasoline, and diesel engines give power of up to 320 hp, so this power output can be matched only by a drive train.

BASIC PRINCIPLE:

An electric motor can be used as an auxiliary engine in trucks, where the output power from the drive train is fed to an auxiliary battery, which acts as the storage device.

Fig1. Chassis of model vehicle

The red line shows the auxiliary battery placement and the yellow line shows the placement of the drive train. Regenerative braking is the major phenomenon used here: when the truck works in diesel mode, it drives the drive train coupled to it, which works as a generator and charges the battery. This battery can then be used as a power source to drive the truck in electric mode. Regenerative braking is thus the basic concept used in this scheme.
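To give a rough feel for the energy involved, the sketch below estimates how much braking energy one stop could return to the auxiliary battery. The truck mass, speed, and round-trip efficiency are illustrative assumptions, not figures from this paper.

```python
# Rough illustration of regenerative energy recovery (all numbers are assumptions).

def recoverable_energy_kwh(mass_kg, v_start_kmh, v_end_kmh, efficiency):
    """Kinetic energy released during braking, scaled by recovery efficiency, in kWh."""
    v1 = v_start_kmh / 3.6                    # km/h -> m/s
    v2 = v_end_kmh / 3.6
    delta_ke_j = 0.5 * mass_kg * (v1 ** 2 - v2 ** 2)
    return efficiency * delta_ke_j / 3.6e6    # J -> kWh

# A 12-tonne truck braking from 60 km/h to rest, assuming 60% round-trip efficiency:
print(round(recoverable_energy_kwh(12_000, 60, 0, 0.6), 3))   # -> about 0.278 kWh per stop
```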

DIESEL AS FUEL:

Diesel is most often used as a fuel in heavy earth-moving vehicles, since it can drive the engine to high torque with high efficiency. Even locomotives use diesel as the fuel for their operation.

Fig2. Diesel Engine

TRUCKS AS MAJOR ROAD TRANSPORT:

The trucks we see all around are the heavy carriers of loads. They typically carry loads of about 3-5 tonnes, depending on their build. Truck engines are built with the aim of yielding a high power output of nearly 300 hp, and these trucks cover up to 3000-4000 km.

COMPONENTS:

1) Chassis.

2) Drive train.

3) Auxiliary battery.

COST AND DECREASED EFFICIENCY:

As noted already, a long drive consumes a great deal of fuel, since the maximum mileage of the engine is only 6.5-7. A further important fact is that over a long run the efficiency of the engine decreases, so the engine needs to be stopped for a while, which also increases the travelling time.

WHY TRACTION MOTORS?

We use the traction motors of locomotives because they can easily drive a loaded vehicle, whereas a normal induction machine would need a more specialized design.

Fig3. Truck in a highway

REGENERATIVE PRINCIPLE:

The familiar principle of regenerative braking is used here: when the truck runs in diesel mode, the drive train is pulled along with it and generates power, which is thereby stored in the auxiliary battery.

Fig4. Block Diagram

Fig5. Truck Battery

DRIVE TRAIN CHARACTERISTICS:

Fig6. Test Results

USAGE:

By this principle, when the electric mode is used the diesel engine is left idle, so fuel consumption decreases to a large extent because the truck can run in this mode for a considerable distance. The engine efficiency is thereby increased.

ADVANTAGES:

The engine’s heating effect is reduced.

Pollution-free travel.


Saving wasted power by regenerative principle.

DISADVANTAGES:

The motor cannot pull the whole truck on its own when it is subjected to heavy load.

An exact input supply to the battery is not possible, since it needs a special circuit, which may be complex.

CONCLUSION:

Employing this method can lead to fuel savings to a large extent. This method can also help to tackle fuel scarcity, and a less polluted environment becomes possible.

REFERENCE:

[1] www.howstuffworks.com

[2] www.energysystems.com

42. Implementation of Zero Voltage Switching Resonant DC–DC Converter with Reduced Switching Losses Using Reset Dual Switch for High Voltage Applications

1 N. Divya and 2 M. Murugan
1 - P.G. Scholar, 2 - Asst. Professor
K.S.Rangasamy College of Technology, Tiruchengode, Namakkal - 637215.
Email: [email protected]
Contact no: 9791997671


Abstract—This paper presents a new topology: a zero-voltage switching (ZVS) resonant DC-DC converter with reduced switching losses using a reset dual switch for high voltage applications. Compared with the resonant reset single switch forward dc–dc converter, it maintains the advantage that the duty cycle can be more than 50%, while at the same time the disadvantages of high voltage stress on the main switches and low efficiency are

overcome. In addition, ZVS is achieved for all switches of the presented topology. Therefore, this proposed

topology is very attractive for high voltage input, wide range, and high efficiency applications. In this paper, the

operation principle and characteristic of this topology are analyzed in detail. Finally the closed loop controller is

used to obtain the regulated output voltage.

Keywords —Zero Voltage Switching (ZVS), Voltage Stress, PI Controller, duty cycle.

I. INTRODUCTION

The resonant reset single switch forward dc–dc converter, whose merit is specified as having a duty-ratio that

can exceed 50%, is appropriate for wide range applications. However, while similar to the other single switch forward

converters, this topology is not favorable for high-line input applications due to quite high voltage stress of the main

switch as well. This is the first


disadvantage of this topology. In addition, the voltage across the resonant capacitor Cr equals the input voltage before the main switch is turned on; once the main switch is turned on, the energy stored in Cr is dissipated in the switch. The value of Cr, which is a resonant capacitor connected in parallel with the main switch, is much larger than the parasitic capacitance of the main switch,

therefore, it can be considered that the equivalent switching dissipation of this converter is greatly increased, which

results in the decrease of efficiency. This is the second disadvantage of this topology [1]-[3].

For reducing voltage stress across the main switches, the dual switch forward converter is proposed . The

voltage stress of each switch contained in the topology is the same as the input value, which is approximately half that

of the single switch forward topology. Therefore, this topology is appropriate to high-line input applications and it is

categorized into one of three-level topologies. However, the cost of ZVS working for the dual switch forward

converter is rather high. In general, a complex auxiliary circuit is added to achieve ZVS, which belongs to the snubber

type soft switch and not the control type soft switch. In addition, the duty-ratio of the dual switch forward converter

must be less than 50%, which is disadvantageous for wide range applications [4].

A new active clamping Zero-Voltage Switching Pulse-Width Modulation Current-Fed Half-Bridge converter has also been proposed. Its Active Clamping Snubber can not only absorb

Its Active Clamping Snubber can not only absorb the voltage surge across the turned-off switch, but also achieve

the ZVS of all power switches. Since auxiliary switches in the snubber circuit are switched in a complementary way

to main switches, an additional PWM IC is not necessary. In addition, it does not need any clamp winding and

auxiliary circuit besides additional two power switches and one capacitor while the conventional current-fed half

bridge converter has to be equipped with two clamp windings, two ZVS circuits, and two snubbers. Therefore, it can

ensure the higher operating frequency, smaller sized reactive components, lower cost of production, easier

implementation, and higher efficiency. High-power step-up DC-DC conversion techniques address the increasing necessities and power capability demands in applications such as electric vehicles, Uninterruptible Power Supplies (UPS), servo-drive systems, semiconductor fabrication equipment, and photovoltaic systems.

By using an auxiliary switch and a capacitor, ZVS for all switches is achieved with an auxiliary winding in one

magnetic core. A small diode is added to eliminate the voltage ringing across the main rectifier diode. In hard

switching boost converter the reverse recovery of the rectifier diode is a problem, especially at high switching

frequency. Therefore, the reverse recovery of the diode prevents the increases of the switching frequency, the

power density, and the efficiency. With two coupled input inductors the reverse recovery of the diode can be

reduced, and the circuit is simple, but the switching loss is still high, and the voltage ringing across the boost diode

is large. With an auxiliary clamp diode the voltage ringing can be eliminated, but the ZVS ranges for the main switch

and the auxiliary switch are reduced because part of current is freewheeling in the clamping diode especially at

light load, and this circulating current also increases the conduction loss. Using other active clamping techniques all

switches can achieve ZVS, but the conduction losses of the switches are high due to the large circulating current, or

the main switches and auxiliary switch are connected in series. Further, the voltage stress of rectifier diode is

doubled, which results in high cost and conduction loss [5].

A modified Pulse Width Modulation control technique has been proposed for the Full Bridge Zero Voltage Switching DC-DC Converter with equal losses for all devices. The conventional converter has the disadvantage that the switching devices (IGBTs or MOSFETs)

have different switching and conduction losses which becomes a drawback at high power level. This modified PWM

control technique provides equal switching and conduction losses for all the devices. The quasi-resonant and multi-resonant converters do not reduce the frequency range required for control, and the high component stress makes them

impractical for high power applications. The constant frequency resonant converters eliminate the disadvantage


presented by the varying frequency but the increased component stress still limits its applicability to the relatively

low power level. The phase-shifted zero voltage switching converter uses the circuit parasitic elements in order to achieve resonant switching. Soft switching is achieved for all switches. This converter operates all switches in a full

bridge configuration with a 50% duty-cycle [6].

A Zero Voltage Switched (ZVS) Pulse Width Modulated (PWM) boost converter with an energy feed-forward auxiliary circuit has also been reported. The auxiliary circuit is a resonant circuit consisting of a switch and passive components, and the converter's main switch and boost diode operate with soft switching. This converter can function with PWM

control because the auxiliary resonant circuit operates for a small fraction of the switching cycle. Since the auxiliary

circuit is a resonant circuit, the auxiliary switch itself has both a soft turn on and turn off, resulting in reduced

switching losses and Electro Magnetic Interference (EMI). Peak switch stresses are only slightly higher than in a

PWM boost converter because part of the energy that would otherwise circulate in the auxiliary circuit and

drastically increase peak switch stresses is fed to the load.

During the diode’s reverse recovery time, which occurs whenever the switch is turned on, a high and sharp

negative spike appears in the diode current because there is a temporary shoot through of the output capacitor to

ground (as the full output voltage is placed across the switch). The turn-off losses are caused by the switch. In this converter, a nearly lossless auxiliary switch turn-off is achieved with a soft instead of a hard turn-off, which limits EMI. It includes PWM control with soft switching for all active and passive switches, and low peak switch stresses due to a simple energy feed-forward auxiliary circuit [7].

An RCD reset dual switch forward converter has been proposed. The main advantage of this topology is that the duty cycle can be more than 50%. Therefore, it is suitable for wide input voltage range applications. However, the topology operates with hard switching. A ZVS resonant reset forward converter is proposed, which can overcome

the second disadvantage of the resonant reset forward converter. However, the structure of this converter is quite

complex, where two diodes, two inductors, two capacitors, one transformer, and one switch are added. The

converter controls bi-directional Direct Current (DC) flow of electric power between a voltage bus that connects an

energy generating device such as a fuel cell stack, a photovoltaic array, and a voltage bus that connects an energy

storage device such as a battery or a super capacitor, to provide clean and stable power to a DC load.

This paper presents a resonant reset dual switch forward converter with the feature of soft switching, which, like the resonant reset single switch forward converter, has the advantage that the duty-ratio can be more than 50%. Furthermore, the voltage stress of the main switch of this converter is greatly reduced due to the adoption of

the dual-switch structure. Another merit of the proposed topology is that soft switching can be achieved for all

switches, which improves efficiency of the converter [8].

II. BLOCK DIAGRAM

The block diagram of zero voltage switching resonant converter with reduced voltage stress using reset

dual switch is shown in Fig. 1


Fig. 1 Block diagram of ZVS Resonant DC-DC Converter Using Reset Dual Switch

In Fig. 1, the DC supply is given to a ZVS resonant converter, which converts the DC power into AC power. The DC input voltage is converted to AC in order to increase the efficiency of the converter. The boosting operation is done in the converter circuit, which also acts as a boost converter. Each MOSFET acts as a switch without significant switching losses and facilitates the operation of the converter. The boosted voltage is then passed through an isolation transformer, which isolates the output from any input variations and the input from load variations. The isolation transformer is connected to the rectifier circuit on the other side, where the AC voltage is converted to the DC output voltage. A PI controller in closed loop is used to produce a constant output voltage.
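A minimal sketch of this kind of closed-loop PI regulation is given below. The gains, the 48 V setpoint, and the toy first-order plant standing in for the converter are assumptions for illustration, not the design values used in this work.

```python
# Minimal sketch of closed-loop PI regulation of an output voltage (illustrative only;
# gains, setpoint, and plant model are assumptions).

class PIController:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral   # control command, e.g. duty/phase

# Toy first-order plant standing in for the converter's output filter.
pi = PIController(kp=0.05, ki=2.0, dt=1e-4)
v_out, v_ref = 0.0, 48.0
for _ in range(2000):                                      # simulate 0.2 s
    d = pi.update(v_ref, v_out)
    v_out += (d * 100.0 - v_out) * 1e-4 / 5e-3             # crude RC-like response to the command
print(round(v_out, 1))                                     # settles near the 48 V reference
```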

III. PROPOSED CIRCUIT DIAGRAM

The circuit diagram of the ZVS resonant reset dual switch forward converter is given in Fig. 2. Coss1, Coss2, and Coss3 are, respectively, the parasitic capacitances of the switches S1, S2, and S3, and Cr is the resonant capacitor, which is connected in parallel between the drain and source of S2. Because the value of Cr is much larger than the parasitic capacitance of S2, Coss2 can be ignored. Lm is the magnetizing inductance and Ls is the saturable inductance, which is added to prevent the transfer of the magnetizing current to the secondary side and to assure ZVS of S1. For the analysis of the converter, it is assumed that the circuit operates in steady state and that the output capacitor is large enough to be considered a constant voltage source. One switching period of the proposed converter can be divided into eight operation modes, delimited by the dead times between the switch transitions, the descending time of the resonant capacitor voltage, and the blocking time of Ls.

Fig. 2 Circuit Diagram of the Proposed Circuit

Fig. 3 Control Pulse Waveform

ZVS can be achieved for both the main switch and the auxiliary switch. The key to realizing ZVS for S1 is that Ls prevents the transfer of the magnetizing current to the secondary side until the moment when Ls comes out of saturation, so the magnetizing current of the primary side can rapidly discharge the charge stored in the parasitic capacitance of S1. The ZVS condition relates the discharging time of S1, the dead time mentioned above, and the blocking time of Ls.

In practice, the dead time cannot be very long, so an acceptable maximum must be determined; by experience, it is approximately 1% to 5% of the switching period. Assuming the output diodes are ideal, the delay caused by the saturable inductance and the reverse recovery of the output rectifier is not considered. Therefore, the dimension of the saturable reactor can be greatly decreased; however, a transistor and a diode are added.

The input and output of the forward converter satisfy the converter's duty-cycle relation; substituting (3) into (2) gives the output-voltage expression. The voltage stress of S1 and S3 equals the input voltage, while the voltage stress of S2 equals the peak of the reset voltage of the transformer, denoted Vreset,peak.
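The expressions referred to as (2) and (3) are not reproduced legibly in the source. For orientation only, the generic forward-converter input–output relation has the form below; this is a textbook sketch under ideal assumptions, not necessarily the paper's exact equations.

```latex
% Generic forward-converter relation (assumed form): n = N_s/N_p, D = duty cycle.
V_o = n \, D \, V_{in}
```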


IV. SIMULATION RESULT ANALYSIS

Fig. 4 Simulation Circuit of ZVS Resonant DC-DC Converter with Reduced Voltage Stress Using Reset Dual Switch

The control system block is shown below; it consists of a PI controller. The closed-loop controller is used to obtain a constant output voltage.


Fig. 5 Simulation Circuit of Control System

V. INPUT & OUTPUT WAVEFORMS

The input voltage waveform for the ZVS resonant converter is shown in Fig. 6. A constant DC voltage of 250 V is given to the zero voltage switching resonant converter.

Fig. 6 Input Voltage Waveform

The switching pulse waveform for pulse generator1 is shown in Fig. 7


Fig. 7 Switching Pulse Waveform

The switching pulse waveform for pulse generator2 is shown in Fig. 8

Fig. 8 Switching Pulse Waveform


The output voltage waveform for ZVS resonant converter is shown in Fig. 9

Fig. 9 Output Voltage Waveform

The output current waveform for ZVS resonant converter is shown in Fig. 10


Fig. 10 Output Current Waveform

VI. CONCLUSION

The new topology presented in this paper, compared with a resonant reset single switch forward dc–dc converter, maintains the advantage that the duty cycle can be more than 50%; besides, the benefit of the dual switch structure can be utilized, and therefore the voltage stress of the switches can be greatly reduced. According to the previous analysis, the main switches bear, respectively, the input voltage and the reset voltage, and the auxiliary switch bears the input voltage as well. In addition, soft switching is achieved for all switches of the presented topology, which results in an improvement of conversion efficiency and power density. The closed-loop controller is used to obtain the regulated output voltage. Therefore, the proposed topology is very attractive for high-voltage-input, wide-range, and high-efficiency applications.

REFERENCES

[1] N. Murakami and M. Yamasaki, "Analysis of a resonant reset condition for a single-ended forward converter," in Proc. IEEE PESC'98 Conf., 1998, pp. 1018–1023.
[2] Xie, J. Zhang, and G. Luo et al., "An improved self-driven synchronous rectification for a resonant reset forward," in Proc. IEEE APEC'03 Conf., 2003, pp. 348–351.
[3] G. Spiazzi, "A high-quality rectifier based on the forward topology with secondary-side resonant reset," IEEE Trans. Power Electron., vol. 18, no. 3, pp. 725–732, May 2003.
[4] L. Petersen, "Advantages of using a two-switch forward in single-stage power factor corrected power supplies," in Proc. INTELEC'00 Conf., pp. 325–331.
[5] Sang-Kyoo Han, "A New Active Clamping Zero-Voltage Switching PWM Current-Fed Half-Bridge Converter," IEEE Transactions on Power Electronics, vol. 20, no. 6, pp. 1271–1279.
[6] Liviu Mihalache, (2004) "A Modified PWM Control Technique for Full Bridge ZVS DC-DC Converter with Equal Losses for All Devices," in Proc. IEEE, pp. 1776–1781.
[7] Yan-Fei Liu, (1999) "A Zero Voltage Switched PWM Boost Converter with an Energy Feed forward Auxiliary Circuit," IEEE Transactions on Power Electronics, vol. 14, no. 4, pp. 653–662.
[8] Y. Gu, X. Gu, and L. Hang et al., "RCD reset dual switch forward dc–dc converter," in Proc. IEEE PESC'04 Conf., 2004, pp. 1465–1469.

43. Three-Port Series-Resonant DC–DC Converter to Interface Renewable Energy Sources With Bidirectional Load and Energy Storage Ports

P. KARTHIKEYAN, Final Year, Department of Electrical and Electronics Engineering, K S R College of Engineering, Thiruchencode. Email: [email protected], ph: 9789626844

Abstract—In this paper, a three-port converter with three active full bridges, two series-resonant tanks, and a three-winding transformer is proposed. It uses a single power conversion stage with a high-frequency link to control power flow between batteries, load, and a renewable source such as a fuel cell. The converter has capabilities of bidirectional power flow in the battery and the load port. Use of series resonance aids high switching frequency operation with realizable component values when compared to the existing three-port converter with only inductors. The converter has high efficiency due to soft-switching operation in all three bridges. Steady-state analysis of the converter is presented to determine the power flow equations, tank currents, and soft-switching region. Dynamic analysis is performed to design a closed-loop controller that will regulate the load-side port voltage and source-side port current. The design procedure for the three-port converter is explained and experimental results of a laboratory prototype are presented.

Index Terms—Bidirectional power, phase-shift control at constant switching frequency, series-resonant converter, soft-switching operation, three-port converter, three-winding transformer.

I. INTRODUCTION

FUTURE renewable energy systems will need to interface several energy sources such as fuel cells, photovoltaic (PV) array with the load along with battery backup. A three-port converter finds applications in such systems since it has advantages of reduced conversion stages, high-frequency ac-link, multiwinding transformer in a single core and centralized control [1]. Some of the applications are in fuel-cell systems, automobiles, and stand-alone self-sufficient residential buildings. A three-port bidirectional converter has been proposed in [2] for a fuel-cell and battery system to improve its transient response and also ensure constant power output from fuel-cell source. The circuit uses phase-shift control of three active bridges connected through a three-winding transformer and a network of inductors. To extend the soft-switching operation range in case of port voltage variations, duty-ratio control is added in [3]. Another method to solve port voltage variations is to use a front-end boost converter, as suggested in [4] for ultra capacitor applications. To increase the power-handling capacity of the converter, three-phase version of the converter is proposed in [5]. A high-power converter to interface batteries and ultra capacitors to a high voltage dc bus has


been demonstrated in [6] using half bridges. Since the power flow between ports is inversely proportional to the impedance offered by the leakage inductance and the external inductance, impedance has to be low at high power levels.

To get realizable inductance values equal to or more than the leakage inductance of the transformer, the switching frequency has to be reduced. Hence, the selection of switching frequency is not independent of the value of inductance. A series-resonant converter has more freedom in choosing realizable inductance values and the switching frequency, independent of each other. Such a converter can operate at higher switching frequencies for medium and high-power converters. A three-port series resonant converter operating at constant switching frequency and retaining all the advantages of a three-port structure is proposed in this paper. Other circuit topologies [7]–[12] are suggested in the literature for a three-port converter, such as the current-fed topologies [11] that have a larger number of magnetic components and flyback converter topologies [12] that are not bidirectional. A constant-frequency phase-controlled parallel-resonant converter was proposed in [13], which uses phase shift between input bridges to control the ac-link bus voltage, and also between input and output bridge to control the output dc voltage. Such high-frequency ac-link systems using resonant converters have been extensively explored for space

applications [14] and telecommunications applications [15]. The series-resonant three-port converter proposed in this paper uses a similar phase-shift control but between two different sources. The phase shifts can be both positive and negative, and are extended to all bridges, including the load-side bridge, along with bidirectional power flow. A resonant converter topology is suggested in [16], but phase-shift control is not utilized for control of power flow; instead, the converters are operated separately based on the power flow direction. In this paper, a three-port bidirectional series-resonant converter is proposed with the following features. 1) All ports are bidirectional, including the load port, for applications such as motor loads with regenerative braking. 2) Centralized control of power flow by phase shifting the square wave outputs of the three bridges. 3) Higher switching frequencies with realizable component values when compared to three-port circuits with only inductors. 4) Reduced switching losses due to soft-switching operation. 5) Voltage gain increased by more than two times due to the phase-shifting between input and output bridges as opposed to a diode bridge at the load side. Analysis of the converter in steady state is presented in the following section. A control method for regulating the output voltage and the input power of one of the sources is also proposed, along with a dynamic model.
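To make the role of the tank impedance concrete, the short Python sketch below evaluates the standard fundamental-mode (phasor) approximation of the power transferred between two phase-shifted bridges through one series-resonant tank. It is only an illustrative sketch: the closed-form expression and the numerical values are generic textbook approximations, not this paper's equations or prototype parameters, and the symbol names (V1, V3, n13, L1, C1, fs, phi13) merely mirror the paper's notation.

```python
import math

def tank_reactance(L, C, fs):
    """Series-tank reactance X = ws*L - 1/(ws*C) at switching frequency fs (above resonance, X > 0)."""
    ws = 2 * math.pi * fs
    return ws * L - 1.0 / (ws * C)

def port_power(V1, V3, n13, phi13, L1, C1, fs):
    """Fundamental-mode estimate of power transferred from port 1 to port 3.

    The square-wave bridge outputs are replaced by their fundamentals
    (amplitude 4*V/pi), giving P ~ (8/pi^2) * V1 * (V3/n13) * sin(phi13) / X1.
    """
    X1 = tank_reactance(L1, C1, fs)
    return (8 / math.pi**2) * V1 * (V3 / n13) * math.sin(phi13) / X1

# Illustrative numbers only (not the prototype values):
print(port_power(V1=50, V3=200, n13=4, phi13=math.radians(30),
                 L1=25e-6, C1=100e-9, fs=110e3))
```

Because the reactance is set by the whole tank rather than by the transformer leakage alone, the switching frequency can be chosen first and the tank then scaled, which is the design freedom the text refers to.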

Fig. 1. Three-port series-resonant converter circuit.


II. STEADY-STATE ANALYSIS

The proposed circuit is shown in Fig. 1. It has two series-resonant tanks formed by L1 and C1, and L2 and C2,

respectively. The input filter capacitors for port 1 and port 2 are Cf1 and Cf2, respectively. Two phase-shift control variables φ13 and φ12 are considered, as shown in Fig. 1. They control the phase shift between the square wave outputs of the active bridges. The phase shifts φ13 and φ12 are considered positive if vohf lags v1hf and if v2hf lags v1hf, respectively. The converter is operated at a constant switching frequency fs above the resonant frequency of both resonant tanks. Steady-state analysis is performed assuming sinusoidal tank currents and voltages, due to the filtering action of the resonant circuit under a high quality factor. The waveforms of the tank currents and port currents before the filter are shown in Fig. 2(b) and (c), respectively. The average value of the unfiltered output current iolf in Fig. 2(c) is given by (1). Phasor analysis is used to calculate Î3 (2), which is the peak of the transformer winding 3 current. After substitution, the resultant expression for the output current is given by (3).

Fig. 2. (a) PWM waveforms with definitions of phase-shift variables φ13 and φ12. (b) Tank currents and transformer winding 3 current. (c) Port currents before the filter.

If the load resistance is R, the output voltage is Vo = IoR, as given by (4)–(6).

The average value of the port 1 current i1lf from Fig. 2(c) can be expressed as in (7). Substituting the peak resonant tank 1 current ÎL1 from the phasor analysis, the port 1 dc current is then given by (8). A similar derivation is done for port 2, and the results are given in (9) and (10).


The analysis results are normalized to explain the characteristics of the converter, to compare with existing circuit topologies, and to establish a design procedure. Consider a voltage base Vb and a power base Pb. For design calculations, the base voltage is taken as the required output voltage and the base power as the maximum output power. The variable m1 (11) is defined as the normalized value of the port 1 input voltage referred to port 1 side using the turns ratio n13. A similar definition applies for m2 (11). The per unit (p.u.) output voltage is then given by (12). If a diode bridge replaces the active bridge [17] in port 3, as shown in Fig. 1, the phase-shift angle φ13 is equal to the angle θ3. The normalized output voltage expression using this condition is derived as follows:

i = 1, 2, j = 1, 2, i ≠ j (load-side diode bridge).

It is known that the voltage gain for a two-port series-resonant converter with a load-side active bridge is more than that with a load-side diode bridge. This is due to the fact that the phase-shift angle between the input-side and load-side active bridges can reach a maximum angle of 90°. The same is true for a three-port converter. A plot of per unit output voltage using (13) and (12) is shown in Fig. 3(a) and (b), respectively. In the plot, the values of m1 and m2 are chosen as 1.0, Q1 and Q2 as 4.0, and F1 and F2 as 1.1. It is observed that the maximum possible output voltage for the load-side active bridge is 2.8 times the output voltage for the load-side diode bridge. Also, the output voltage has a wider range in Fig. 3(b) due to the additional control variable φ13. Port 1 and port 2 power in per unit are given in (15) and (16), respectively. These are calculated using the average values of the port currents [see (8) and (10)].

Fig. 3. Output voltage in per unit versus phase-shift angle φ12 for different values of φ13 . (a) Load-side diode bridge (13). (b) Load-side active bridge (12).

Fig. 4. Port power in per unit versus phase shift angle φ13 for different values of quality factor. (a) Port 1. (Note: The plot remains same for different values of quality factor.) (b) Port 2.


Plots of port 1 and port 2 power in per unit as a function of the phase shift φ13 are shown in Fig. 4(a) and (b) for three different quality factors. The phase shift φ12 is chosen such that the output voltage in per unit Vo,p.u. is kept constant at 1 p.u. The output power Po,p.u. at maximum load Q = 4.0 is 1 p.u. It is observed from Fig. 4(a) that the port 1 power does not vary with load as long as the output voltage is maintained constant. The phase shift φ13 is kept positive for unidirectional power flow in port 1. Port 2 power P2,p.u. can go negative as seen from the plot, and hence this port is used as the battery port. In the plots, the values of m1 and m2 are chosen as 1.0, and F1 and F2 as 1.1.

Fig. 5. (a) Port 3 soft-switching operation boundary for various values of m1 and m2. (b) Port 2 peak normalized current versus φ13 for various values of load quality factor.

The conditions for soft-switching operation in the active bridges can be derived from Fig. 2. If the port 1 and port 2 tank currents lag their applied square wave voltages, then

all switches in port 1 and port 2 bridges operate at zero-voltage switching (ZVS). This translates to θ1 > 0 for port 1 and θ2 − φ12 > 0 for port 2. Note that in the analysis the angles are considered positive if lagging. Using phasor analysis, the soft-switching operation boundary conditions are given by (18) and (19) for port 1 and port 2, respectively.
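The stated boundaries translate directly into a simple check. The sketch below assumes the angles θ1, θ2 and the phase shift φ12 have already been obtained from the phasor analysis (equations (18) and (19) are not reproduced here); the numeric angles are illustrative only.

```python
import math

def zvs_ok(theta1, theta2, phi12):
    """Soft-switching boundary as stated in the text:
    port 1 bridge: theta1 > 0 (tank current lags v1hf),
    port 2 bridge: theta2 - phi12 > 0 (tank current lags v2hf).
    Angles are in radians, with lagging angles taken as positive."""
    return (theta1 > 0) and ((theta2 - phi12) > 0)

# Illustrative angles only (they would normally come from the phasor solution):
print(zvs_ok(theta1=math.radians(25), theta2=math.radians(15), phi12=math.radians(-10)))  # True
print(zvs_ok(theta1=math.radians(-5), theta2=math.radians(30), phi12=math.radians(10)))   # False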

Fig. 6. Three-winding transformer model, including the leakage inductances.

The steady-state analysis results can be modified to include the effect of the magnetizing inductance Lm and the leakage inductance Llk3 of the load-side winding of the transformer, as shown in Fig. 6. The inductors L1 and L2 include the leakage inductances of the port 1 and port 2 side windings of the transformer, respectively.

III. RESULTS

A 500-W prototype was realized in hardware with the controller implemented on a Digilent Basys FPGA board. The pulses for the MOSFETs are generated using a ramp carrier signal and two control voltages for the two phase shifts. The pulse generation module was also implemented in the FPGA. Since all the pulses are at 50% duty cycle, pulse transformers are used for isolation in the gate drive. The converter parameters are given in Table I. The battery port has three 12 V, 12 Ah lead-acid batteries connected in series. Port 1 uses a dc source with a magnitude of 50 V to emulate a renewable energy source. The converter operates in closed loop with the output voltage regulated at 200 V.
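The ramp-carrier phase-shift modulator described above can be modelled in a few lines. The sketch below is a software illustration only (the helper name gate_pulses is hypothetical): it generates the three 50%-duty square waves with bridges 2 and 3 shifted by φ12 and φ13, and it ignores dead time and the complementary gate signals of each leg that the FPGA implementation would add.

```python
import numpy as np

def gate_pulses(fs, phi12, phi13, n=1000):
    """Model of a ramp-carrier phase-shift modulator: one normalized ramp per
    switching period; each phase shift delays the corresponding 50%-duty
    square wave relative to the port-1 bridge."""
    t = np.linspace(0.0, 1.0 / fs, n, endpoint=False)
    ramp = (t * fs) % 1.0                       # normalized carrier, 0..1 each period

    def square(shift):                          # 50% duty, delayed by 'shift' (fraction of a period)
        return ((ramp - shift) % 1.0) < 0.5

    v1 = square(0.0)
    v2 = square(phi12 / (2 * np.pi))
    v3 = square(phi13 / (2 * np.pi))
    return v1, v2, v3

v1, v2, v3 = gate_pulses(fs=100e3, phi12=0.0, phi13=np.pi / 3)
print(v1[:5], v3[:5])
```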

Results of applied tank voltage and tank currents for port 1 and port 2 are shown in Fig. 10(a) and (b), for operation around point B in Fig. 8, where the load is equally shared between port 1 and port 2. It is observed from these figures that the tank currents lag their applied voltages, and hence ZVS occurs in all switches of port 1 and port 2. In Fig. 10(b), the magnitude of current during


the switching transition is 1 A, which is sufficient for a lossless transition. The load-side high-frequency port voltage vohf, which switches between ±200 V, is shown in Fig. 11(a) along with the current in winding 3 of the transformer. The current leads the voltage, which is the condition for ZVS in port 3 based on the current direction mentioned in Section II. The phase shift φ12 between the port 1 and port 2 applied voltages is zero, as observed in Fig. 11(b), and also from the analysis in Fig. 9. At operating point C in Fig. 8, the power supplied by port 2 to the load is zero. The results of port 2 voltage and current around

Fig. 10. (a) Applied port 1 voltage v1hf (50 V/division) and tank current i1hf (5 A/division) around operating point B. (b) Applied port 2 voltage v2hf (50 V/division) and tank current i2hf (5 A/division) around operating point B.

The response of the converter to a step-load increase from 400 to 500 W is shown in Fig. 16. It is observed that the port 1 current I1 maintains its value of 6 A and the port 2 battery current I2 increases to supply the extra power to the load. This is useful when a fuel cell with a slow dynamic response is connected to port 1. The output voltage settles back to its original value after 30 ms, as seen from the response.

Fig. 11. (a) Transformer winding 3 voltage vohf (200 V/division) and current iohf (5 A/division) around operating point B. (b) Applied port 1 voltage v1hf (50 V/division) and applied port 2 voltage v2hf (50 V/division) showing zero phase shift around operating point B.

Fig. 16. Dynamic response of the converter for a step-load increase from 400 to 500 W. Ch. 1: battery current (4 A/division), Ch. 2: port 1 current (4 A/division), Ch. 3: output voltage (100 V/division), and Ch. 4: trigger input.

IV. CONCLUSION

In this paper, a three-port series-resonant converter was introduced to interface renewable energy sources and the load, along with energy storage. It was shown by analysis and experimental results that power flow between ports can be controlled by series resonance and by phase-shifting the square wave outputs of the three active bridges. The converter has bidirectional power flow and soft-switching operation capabilities in all ports. A dynamic model and controller design were presented for centralized control of the three-port converter. A design procedure with normalized variables, which can be used for various power and port voltage levels, was presented. Experimental results verify the functionality of the three-port converter.

REFERENCES

[1] H. Tao, A. Kotsopulos, J. L. Duarte, and M. A. M. Hendrix, “Family of multiport bidirectional dc–dc converters,” Inst. Electr. Eng. Proc. Elect. Power Appl., vol. 153, no. 15, pp. 451–458, May 2006. [2] J. L. Duarte, M. A. M. Hendrix, and M. G. Simoes, “Three-port bidirectional converter for hybrid fuel cell systems,” IEEE Trans. Power Electron., vol. 22, no. 2, pp. 480–487, Mar. 2007. [3] H. Tao, A. Kotsopoulos, J. Duarte, and M. Hendrix, “Transformer-coupled multiport ZVS bidirectional dc-dc converter with wide input range,” IEEE Trans. Power

Electron., vol. 23, no. 2, pp. 771–781, Mar. 2008. [4] H. Tao, J. Duarte, and M. Hendrix, “Three-port triple-half-bridge bidirectional converter with zero-voltage switching,” IEEE Trans. Power Electron., vol. 23, no. 2, pp. 782–792, Mar. 2008.


[5] H. Tao, J. L. Duarte, and M. A. M. Hendrix, “High-power three-port three-phase bidirectional dc–dc converter,” in Proc. IEEE Ind. Appl. Soc. 42nd Annu. Meet.

(IAS 2007), pp. 2022–2029. [6] D. Liu and H. Liu, “A zvs bi-directional dc–dc converter for multiple energy storage elements,” IEEE

Trans. Power Electron., vol. 21, no. 5, pp. 1513–1517, Sep. 2006. [7] H. Al Atrash, F. Tian, and I. Batarseh, “Tri-modal half-bridge converter topology for three-port interface,” IEEE Trans. Power Electron., vol. 22, no. 1, pp. 341–345, Jan. 2007. [8] F. Z. Peng, M. Shen, and K. Holland, “Application of z-source inverter for traction drive of fuel cell battery hybrid electric vehicles,” IEEE Trans. Power Electron., vol. 22, no. 3, pp. 1054–1061, May 2007. [9] Y.-M. Chen, Y.-C. Liu, S.-C. Hung, and C.-S. Cheng, “Multi-input inverter for grid-connected hybrid PV/wind power system,” IEEE Trans. Power Electron., vol. 22, no. 3, pp. 1070–1077, May 2007. [10] G.-J. Su and L. Tang, “A multiphase, modular, bidirectional, triple-voltage dc-dc converter for hybrid and fuel cell vehicle power systems,” IEEE Trans. Power

Electron., vol. 23, no. 6, pp. 3035–3046, Nov. 2008. [11] H. Krishnaswami and N. Mohan, “A current-fed three-port bi-directional dc–dc converter,” in Proc. IEEE

Int. Telecommun. Energy Conf. (INTELEC 2007), pp. 523–526. [12] B. Dobbs and P. Chapman, “A multiple-input dc–dc converter topology,” IEEE Power Electron. Lett., vol. 1, no. 1, pp. 6–9, Mar. 2003. [13] F. Tsai and F. Lee, “Constant-frequency phase-controlled resonant power processor, ” in Proc. IEEE Ind.

Appl. Soc. 42nd Annu. Meet. (IAS 1986), pp. 617–622. [14] F.-S. Tsai and F. Lee, “High-frequency ac power distribution in space station,” IEEE Trans. Aerosp.

Electron. Syst., vol. 26, no. 2, pp. 239–253, Mar. 1990. [15] P. Jain and H. Pinheiro, “Hybrid high frequency ac power distribution architecture for telecommunication systems,” IEEE Trans. Aerosp. Electron. Syst., vol. 35, no. 1, pp. 138–147, Jan. 1999. [16] H. Pinheiro and P. K. Jain, “Series-parallel resonant ups with capacitive output dc bus filter for powering hfc networks,” IEEE Trans. Power Electron., vol. 17, no. 6, pp. 971–979, Nov. 2002. [17] H. Krishnaswami and N. Mohan, “Constant switching frequency series resonant three-port bi-directional dc–dc converter,” in Proc. IEEE Power

Electron. Spec. Conf. (PESC 2008), pp. 1640–1645.

[18] D. Maksimovic, R. Erickson, and C. Griesbach, “Modeling of crossregulation in converters containing coupled inductors,” IEEE Trans. Power Electron., vol. 15, no. 4, pp. 607–615, Jul. 2000.

44. Implementation of Zero Voltage Transition Current Fed Full Bridge PWM Converter with Boost Converter

1 C. Deviga Rani, 2 S. Mahendran and 3 Dr. I. Gnanambal

1 - P.G. Scholar, 2 - Asst. Professor and 3 - Professor

Department of Electrical and Electronics Engineering

K.S.Rangasamy College of Technology, Tiruchengode, Namakkal. 637215.

Email:[email protected]

Contact no: 04288-276468

Abstract— In this paper, a new zero-voltage

transition current-fed full-bridge converter with a

simple auxiliary circuit is introduced for high

step-up applications. In this converter, for the

main switches, zero voltage switching condition is

achieved at wide load range. Furthermore, all

semiconductor devices of the employed simple

auxiliary circuit are fully soft switched. The

proposed converter is analyzed and a prototype is

implemented. Finally, the proposed auxiliary circuit is applied to other current-fed topologies

such as current-fed push-pull and half-bridge converters to provide soft switching.

Index Terms—Current-fed dc–dc converters, zero-

voltage

switching (ZVS) .

I. INTRODUCTION

High step-up dc–dc converters are widely used in industry as an interface circuit for batteries, fuel cells, solar cells, etc., where a low input voltage should be converted to a high dc output voltage. In medium-power applications, a current-fed half-bridge or push-pull converter is applied, whereas the current-fed full-

bridge converter is used for high-power applications. These topologies have many desirable advantages such as a high voltage conversion ratio, continuous input current, and isolation between the input voltage source and the load. Also, natural protection against short circuit and transformer saturation are other merits of these converters. Due to the ever-increasing demand for power processing in a smaller space, higher-density power conversion is required. In order to achieve high-density power conversion, a high switching frequency is essential in dc–dc converters. However, a high switching frequency results in high switching losses and electromagnetic interference (EMI), especially in isolated converters, due to the existence of leakage inductance. To reduce switching losses and EMI, soft-switching techniques are widely applied to dc–dc converters [1]-[8]. Zero-voltage transition (ZVT), zero-current transition (ZCT), and active clamp techniques can be applied to regular pulse width modulation (PWM) dc–dc converters, especially isolated converters, to improve the efficiency and overcome the mentioned problems caused by leakage inductance [1]–[8].

In these techniques, an auxiliary switch is added to regular PWM converters to provide a soft-switching condition [9]. The ZVT and active clamp techniques are applied to current-fed converters in [3]–[6] in


order to provide a zero-voltage switching (ZVS) condition for the converter main switches. An active clamp and ZVS current-fed full-bridge converter is introduced in [9]. Active clamp circuits have several advantages, but the soft-switching condition is load-dependent and is lost at light loads. In addition to the aforementioned disadvantage, the auxiliary circuit applied to the converter of [8] considerably increases the current stress of the main switches. An active clamp and ZVS current-fed half-bridge converter is introduced in [7]. This converter suffers from the same disadvantages. A ZVT current-fed full-bridge converter is introduced in [8]. In this converter, an auxiliary circuit is added to the transformer secondary side that provides ZVS for the main switches. However, this converter is not a proper solution when the output voltage is high, since the auxiliary switch would be subjected to a high voltage stress. Also, in this converter, the snubber capacitor is placed at the secondary side of the transformer. Therefore, the snubber capacitor is incapable of limiting the voltage spikes on the main switches due to the transformer leakage inductance at turn-off time. In the implementation of this converter, a resistor–capacitor–diode (RCD) snubber is also added at the transformer primary side, which can increase the converter losses. The other ZVT current-fed full-bridge converter introduced in [16] employs a rather complex auxiliary circuit at the primary side of the transformer. Furthermore, the auxiliary circuit cannot recover the transformer leakage inductance energy, which becomes more important at high output voltages.

In this paper, a new ZVT current-fed full-bridge converter with a simple soft-switching auxiliary circuit is introduced. In this converter, the snubber capacitor is placed at the primary side, and thus limits the voltage spikes caused by the transformer leakage inductance while recovering the leakage inductance energy. The auxiliary circuit can provide a soft-switching condition for all semiconductor devices over a wide load range. No-load or very light load conditions are exceptions, since current-fed converters become unstable under these conditions. Also, in this converter, the auxiliary circuit is in parallel with the main switches, and when the auxiliary switch is ON, energy is stored in the input inductor. In other words, this auxiliary circuit boosts the effective duty cycle, which is another advantage of this converter.

The proposed ZVT current-fed push-pull and half-bridge converters require only one auxiliary switch, whereas their active clamp current-fed counterparts need two auxiliary switches. Also, in the proposed converters, the snubber capacitor is in series with a diode. This protects the main switches from a sudden discharge of the snubber capacitor at the turn-on instant if the auxiliary circuit has failed to operate correctly and discharge the snubber capacitor. This is an important advantage of the proposed converters. Furthermore, in the proposed converters, the auxiliary switch source pin (considering a MOSFET element) is connected to ground, which simplifies the switch driver circuit. The proposed ZVT current-fed full-bridge converter is analyzed in the second section and the design considerations are discussed in the third section. In the fourth section, a prototype converter is implemented and the presented experimental results confirm the validity of the theoretical analysis. In the fifth section, the proposed auxiliary circuit is applied to a current-fed push-pull and a half-bridge converter to provide soft switching.

II. BLOCK DIAGRAM

Fig. 1 Block diagram of a ZVT Current Fed Full Bridge PWM Converter for Different Load


For this circuit, a DC voltage of 48 V is given as the input to the inverter. The inverter converts the given input DC into AC. The inverter output is fed to an isolation transformer. The high AC voltage is then given to an uncontrolled diode rectifier, which converts the high-voltage AC into high-voltage DC. This DC voltage is boosted further by the boost converter connected at the output of the rectifier, so the rectifier output is doubled at the boost converter. The resulting high voltage is used in industry as an interface for batteries, fuel cells, solar cells, etc.
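The doubling mentioned above follows from the ideal continuous-conduction-mode boost relation Vo = Vin/(1 − D). The sketch below assumes a duty cycle of 0.5 and an illustrative rectifier output voltage, neither of which is specified in the text.

```python
def boost_output(v_in, duty):
    """Ideal continuous-conduction-mode boost converter: Vo = Vin / (1 - D)."""
    assert 0.0 <= duty < 1.0
    return v_in / (1.0 - duty)

# A duty cycle of 0.5 doubles the rectified voltage, as stated in the text.
print(boost_output(v_in=300.0, duty=0.5))   # 600.0 (300 V is an assumed rectifier output)
```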

III. PROPOSED CIRCUIT DIAGRAM

The converter is composed of an input inductor, the four main switches of the full bridge, a transformer T with a primary-to-each-secondary turns ratio of 1:n and its leakage inductance, the rectifying diodes, and the output filter capacitor. The auxiliary circuit is composed of the auxiliary switch, two auxiliary diodes, two coupled inductors La1 and La2 with a turns ratio of 1:m, and the snubber capacitor Ca of the main switches. The voltage across and current through each switch are defined in the usual way, and the definitions are similar for all switches.

Fig. 2 Circuit diagram of ZVT current fed full bridge PWM converter

To simplify the converter analysis, it is assumed that all semiconductor devices are ideal, the input inductor current is almost constant and equal to Iin, and the output capacitor voltage is constant and equal to Vo over a switching cycle. Thus, only one half cycle of converter operation, which is composed of nine different operating intervals, is analyzed.

Fig. 2 presents the equivalent circuit of each operating interval. Before the first interval, it is assumed that two of the main switches and one rectifying diode are conducting and all other semiconductor devices are OFF. In addition, the snubber capacitor voltage is assumed to be VCa0, which is slightly greater than Vo/n.

Fig. 3 Control pulse waveform

The output voltage Vo is boosted up and given to the high-power application. The converter operation can be explained in nine operating intervals. The first interval starts by turning the auxiliary switch ON. When the auxiliary switch is turned ON, a voltage is placed across La1 and its current increases in a resonant fashion until the snubber capacitor voltage decreases to Vo/n. The auxiliary switch turn-on is under zero-current (ZC) condition due


to La1. During this first resonant interval, the La1 current and the snubber capacitor voltage are

iLa1(t) = (VCa0/Z1) sin(ω1(t − t1))   (1)

vCa(t) = VCa0 cos(ω1(t − t1))   (2)

Z1 = √(La1/Ca)   (3)

ω1 = 1/√(La1 Ca)   (4)

Analogous resonant expressions, now involving La2 (with Z2 = √(La2/Ca) and ω2 = 1/√(La2 Ca)), describe iLa1(t) and vCa(t) in the subsequent intervals, as given by (5)–(10).
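Assuming the reconstructed first-interval expressions above, the following sketch evaluates the resonant frequency, the characteristic impedance, the time taken for the snubber capacitor voltage to fall from VCa0 to Vo/n, and the auxiliary inductor current at that instant. The component values are illustrative only and are not the prototype's.

```python
import math

def zvt_first_interval(La1, Ca, V_Ca0, Vo, n):
    """Evaluate the reconstructed first-interval equations:
    w1 = 1/sqrt(La1*Ca), Z1 = sqrt(La1/Ca),
    vCa(t) = V_Ca0*cos(w1*t) falls to Vo/n after dt = acos((Vo/n)/V_Ca0)/w1,
    and the La1 current at that instant is (V_Ca0/Z1)*sin(w1*dt)."""
    w1 = 1.0 / math.sqrt(La1 * Ca)
    Z1 = math.sqrt(La1 / Ca)
    dt = math.acos((Vo / n) / V_Ca0) / w1
    i_end = (V_Ca0 / Z1) * math.sin(w1 * dt)   # auxiliary inductor current when vCa reaches Vo/n
    return w1, Z1, dt, i_end

# Illustrative values only (VCa0 slightly greater than Vo/n, as assumed in the text):
print(zvt_first_interval(La1=5e-6, Ca=10e-9, V_Ca0=110.0, Vo=400.0, n=4.0))
```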

IV. DESIGN CONSIDERATIONS

The input inductor, main transformer, and output filter capacitor are designed similarly to those of a regular current-fed converter. Using Ca, the charging interval can be calculated, which depends on the converter load (neglecting I1/m in comparison with ILa1). Then, according to the main switches' turn-off speed and the converter operating power, Ca can be selected like any turn-off snubber capacitor. The current slope at turn-on depends on the value of La1, and thus this inductor is selected like any turn-on snubber inductor. Also, the auxiliary switch peak current depends on the value of La1. It can be observed that when all the main switches are ON, there is some additional current stress on every switch in comparison with its hard-switching counterpart. This additional current stress can be reduced to a desirable value with a proper selection of m. But m cannot be chosen very large, because this would result in a greater value for La2, and consequently its current could not reach zero in the sixth interval. The maximum duration of the sixth interval is equal to the off time of the main switches. Therefore, the constraint (11) is obtained, where Ts is the switching period and ton,max is the maximum on time of each switch. According to (11), the maximum value of m can be calculated.
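Since the text sizes Ca like any turn-off snubber and La1 like any turn-on snubber, the sketch below applies the generic textbook snubber estimates C ≈ I·tf/(2·Vmax) and L ≈ V·tr/I. These are stand-in rules for illustration only, not the paper's equation (11), and the numerical values are assumptions.

```python
def turn_off_snubber_cap(i_off, t_fall, v_max):
    """Textbook turn-off snubber estimate: C ~ I*tf / (2*Vmax), i.e. the capacitor
    absorbs the device current while it falls, keeping the voltage below v_max.
    Generic sizing rule, not the paper's equation (11)."""
    return i_off * t_fall / (2.0 * v_max)

def turn_on_snubber_ind(v_applied, t_rise, i_final):
    """Textbook turn-on snubber estimate: L ~ V*tr / I limits the current slope at turn-on."""
    return v_applied * t_rise / i_final

# Illustrative numbers only:
print(turn_off_snubber_cap(i_off=10.0, t_fall=100e-9, v_max=100.0))      # ~5 nF
print(turn_on_snubber_ind(v_applied=100.0, t_rise=100e-9, i_final=10.0)) # ~1 uH
```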

V. SIMULATION RESULT ANALYSIS

The effectiveness of the proposed method is demonstrated through simulation results. The simulation results of the ZVT current-fed full-bridge PWM converter are shown in Figs. 4–7; they confirm that a better output is obtained by using the proposed method.


Fig. 4 Simulation circuit of a ZVT current fed full bridge PWM converter

Fig. 5 Control pulse waveform

Fig. 6(a) Input voltage waveform

Fig. 6(b) Input current waveform


Fig. 7(a) Output voltage waveform

Fig. 7(b) Output current waveform

VI. CONCLUSION

In this paper, a new auxiliary circuit was introduced that can be applied to current-fed full-

bridge, half-bridge, and push-pull converters. The proposed auxiliary circuit provides ZVS condition for the main switches while its semiconductor elements are also soft switched. The proposed ZVT full-bridge current fed converter was analyzed. A prototype converter was implemented and the presented experimental results confirmed the theoretical analysis.

REFERENCES

[1] E. Adib and H. Farzanehfard, "Family of isolated zero voltage transition PWM converters," IET Power Electron., vol. 1, no. 1, pp. 144–153, 2008.

[2] Bin Su and Zhengyu Lu, "An Improved Single-Stage Power Factor Correction Converter Based on Current-Fed Full-Bridge Topology," in Proc. IEEE PESC Conf., 2008, pp. 1267–1271.

[3] Bor-Ren Lin, Chien-Lan Huang, and Ji-Fa Wan, "Analysis, Design, and Implementation of a Parallel ZVS Converter," IEEE Trans. Ind. Electronics, vol. 55, no. 4, April 2008.

[4] J. G. Cho, "IGBT based zero voltage transition full bridge PWM converter for high power applications," in Proc. IEEE PESC Conf., 1996, pp. 196–200.

[5] Ehsan Adib and Hosein Farzanehfard, "Zero-Voltage Transition Current-Fed Full-Bridge PWM Converter," IEEE Trans. Power Electronics, vol. 24, no. 4, April 2009.

[6] N. Frohleke, R. Mende, H. Grotstollen, B. Margaritis, and L. Vollmer, "Isolated Boost Fullbridge Topology Suitable for High Power and Power Factor Correction," in Proc. IEEE PESC Conf., 1994, pp. 843–849.

[7] Hengchun Mao, Fred C. Y. Lee, Xunwei Zhou, Heping Dai, Mohummet Cosan, and Dushan Boroyevich, "Improved Zero-Current Transition Converters for High-Power Applications," IEEE Trans. Ind. Applications, vol. 33, no. 5, Sep./Oct. 1997.

[8] Jumar Luis Russi, Mario Lucio da Silva Martins, Hilton Abilio Grundling, Humberto Pinheiro, Jose Renes Pinheiro, and Helio Leaes Hey, "A


Unified Design Criterion for ZVT DC-DC PWM Converters With Constant Auxiliary Voltage Source," IEEE Trans. Ind. Electronics, vol. 52, no. 5, Oct. 2005.

[9] Jung-Goo Cho, Chang-Yong Jeong, Hong-Sik Lee, and Geun-Hie Rim, "Novel Zero-Voltage-Transition Current-Fed Full-Bridge PWM Converter for Single-Stage Power Factor Correction," IEEE Trans. Power Electronics, vol. 13, no. 6, Nov. 1998.

45. A high class security system for detection of abandoned

object using moving camera.

J. Vikram Prohit SMK Formra Institute of Technology Chennai

Abstract:

This paper presents a novel framework for detecting non-flat objects by matching a reference and a target video sequence. The reference video is taken by a moving camera when there is no suspicious object in the scene. The target video is taken by a camera following the same route and may contain the suspicious object. The objective is to find this object in the scene. This is done by aligning the two videos. We use two kinds of alignment procedure: an intersequence alignment based upon homographies, computed by RANSAC, to find all suspicious areas; and an intrasequence alignment, together with a local appearance comparison between intrasequence frames, to remove false alarms in flat areas. The last step is temporal filtering, which confirms the existence of the suspicious object.

Introduction:

In recent years, visual surveillance by intelligent cameras has attracted increasing interest from homeland security, law enforcement, and military agencies. The detection of suspicious (dangerous) items is one of the most important applications. We here propose a method to detect static objects, and investigate how to detect a non-flat object in the scene using a moving camera. Since these objects can have arbitrary shape, texture, or colour, state-of-the-art category-specific (e.g., face/car/human) object detection technology, which is usually trained on example images, cannot be applied directly. To deal with this detection problem we propose a simple but effective framework based upon matching a reference and a target video sequence. The reference video is taken by a moving


camera when there is no suspicious object in the scene, and the target video is taken by a second camera following the same trajectory, observing the scene where a suspicious object may have been abandoned in the meantime. The objective is to find this object.

The steps involved in detection are:

An intersequence geometric alignment based upon homographies to find all possible suspicious areas.

An intrasequence alignment (between consecutive reference frames) to remove false alarms on high objects.

A local appearance comparison between two intrasequence frames to remove false alarms in flat areas, and

A temporal filtering step using homography alignment to confirm the existence of the suspicious object.

Existing Systems:

Almost all existing systems for the detection of static suspicious objects aim at finding abandoned objects using a static camera in a public place, e.g., a commercial centre, metro station, or airport hall. Spengler and Schiele propose a tracking system to automatically detect abandoned objects and draw the operator's attention to such objects. It consists of two major parts: a Bayesian multiperson tracker that explains the scene as much as possible, and a blob-based object detection system that identifies abandoned objects using the unexplained image parts. If a potentially abandoned object is detected, the operator is notified, and the system provides the operator with the appropriate key frames for interpreting the incident.

Guler and Farrow propose to use a background-subtraction-based tracker and mark the projection of the centre of mass of each object on the ground. The tracker identifies object segments and qualifies them for the possibility of a "drop-off" event. A stationary object detector running in parallel with the tracker quickly identifies potential stationary foreground objects by watching for pixel regions that consistently deviate from the background for a set duration of time. The stationary objects detected

are correlated with drop-off events, and the distance of the owner from the object is used to determine warnings and alerts for each camera view. The abandoned objects are correlated across multiple camera views using location information, and a time-weighted voting scheme between the camera views is used to issue the final alarms and eliminate the effects of view dependencies.

All of the previously mentioned techniques utilize static cameras installed in public places, where the background is stationary. However, for some applications, the space to keep a watch on is too large to use static cameras. Therefore, it is necessary to use a moving camera to scan these places. In this project we use a moving platform on which the camera is mounted to scan the area of interest.

Proposed approach:

Given a reference video and a target video taken by cameras following similar trajectories, GPS is first used to align the two frame sequences. GPS alignment normally provides frames of the same geographical location, but it cannot guarantee that the camera has the same viewpoint in both the reference and the target video; the camera speed also matters, which may cause variation between the reference and target videos. Therefore a fine geometric alignment is necessary. A feature-based alignment method is used because it is a better choice when illumination and other environmental factors are different for the reference and target frames. We propose to use a 2D homography alignment system. Homography alignment can register two images by their dominant planes, so that any non-flat objects on the dominant plane are deformed while flat objects remain almost unchanged after the alignment. Homography alignment plays a key role in the detection of the abandoned object. Therefore the following assumptions are made: the object is non-flat, and when it is present in the target video it is placed on the ground.

With these assumptions, we use intersequence and intrasequence homography alignment based on a modified RANSAC. The flow of the process is: the reference video frames and target video frames are compared to obtain all suspicious areas; false alarms on high objects are then removed; and false alarms in flat areas are removed by local appearance comparison, leaving suspicious areas only in flat areas.

By using the intersequence alignment, all possible suspicious areas are highlighted as candidates by setting a suitable threshold on the normalized cross-correlation (NCC) image of the aligned intersequence frame pair. By using the intrasequence alignment on the reference video, we can remove false alarms caused by high objects and moving vehicles. By using the intrasequence alignment on both the reference and target videos, most of the false alarms corresponding to nonsuspicious flat areas (e.g., caused by shadows and highly saturated areas) can be removed. Finally, a temporal filtering step is used to remove the remaining false alarms.
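To make the candidate-selection step concrete, the sketch below computes a dense patch-wise NCC between an aligned frame pair and thresholds it. The window size and the threshold value are assumptions, since they are not specified here, and the implementation is deliberately simple rather than optimised.

```python
import numpy as np

def ncc_map(ref, tgt, win=7, eps=1e-6):
    """Patch-wise normalized cross-correlation between two aligned grayscale
    frames of the same shape (values near 1 mean similar patches)."""
    half = win // 2
    h, w = ref.shape
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            a = ref[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            b = tgt[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            a -= a.mean(); b -= b.mean()
            out[y, x] = (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps)
    return out

def suspicious_mask(ref, tgt, t1=0.5):
    """Pixels whose NCC falls below the threshold t1 become candidate suspicious areas."""
    return ncc_map(ref, tgt) < t1
```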

A. Intersequence geometric alignment:

The SIFT feature descriptor is initially applied to the GPS-aligned frame pairs (we also tried the Harris corner detector, but the results were worse). To reduce the effect of SIFT features on high objects (e.g., trees) on the homography estimation, it is better to apply it only to the image area which corresponds to the ground plane. Specifically, we first compute a 128-dimensional SIFT descriptor for each key point of the reference and the target frames (the extraction process follows Lowe's method). For each descriptor in the reference frame, we search for its nearest neighbours in the target frame. If the two nearest neighbours are consistent, we view them as a match.
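Before the mRANSAC and H-fitting procedures are detailed below, the following sketch shows an off-the-shelf OpenCV stand-in for this alignment step (SIFT description, nearest-neighbour matching, RANSAC-based homography estimation, and warping). It is only an analogue: OpenCV's findHomography does not reproduce the scatter-weighted inlier score or the degenerate-sample handling of mRANSAC, and the ratio and reprojection-error thresholds are assumed values.

```python
import cv2
import numpy as np

def align_intersequence(ref_gray, tgt_gray, ratio=0.75, reproj_thresh=3.0):
    """Estimate the inter-sequence homography with SIFT + RANSAC and warp the
    reference frame onto the target frame."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_gray, None)
    kp2, des2 = sift.detectAndCompute(tgt_gray, None)

    # Lowe-style ratio test on the two nearest neighbours of each descriptor.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)

    warped_ref = cv2.warpPerspective(ref_gray, H, tgt_gray.shape[::-1])
    return H, warped_ref, inlier_mask
```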

Next, mRANSAC is applied to find the optimal inliers that can be used to compute the homography matrix Hinter. We use the following algorithm to detect the optimal inliers:

1. Apply SIFT algorithm to get a set of SIFT feature descriptors for reference and target.

2. Find the putative matches, D: X1 ↔ X2, between these SIFT features of the reference and target videos.

3. Call mRANSAC to find the optimal Hinter.

mRANSAC:
Input: Np pairs of putative matches D: X1 ↔ X2; Hm, the homography model used to fit the data; n, the minimum number of data required to fit the model (4); mI, the maximum number of iterations allowed (500); mInd, the maximum number of trials to select a non-degenerate data set (100); t, a threshold for determining when a datum fits the model (0.001).
Output: bestH, the model which optimally fits the data.

1. Normalize X1. A homogeneous scale factor of 1 is appended as the last coordinate of X1. Then move the points' average to the origin by subtracting their mean, and scale them to an average length of sqrt(2). Let T1 = Tscale x Ttranslate. Similarly normalize X2 and get T2. Represent the normalized data by D: X1 ↔ X2.

2. Let trialcount = 0; trialcountND = 0; bestH = nil; N = 1; inliers = nil; maxScore = 0; bestInliers = nil.

3. While (N > trialcount) & (trialcount <= mI)
       degenerate = 1; count = 1;
       While (degenerate) & (count <= mInd)
           Randomly sample n pairs of data, Pn, from D.
           If Pn is not degenerate, call H-fitting to get Htemp from Pn.
           If Htemp is empty, degenerate = 1; count = count + 1.
       If (degenerate), trialcount = trialcount + 1; break.
       Evaluate Htemp by the inliers matching Htemp:
           HX1 = Htemp x X1; invHX2 = (Htemp)^-1 x X2;
           HX1 = HX1 ./ HX1(3,:); invHX2 = invHX2 ./ invHX2(3,:);
           d2 = sum((X1 - invHX2).^2) + sum((X2 - HX1).^2);
           inliers = find(d2 < t); ninliers = |inliers| x scatter(inliers);
       If (ninliers > maxScore), maxScore = ninliers; bestInliers = inliers; bestH = Htemp.
       fracinliers = ninliers / Np; pNoOutliers = 1 - fracinliers^n;
       N = log(1 - p) / log(pNoOutliers); trialcount = trialcount + 1.

4. Call H-fitting based on bestInliers to get H.
5. H = T2^-1 x H x T1.

H-fitting:
Input: randomly sampled n pairs of data, Pn.
Output: 3x3 homography matrix H with rows h1, h2, h3.

1. Normalize Pn in the same way as in mRANSAC and denote the normalized data as nrP.

2. For each pair of data in the normalized nrP, nrP1_i = [u1, v1, w1]T and nrP2_i = [u2, v2, w2]T, i = 1, ..., n, construct

   Ai = [   0T              -w2 (nrP1_i)T    v2 (nrP1_i)T
            w2 (nrP1_i)T     0T             -u2 (nrP1_i)T
           -v2 (nrP1_i)T     u2 (nrP1_i)T    0T           ]

3. Stack the n Ai into a 3n x 9 matrix A = [A1, ..., An]T.

4. Obtain the least-squares solution of Ah = 0 from the SVD of A, A = U Σ VT; the unit vector corresponding to the smallest singular value is the solution h.

5. Reshape h into H and denormalize H as in mRANSAC.

Intrasequence Geometric Alignment: The procedure for intrasequence geometric alignment is similar to that for intersequence alignment. The difference is that both the reference (the frame to be warped) and target frames are from the same video this time. The intrasequence alignment generally aligns the dominant planes very

well (even when a shadow appears in one and disappears in the other).

1) Removal of False Alarms on High Objects: After applying the intrasequence alignment to the reference frames Ri and Ri-k, we warp Ri-k into nrRi-k to fit Ri. Thus, we can obtain the NCC image between nrRi-k and Ri. Intuitively, we can locate the high-object areas in Ri by setting a suitable threshold t2 on the NCC image, because the high objects in Ri-k are deformed and the NCC scores at these locations are usually low. The pixels whose NCC values are lower than t2 are treated as possible high-object areas of Ri. These false alarms can be discarded by using the result of the intrasequence alignment for both reference and target. Due to the blurring caused by the homography transformation, we use the difference of illumination-normalized aligned frames instead of their normalized cross-correlation again.

2) Removal of False Alarms on the Dominant Plane:

Given that any 3-D objects lying on top of the dominant plane are deformed after the intrasequence alignment, while any flat objects remain almost unchanged, we use the difference of grayscale pixel values between target frames Ti-k and Ti, and that between Ri-k and Ri, for removing false alarms on the dominant plane. The appearances of the patches are quite different in non-flat areas in the target and warped reference frames, while they are very similar in the flat ones. Therefore, we can remove the false alarms in flat areas by setting a threshold on the difference image. Generally, a small threshold should be used if T is taken at dawn or dusk, while a large one is preferred if T is taken at noon (especially with strong light).

C. Temporal Filtering

We use temporal filtering to obtain our final detection. Let K be the number of buffer frames used for temporal filtering. We assume that Ti is the current frame, with its remaining suspicious object areas after the intersequence and intrasequence alignment steps. We stack these suspicious-area maps into a temporal buffer Tbuffer. We also stack the homography transformations between any two neighbouring frames of the buffer into Hbuffer. The algorithm for temporal filtering is:

Input: target video sequence T, buffer size K, and potential suspicious areas.
Output: detection map.

1. Initialization: stack the initial suspicious area into the buffer Tbuffer. Compute the


homography matrix H between each pair of neighbouring frames among the initial K-1 frames of T and stack them into Hbuffer.

2. For an incoming frame Ti, i = K, ..., ∞, do:
   2.a. Get the suspicious area and let Tbuffer(K) = suspicious area; compute H between TK-1 and TK and let Hbuffer(K-1) = H; let Idp = Tbuffer(K).
   2.b. For u = 1 to K-1:
            M1 = Tbuffer(u);
            For v = u to K-1:
                M2 = Hbuffer(v) x M1;
                M1 = M2;
            Idp = Idp ∩ M1;
   2.c. Update Tbuffer: for j = 1 to K-1, Tbuffer(j) = Tbuffer(j+1).
   2.d. Update Hbuffer: for j = 1 to K-2, Hbuffer(j) = Hbuffer(j+1).
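The buffer intersection of step 2.b can be sketched as follows. The helper assumes the binary suspicious-area masks and the neighbouring homographies are already available (oldest first), uses OpenCV's warpPerspective for the mapping M2 = Hbuffer(v) x M1, and leaves the buffer updates of steps 2.c and 2.d to the caller.

```python
import cv2
import numpy as np

def temporal_filter(mask_buffer, h_buffer):
    """Propagate each older suspicious-area mask to the current frame through
    the chain of neighbouring homographies and intersect it with the current
    mask. mask_buffer holds K binary masks (oldest first); h_buffer holds the
    K-1 homographies between neighbouring frames."""
    K = len(mask_buffer)
    h, w = mask_buffer[-1].shape
    detection = mask_buffer[-1].copy()
    for u in range(K - 1):
        m = mask_buffer[u].astype(np.uint8)
        for v in range(u, K - 1):
            m = cv2.warpPerspective(m, h_buffer[v], (w, h))   # M2 = Hbuffer(v) x M1
        detection = np.logical_and(detection, m > 0)          # Idp = Idp AND M1
    return detection
```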

CONCLUSION

This paper proposes a novel framework for detecting non-flat abandoned objects by a moving camera. Our algorithm finds these objects in the target video by matching it with a reference video that does not contain them. We use four main ideas: the intersequence and intrasequence geometric alignments, the local appearance comparison, and the temporal filtering based upon homography transformation. Our framework is robust to large illumination variation, and can deal with false alarms caused by shadows, rain, and saturated regions on the road. It has been validated on fifteen test videos.

Authors: Shreya Somani, ECE Department, final year; Vikram Prohit J., ECE Department, final year.


46. REPAIR AND REHABILITATION P M Mohamed Haneef, C Periathirumal,

Students of III B.E (Civil) Anna University of Technology Tiruchirappali, Thirukuvalai Campus ABSTRACT:

Cement concrete reinforced with steel bars is an extremely popular construction material. One major flaw, namely its susceptibility to environmental attack, can severely reduce the strength and life of these structures. External reinforcements using steel plates have been used in earlier attempts to rehabilitate these structures. The most important problem that limited their wider application is corrosion. Recent developments in the field of fibre reinforced composites (FRCs) have resulted in the development of highly efficient construction materials. FRCs are unaffected by electro-mechanical deterioration and can resist the corrosive effects of acids, alkalis, salts, and similar aggressive agents under a wide range of temperatures. This technique of rehabilitation is very effective and fast for earthquake-affected structures and for retrofitting of structures against possible earthquakes. This technique has been successfully applied in earthquake-affected Gujarat. In the present paper, important developments in this field from its origin to recent times are presented.

INTRODUCTION:

While hundreds of thousands of successful reinforced concrete and masonry buildings are constructed annually worldwide, there are large numbers of concrete and masonry structures that deteriorate or become unsafe due to changes in loading, use, or configuration. The Gujarat earthquake also made it clear that old structures designed for gravity loads are not able to withstand seismic forces, which caused widespread damage. Repair of these structures with like materials is often difficult, expensive, hazardous, and disruptive to the operations of the building. The removal and transportation of large amounts of concrete and masonry material causes concentrations of weight, dust, and excessive noise, and requires long periods of time to gain strength before the building can be re-opened for service. Fibre Reinforced Composite (FRC) materials are

being considered for application to the repair of buildings due to their low weight, ease of handling, and rapid implementation. A major development effort is underway to adapt these materials to the repair of buildings and civil structures. Appropriate configurations of fiber and polymer matrix are being developed to resist the complex and multi-directional stress fields present in building structural members. At the same time, the large volumes of material required for building repair and the low cost of the traditional building materials create a mandate for economy in the selection of FRP materials for building repair. Analytical procedures for reinforced and prestressed concrete and masonry reinforced with FRC materials need to be developed, validated, and implemented, through laboratory testing, computational analysis, full-scale prototyping, and monitoring of existing installations. This paper reports recent developments in research, especially experiments that have been carried out to determine the efficacy of the system. A good proportion of this work is aimed at a sound rehabilitation and retrofitting technique for earthquake-affected and vulnerable areas.

STRUCTURAL DAMAGES DUE TO EARTHQUAKE:

An earthquake generates ground motion in both horizontal and vertical directions. Due to the inertia of the structure, the ground motion generates shear forces and bending moments in the structural framework. In earthquake-resistant design it is important to ensure ductility in the structure, i.e., the structure should be able to deform without failure. The bending moments and shear forces are maximum at the joints. Therefore, the joints need to be ductile to efficiently dissipate the earthquake forces. Most failures in earthquake-affected structures are observed at the joints. Moreover, due to existing construction practice, a construction joint is placed in the column very close to the beam-column joint (Fig. 1(a)). This leads to shear or bending failure at or very close to the joint. The onset of high bending moments may


cause yielding or buckling of the steel reinforcement. The high compressive stress in the concrete may also cause crushing. If the concrete lacks confinement, the joint may disintegrate and the concrete may spall (Fig. 1(b), (c)). All these create a hinge at the joint, and if the number of hinges is more than the maximum allowed to maintain the stability of the structure, the entire structure may collapse. If the shear reinforcement in the beam is insufficient, there may be diagonal cracks near the joints (Fig. 1(d)). This may also lead to failure of the joint. Bond failure is also observed in case lap splices are too close to the joints. However, in many structures these details have not been followed due to perceived difficulties at the site. In most of the structures in Gujarat, lack of confinement and shear cracks have been found to be the most common causes of failure. A rehabilitation and retrofitting strategy must alleviate these deficiencies in the structures.

Figure1(a) failure at construction joint

Figure1(b) Crushing of Concrete

Figure 1 (c) Spalling of concrete

Figure 1 (d) Diagonal shear crack

CORROSION PROBLEMS: Cement concrete reinforced with steel

bars is an extremely popular construction material. One major flaw, namely its susceptibility to environmental attack, can severely reduce the strength and life of these structures. In humid conditions, atmospheric moisture percolates through the concrete cover and reaches the steel reinforcements. The process of rusting of steel bars is then initiated. The steel bars expand due to the rusting and force the concrete cover out resulting in spalling of concrete cover. This exposes the reinforcements to direct environmental attack and the rusting process is accelerated. Along with unpleasant appearance it weakens the concrete structure to a high degree.

The spalling reduces the effective thickness of the concrete. In addition, rusting reduces the cross-sectional area of the steel bars, thereby reducing the strength of the reinforcement. Moreover, the bond between the steel and the concrete is reduced, which increases the chances of slippage. Rusting-related failure of reinforced concrete is more


frequent in a saline atmosphere because salinity leads to a faster corrosion of the steel reinforcements. In a tropical country like India, where approximately 80% of the annual rainfall takes place in the two monsoon months, rusting related problems are very common, especially in residential and industrial structures. India also has a very long coastline where marine weather prevails. Typically, a building requires major restoration work within fifteen years of its construction.

EARLY METHODS OF REPAIR:

From the above discussion we can conclude that the three main weaknesses of RCC structures that require attention are:

• Loss of reinforcement due to corrosion

• Lack of confinement in concrete especially at the joints.

• Deterioration of concrete due to attack of multiple environmental agencies.

The present practice of repairs in India is focused on delaying deterioration. However, there have been some attempts to strengthen dilapidated structures. In the last two decades, attempts at rehabilitation of damaged RC structures have mainly concentrated on two methods: external post-tensioning and the addition of epoxy-bonded steel plates to the tension flange. High-strength steel strands are used in external post-tensioning to increase the strength of damaged concrete structures. However, the main obstacle faced in this method is the difficulty of providing anchorage for the post-tensioning strands. The lateral stability of the girder may also become critical due to post-tensioning. Moreover, the strands have to be protected very carefully against corrosion. An alternative to the post-tensioning method is the use of epoxy-bonded steel plates. This alleviates the main difficulties of the post-tensioning method - anchorage and lateral stability. This method has been applied to increase the carrying capacity of existing structures and to repair damaged structures as well. Several field applications of the epoxy-bonded steel

plate have been reported recently. Several cracked slabs and girders of the elevated highway bridges in Japan have been repaired using this method. A number of damaged reinforced concrete bridges in Poland and erstwhile USSR have been repaired by bonding steel plates. The main advantage of using this method in repairing bridges is that it does not need closing down of the traffic during the repair.

Experimental studies have been reported on post-reinforcing concrete beams using steel plates. Two series of beams of different dimensions were tested using two different glues. The yield strength of the steel plates was also varied. Several aspects such as glue thickness, pre-cracking prior to bending, plate lapping, etc. were studied. It was observed that the steel plate reinforced beam increases the allowable load on the structure and delays the usual cracks. The reinforcement improves the overall mechanical characteristics of the beam. Concrete beams of rectangular cross section reinforced with steel plates give an improvement in performance in ultimate load, stiffness, and crack control. But the exposure tests revealed that considerable corrosion takes place in steel plates under natural exposure, causing a loss of strength at the interface.

Composite Materials as Post-Reinforcement:

Recent developments in the field of fiber reinforced composites (FRCs) have resulted in the development of highly efficient construction materials. They have been successfully used in a variety of industries such as aerospace, automobile, and shipbuilding. FRCs are unaffected by electro-mechanical deterioration and can resist the corrosive effects of acids, alkalis, salts, and similar aggressive agents under a wide range of temperatures. FRCs thus hold a very distinct advantage over steel plates as an external reinforcing device. Moreover, FRCs are available in the form of laminae, and different thicknesses and orientations can be given to different layers to tailor the strength to specific requirements.

The difficulties encountered in using steel plates as reinforcement have led us to the use of fiber reinforced composite materials as post-


reinforcements. Due to their high specific strength (strength/weight ratio), the composite reinforcements are very light and easy to handle. The composite materials are available as unidirectional fibers of great length; therefore, joints in the reinforcement can be avoided very easily. Moreover, corrosion of the reinforcement can be avoided completely. Research work is gaining momentum on the application of composite materials as post-reinforcement. The potential use of fiber reinforced composites in civil structures is manifold. The scope of the present paper is limited to the repair of existing concrete structures only. FRCs can be used in concrete structures in the following forms:

o Plates -at a face to improve the tension capacity.

o Bars -as reinforcement in beams and slabs replacing the steel bars.

o Cables -as tendons and post tension members in suspension and bridge girders.

o Wraps -around concrete members to confine concrete and improve the compressive strength.

Material For Strengthening Of Structures:

It can be seen that non-metallic fibers have strengths about 10 times that of steel. The ultimate strain of these fibers is also very high. In addition, the density of these materials is approximately one-third that of steel. Due to their corrosion resistance, FRCs can be applied on the surface of a structure without worrying about deterioration due to environmental effects. They in turn protect the concrete core from environmental attack. FRPC sheets, being malleable, can be wrapped around the joints very easily. We shall now examine the application of FRP to concrete.

FRC PLATES AS REINFORCEMENT TO CONCRETE BEAMS:

FRC for strengthening of structures can be glued to an old and deteriorated concrete surface to improve its strength. This method is more convenient and durable than epoxy-bonded steel plates. The suitability of carbon fibre reinforced epoxy laminates is examined for the rehabilitation of concrete bridges.

The main advantages of carbon fibre composite laminates have been found to be

- No corrosion and therefore, no corrosion protection is necessary.

- No problem of transportation as it is available in rolls .

- Higher ultimate strength and Young’s modulus .

- Very good fatigue properties

- Low weight

The main disadvantages are found to be

- Erratic plastic behavior and less ductility.

- Susceptible to local unevenness.

- High cost.

The performance of CFC laminates in post-strengthening of cracked concrete beams is as follows. The load-deflection graph of post-strengthened beams has been compared with that of unstrengthened beams. It was observed that a 3 mm thick CFC laminate doubled the ultimate load of a 300 x 200 mm beam of 2 m span. An account of the failure modes in such beams has also been presented. It is observed that the tensile failure of the laminate occurred suddenly with a sharp explosive snap; however, it was announced in advance by a cracking sound. The importance of an even bonding surface and of guarding against shear cracks was stressed. The high potential of such repair work in wood and metal structures and in prestressed girders was also indicated. Prestressing of the reinforcing laminate is advantageous for several applications. It has been observed that the

Page 357: Keynote 2011

357 | P a g e

prestressed laminates are effective in closing the crack in damaged structures and therefore, increase the serviceability of the strengthened structure. Prestressing also reduces the stresses in the reinforcing steel. This is advantageous when the steel is weakened due to corrosion. Another significant advantage of prestressing is that it reduces the tendency of delaminating at the crack front. The experiments on beams of 2 m and 6 m span under static and dynamic loading are as follows. The prestressing favorably influenced the number and the width of cracks. Therefore, the prestressed beams had a very good fatigue behavior. He observed that the FRP sheet has no plastic reserve strength. Therefore, maximum flexural strength of the beam is obtained when the failure of the laminate occurs at the instant of plastic yielding of the steel.
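As a rough illustration of why a thin bonded laminate can raise the ultimate load so markedly, the extra moment capacity can be approximated as the laminate tensile force times its internal lever arm. The Python sketch below uses entirely hypothetical laminate dimensions and strength (none of these values are taken from the tests described above) and ignores debonding, which in practice often governs:

# Rough upper-bound estimate of the extra flexural capacity contributed by a
# bonded CFC laminate, assuming full composite action and laminate rupture.
# All numbers are illustrative assumptions, not data from the study above.

def laminate_moment_contribution(b_f_mm, t_f_mm, f_fu_mpa, lever_arm_mm):
    """Tensile force in the laminate times its internal lever arm, in kN.m."""
    area_mm2 = b_f_mm * t_f_mm                # laminate cross-sectional area
    force_kn = area_mm2 * f_fu_mpa / 1000.0   # N -> kN
    return force_kn * lever_arm_mm / 1000.0   # kN.mm -> kN.m

# Hypothetical 150 mm wide x 3 mm thick laminate with 1500 MPa usable strength,
# bonded to a 300 mm deep beam; lever arm taken as roughly 0.9 of the depth.
extra_m = laminate_moment_contribution(150.0, 3.0, 1500.0, 0.9 * 300.0)
print(f"Additional moment capacity ~ {extra_m:.0f} kN.m")

In a real design the usable laminate stress would be capped by a debonding or anchorage check, so this figure is only an upper bound.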

FRC BARS AS REINFORCEMENT IN SLABS AND BEAMS:

Steel reinforcement in concrete structures has been responsible for much of their deterioration. FRC rebar, by contrast, is non-metallic and non-corrosive in nature. However, FRC bars have much less ductility, unpredictable plastic behavior and a much lower elastic modulus. Therefore, deflection was the limiting criterion in the case of FRC-reinforced beams. The ultimate load supported by slabs reinforced with FRPC was equal to or higher than that supported by companion slabs reinforced with steel.

FRCs AS CABLES AND TENDONS:

Corrosion problems are very severe in transportation structures, especially those exposed to a marine environment. This encourages the use of FRCs in bridges. FRC cables, post-tensioning tendons and plating can be used to improve the durability of bridges. Moreover, FRC cables are much lighter than conventional steel cables, leading to lower self-weight; therefore, much longer spans can be designed by using FRC cables. FRC cables would allow tripling of the limiting span of cable-stayed bridges in comparison with steel wires.

FRCs AS WRAPPING ON CONCRETE ELEMENTS:

The tensile strength of concrete is much less than its compressive strength. As a result, even compression members often fail due to the tensile stress that develops perpendicular to the direction of the compressive load. If such concrete is confined using a wrapping, failure due to tensile cracks can be prevented. The compressive strength of wrapped concrete elements is several times higher than that of unwrapped elements. Although this has been known for a long time, effective application of confinement could not be achieved due to the lack of a suitable wrapping material. If the wrapping is torn, the capacity of the element reduces dramatically; therefore the durability of the wrapping material is of utmost importance. In addition, the wrapping material remains exposed to environmental attack; therefore steel is unsuitable for this purpose. FRCs, being non-corrosive in nature, offer an attractive alternative. Moreover, the light FRC fibers can be very easily wrapped around an old column. A typical stress-strain curve of cylindrical specimens wrapped with FRPC with varying numbers of layers is presented. It may be noted that with one layer of FRPC wrap the ultimate strength of the specimen is increased by a factor of 2.5. The ultimate strength went on to increase up to 8 times when 8 layers of wrap were used. The ultimate strain is increased by 6 times with one layer of wrap. This feature is particularly attractive for earthquake-resistant structures, for which a thin wrap that offers high ultimate strain but low stiffness is desirable. Glass fibers, which have considerably lower stiffness than carbon fibers and a higher ultimate strain, are therefore desirable; glass fiber is also much less expensive than carbon fiber. Glass fibers can be used in the rehabilitation and retrofitting of structures in Gujarat.
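The strength gains quoted above can be put in perspective with a classical confinement relation of the Richart type, f'cc = f'c + k1*fl, where the lateral pressure supplied by n wrap layers of thickness t and strength f_frp on a cylinder of diameter D is fl = 2*n*t*f_frp/D. This is a generic textbook relation, not the model used in the study, and every number below is an assumption chosen only for illustration:

# Crude estimate of confined strength for an FRP-wrapped cylinder using a
# Richart-type relation: f'cc = f'c + k1 * f_l, with f_l = 2*n*t*f_frp / D.
# All inputs are illustrative assumptions, not data from the tests above.

def confined_strength(fc_mpa, n_layers, t_mm, f_frp_mpa, d_mm, k1=4.1):
    f_lateral = 2.0 * n_layers * t_mm * f_frp_mpa / d_mm  # confining pressure, MPa
    return fc_mpa + k1 * f_lateral                        # confined strength, MPa

fc = 25.0                      # assumed unconfined cylinder strength, MPa
for n in (1, 4, 8):            # number of wrap layers
    fcc = confined_strength(fc, n, t_mm=0.35, f_frp_mpa=2300.0, d_mm=150.0)
    print(f"{n} layer(s): f'cc ~ {fcc:.0f} MPa ({fcc / fc:.1f}x unconfined)")

The trend (large gains that grow with the number of layers) matches the behaviour described above, although the actual multipliers depend strongly on the fibre type and specimen size.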

CONCLUSION:


From the above discussion it can be observed that fiber reinforced composites are a very attractive proposition for the repair and upgrading of damaged concrete structures. However, the success of this method depends on the following. Availability of design methods: the design methods are already developed, but they need wider publicity and possible inclusion in Indian Standards. Availability of materials: CFRP is available in adequate quantity but needs to be imported; new production facilities will be extremely important as the use of the technique gains popularity. Availability of technology: this technique needs considerable knowledge about new construction materials; a large research base is available, but awareness in India about these materials is scant. Durability under Indian conditions: these materials are chemically inert and show little degradation over time, even in harsh conditions.


Sandeep Kumar, M Vinodhkumar, Jayam College of Engineering and Technology, Dharmapuri

47. LOW COST HOUSING


ABSTRACT

Food, clothing and shelter are basic human requirements. Of our one-billion-plus population, as of now, only a small percentage can afford to live in decent residential accommodation of their own. According to the Tenth Five Year Plan Report, India has a shortage of 41 million housing units! In Tamil Nadu alone, the shortage is 2.81 million units. As per the Chennai Metropolitan Development Authority's (CMDA) second master plan, the city requires over 4.13 lakh houses by 2011, of which 3,52,000 should cater to low-income and middle-income groups. This project seeks to examine how a greater understanding of the issue by all concerned, including the prospective home owner, can work towards enabling housing at much lower cost, and even for those who cannot dream of it today. Haphazard and unauthorized construction is one of the problems ailing the housing sector. The extent of encroachment on public land is simply phenomenal. Lack of housing and the resultant human plight make it difficult for the State to prevent encroachment. As a result, slums proliferate (according to a recent study, a total of 42.6 million people in India live in slums, constituting 15% of the total urban population of the country). The multi-unit apartment with its attendant efficiencies, as herein proposed, is going to be the 'home' of tomorrow. In fact, it is the only way to enable housing for the millions. The mission is neither impossible nor all that difficult. The 'Unit-hold' concept is an arrangement of freehold ownership of units in building complexes. Unit-hold comprises two elements, namely, the units (or flats), belonging to individual owners; and the land and common areas, owned by a Unit-hold entity or UHE.

1. INTRODUCTION

Food, clothing and shelter are basic human requirements. Of our one-billion-plus population, as of now, only a small percentage can afford to live in decent residential accommodation of their own. According to the Tenth Five Year Plan Report, India has a shortage of 41 million housing units!

THE TIMES OF INDIA (03/07/2010) states that in Tamil Nadu alone, the housing shortage is 2.81 million units. As per the Chennai Metropolitan Development Authority's (CMDA) second master plan, the city requires over 4.13 lakh houses by 2011, of which 3,52,000 should cater to low-income and middle-income groups.

This project seeks to examine how a greater understanding of the issue by all concerned, including the prospective home owner, can work towards enabling housing at much lower cost, and even for those who cannot dream of it today.

A look at some of the ground realities first. Haphazard and unauthorized construction is one of the problems that is ailing the housing sector. The extent of encroachment on public land is simply phenomenal. Lack of housing and the resultant human plight makes it difficult for the State to prevent encroachment. As a result, slums proliferate (According to a recent study a total of 42.6 million people in India live in slums constituting 15% of the total urban population of the country).

The multi-unit apartment with its attendant efficiencies is going to be the 'home' of tomorrow. In fact, it is the only way to enable housing for the millions. The mission is neither impossible nor all that difficult.

Ideally, housing must be viewed in the light of what the person can afford, with a clear title and a cost not exceeding four times the gross annual household income.


Financing is another issue, and hence there is a need for documents enabling a mortgage in favour of a financing bank from the initial stages through the transition, up to completion and possession, and even beyond. More innovative financial instruments are needed to bring flexibility. Public-private partnership models and other new ways to support new projects are imperative.

METHODOLOGY ADOPTED - A Holistic Approach:

As mentioned earlier, the low-cost housing problem is an interdisciplinary one. There are at least four different areas that one needs to address in order to come up with a holistic solution.

“The first is land costs that are often a significant percentage of the overall costs of a housing project. The second deals with technological innovation, both from the perspective of cheaper building materials as well as time-reducing construction practices. The third area relates to the prevailing policy environment. Here again, FSI rules, the use of Public-Private-Partnerships, stamp duties, Transferable Development Rights (TDR) and a host of other parameters play a role in increasing or decreasing the cost of housing. The fourth issue looks at the sustainability of the project and deals with the financing and operation of low-cost houses, once they are built” (Roundtable Discussion on Low-Cost Housing, IITM).

LAND

One of the main cost components of a flat or a house is land cost. Land cost depends on the availability of land. The State Planning Boards, Town & Country Planning Boards / Departments, local development authorities and municipalities need to duly adjust and tailor their planning process, namely, the preparation of a Regional Plan and other Development Plans on scientific lines, to the concept of housing and development.

To enable supply of land, the Development Plan or the Master Plan should not be of a small area. It will need to cover a much larger area so that the requirements of today; of 20 years hence; and even of 50 years hence can be visualised, and the plan prepared accordingly.

CONSTRUCTION TECHNOLOGY

After land costs, and before we proceed to the cost of construction, we must note the difference a proper architectural design can make. While for the luxury sector architectural design has its own connotation, for the non-luxury sectors it assumes importance in a different form. Expertise and sophistication in design (or good architecture) increases spatial efficiency, i.e., maximises the usefulness of space or makes small spaces feel great, which can solve many problems. If the required area of a flat is reduced by design sophistication, naturally, the quantity (area) of land required as also the cost of construction shall get correspondingly reduced.

ARCHITECTURAL DESIGN

One of the primary goals of architecture is to strike the balance between structural integrity, functional efficiency, and aesthetic beauty, while keeping in view the affordability by the potential occupier. Compact living units can offer a high utilisation level with all basic (and even some 'modern') conveniences within just a small space.

CONSTRUCTION COST

Construction cost, in practical terms, needs to be approached in two separate parts: (1) the cost of making the basic superstructure; and (2) the finishing costs for the flat. The difference between these two must be recognised: while the cost of making the basic superstructure has limited scope for reduction, the latter (finishing costs) offers wide scope, where the person's financial status plays a key role.

ECONOMY CAN BE ACHIEVED BY:

Mass production
Optimization of designs
Standardization of elements
Use of pre-stressed and pre-cast elements
Improvement in production process and planning
Incorporating green building concepts to make it energy efficient
Use of lightweight concrete for doors, panel walls, etc.

LAW AND POLICY

To clarify, here we are not addressing the rich man's home, but that of the middle class. The market of 50 million homes, costing Rs 2 lakhs to 30 lakhs, lies untapped because:

There is grossly insufficient availability of legally built flats with Clear title at reasonable cost and in sufficient quantity;

The flat Buyer (and the Bank) do not trust the Builders;

The inability (for income tax reasons) to make the down payment; bank finance is not available to most intending flat Buyers;

Economies of scale for construction costs cannot be achieved;

There is lack of ready movement (sale & purchase) of ownership with the flat continuing 'under mortgage' with a Bank; and

The typical potential flat Buyer of today believes that purchasing a flat is beyond him, and if all his savings are invested in it, there will be nothing left as old age reserve.

If these, as also a few other factors are catered to by laws and systems, the problem of housing would stand substantially solved.

To state it in point form, we need to consider:

The concept of unit-hold entity (UHE);

A statutory authority or enabler for the purpose in each district;

A new framework of property ownership, transfer and mortgage laws, particularly rules which enable a mortgage to be effected at the time of the booking itself;

The concept of certifier-cum-performance guarantor (CPG Co); ("affordable housing" by Dr. Arun Mohan).

FINANCING AND OPERATION

Housing must be viewed in the light of what the person can afford, with a clear title and a cost not exceeding four times the gross annual household income. It also has to have value for money. 'Affordability', as used here, does not mean compromise on construction quality or safety or even provision of civic amenities and economic infrastructure in the vicinity. Our aim must be to create affordable and commercially viable housing specifically for the middle income group. Smaller unit sizes in suburban or peripheral areas are acceptable if they meet the budget requirement. Further, where ownership is not possible, at least decent accommodation at affordable rent must be available.

CONSTITUTION OF HOUSING FINANCE BANK

In so far as construction costs are concerned, R & D along with bulk construction will help achieve economies of scale - over 30% less than at present. Reduced land cost component and construction cost, as also a modern architectural design which permits greater utility from a lesser floor area of a flat, will bring down the costs substantially and make it 'affordable' in the true sense of the term. However, despite that, a large segment of the population will not be able to afford it, and to fill this gap, bank finance is needed.

2. CONCLUSION

The great Indian middle class, which can afford it, is being denied the opportunity to own a house or even obtain accommodation at a reasonable rent, just because the right laws and systems are not in place. The concept of the unit-hold entity and the certifier-cum-performance guarantor (CPG Co), along with the institutional enablers addressed herein, alone can lead to the wider goal of housing for all.

To enumerate some of the advantages / benefits it could bring:

First, it would enable 30%+ cost reduction.

Second, it would bring legal ownership of homes within the reach of middle-class people and thus cater to a large segment of the populace.

Third, haphazard and unauthorized construction will be prevented.

Fourth, the only solution to the problem of slums, which is usually sympathized with rather than analyzed and solutions found, is the unit-hold ownership and bank financing herein expounded, which will enable the slum dwellers of today to become owners of proper flats tomorrow, or at least tenants at reasonable rent. Any other approach will serve vested interests and not the citizens living in the slums.

Thus, only a holistic approach taking into account all the above factors can solve the problem of low-income housing, which will hopefully ensure an ownership home not only for the well-to-do strata of society, but also for the common man.

REFERENCES

"Roundtable Discussion on Low-Cost Housing", Indian Institute of Technology, Madras (2010).

Arun Mohan (2010), "Affordable Housing: How Law and Policy Can Make It Possible".

Kavita Dutta and Gareth A. Jones (2000), "Housing and finance in developing countries: invisible issues on research and policy agendas".

Anand Sahasranaman (2010), "Approaches to Financing Low Income Housing in Urban India".

"1st CRGP-IITM Roundtable on Private-Public Partnerships", Indian Institute of Technology, Madras (2010).

Piyush Tiwari (2000), "Housing and development objectives in India".

Deepa G. Nair (2005), "Sustainable Affordable Housing for India".

48. Cost Effective Construction Technology to Mitigate the Climate Change G.Parthiban, Periyar Maniammai University, Thanjavur. [email protected]

ABSTRACT

Concentration of greenhouse gases plays a major role in raising the earth's temperature. Carbon dioxide, produced from the burning of fossil fuels, is the principal greenhouse gas, and efforts are being made at the international level to reduce its emission through the adoption of energy-efficient technologies. The UN Conference on Environment and Development made a significant development in this field by initiating the discussion on sustainable development under Agenda 21. Cost-effective construction technologies can bring down the embodied energy level associated with the production of building materials by lowering the use of energy-consuming materials. This embodied energy is a crucial factor for sustainable construction practices, and effective reduction of the same would contribute to mitigating global warming. Cost-effective construction technologies would emerge as the most acceptable case of sustainable technologies in India, both in terms of cost and environment. In this paper we explain the cost-effective construction technologies available to mitigate climate change.

I. Introduction: ‘WARMING of the climate system is unequivocal, as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice, and rising global average sea level’ – observed the Intergovernmental Panel on Climate Change in its recent publication [1]. Greenhouse gases (GHGs) released due to human activities are the main cause of global warming and climate change, which is the most serious threat that human civilization has ever faced. Carbon dioxide, produced from the burning of fossil fuels, is the principal GHG. The major part of India's emissions comes from fossil fuel-related CO2 emissions. A World Bank report [2] has identified six countries, namely, the USA, China, the European Union, the Russian Federation, India and Japan, as emitters of the largest quantity of CO2 into the atmosphere. India generates about 1.35 bt of CO2, which is nearly 5% of the total world emission. India, a signatory to the Kyoto Protocol (see Note 1), has already undertaken various measures following the objectives of the United Nations Framework Convention on Climate Change (UNFCCC). These are in almost every sector, like coal, oil, gas, power generation, transport, agriculture, industrial production and residential. While in most of the above areas stress has been laid on increasing energy efficiency and conservation, it is felt that reduction in consumption in various fields and rationalization of the use of energy-guzzling systems would also substantially contribute to our country's efforts in reducing GHGs and mitigating global warming.

Role of construction industry in climate change

The construction industry is one of the major sources of pollution. Construction-related activities account for quite a large portion of CO2 emissions. The contribution of the building industry to global warming can no longer be ignored. Modern buildings consume energy in a number of ways. Energy consumption in buildings occurs in five phases. The first phase corresponds to the manufacturing of building materials and components, which is termed embodied energy. The second and third phases correspond to the energy used to transport materials from production plants to the building site and the energy used in the actual construction of the building, which are respectively referred to as grey energy and induced energy. Fourthly, energy is consumed at the operational phase, which corresponds to the running of the building when it is occupied. Finally, energy is consumed in the demolition process of buildings as well as in the recycling of their parts, when this is promoted [3]. We have found that cost-effective and alternate construction technologies, which reduce the cost of construction by reducing the quantity of building materials through improved and innovative techniques or by the use of alternate low-energy-consuming materials, can play a great role in the reduction of CO2 emission and thus help in the protection of the environment.

CO2 emission during production of construction materials

Production of ordinary and readily available construction materials requires huge amounts of energy through the burning of coal and oil, which in turn emits a large volume of GHGs. Reduction in this emission through alternate technologies/practices will be beneficial to the problem of global warming. To deal with this situation, it is important to accurately quantify the CO2 emissions per unit of such materials. In India, the main ingredients of durable and 'pucca' building construction are steel, cement, sand and brick. Emission from crude steel production in sophisticated plants is about 2.75 t carbon/t crude steel [4]. We may take it as 3.00 t per t of processed steel; the actual figure should be more, but is not readily available. Cement production is another high energy consuming process, and it has been found that about 0.9 t of CO2 is produced for 1 t of cement [5]. Sand is a natural product obtained from river beds, which does not consume any energy, except during transport; the energy thus consumed has not been considered in this article. Brick is one of the principal construction materials, and the brick production industry is large in most Asian countries. It is also an important industry from the point of view of reduction of GHG emissions, as indicated by the very high coal consumption and the large scope that exists for increasing the energy efficiency of brick kilns. In a study by GEF in Bangladesh (where the method of brick making is the same as in India), an emission of 38 t of CO2 has been noted per lakh of brick production [6].

Use of cost-effective technologies in India – Reduction in GHG emission and cost of construction

As already mentioned, there are other improved alternate technologies available, like bamboo panels, bamboo reinforced concrete, masonry stub foundations, etc. All of them can contribute significantly, if not more, in reducing the cost of construction and CO2 emission. For academic purposes, this article restricts discussion to the rat-trap bond wall, brick arches and filler slabs only, for which data on material consumption and reduction from conventional techniques are readily available. By adopting the techniques mentioned above, a reduction of 20% can be achieved in the cost of construction without compromising on the safety, durability and aesthetic aspects of the buildings (Figure 6). In 2006, the cost of structural work for a building with ordinary masonry wall and slab in India was to the tune of Rs 3000 per sq. m. It may vary by 15–25% depending upon the location and availability of materials. A 20% saving in cost means a reduction of Rs 600 per sq. m, and for a 50 sq. m residential house the saving will be to the tune of Rs 30,000. The figures given in Table 2, when related to Table 1, show the reduction in CO2 emission for a 50 sq. m building. The above reduction of 2.4 t in CO2 may also qualify for carbon trading (see Note 5) and, according to the current rate of trading, may fetch a minimum of Rs 1800. (Experts feel that, though subject to wide fluctuations, the going rate of one Carbon Emission Reduction (CER) unit in the European market is around 12–13 Euros. One Euro is now equivalent to Rs 57.43.) In the case of houses made of compressed mud block, the reduction in CO2 emission would be to the tune of 8000 kg or 8 t per 50 sq. m of the house.

The Indian housing scenario and scope of reduction of CO2 emission

Increase in population, rise in disposable income, and aggressive marketing by financial institutions to provide housing loans on easier terms are pushing up the demand for durable permanent houses in both urban and rural areas of India. Construction of permanent market complexes, malls and other recreational amenities in big cities has also undergone phenomenal growth in recent times. In accordance with India's National Housing and Habitat Policy 1998, which focuses on housing for all as a priority area, with particular stress on the needs of the economically weaker sections and low income group categories, the Two Million Housing Programme was launched during 1998–99. This is a loan-based scheme, which envisages facilitating construction of 20 lakh (2 million) additional units every year (7 lakh or 0.7 million dwelling units in urban areas; 13 lakh or 1.3 million dwelling units in rural areas). If we consider that each house will be of a bare minimum area of 20 sq. m according to the standards of different government schemes, the total area of construction per year will be 40 million sq. m. If cost-effective construction technologies like the rat-trap bond and filler slab are adopted, India alone can contribute a reduction of 16.80 mt of CO2 per year and at the same time can save Rs 24,000 million (20% cost reduction over 40 million sq. m of construction @ Rs 3000 per sq. m), which will go to the state exchequer as the schemes are funded by the Government. The reduction in CO2 emission in monetary terms is equivalent to a CER of nearly Rs 1200 million.
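The cost and carbon-credit figures quoted above follow from straightforward arithmetic; the short sketch below simply reproduces them from the stated rates (Rs 3000 per sq. m, a 20% saving, a 2.4 t CO2 reduction per 50 sq. m house, and a CER worth about 12–13 Euros at Rs 57.43 per Euro):

# Reproducing the cost and carbon-credit arithmetic quoted in the text.
base_cost_per_sqm = 3000.0        # Rs per sq. m of structural work (2006 figure)
saving_fraction = 0.20            # 20% saving with cost-effective techniques

house_area_sqm = 50.0
house_saving = saving_fraction * base_cost_per_sqm * house_area_sqm
print(f"Saving per 50 sq. m house: Rs {house_saving:,.0f}")           # Rs 30,000

annual_area_sqm = 40e6            # 2 million houses x 20 sq. m each
national_saving = saving_fraction * base_cost_per_sqm * annual_area_sqm
print(f"National saving per year: Rs {national_saving / 1e6:,.0f} million")  # Rs 24,000 million

co2_saved_t = 2.4                 # t CO2 saved per 50 sq. m house (Table 2)
for cer_eur in (12.0, 13.0):      # quoted CER price range, Euros per tonne
    value_rs = co2_saved_t * cer_eur * 57.43
    print(f"Carbon-credit value at {cer_eur} Euro/CER: Rs {value_rs:,.0f}")

At 13 Euros per CER this gives roughly Rs 1,790 per house, in line with the minimum of Rs 1800 mentioned above.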

Figure 6. Office building with rat-trap bond wall, filler slab (source: FOSET).

Table 2. Reduction in CO2 emission for a 50 sq. m building

Building material required by conventional method | Reduction by using cost-effective construction technology (rat-trap bond wall, brick arch and filler slab) | Reduction in carbon dioxide emission (kg)
Brick – 20,000 nos | 20%, i.e. 4000 nos | 1440
Cement – 60 bags or 3.0 t | 20%, i.e. 0.6 t | 540
Steel – 500 kg or 0.5 t | 25%, i.e. 0.125 t | 375
Total reduction in carbon dioxide emission | | 2355 (say 2.4 t)
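The per-unit emission factors quoted earlier (roughly 3.0 t of CO2 per tonne of processed steel, 0.9 t per tonne of cement, and 38 t per lakh of bricks) approximately reproduce the total in Table 2; the brick factor implied by the table (1440 kg for 4000 bricks) is slightly lower than the 0.38 kg per brick quoted. A small checking sketch:

# Rough check of Table 2 using the emission factors quoted in the text.
emission_factors = {                 # kg CO2 per unit of material saved
    "brick": 38_000 / 100_000,       # 38 t per lakh (100,000) bricks -> 0.38 kg/brick
    "cement": 900.0,                 # 0.9 t CO2 per t of cement
    "steel": 3000.0,                 # about 3.0 t CO2 per t of processed steel
}
savings = {                          # material saved per 50 sq. m house (Table 2)
    "brick": 4000,                   # nos
    "cement": 0.6,                   # t
    "steel": 0.125,                  # t
}
total_kg = sum(emission_factors[m] * savings[m] for m in savings)
print(f"Estimated CO2 reduction: {total_kg:.0f} kg (~{total_kg / 1000:.1f} t)")

The estimate (about 2.4 t) agrees with the 2355 kg total in Table 2.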

Conclusion

Now it is the task of scientists, engineers and policy makers of our country to popularize the technology, so that India can significantly contribute to the reduction in CO2 emission from its huge and rapidly growing construction sector. Most government bodies and municipalities in India are reluctant to accept this technology and give permission to people to build their houses with cost-effective technology (CET). The following steps may be taken to ensure proper and extensive use of CET in the light of sustainable development and protection of the environment:

• Sensitization of people: Extensive awareness campaigns and demonstrations among the general public and also among engineers and architects to make them familiar with these technologies. The market force of cost reduction will definitely play a major role in the acceptance of CET if Governments/Municipal Bodies acknowledge these technologies and direct their concerned departments to adopt them. Promotion of cost-effective technologies through institutes like the HUDCO-sponsored building centers may also be thought of.

• Manpower development: Shortage of skilled manpower can play a crucial role in implementing any sort of new technology in the construction sector. To promote cost-effective technologies, skill upgradation programmes have to be organized for masons. These technologies should also be a part of the syllabus for students of civil engineering and architecture at undergraduate and diploma level.

• Material development: The Central and State Governments should encourage the setting up of centers at regional, rural and district levels for production of cost-effective building materials at the local level. The building centers set up by HUDCO for this purpose should be further strengthened. Appropriate field-level research and land-to-lab methodology should be adopted by leading R&D institutes and universities to derive substitutes for common energy-intensive materials and technologies. Reuse of harmless industrial waste should also be given priority.

• Technical guidance: Proper guidance to the general public through design, estimation and supervision has to be provided by setting up housing guidance centers, in line with the concept mooted by the HUDCO building centers.

We have solutions in hand to reduce global warming. We should act now through the use of clean and innovative eco-friendly technologies, and evolve policies to encourage their adoption by the statutory bodies to stop global warming. Along with other key sectors, this relatively ignored construction technology sector can also play a major role in reducing CO2 emission and mitigating global warming. With the sincere efforts of all stakeholders, the goal can be achieved.

Notes

1. The Kyoto Protocol to the United Nations Framework Convention on Climate Change is an amendment to the international treaty on climate change, assigning mandatory emission limitations for the reduction of GHG emissions to the signatory nations. The objective is the ‘stabilization of GHG concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system’.

2. Laurie Baker (1917–2007): An award-winning English architect, renowned for his initiatives in low-cost housing. He came to India in 1945 as a missionary and since then lived and worked in India for 52 years. He obtained Indian citizenship in 1989 and resided in Thiruvananthapuram. In 1990, the Government of India awarded him the Padma Shri, in recognition of his meritorious service in the field of architecture.

3. Stretcher: Brick (or other masonry block) laid horizontally in the wall with the long, narrow side of the brick exposed. Header: The smallest end of the brick is horizontal, aligned with the surface of the wall and exposed to weather.

4. Corbelling: A layer (or course) of bricks or any other type of masonry units that protrudes out from the layer (or course) below.

5. Carbon trading: (i) Under the Kyoto Protocol, developed countries agreed that if their industries cannot reduce carbon emissions in their own countries, they will pay others like India (a signatory to the Protocol) to do it for them and help them meet their promised reduction quotas in the interest of worldwide reduction of GHGs. (ii) The ‘currency‘ for this trade is called Carbon Emission Reduction (CER). One unit of CER is one tonne equivalent of carbon dioxide emission. (iii) UNFCCC registers the project, allowing the company to offer CERs produced by the project to a prospective buyer.

References

1. Climate Change 2007: The Physical Science Basis – Summary for Policymakers, IPCC, 2007, p. 5.
2. http://web.worldbank.org
3. Buildings and Climate Change – Status, Challenges and Opportunities, United Nations Environment Programme, Chapter 2.1, p. 7.
4. Corporate Sustainability Report 2003–04, TATA Steel; www.tatasteel.com/sustainability_05/environmental/ev_09.htm
5. http://www.bee-india.nic.in/sidelinks/EC%20Award/eca05/03Cement
6. UNDP Project Document, Government of Bangladesh, Improving kiln efficiency in the brick making industry, 2006, p. 50.
7. www.chaarana.org/ecofriendlyhouses.html


49. EVALUATION OF STRESS CRACKING RESISTANCE OF VARIOUS HDPE DRAINAGE GEONETS

C. Karthik, S. Sheik Imam, Students of II B.E. (Civil), PSNA College of Engineering

[email protected]

ABSTRACT:

Specimens from each geonet were placed under various compressive loads in a vessel containing a solution of 10% surface-active agent and 90% water at a temperature of 50˚C. The surface morphology of the specimens was then studied after a test duration of 500 hours. The results show that none of these geonets exhibited any kind of stress cracking under 400 kPa, which is a typical landfill loading condition. However, in the case of the bi-planar geonet there were some deposits on the surface of the geonet's strands, and it is expected that this phenomenon is the result of chemical clogging. On the other hand, the tri-planar and circular-type bi-planar geonets maintained very clean flow channels until the end of the test. At high normal pressure, some environmental stress cracks were detected in the circular-type bi-planar geonet. The results show that the resistance to environmental stress cracking is related to the polymer's density, crystallinity and rigidity, rather than to its mechanical properties.

KEYWORDS: geonet, compressive loads, surface morphology, stress cracking, chemical clogging, flow channels

INTRODUCTION:

Land filling, by all indications, will continue to be the predominant method of solid waste disposal. As the use of high density polyethylene (HDPE) geonets increases in landfill applications, it is necessary to evaluate their long-term properties under several chemical conditions. Typically, the high crystallinity of polyethylene geonets provides excellent chemical resistance to harsh chemical leachate, but it can be problematic with regard to environmental stress cracking. Under low stresses at room temperature, polyethylenes will fracture by slow crack growth. This mode of failure limits the lifetime of polyethylenes used in critical applications such as drainage materials and linings under landfills. Geomembranes and geonets are used as the barrier and drainage components in this system, respectively. With the addition of carbon black, which is an anti-oxidation material, HDPE geomembranes and geonets are normally used in hazardous landfill systems as barrier and drainage, respectively. A lot of work on the environmental stress cracking resistance of geomembranes has been done and many useful reports have already been published. However, few research results are available on the environmental stress cracking resistance of geonet drainage materials. Therefore, in this study the resistance to environmental stress cracking (ESCR) was examined, mainly from a morphological standpoint, for various geonets (bi-planar, tri-planar and circular-type bi-planar geonets) under various normal pressures.

SPECIMEN & TEST METHODS:

A total of three types of geonets were tested in this study. Sample A has a mean thickness of 5.6 mm and two layers, i.e. it is a bi-planar geonet. The cross-sectional shape of the strands of Sample A is close to a square. Sample B has an average thickness of 8.6 mm and has 3 layers (tri-planar). Sample C is also a bi-planar geonet, but has a circular cross-sectional strand shape and is thicker than Sample A. The raw material of all these samples is high density polyethylene (HDPE). Typical specifications of the samples are provided in Table 1, and Fig. 1 shows these samples. A short-term compressive deformation test was performed using the procedures set forth in the Standard Test Method for Determining Short-term Compression Behavior of Geosynthetics (ASTM D6364) to evaluate the basic mechanical properties of the samples. The specimen is positioned between two rigid steel platens and compressed at a constant rate of 1.0 mm/min. To maintain an accurate specimen temperature of 23˚C, heating platens were manufactured; their heating rate is 14˚C/min. Special test equipment for ESCR under compression was also manufactured; this equipment is shown in Fig. 2.

The specimens were immersed in a solution of 90% water and 10% Igepal CO-630 at a temperature of 50˚C. The solution level was checked daily and de-ionized water was used to keep the bath at a constant level; the solution was replaced every 2 weeks. Compressive loads of 200, 400 and 700 kPa for Sample A, 600, 1,000 and 1,200 kPa for Sample B, and 400, 600 and 800 kPa for Sample C were applied using a 6:1 arm lever loading system, taking their compressive strengths into consideration. The immersion duration was 500 hours, and during and after the test, visual observation and microscopic morphology were evaluated for the specimens.
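For the lever arrangement, the dead weight to be hung follows from the target pressure, the loaded specimen area and the 6:1 arm ratio. The plan area of the loaded specimens is not stated in the paper, so the sketch below assumes a 200 mm x 200 mm loading plate purely for illustration:

# Dead weight required on a 6:1 lever arm to apply a target normal pressure.
# The loaded specimen area is an ASSUMPTION (200 mm x 200 mm), not a value
# given in the paper.

def hanger_mass_kg(pressure_kpa, area_m2, lever_ratio=6.0, g=9.81):
    force_n = pressure_kpa * 1000.0 * area_m2   # normal force on the platen, N
    return force_n / lever_ratio / g            # mass to hang on the lever arm, kg

area_m2 = 0.200 * 0.200                         # assumed plan area
for p_kpa in (200, 400, 700):                   # Sample A load steps
    print(f"{p_kpa} kPa -> hang about {hanger_mass_kg(p_kpa, area_m2):.0f} kg")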

Table 1 Typical specification of the samples

Property | Test method | Unit | Sample A | Sample B | Sample C
Thickness | ASTM D5199 | mm | 5.6 | 8.6 | 8.2
Mass per unit area | ASTM D5261 | g/m2 | 920 | 1700 | 2300
Carbon black | ASTM D4218 | % | 2.3 | 2.2 | 2.3
Density | ASTM D1505 | g/cm3 | 0.942 | 0.944 | 0.940
Crystallinity | ASTM D2910 | % | 56 | 55 | 61

Fig. 1 Geonet samples: (a) Sample A, (b) Sample B, (c) Sample C

Fig. 2 Compressive environmental stress cracking test equipment

RESULTS & DISCUSSION:

Considering the compressive strength and strain properties, Sample C shows the stiffest behavior of the three samples. Its initial 5% elastic modulus is much higher than that of the other samples. From this behavior it is expected that Sample C has a rigid structure and a high crystallinity of over 60%; Table 1 confirms this. On the other hand, Samples A and B have more flexible behavior and a low initial elastic modulus.

Fig. 3 Short-term compression test results

Figs. 4-9 exhibit the results of the visual observations and microscopic morphologies. Some chemical clogging due to the Igepal solution is expected for Sample A because of its flow channel geometry and thickness.

This chemical clogging of Sample A was confirmed by visual observation. Fig. 4 shows the results of the visual observations for Sample A. In this figure, many deposits on the surface of the specimens were detected during and at the end of the test, and it seems that these deposits, which were induced by the chemical solution, may cause clogging and therefore affect the geonet's in-plane flow capacity. There was no chemical clogging on the surface of the specimen for Sample B, and this fact was confirmed by visual observation (Fig. 5). Considering the flow pattern of the Igepal solution throughout the specimen, the solution follows a zigzag flow path in Sample A and this causes some friction with the strands; therefore the chance of clogging is higher than in Sample B, which has a straight flow pattern. The smaller thickness compared to the other samples can also increase the chance of clogging. For Sample C, the initial creep deformation was very low, which means the initial modulus is higher than that of the other samples; a high modulus indicates a more rigid material. High rigidity leads to a brittle rather than ductile failure pattern, and this can induce stress cracks during the compressive creep test. It also seems that the chemical acts as a stress cracking accelerator.

Fig. 4 Visual observation during and at the end of the test for Sample A (200 kPa)


Fig. 5 Visual observation during and at the end of the test for Sample B (700 kPa)

Figs. 6-9 confirm this environmental stress cracking phenomenon. From these figures it is clear that Sample A and Sample B, which have relatively more flexible HDPE strands than Sample C, did not experience any kind of environmental stress cracking. Sample C, which is more rigid and has higher crystallinity (Table 1), is more likely to undergo stress cracking. The microscopic morphologies indicate that the extent of environmental stress cracking observed in Sample C is related to its flexibility and crystallinity. From the morphologies it also seems that the stress cracks occurred at the junction points of the strands first and then propagated into the strands with increasing normal pressure.

Fig. 6 Visual observations at the end of the test for the samples under various normal pressures

Fig. 7 Microscopic morphologies of Sample A after the test for various normal pressures

Fig. 8 Microscopic morphologies of Sample B after the test for various normal pressures

Fig. 9 Microscopic morphologies of Sample C after the test for various normal pressures

CONCLUSIONS:

In this study, the long-term (500 hours) environmental stress cracking resistance of various geonets under various normal pressures was evaluated. The conclusions are as follows:

1. The ESCR property is one of the most critical parameters for evaluating the long-term chemical resistance of HDPE geonets used in hazardous landfill systems.

2. Traditional bi-planar geonets, which have square-type strands, and tri-planar geonets have very strong chemical and stress cracking resistance, even at high normal pressure.

3. The circular-type bi-planar geonet is a more rigid material than the other samples and is very susceptible to environmental stress cracking with increasing normal pressure.


50. ULTRA HIGH STRENGTH CONCRETE

(USING REACTIVE POWDER)

M Sokkalingam, P Gowtham

Students of Final B.E(Civil), JJ College of Engineering and Technology, Trichy

ABSTRACT:

Concrete is an essential material for construction; because of its composition and performance it is used everywhere. Nowadays, while designing a structure, more importance is being given to the strength and durability of the concrete. Each structure is built in accordance with its needs, so special properties have to be satisfied for its good performance. We know that concrete is a compressive material rather than a tensile material. To increase its compressive strength, we add chemical admixtures to the concrete. Such concrete has higher strength compared to ordinary concrete. The strength and durability of this concrete depend mainly on its mix proportions. Reactive powder concrete (RPC) is a developing composite material that will allow the concrete industry to optimize material use, generate economic benefits, and build structures that are strong, durable, and sensitive to the environment. A comparison of the physical, mechanical, and durability properties of RPC and high performance concrete (HPC) shows that RPC possesses better strength (both compressive and flexural) compared to HPC. This paper reviews the available literature on RPC, and presents the results of a laboratory investigation covering fresh concrete properties, compressive strength and flexural strength, comparing RPC with HPC. Specific benefits and potential applications of RPC have also been described.

INTRODUCTION:

HPC is not a simple mixture of cement, water, and aggregates. Quite often, it contains mineral components and chemical admixtures having very specific characteristics, which impart specific properties to the concrete. The development of HPC results from the materialization of a new science of concrete, a new science of admixtures and the use of advanced scientific equipment to monitor concrete microstructure.

RPC was first developed in France in the early 1990s, and the world's first RPC structure, the Sherbrooke Bridge in Canada, was constructed in July 1997. RPC is an ultra-high-strength and high-ductility cementitious composite with advanced mechanical and physical properties. It consists of a special concrete in which the microstructure is optimized by precise gradation of all particles in the mix to yield maximum density. RPC is an emerging technology that lends a new dimension to the term “High Performance Concrete”. It has immense potential in construction due to its superior mechanical and durability properties compared to conventional high performance concrete, and could even replace steel in some applications.

COMPOSITION OF RPC:

Reactive powder concrete has been developed to have a strength of 200 to 800 MPa with the required ductility. It is a new technique in civil engineering. Reactive powder concrete is made by replacing the conventional sand and aggregate with ground quartz of less than 300 micron size, silica fume, synthesized precipitated silica, and steel fibers about 1 cm in length and 180 micron in diameter.

RPC is composed of very fine powders (cement, sand, quartz powder, and silica fume), steel fibers (optional) and a superplasticizer. The superplasticizer, used at its optimal dosage, decreases the water-binder ratio (w/b) while improving the workability of the concrete. A very dense matrix is achieved by optimizing the granular packing of the dry fine powders. This compactness gives RPC ultra-high strength and durability. Reactive powder concretes have compressive strengths ranging from 200 MPa to 800 MPa.


Typical composition of reactive powder concrete, 800 MPa

1. Portland cement - type V: 1000 kg/m3
2. Fine sand (150 - 400 micron): 500 kg/m3
3. Silica fume (18 m2/g): 390 kg/m3
4. Precipitated silica (35 m2/g): 230 kg/m3
5. Superplasticizer (polyacrylate): 18 kg/m3
6. Steel fibers (length 3 mm and dia. 180 µm): 630 kg/m3
7. Total water: 180 kg/m3
8. Compressive strength (cylinder): 490 - 680 MPa
9. Flexural strength: 45 - 102 MPa

Typical composition of reactive powder concrete, 200 MPa

1. Portland cement - type V: 955 kg/m3
2. Fine sand (150 - 400 micron): 1051 kg/m3
3. Silica fume (18 m2/g): 229 kg/m3
4. Precipitated silica (35 m2/g): 10 kg/m3
5. Superplasticizer (polyacrylate): 13 kg/m3
6. Steel fibers: 191 kg/m3
7. Total water: 153 kg/m3
8. Compressive strength (cylinder): 170 - 230 MPa
9. Flexural strength: 25 - 30 MPa
10. Young's modulus: 54 - 60 GPa
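From the two compositions above, the water-binder ratio can be checked directly if cement, silica fume and precipitated silica are counted as binder (a common convention, assumed here):

# Water-binder ratio implied by the two compositions listed above
# (binder taken as cement + silica fume + precipitated silica).
mixes = {
    "RPC 800": {"cement": 1000.0, "silica_fume": 390.0, "precip_silica": 230.0, "water": 180.0},
    "RPC 200": {"cement": 955.0,  "silica_fume": 229.0, "precip_silica": 10.0,  "water": 153.0},
}
for name, m in mixes.items():
    binder = m["cement"] + m["silica_fume"] + m["precip_silica"]
    print(f"{name}: w/b = {m['water'] / binder:.3f}")

This gives w/b of roughly 0.11 for the 800 MPa mix and 0.13 for the 200 MPa mix, illustrating the very low water contents that the superplasticizer makes workable.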


PRINCIPLES FOR DEVELOPING RPC

Some of the general principles for developing RPC are given below:

1. Elimination of coarse aggregates for enhancement of homogeneity.
2. Utilization of the pozzolanic properties of silica fume.
3. Optimization of the granular mixture for the enhancement of compacted density.
4. Optimal usage of superplasticizer to reduce w/b and improve workability.
5. Application of pressure (before and during setting) to improve compaction.
6. Post-set heat treatment for the enhancement of the microstructure.
7. Addition of small-sized steel fibres to improve ductility.

PROPERTIES OF RPC:

The mixture design of RPC primarily involves the creation of a dense granular skeleton. Optimization of the granular mixture can be achieved by the use of packing models.

Property of RPC | Description | Recommended value | Type of failure eliminated
Reduction in aggregate size | Coarse aggregates are replaced by fine sand, with a reduction in the size of the coarse aggregate by a factor of about 50 | Maximum size of fine sand is 600 µm | Mechanical, chemical & thermo-mechanical
Enhanced mechanical properties | Improved mechanical properties of the paste by the addition of silica fume | Young's modulus values in the 50-75 GPa range | Distribution of the mechanical stress field
Reduction in aggregate to matrix ratio | Limitation of sand content | Volume of the paste is at least 20% voids index of non-compacted sand | By any external source (for example formwork)



MECHANICAL PERFORMANCE AND DURABILITY OF RPC:

The RPC family includes two types of concrete, designated RPC 200 and RPC 800, which offer interesting application possibilities in different areas. Mechanical properties for the two types of RPC are given in the table below. The high flexural strength of RPC is due to the addition of steel fibres.

Comparison of RPC 200 MPa and RPC 800 MPa:

Property | RPC 200 MPa | RPC 800 MPa
Pre-setting pressurization, MPa | None | 50
Compressive strength (using quartz sand), MPa | 170 to 230 | 490 to 680
Compressive strength (using steel aggregate), MPa | - | 650 to 810
Flexural strength, MPa | 30 to 60 | 45 to 141

Comparison of HPC (80 MPa) and RPC 200 MPa:

Property | HPC (80 MPa) | RPC 200 MPa
Compressive strength, MPa | 80 | 200
Flexural strength, MPa | 7 | 40
Modulus of elasticity, GPa | 40 | 60
Fracture toughness, J/m2 | <10³ | 30×10³

The table shows typical mechanical properties of RPC compared to a conventional HPC having a compressive strength of 80 MPa. As fracture toughness, which is a measure of the energy absorbed per unit volume of material up to fracture, is higher for RPC, it exhibits high ductility. Apart from their exceptional mechanical properties, RPCs have an ultra-dense microstructure, giving advantageous waterproofing and durability characteristics. These materials can therefore be used for industrial and nuclear waste storage facilities.

RPC has ultra-high durability characteristics resulting from its extremely low porosity, low permeability, limited shrinkage and increased corrosion resistance. In comparison to HPC, the durability characteristics of RPC given in the table enable its use in chemically aggressive environments and where physical wear greatly limits the life of other concretes.

Laboratory investigations:

The materials used for the laboratory study, together with their specifications and properties, are presented in the table.

Materials used in the study and their properties:

Sl. no | Sample | Specific gravity | Particle size range
1 | Cement, OPC, 53-grade | 3.15 | 31 µm - 7.5 µm
2 | Micro silica | 2.2 | 5.3 µm - 1.8 µm
3 | Quartz powder | 2.7 | 5.3 µm - 1.3 µm
4 | Standard sand, grade-1 | 2.65 | 0.6 mm - 0.3 mm
5 | Steel fibres (30 mm) | 7.1 | Length: 30 mm and diameter: 0.4 mm
6 | River sand | 2.61 | 2.36 mm - 0.15 mm


Figure: Compressive strength (MPa) of HPC-F and RPC-F under normal curing.

Mixture design of RPC and HPC:

The process of mixture selection for RPC and HPC is given below. A considerable number of trial mixtures were prepared to obtain good RPC and HPC mixture proportions.

MIXTURE PROPORTIONS OF RPC AND HPC:

Material | RPC - F | HPC - F
Cement | 1.00 | 1.00
Silica fume | 0.25 | 0.12
Quartz powder | 0.31 | -
Standard sand grade 1 | 1.09 | -
River sand | 0.20 | -
30 mm steel fibers | 0.03 | 0.023
Admixture (polyacrylate based) | |
Water | 0.4 | 0.4
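Because the proportions in the table are mass ratios relative to cement, converting them into batch weights only needs the cement content per cubic metre. The cement content used in the study is not stated, so the figure below (800 kg/m3) is a hypothetical assumption chosen only to show the conversion:

# Converting the RPC-F mass ratios (relative to cement) into batch weights for
# an ASSUMED cement content of 800 kg/m3; the actual cement content used in
# the study is not given in the table.
rpc_f_ratios = {
    "cement": 1.00, "silica fume": 0.25, "quartz powder": 0.31,
    "standard sand grade 1": 1.09, "river sand": 0.20,
    "30 mm steel fibers": 0.03, "water": 0.40,
}
cement_kg_per_m3 = 800.0        # hypothetical batch assumption
for material, ratio in rpc_f_ratios.items():
    print(f"{material:22s}: {ratio * cement_kg_per_m3:6.1f} kg/m3")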

Workability and density were recorded for the fresh concrete mixtures. Some RPC specimens were heat cured in a water bath at 90°C until the time of testing. Specimens of RPC and HPC were also cured in water at room temperature. The performance of RPC and HPC was monitored over time with respect to the following parameters:

• Fresh concrete properties
• Compressive strength
• Flexural strength
• Water absorption

Fresh concrete properties:

The workability of RPC mixtures (with and without fibres), measured using the mortar flow table test as per ASTM C109, was in the range of 120-140%. On the other hand, the workability of HPC mixtures (with fibres), measured using the slump test as per ASTM C231, was in the range of 120-150 mm. The density of the fresh RPC and HPC mixtures was found to be in the range of 2500-2650 kg/m3.

Compressive strength:

The compressive strength analysis throughout the study shows that RPC has higher compressive strength than HPC, as shown in the figure. Compressive strength is one of the factors linked with the durability of a material.


Figure: Water absorption (per cent) of RPC and HPC versus age (7, 14, 21 and 28 days).

The maximum compressive strength of RPC obtained in this study is 200 MPa, while the maximum strength obtained for HPC is 75 MPa. The incorporation of fibres and the use of heat curing were seen to enhance the compressive strength of RPC by 30 to 50%. The incorporation of fibres did not affect the compressive strength of HPC significantly.

Flexural strength:

Plain RPC was found to possess marginally higher flexural strength than HPC. The table clearly shows the variation in the flexural strength of RPC and HPC with the addition of steel fibers. The increase in flexural strength of RPC with the addition of fibers is higher than that of HPC.

As per the literature, RPC 200 should have an approximate flexural strength of 40 MPa. The reason for the low flexural strength obtained in this study could be that the fibers used (30 mm) were long and their diameter was relatively large. Fibre reinforced RPC (with appropriate fibres) has the potential to be used in structures without any additional steel reinforcement. This cost reduction in reinforcement can, to some extent, compensate for the increase in cost due to the elimination of coarse aggregates in RPC.

FLEXURAL STRENGTH AT 28 DAYS, MPa

Mix | RPC | RPC - F | HPC | HPC - F
Curing | NC | NC | NC | NC
Flexural strength | 11 | 18 | 8 | 10

NC = Normal curing.

Water absorption:

A common trend of decrease in water absorption with age is seen here for both RPC and HPC. The percentage of water absorption of RPC, however, is very low compared to that of HPC. This quality of RPC is one of the desired properties of nuclear waste containment materials.


CASTING OF TEST SPECIMEN:

Test specimens are prepared to determine the strength of concrete. Cube specimens are used to test the compressive strength of the HPC and RPC.

CASTING OF CUBE:

Specimen size = 10 cm x 10 cm x 10 cm

Volume of one cube = 0.1 x 0.1 x 0.1 = 0.001 m3

PREPARATION OF MIX:

The calculated quantities of cement, aggregates, admixtures and water for the above mix are weighed properly.

Cement, fine aggregates and mineral admixtures are mixed thoroughly in dry condition. Then the coarse aggregate is added and mixed well. Finally, water and superplasticizer are added and mixed until the concrete appears to be homogeneous and has the designed consistency.

PREPARATION OF TEST SPECIMEN:

Each mould is to be cleaned and assembled properly. Then the inside and the joints of the mould are thinly coated with mould oil.

The concrete is to be filled into the mould in layers. For cubical specimens, the concrete is to be compacted using a table vibrator for 30 seconds per layer.

The specimens are stored at a room temperature of 27° ± 2°C for 24 hours ± ½ hour from the time of addition of water to the dry ingredients. After this period, the specimens are marked, removed from the moulds and immediately submerged in clean water.

The specimens are taken out of the water after 28 days. The specimens shall not be allowed to become dry at any time until they are tested.

TESTING OF SPECIMEN:

A 1000 kN compression testing machine is used to test the specimens for compressive strength.

Procedure:

The specimen stored in water is to be tested immediately on removal from the water, while it is still in the wet condition.

Surface water and grit are to be wiped off the specimen and any projecting fins removed. The dimensions and weights of the specimens are to be taken.

Placing the specimen in the testing machine:


The bearing surface of the testing machine and the surfaces of the specimen are to be cleaned. The specimen is to be placed in the machine in such a manner that the load is applied to opposite sides of the cube as cast, that is, not to the top and bottom. The axis of the specimen is to be carefully aligned with the centre of thrust of the spherically seated platen; no packing is to be used between the faces of the specimen and the steel platens of the testing machine. The movable portion is to be rotated gently by hand so that uniform seating may be obtained. The load is to be applied until the resistance of the specimen to the increasing load breaks down and no greater load can be sustained. The maximum load applied to the specimen is to be recorded, and the appearance of the concrete and any unusual features in the type of failure are noted.
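The compressive strength then follows from the recorded maximum load divided by the loaded face area of the 100 mm cube; the failure load in the sketch below is an illustrative value, not a result from this study:

# Cube compressive strength from the recorded maximum load.
# The 100 mm x 100 mm loaded face follows from the specimen size given above;
# the 900 kN failure load is only an illustrative value.

def cube_strength_mpa(max_load_kn, side_mm=100.0):
    area_mm2 = side_mm * side_mm               # loaded face area
    return max_load_kn * 1000.0 / area_mm2     # N/mm^2 = MPa

print(f"Strength = {cube_strength_mpa(900.0):.1f} MPa")   # 900 kN -> 90 MPa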

LIMITATIONS OF RPC:

In a typical RPC mixture design, the least costly components of conventional concrete are essentially eliminated and replaced by more expensive ones. In terms of size scale, the fine sand used in RPC becomes equivalent to the coarse aggregate of conventional concrete, the Portland cement plays the role of the fine aggregate, and the silica fume that of the cement. The mineral component optimization alone results in a substantial increase in cost over and above that of conventional concrete (5 to 10 times higher than HPC). RPC should therefore be used in areas where substantial weight savings can be realized and where some of the remarkable characteristics of the material can be fully utilized. Owing to its high durability, RPC can even replace steel in compression members where durability issues are critical (for example, in marine conditions). Since RPC is in its infancy, its long term properties are not yet known.

CONCLUSION:

A laboratory investigation comparing RPC and HPC led to the following conclusions.

• A maximum compressive strength of 198 MPa was obtained. This is in the RPC 200 range (175 MPa – 225 MPa).

• The maximum flexural strength of RPC obtained was 22 MPa, lower than the values quoted in the literature (40 MPa). A possible reason for this could be the higher length and diameter of the fibres used in this study.

• A comparison of the measurements of the physical, mechanical and durability properties of RPC and HPC shows that RPC has better strength (both compressive and flexural) and lower permeability compared to HPC.

51. Automation in Waste Water Treatment

D. SUBASRI, M.E., AP, PMU

A. SANGEETHA M.E., AP, PMU

[email protected]

PERIYAR MANIAMMAI UNIVERSITY, VALLAM, THANJAVUR


Abstract— Due to the tremendous increase in industrial processes, waste water generation is also increasing, which creates a major problem in its treatment and its disposal. Proper supervision of the treatment units is necessary; this supervision, traditionally done manually, is updated by an industrial computer system named SCADA, which monitors and controls the treatment units and leads to a safer environment.

XXIV. INTRODUCTION

Waste water from industries causes serious ill effects on the environment when it is not safely treated and disposed of. Owing to the lack of skilled labour and treatment units, the waste water from industries has to be monitored and controlled by a centralised computer system, which collects data and, according to the prevailing conditions, controls the treatment units and either safely disposes of the waste water or directs it for recycling after acquiring its preliminary characteristics.

The term SCADA usually refers to centralized systems which monitor and control entire sites, or complexes of systems spread out over large areas (anything between an industrial plant and a country). Most control actions are performed automatically by Remote Terminal Units ("RTUs") or by programmable logic controllers ("PLCs"). Host control functions are usually restricted to basic overriding or supervisory level intervention. For example, a PLC may control the flow of cooling water through part of an industrial process, but the SCADA system may allow operators to change the set points for the flow, and enable alarm conditions, such as loss of flow and high temperature, to be displayed and recorded. The feedback control loop passes through the RTU or PLC, while the SCADA system monitors the overall performance of the loop.

Data acquisition begins at the RTU or PLC level and includes meter readings and equipment status reports that are communicated to SCADA as required. Data is then compiled and formatted in such a way that a control room operator using the HMI can make supervisory decisions to adjust or override normal RTU (PLC) controls. Data may also be fed to a Historian, often built on a commodity Database Management System, to allow trending and other analytical auditing.

SCADA systems typically implement a distributed database, commonly referred to as a tag database, which contains data elements called tags or points. A point represents a single input or output value monitored or controlled by the system. Points can be either "hard" or "soft". A hard point represents an actual input or output within the system, while a soft point results from logic and math operations applied to other points. (Most implementations conceptually remove the distinction by making every property a "soft" point expression, which may, in the simplest case, equal a single hard point.) Points are normally stored as value-timestamp pairs: a value, and the time stamp when it was recorded or calculated. A series of value-timestamp pairs gives the history of that point. It's also common to store additional metadata with tags, such as the path to a field device or PLC register, design time comments, and alarm information.
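As a rough, hedged illustration of the tag/point model described above (the class and field names below are hypothetical and not taken from any particular SCADA product), a point with its history of value-timestamp pairs and a derived soft point could be sketched as follows:

import time

class Point:
    """A SCADA tag: a history of (value, timestamp) pairs plus metadata."""
    def __init__(self, name, metadata=None):
        self.name = name
        self.metadata = metadata or {}   # e.g. PLC register path, alarm limits
        self.history = []                # list of (value, timestamp) pairs

    def update(self, value, timestamp=None):
        # Each new sample extends the point's history.
        self.history.append((value, timestamp or time.time()))

    def current(self):
        return self.history[-1][0] if self.history else None

# A "hard" point mirrors a field input; a "soft" point is derived from other points.
tank_level = Point("tank1_level", {"plc_register": "AI:3", "units": "%"})
pump_current = Point("pump1_current", {"plc_register": "AI:7", "units": "A"})

def pump_power_soft_point(voltage=415.0):
    # Soft point: computed from other points rather than read from the field.
    return voltage * (pump_current.current() or 0.0)

tank_level.update(62.5)
pump_current.update(4.2)
print(tank_level.current(), pump_power_soft_point())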

A Human-Machine Interface or HMI is the apparatus which presents process data to a human operator, and through which the human operator controls the process.

An HMI is usually linked to the SCADA system's databases and software programs, to provide trending, diagnostic data, and management information such as scheduled maintenance procedures, logistic information, detailed schematics for a particular sensor or machine, and expert-system troubleshooting guides.

The HMI system usually presents the information to the operating personnel graphically, in the form of a mimic diagram. This means that the operator can see a schematic representation of the plant being controlled. For example, a picture of a pump connected to a pipe can show the operator that the pump is running and how much fluid it is pumping through the pipe at the moment. The operator can then switch the pump off. The HMI software will show the flow rate of the fluid in the pipe decrease in real time. Mimic diagrams may consist of line graphics and schematic symbols to represent process elements, or may consist of digital photographs of the process equipment overlain with animated symbols.

The HMI package for the SCADA system typically includes a drawing program that the operators or system maintenance personnel use to change the way these points are represented in the interface. These representations can be as simple as an on-screen traffic light, which represents the state of an actual traffic light in the field, or as complex as a multi-projector display representing the position of all of the elevators in a skyscraper or all of the trains on a railway.

An important part of most SCADA implementations is alarm handling. The system monitors whether certain alarm conditions are satisfied, to determine when an alarm event has occurred. Once an alarm event has been detected, one or more actions are taken (such as the activation of one or more alarm indicators, and perhaps the generation of email or text messages so that management or remote SCADA operators are informed). In many cases, a SCADA operator may have to acknowledge the alarm event; this may deactivate some alarm indicators, whereas other indicators remain active until the alarm conditions are cleared. Alarm conditions can be explicit - for example, an alarm point is a digital status point that has either the value NORMAL or ALARM that is calculated by a formula based on the values in other analogue and digital points - or implicit: the SCADA system might automatically monitor whether the value in an analogue point lies outside high and low limit values associated with that point. Examples of alarm indicators include a siren, a pop-up box on a screen, or a coloured or flashing area on a screen (that might act in a similar way to the "fuel tank empty" light in a car); in each case, the role of the alarm indicator is to draw the operator's attention to the part of the system 'in alarm' so that appropriate action can be taken. In designing SCADA systems, care is needed in coping with a cascade of alarm events occurring in a short time, otherwise the underlying cause (which might not be the earliest event detected) may get lost in the noise. Unfortunately, when used as a noun, the word 'alarm' is used rather loosely in the industry; thus, depending on context it might mean an alarm point, an alarm indicator, or an alarm event.
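To make the implicit limit check concrete, the following is a minimal sketch only (the point name and limit values are illustrative assumptions) of how an analogue point could be evaluated against its high and low limits and an alarm event acted upon:

NORMAL, ALARM = "NORMAL", "ALARM"

def evaluate_alarm(value, low_limit, high_limit):
    # Implicit alarm: the analogue value is compared with its associated limits.
    return ALARM if (value < low_limit or value > high_limit) else NORMAL

def on_alarm_event(point_name, value, state, acknowledged=False):
    # One or more indicator actions; acknowledgement may silence some indicators
    # while others stay active until the alarm condition clears.
    if state == ALARM:
        print(f"ALARM on {point_name}: value={value}")
        if not acknowledged:
            print("  -> siren and flashing screen area active")

state = evaluate_alarm(value=78.0, low_limit=10.0, high_limit=75.0)
on_alarm_event("tank1_level", 78.0, state)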

The technology is based on a type of recirculatory evapotranspiration channel. Primary treatment is obtained through the use of anaerobic sludge chambers and in-line filters. Secondary treatment techniques use a combination of aeration, aggregate biofilms, and sand, soil and rhizofiltration. Tertiary treatment relies on biological activity from microorganisms and selected plants as well as the cation exchange capacity of specific aggregates. Reuse is through evapotranspiration or, if desired, through a variety of surface applications. Non-transpired effluent can recirculate through the system until used.

Infrastructure and Alarm Types of the SCADA System

The monitoring and control of the system is accomplished using SCADA arrangements in the field to control pump function and water flow, and to enable supervision of the operational aspects of the site. The on-site alarm and control system is powered with an uninterruptible power supply (UPS) with a reserve capacity of at least 12 hours. The SCADA units communicate with a central monitoring location via redundant links of differing formats. These links consist of a physical link through the public switched telephone network (PSTN) and a redundant global system for mobile communications (GSM) link through the mobile network (MN). Both the PSTN modem and the GSM modem are powered via the UPS, allowing site monitoring at all times. Pumps are monitored for correct operation; alarms are raised in the event of a failure. Water levels in the holding tanks are continuously monitored and the pump run times are automatically modified to keep the levels within specified limits. Effluent flow rates are monitored in the feed lines to ensure correct operation of diversion and flow control valves and to assist in the detection of system abnormalities. The SCADA control system, UPS, PSTN modem and GSM modem will be housed in a lockable stainless steel enclosure rated to IP 65 and mounted either on a pole, slab, or nearby shed, close to the system. Primary power circuit breakers, pump control relays and pump current monitors, as well as any primary power metering, will be housed in a separate lockable stainless steel enclosure, also rated at IP 65, mounted near the SCADA enclosure.

Urgent alarms will be reported immediately to the central monitoring station, flagged for urgent action. Alarms in this category directly and immediately affect system operation and should be attended to without delay. Non-urgent alarms will be reported on the next scheduled site scan by the central monitoring station. Alarms in this category do not directly affect system operation or integrity but may indicate unusual operational parameters. Off-normal alarms indicate that manual intervention has been initiated and will be reported on the next scheduled scan by the central monitoring station. These alarm categories may be reported visually and/or audibly locally if required by legislation.

The level sensors are placed in the lid of each tank, facing directly into the water. The sensors are PIL brand 50 kHz 24 V DC ultrasonic transmitter/receivers with a 0 – 10 V analogue output proportional to the distance of the target from the sensor. The sensor range is 200 mm to 2000 mm. The sensors are power-fed from the SCADA and the analogue output is processed directly by the SCADA. The output from these sensors is used to modify the pumping regime established in each tank. These sensors are also used to trigger some alarm conditions to be reported to the control station.
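Since the level sensors output 0 – 10 V in proportion to the 200 – 2000 mm target distance, converting a reading into a tank level is a simple linear mapping. The sketch below assumes a hypothetical mounting height and tank depth purely for illustration; the real site geometry is not given in the text:

def distance_from_voltage(v_out, v_max=10.0, d_min=200.0, d_max=2000.0):
    # Map the 0-10 V analogue output linearly onto the 200-2000 mm sensing range.
    return d_min + (v_out / v_max) * (d_max - d_min)

def level_percent(v_out, sensor_height_mm=2200.0, tank_depth_mm=2000.0):
    # Assumed geometry: sensor in the tank lid, looking straight down at the water.
    water_height = sensor_height_mm - distance_from_voltage(v_out)
    return max(0.0, min(100.0, 100.0 * water_height / tank_depth_mm))

print(level_percent(3.5))  # 3.5 V -> about 68% full with the assumed geometry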

2.2 SCADA Software Instruction Summary for the Waste Water Treatment System

The computer program behind the SCADA infrastructure is what determines its effectiveness. This section describes the software instructions that control the operation and data collection procedures of the SCADA system (Figure 2). One of the main aims of the computer program is to equalize flows through the treatment processes, as wastewater production at schools typically occurs in surges, in accordance with the school schedules. Very little wastewater is produced outside of normal school hours or during school holidays unless a special event is occurring. The site also has an irrigation tank (Tank 4) installed, as the excess recycled water from the treatment system is used to irrigate a cabinet timber plantation and the school oval (sub-surface).

Tank 1 Requirements:

1. Pump P1 run for 15 minutes every 3 hours

2. No pump if tank level less than 50%

3. No pump if level sensor fails

4. Pump run immediately if tank 1 level more than 75%. Pump down to 60% then revert to 15 minutes every 3 hours

5. Stop pumping if emergency overflow pump is running

Tank 2 Requirements:

1. Pump P1 run for 15 minutes every 3 hours

2. No pump if tank level less than 50%

3. No pump if level sensor fails

4. Pump run immediately if tank 2 level more than 75%. Pump down to 60% then revert to 15 minutes every 3 hours

5. Stop pumping if emergency overflow pump is running

Tank 3 Requirements:

1. Pump P3 run if tank level more than 75%

2. No pump if tank level less than 25%

3. No pump if emergency pump P5 is running

4. No pump if level sensor fails

5. No pump to tank 1 if tank 1 level more than 75%

Emergency Overflow requirements

1. Pump P4 (to Tank 4) run if level in T3 is more than 90%. Pump down to 35%

2. No pump (to Tank 4) if Tank 4 is less than 85%

3. Revert to subsurface drip if Tank 4 is less than 85%

4. No pump if level sensors in Tank 3 or 4 fail

Tank 4 (irrigation tank) and commissioning root feed

1. Pump P4 run at 0530 for 3 hours

2. No pump if Tank 3 level below 25%

3. Pump P4 run at 1703 for 3 hours

4. No pump if Tank 3 level below 25%

5. Pump P5 run at 0530 for 3 hours

6. No pump if Tank 4 level below 25%

7. Pump P5 run at 1703 for 3 hours

8. No pump if Tank 4 level below 25%

Transpiration levels

1. Measure level in Tanks 2,3,4 before cyclic pump

2. Wait for 15 minutes then log the levels in Tanks 2, 3 and 4 again

3. Calculate difference in levels


4. Log difference

Pump Run Logs

1. Record the hours run for each pump

2. Log hours run

Tank Level Logs

1. Record tank levels every 3 hours halfway through pump cycle

2. Log levels

Alarms

1. If contactors operate without a valid pump run command, raise an "OFF NORMAL" alarm and send an Exception Report to the Control Centre

2. Sensor failure raises an "URGENT ALARM" and sends an Exception Report to the Control Centre.

3. If the total system reserve is less than 20%, send an SMS and an Exception Report to the Control Centre.
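As a rough illustration only (not the actual site program), the Tank 1 rules listed above could be encoded along the following lines in a supervisory script; the argument names and the cycle bookkeeping are assumptions:

def tank1_pump_command(level_pct, sensor_ok, minutes_into_cycle,
                       emergency_pump_running, pumping_down):
    """Sketch of the Tank 1 rules: 15 min every 3 h, level interlocks, overrides."""
    if not sensor_ok or emergency_pump_running:
        return False, pumping_down        # rules 3 and 5: never pump
    if level_pct > 75.0:
        return True, True                 # rule 4: pump immediately...
    if pumping_down:
        if level_pct > 60.0:
            return True, True             # ...down to 60%
        return False, False               # then revert to the normal cycle
    if level_pct < 50.0:
        return False, False               # rule 2: no pump below 50%
    return minutes_into_cycle < 15, False # rule 1: 15 minutes of every 3-hour cycle

run, pump_down = tank1_pump_command(level_pct=80.0, sensor_ok=True,
                                    minutes_into_cycle=90,
                                    emergency_pump_running=False,
                                    pumping_down=False)
print(run, pump_down)  # True, True -> the high-level override is active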

The data collection aspects of the program are aimed at providing information that would highlight any operating or maintenance problems within the system. Pump failures or line blockages are easily identified, as are storm water intrusions and infrastructure failures at the school (e.g. a malfunctioning urinal). The health of the plants in the channels can also be monitored via the evapotranspiration volume; if this decreases with no logical explanation, such as extended overcast weather conditions, it can be assumed that something is wrong and needs correction, and a visit to the site can be made.

CONCLUSIONS

The operation of the SCADA technology in conjunction with the PLC system has so far been successful. SCADA technology may solve some of the control and monitoring problems associated with medium to large on-site wastewater treatment and reuse systems.



52. SEISMIC BEHAVIOUR OF REINFORCED CONCRETE BEAM-COLUMN JOINTS

C Karthikeyan, P C Pandithurai

Students of IV B.E. (Civil), Institute of Road and Transport Technology, Erode

XXII. ABSTRACT

Gravity load designed (GLD) structures are vulnerable to any major earthquake and require immediate assessment of their seismic performance to avoid any undesirable collapse. Nevertheless, there is no information available on the comparative performance of existing structures designed based on prevailing practice standards at different stages of their development. Further, the stipulated provisions for seismic design of structures differ from one standard to another. The beam-column joint is an important part of the seismic resistant design of reinforced concrete moment resisting frames, since joints ensure continuity of a structure and transfer forces from one element to another. The flow of forces within the beam-column joint may be interrupted if the shear strength of the joint is not adequately provided. In the present study, these issues are covered by considering different stages of the Indian standard provisions for designing reinforced concrete framed structures. An exterior beam-column joint (D-region) of a building frame has been selected as the target sub-assemblage to be studied under reverse cyclic load. The study is aimed at evaluating the seismic performance of the structural component designed according to different stages of the standard provisions.

I. INTRODUCTION

Globally, reinforced concrete structures, even in seismic zones, were being designed only for gravity load, and hence their performance under earthquake could be questionable. In most cases, those structures are vulnerable to even moderate earthquakes, and so these structures need immediate assessment to avoid catastrophic failure. Moreover, for new structures, the specifications and detailing provisions, though available to a certain extent, have to be considered in such a way that the structure would be able to resist seismic actions. The behavior of reinforced concrete moment resisting frame structures in recent earthquakes all over the world has highlighted the consequences of poor performance of beam-column joints. Beam-column joints in a reinforced concrete moment resisting frame are crucial zones for the effective transfer of loads between the connecting elements (i.e. beams and columns) in the structure. In the analysis of reinforced concrete moment resisting frames, the joints are generally assumed to be rigid. Generally, a three phase approach is used to design a structure to resist earthquake loading, i.e.,

(i) the structure must have adequate lateral stiffness to control the inter-storey drifts such that no damage would occur to non-structural elements during minor but frequently occurring earthquakes; (ii) during moderate earthquakes, some damage to non-structural elements is permitted, but the structural elements must have adequate strength to remain elastic so that no damage would occur; and (iii) during rare and strong earthquakes, the structure must be ductile enough to prevent collapse by aiming for repairable damage, which would ascertain economic feasibility.

In normal design practice for gravity loads, the design check for joints is not critical and hence not warranted. But the failure of reinforced concrete frames during many earthquakes has demonstrated heavy distress due to shear in the joints that culminated in the collapse of the structure. Detailed studies of joints for buildings in seismic regions have been undertaken only in the past three to four decades. It is worth mentioning that the relevant research outcomes on beam-column joints from different countries have led to conflicts in certain aspects of design. Co-ordinated programs were conducted by researchers from various countries to identify these conflicting issues and resolve them.

XXIII. BEAM-COLUMN JOINT

A beam-column joint is defined as that portion of the column within the depth of the deepest beam that frames into the column. Throughout this document, the term joint is used to refer to a beam-column joint. A connection is the joint plus the columns, beams and slab adjacent to the joint. A transverse beam is one that frames into the joint in a direction perpendicular to that for which the joint shear is being considered.

Types of joints in a Moment Resisting Frame

(i) Interior joint (ii) Exterior joint (iii) Corner joint


XXIV. EXPERIMENTAL PROGRAMME

The experimental investigation focuses on the behavior of a typical exterior beam-column joint which is part of a three-storeyed building subjected to constant gravity loading and varying transverse loading.

General arrangement of the building frame considered for the study

Details of Formwork


XXVI. MATERIALS PROPERTIES

The concrete used for the construction of all specimens consists of the following materials.

XXVII. CEMENT

Cement used for the specimens was ordinary Portland cement. The specific gravity of the cement was determined as per IS: 576-1964 (10) and found to be 3.15.

FINE AGGREGATE

The fine aggregate used for all the specimens is river sand. The fine aggregate used for casting was sieved through an IS 4.75 mm sieve. The specific gravity of the fine aggregate used for concrete was determined and found to be 2.65.

COARSE AGGREGATE

The coarse aggregate used in the mixes is hard blue granite stone from quarries. 12 mm size aggregate was stored in separate dust-proof containers.

DIMENSIONS OF THE SPECIMEN

The specimens are cast using 12 mm thick film-coated plywood sheets made to suit the dimensions of the model. The dimensions of the mould for the beam region are 380 mm x 125 mm x 120 mm and those for the column region are 600 mm x 225 mm x 120 mm. The moulds are properly nailed to a base plywood sheet in order to keep the alignment accurate. The mould is oiled before placing the concrete.

TEST SETUP AND TESTING PROCEDURE

Each specimen was tested under reverse cyclic loading in the structural laboratory. The column of the test assembly was placed in a loading frame. The column was centred accurately using a plumb bob to avoid eccentricity. The bottom end was fixed in a box-shaped section made of steel plates. The dimensions of the box section were chosen in such a way that the column faces fitted exactly in the section. On top of the column, a mild steel plate, a 10-tonne proving ring and a hand-operated screw jack were fixed; this arrangement is used for applying the axial load on the column and also prevents movement of the column.


A deflection meter is used to measure the downward and upward displacements of the beam at a distance of 350 mm from the clear face of the column. The deflection meter was connected to its stand, which was in turn connected to the loading frame available in the laboratory.

BEHAVIOUR OF RC BEAM-COLUMN JOINTS

In this section, the experimental results of the RC beam-column joint under reverse cyclic loading are enumerated. Parameters such as load carrying capacity, ductility factor, energy absorption capacity and stiffness were studied.

LOAD CARRYING CAPACITY

The first crack was witnessed during the 3rd cycle at a load level of 9 kN. As the load level was increased, further cracks developed in other portions. The ultimate load carrying capacity of the RC beam-column joint was 19.5 kN, recorded at the 7th cycle.

LOAD DEFLECTION CHARACTERISTICS

The exterior beam-column joint specimen was subjected to quasi-static cyclic loading simulating earthquake loads. The load was applied by using a screw jack; in total, 7 cycles were imposed.

The beam-column joint was gradually loaded by increasing the load level during each cycle. The load sequence consisted of 1.8 kN, 2.4 kN, 3.0 kN and so on up to the failure load. The deflection measured at the centre was 1.15 mm during the first cycle of loading. As the load level was increased in each cycle, the observed deflection was greater than in the earlier cycle.

MODE OF FAILURE

The exterior beam-column joint test specimen was tested under cyclic loading. During forward loading, cracks developed at the top of the specimen. As the loading progressed, the widths of these cracks widened. On reversal of the load, with the specimen in the reversed position, cracks formed on the bottom tension face while the cracks already formed on the original tension face closed, and new cracks opened on the tension face. This opening and closing of the cracks continued till the final failure of the specimen took place.

STIFFNESS CHARACTERISTICS


Stiffness is defined as the load required to cause unit deflection of the beam-column joint.

a) A tangent was drawn for each cycle of the hysteresis curves at a load of P = 0.75 Pu, where Pu was the maximum load of that cycle.

b) The slope of the tangent drawn to each cycle was determined, which gives the stiffness of that cycle. In general, with the increase in the load there is degradation of stiffness.
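Stated as a formula (a restatement of the definition above rather than additional data from the tests), a commonly used secant approximation of the cycle stiffness is

k_i ≈ 0.75 Pu,i / δ(0.75 Pu,i)

i.e. the ratio of 75% of the peak load of cycle i to the corresponding deflection read from the hysteresis loop. On this basis, the reported initial stiffness of 1.655 kN/mm means roughly 1.655 kN of beam-tip load per millimetre of deflection in the first cycle; the exact tangent construction used by the authors may differ slightly from this secant value.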

CONCLUSIONS

An experimental study was carried out on an exterior beam-column joint tested under reverse cyclic loading. Based on the investigation reported above, the following conclusions are drawn:

1. The first crack load of the beam-column joint was observed at 9 kN.

2. The ultimate load carrying capacity of the beam-column joint is 19.5 kN.

3. The initial stiffness of the beam-column joint has been observed as 1.655 kN/mm.

An RC beam-column joint with secondary reinforcement shows an overall better behavior in terms of load carrying capacity, ductility, energy absorption and stiffness. Hence it is highly suitable for structures in severe seismic prone zones.

REFERENCES

Eligehausen, R., Ozbolt, J., Genesio, G., Hoehler, M. S. and Pampanin, S. (2006), 'Three-Dimensional Modelling of Poorly Detailed RC Frame Joints', NZSEE Conference, 2006.

Geng et al. (1998) and Mosallam (1999) used composite overlays to strengthen simple models of interior beam-column joints and recorded increases in the strength, the stiffness, and the ductility of the specimens.

Gergely et al. (1998) analyzed one of the aforementioned bents by performing pushover analysis and found good agreement between the experimental and theoretical load versus displacement response.

Park, R. and Paulay, T. (1975) investigated the effect of the amount and arrangement of transverse steel in the joint region, and of the method of anchoring the beam steel, on thirteen full scale reinforced concrete beam-column subassemblies.

Sathish Kumar et al. (2002) carried out an experimental study to clarify the effect of joint detailing on the seismic performance of lightly reinforced concrete frames.


53. WATERSHED MANAGEMENT

P.SATISHKUMAR & M.MUNESH (PRE-FINAL YEAR)

[email protected] & [email protected]

Abstract:

Watershed management is a great boon for our country, and more steps are being taken to improve it. Watershed management is indispensable; it mainly deals with storing runoff water during rainfall. India receives rainfall only during the monsoon season, and the average annual rainfall in India has decreased due to anthropogenic activities. Due to over exploitation of ground water, the ground water levels in many areas show a declining trend, which in turn tends to increase both the investment cost and the operational cost. This problem can be alleviated to some extent by artificially recharging the potential aquifer and efficient harvesting of the rainwater. GIS is an advanced method to detect areas of runoff. A watershed is drained by lakes, ponds, rivers and runoff.

"IT IS IMPORTANT TO APPLY THE TECHNOLOGY OF WATERSHED MANAGEMENT TO SOLVE ITS ANNUAL PROBLEMS OF DROUGHTS AND FLOODS."

INTRODUCTION:

Watershed Management - India's crying need. "WATER IS A PRECIOUS NATURAL RESOURCE AND AT THE SAME TIME COMPLEX TO MANAGE."

In India, where a lot of water goes waste, it is important to apply the technology of watershed management to solve its annual problems of droughts and floods. Watershed management seeks to achieve maximum production with minimum hazard to the natural resources and for the well-being of the people.

Due to over exploitation of ground water, the ground water levels in many areas show a declining trend, which in turn tends to increase both the investment cost and the operational cost. This problem can be alleviated to some extent by artificially recharging the potential aquifer and efficient harvesting of the rainwater.

The demand for water exceeds its supply. Water covers about 3/4 of the earth's surface, but only about 2% is fresh water, and a large portion of that is polar ice. 86% of Asia's fresh water is used for agriculture, 8% for industry and 6% for domestic purposes. Our country uses 83% of its fresh water for agriculture. Fresh water, once considered an inexhaustible resource, is now fast becoming a rare and scarce commodity. Conflicts over sharing water resources are on the rise.


XXVIII. OBJECTIVES OF WATERSHED MANAGEMENT:

The term watershed management is nearly synonymous with soil and water conservation, with the difference that the emphasis is on flood protection and sediment control besides maximizing crop production.

The basic objective of watershed management is thus meeting the problems of land and water use, not in terms of any one resource but on the basis that all the resources are interdependent and must, therefore, be considered together.

Watershed management aims, ultimately, at improving the standard of living of the common people in the basin by increasing their earning capacity and by offering facilities such as electricity, drinking water, irrigation water, and freedom from fear of floods and droughts.

Adequate water supply for domestic, agricultural and industrial needs.

Abatement of organic, inorganic and soil pollution.

WATERSHED:

A watershed is an area from which all water drains to a common point, making it an interesting unit for managing water and soil resources to enhance agricultural production through water conservation.

DEFINITION: Watershed management is the process of creating and implementing plans, programs, and projects to sustain and enhance watershed functions that affect the plant, animal, and human communities within a watershed boundary. Watershed management basically involves harmonizing the use of soil and water resources between upstream and downstream areas within a watershed toward the objectives of natural resource conservation, increased agricultural productivity and a better standard of living for its inhabitants.

Principles of Watershed Management:

The main principles of watershed management based on resource conservation, resource generation and resource utilization are:

Utilizing the land based on its capability

Protecting fertile top soil

Minimizing silting up of tanks, reservoirs and lower fertile lands

Protecting vegetative cover throughout the year

In situ conservation of rain water

Safe diversion of gullies and construction of check dams for increasing ground water recharge

Increasing cropping intensity through inter and sequence cropping

Alternate land use systems for efficient use of marginal lands

Water harvesting for supplemental irrigation

Improving socio-economic status of farmers



GIS BASED WATERSHED MANAGEMENT:

In many parts of the world rapid growth in population, agriculture sector and industrialization have increased the demand for water. With the increasing use of water for various activities, ground water declines at an accelerated rate. In order to prevent the fast depletion of ground water levels, artificial recharge is required. Various artificial recharge and water harvesting structures are required to allow the movement of rainwater from the catchment surface into the underground formations. Selection of suitable sites for artificial recharge and water harvesting structures needs a large volume of multidisciplinary data from various sources. Remote sensing is of immense use for natural resources mapping and generating necessary spatial database required as input for GIS analysis. GIS is an ideal tool for collecting, storing and analyzing spatial and non - spatial data, and developing a model based on existing factors to arrive at a suitable natural resources development and management action plans. Both these techniques in conjunction with each other are the most efficient tools for selecting suitable sites for artificial recharge structures and water harvesting structures.
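As a purely illustrative sketch of how such a GIS model can combine factors (the factor names, weights and scores below are assumptions for illustration, not values from this paper), a weighted-overlay suitability score for a candidate recharge site might look like this:

# Hedged sketch: weighted-overlay suitability scoring for recharge-site selection.
WEIGHTS = {"slope": 0.25, "soil_permeability": 0.30,
           "drainage_density": 0.20, "land_use": 0.25}

def suitability(cell_scores):
    # cell_scores maps each factor to a 0-10 score for one grid cell.
    return sum(WEIGHTS[f] * cell_scores[f] for f in WEIGHTS)

cell = {"slope": 8, "soil_permeability": 6, "drainage_density": 7, "land_use": 5}
print(round(suitability(cell), 2))  # higher composite score = more suitable site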

Watershed Approach to Water Quality Management

The figure below demonstrates the Watershed Approach to Water Quality Management. By looking at watersheds, the state can evaluate all the sources of pollution that may be affecting the water quality and quantity.

The Watershed Approach is an ongoing cycle of tasks: setting standards for surface water quality; taking measurements of the conditions; assessing the data and identifying the impairments including establishing priorities; verifying the pollution sources and developing plans for restoring water quality; and implementing pollution source controls. Pollution source controls can be things such as permits, rules, and nonpoint source management practices.

Planning: Determine the watershed planning unit.

Data Collection: Collect routine water quality and quantity data at specific locations.

Assessment and Targeting: Compare current water quality to state and federal standards.

Strategy Development: Develop goals and strategies to maintain or achieve water quality standards and meet future demands.

Implementation: Implement goals and strategies through permits, best management practices (BMPs) and education. One would also measure progress.

INTEGRATED WATERSHED MANAGEMENT:

Integrated watershed management approach (IWMA) is the process of utilisation, development and conservation of land, water, forest and other resources for continually improving livelihoods for communities in a hydrologically independent region.

OBJECTIVES OF IWM:

Promote sustainable economic development through (i) optimum use of land, water and vegetation (ii) provide employment (and local capacity building) Restore ecological balance through sustainable development of natural resources and community participation; encouragement of available low cost technologies for easy


SWAT hydrological model: (SOIL AND WATER ASSESSMENT TOOL)

The SWAT model is designed to route water and sediments from individual watersheds through the river system. It can incorporate tanks and reservoirs/check dams, both off-stream and on-stream. Agricultural areas can also be integrated with respect to their management practices. The major advantage of the model is that, unlike other conventional conceptual simulation models, it does not require much calibration and can therefore be used on ungauged watersheds.

SIGNIFICANCE OF GIS IN WATERSHED MANAGEMENT:

Watershed prioritization is an important aspect of planning for implementation of the watershed management programme. Two such applications, one for finding the interaction between the administrative and watershed boundaries that shall help in allocation of financial resources with respect to the watershed boundaries and the other to locate the water-harvesting structures, which is a common feature in the watershed management programme, have been formulated and demonstrated. The application can also be used for monitoring and evaluation of the watershed programmes which is an important component, but is invariably missing.


RAIN WATER HARVESTING:

“Every Drop of Rain Water is Precious-Save it”

DEFINITION:

Rainwater harvesting means making use of each and every drop of rainwater to recharge the groundwater, by directing the rainfall to go into the well or under the ground, without wasting the rainwater.

OBJECTIVES:

To conserve the surface run-off during monsoons.

To recharge the aquifers and increase the availability of ground water.

To improve the quality of ground water where required.

To overcome the problem of flooding and stagnation of water during the monsoon season.

To arrest salt-water intrusion.

RAINWATER HARVESTING THROUGH WATERSHED MANAGEMENT:

By using the existing well:

The run-off water from rooftops can be led into the existing well through pipes and a small settling pit to filter the turbidity and other pollutants. In this cost-effective process we not only conserve the precious rainwater but also help to increase the local ground water table. Even an abandoned well can be used for this purpose.

Through percolation pits:

Percolation pits (1 m x 1 m x 3 m) may be dug a little away from the building. The pit is then filled with brick jelly/pebbles followed by river sand for the purpose of better percolation. The top layer of sand may be cleaned and replaced once in two years to remove the settled silt and improve the percolation.

Decentralized percolation trenches:

This method can be adopted by those who reside in houses with large open areas. Run-off water from the rooftop can be diverted into bare soil/ garden in the premises. Apart from this a longitudinal trench of 1.0m depth and 0.5 m width may be dug at the periphery of the plot and filled with pebble stones and sand in order to store the excess run-off during rainy season that will eventually percolate into the ground.

ADVANTAGES:

Rainwater is bacteriologically pure, free from organic matter and soft in nature.

It helps in reducing the flood hazard.

It improves the quality of existing ground water through dilution.

Rainwater may be harnessed at the place of need and may be utilised at the time of need.

The structures required for harvesting the rainwater are simple, economical and eco-friendly.

TECHNIQUES OF ARTIFICIAL RECHARGING TO GROUND WATER:

DEFINITION:

Artificial recharge (also sometimes called planned recharge) is a method of storing water underground at times of water surplus to meet demand in times of shortage.

REASONS OF A.R.:

The main reasons for carrying out artificial recharge may be summarized as follows:

promoting recovery of overexploited aquifers;

storage of surficial waters during flood periods to maintain and improve supply in the dry season;

storage of local or imported water in the aquifer;

preventing seawater intrusion by creating freshwater barriers;

increasing the value of aquifers for water distribution in areas with many wells;

discharging certain wastewaters, such as cooling water;

reducing groundwater salinity in agricultural areas;

reducing subsidence caused by high pumping rates; and

groundwater quality improvement.

CLASSIFICATIONS OF A.R.:


Basins:

Basins are large shallow depressions where recharge water is stored. Their shape and dimensions are generally adapted to local topography. Many basins need scheduled maintenance to improve the capacity of the bed for infiltration.

Floods:

This method is used on level ground, where waters are spread directly on the natural surface to produce a thin water layer. The ground must retain its natural vegetation cover.

Ditches:

Parallel ditches (Figure 4) do not distribute water deeply, and are usually located very close to one another. They are usually arranged in three different ways:

with a "zigzag" course following the contours and land slope;

with branches from a main channel; or

with ditches arranged perpendicular to the main channel.

The excess recharge water flowing from the ditches is collected in a discharge channel. Ditches vary in width from 0.3–1.8 m; the slope must be very slight to minimize erosion and to increase the wetted section. These methods may differ in areas with irregular topography.

CONVENTIONAL AND NON-CONVENTIONAL METHODS:

CONVENTIONAL METHOD:

The major problem with surface storage is the loss of land covered with water, and the ecological, environmental, and social problems generated.

Non-Conventional Methods:

Non-conventional water resource technologies that may be used in artificial recharge schemes include:

desalination of salty or brackish waters

wastewater recovery or regeneration

climate modification programs

schemes to reduce evaporation.

Groundwater Recharge Precautions:

These provisions should apply to:

effluents and sludges produced by wastewater treatment plants;

domestic waste disposal sites;

subsurface waste containment by deep-well injection or storage.

Advantages and Disadvantages of Artificial Groundwater Recharge:

Advantages:

Very little special equipment is needed to construct drainage wells.

Groundwater recharge stores water during the wet season for use in the dry season, when demand is highest.

Most aquifer recharge systems are easy to operate.

In many river basins, controlling surface water runoff to provide aquifer recharge reduces sedimentation problems.

Disadvantages:

o Groundwater recharge may not be economically feasible unless significant volumes can be injected into an aquifer.

o A very full knowledge of the hydrogeology of an aquifer is required before any full-scale recharge project is implemented.

Conclusion: "SAVE THE SILVER DROPS! GET REWARDS IN THE FUTURE!"

Watershed management is a great boon to our country, India, and will help in providing sustainable water to the present and future generations. Awareness programmes should be conducted so that people come to know the importance of watershed management.

54. FOOT OF NANOTECHNOLOGY

C. Justin Raj, R. Vinoth Kumar, Anna University, Thirukkuvalai Campus


ABSTRACT

During the last three decades, significant developments have taken place to improve the mechanical and durability properties of cement concrete. The sustained efforts of researchers all over the world to innovate and incorporate unmatched excellence in construction materials have led to the development of several advanced technologies, such as nanotechnology, in cement concrete composites. Recently it has been envisaged that nano-scale concrete dramatically improves the properties obtained from conventional grain size materials of the same chemical composition.

Micro silica has been one of the world's most widely used products as an additive in concrete for over eighty years. Nowadays much research is going on with nano silica, fully replacing the micro silica in concrete. The aim of replacing it with nano silica is to fulfil environmental regulations. This paper briefly deals with the properties of "NANO CONCRETE AND NANO FIBRES". The latest generation of instruments with the unique ability to generate particle sizes in the nano range is the NANO SPRAY DRYER B-90, which is also discussed in this paper.

It was found that the first nano product, called CUORE CONCRETE, surpassed the expectations of its design and affords concrete with higher workability, strength and improved durability at a low volume of cement. Secondly, NYCON G-NANO, a nano-sized reinforcing fibre for concrete, plays a major role in concrete technology and provides an environmentally sound choice for a globally responsible, truly sustainable product. It can withstand temperatures up to 255 °C (491 °F) and reduces cracking of concrete. NANO FIBRES offer a further incentive to "GO GREEN".

This paper is highly relevant in today's environment-conscious world, thanks to the entry of nanotechnology into concrete composites.

"EVERYBODY AND EVERYTHING CAN MAKE A DIFFERENCE IN THE GREEN REVOLUTION"

INTRODUCTION:


A long-used material in concrete is, for the first time, fully replaced by a nano material. It is well known in physics and chemistry that a well designed and developed nano material produces better and cheaper results than traditional materials, owing to the stabilization and reinforcement of matter particles at this level, which is a thousand-fold smaller than the older "micro" level (0.000001 m).

Civil engineers deal with designing, building and maintaining the various structures that make civilization function. Roads, bridges, canals, tunnels, traffic systems, public transportation and other structures that operate on a large scale are subject to special considerations that require engineers to account for earthquakes, winds, massive public movement and even military strikes. These special requirements give multiple applications for nanotechnology, from earthquake-resistant building materials to graffiti-resistant subways.

Construction:

Nanotechnology is used in construction to improve the properties and functions of commonly used materials. Conventional concrete is partly made up of silica, but if nano-scale particles of silica are used in the concrete mix, the structure of the concrete can be made denser. This means it is better at blocking water and, as a result, is more durable. The use of carbon nanotubes or nano clays can also help to make new, lighter and stronger materials which last longer, are easier to work with, and are better able to resist shocks such as earthquakes.

MICRO SILICA:

Micro silica has been one of the world's most widely used products for concrete for over eighty years. Its properties allowed high compressive strength concretes and water and chemical resistant concretes, and it has been part of many concrete buildings that we see nowadays. Its disadvantages, though, have been its relatively high cost and its contamination, which affects the environment and the operators' health, as micro silica powder is a thousand-fold finer than cigarette smoke. Operators must take special precautions to avoid inhaling micro silica so as not to acquire silicosis, an irreversible disease.

In the middle of 2003, a product which could replace micro silica, given its contaminant effects, having the same or better characteristics and at a reasonable cost, was on the design table. The goal: a silica fulfilling the environmental regulation ISO 14001. Using tools from physics, chemistry and recent nanotechnology advances, the challenge was fulfilled. Lab tests and production tests proved that the nano silica did not contaminate (because of its state), and it also produced better results than micro silica; a litre bottle of the product was equivalent to a barrel full of micro silica, extra cement and super plasticizing additives.

CUORE CONCRETE:

Because of its innovation, the nano silica was tested for over a year in the world's largest subterranean copper mine to prove its long term characteristics. Cuore concrete takes care of the environment, the concrete and the operators' health. It is the first nano product that replaced micro silica. Cuore concrete surpassed the expectations of its design and gave concrete not only high initial and final resistance but, in addition, plasticity, impermeability, a lower final cost of work, and cement savings of up to 40%. It also lowered the levels of environmental contamination.

In addition, a litre bottle of Cuore concrete equals a whole barrel of micro silica, extra cement and super plasticizers. If before a 2 metre thick beam was required to hold a bridge correctly, now only 75 cm is required. If before 28 days were necessary in order to achieve compressive strengths of 80 MPa, now only 1 day is required. The prestressed beams that before required 3 days to be ready and needed to be cured with water and steam now require only 1 day and do not need water.

Moreover, Cuore concrete became one of the first indicators of the properties that the next commercial nano cements in the market will have: nano particles of silica turn into nano particles of cement (nano cement) in the chemical reactions that take place in the mixing of the concrete. Thanks to all these advantages, the entrance of the nano silica Cuore concrete into the market modified the concept of what is possible and what is not in the concrete field.

Properties of concrete with Cuore concrete nanosilica:

• In high compressive strength concretes (H-70), Cuore concrete is 88% more efficient than micro silica added to concrete with super plasticizers (for an average 9.43 kg of Cuore concrete nano silica, 73 kg of all the other additives are used).

• The production cost is drastically lower than with the traditional production method or formulas.

• It has an air inclusion of 0% to 1%.

• The cone test shows that it preserves the cone shape for more than one hour (with a ratio of H2O/cement = 0.5, adding 0.5% of nano silica by metric volume of the cement used, it conserved its 60 cm circle shape for two hours, with a loss of only 5%). The nano silica has a plasticity that has been compared to polycarboxylate technology; therefore the use of super plasticizing additives is unnecessary.

• High workability with reduced water/concrete ratios, for example 0.2.

• Easy homogenization. The reduction of mixing times allows concrete plants to increase their production.

• Depending on the cement and the formulations used for the concrete (tests from grade H-30 to H-70), the material provides compressive strengths between 15 MPa and 75 MPa at 1 day, 40 MPa and 90 MPa at 28 days, and 48 MPa and 120 MPa at 120 days.

• Nano silica fully complies with ISO 14001 regulations regarding the environment and health. It protects operators from the danger of contracting silicosis and does not contaminate the environment.

• It successfully passed all the tests and since the beginning of this year it is being commercialized in different parts of the world.

Immediate benefits for the user:

1. Cessation of contamination caused by micro silica solid particles.
2. Lower cost per building site.
3. Concrete with high initial and final compressive and tensile strengths.
4. Concrete with good workability.
5. Cessation of super plasticizer utilization.
6. Cessation of silicosis risk.
7. High impermeability.
8. Reduction of cement by using Cuore concrete nano silica.
9. Cuore concrete nano silica by itself produces nano cement.
10. During the hydration reaction of the cement, the silica produces CSH particles, the "glue" of the concrete, ensuring the cohesion of all the particles.
11. Cuore concrete has a specific surface near 1,000 m2/g (micro silica has only 20 m2/g) and a particle size of 5 nm to 250 nm.

As a consequence of its size, Cuore concrete produces nano crystals of CSH, filling up all the micro pores and micro spaces which were left empty in traditional concrete production. This reinforces the concrete structure at levels a thousand times smaller than in traditional concrete production. It allows a reduction in the cement used and gives the compression needed to reduce over 90% of the additives used in the production of H-70 concrete.


Cuore concrete allows savings of between 35% and 50% of the cement used. We stress that we recommend changing the formula of the concrete in order to take advantage of the characteristics of the Cuore concrete nano silica particle. Less material is needed to obtain better results using Cuore concrete.

The results are the proof:

1. Resistance to compression from 40 to 90 MPa in 1 day.
2. Resistance to compression from 70 to 100 MPa (or more) in 28 days.
3. Versatile: produces high resistance even with a low addition (1 to 1.5% of the cement's weight) and gives self-compacting characteristics at higher proportions (2.5%).
4. Meets the norms of environmental protection (ISO 14001).
5. 70% less use of additives such as traditional silica, super plasticizers or traditional fibres.

NANO FIBRE – CONCRETE:

As a structural material, Nycon's reinforcing fibers allow products to withstand the forces of nature for decades. Now, with NyconG-Nano fibers, products are kinder to nature than ever before. In addition to superior three-dimensional fiber reinforcement, NyconG-Nano is a nano-sized, free flowing, fibrous material that provides an environmentally sound choice for globally responsible, truly sustainable products.

Proven effective in countless applications, NyconG-Nano fibers are made from 100% recycled nylon from reclaimed carpet. For years little has been done with carpet waste besides throwing it away. Today, thanks to an advanced patented technology (U.S. Patent #6,971,784, other US and foreign patents pending), Nycon "harvests" carpet waste, much like an agricultural asset. Carpet once taken to landfills is now a resource for producing its proprietary, high-value reinforcing fibers. And more than just conserving landfill space, recycling carpet saves water and reduces carbon dioxide emissions that contribute to global warming. Along with outstanding performance properties at reduced cost, NyconG-Nano fibers provide a further incentive to "go green". Everybody and everything can make a difference in the Green Revolution, and Nycon is committed to making a difference. It does this by reducing its reliance on scarce resources, extending the useful life of its reinforcing products and now, with environmentally friendly NyconG-Nano fibers, reducing energy use, greenhouse gas emissions, and landfill waste.

1) NyconG-Nano Physical Properties:

Designed specifically as a free-flowing, nano fiber reinforcement, NyconG-Nano is a microscopic blend of nylon fibers and a carefully controlled percentage of calcium carbonate particles made from 100% reclaimed carpet. Production and packaging standards of NyconG fibers adhere to strict quality control measures, including random testing to ensure minimal environmental impact.

2) NyconG-Nano Performance Characteristics:

• Three-dimensional, uniform nano reinforcement • Stronger than polypropylene • Free flowing product • Unlike polypropylene, will not float to the surface • Corrosion-free • Virtually unnoticeable in finished products • Withstand temperatures up to 491°F (255°C)

• Overall improvement in product integrity and soundness

3) Performance Benefits of NyconG-Nano Reinforced Products:

• Reduced cracking • Improved tensile strength • Increased compressive strength, impact resistance • Improved fatigue crack-resistance • Reduced strip and finishing time • Overall improvement in integrity and soundness

4) Business Advantages of NyconG-Nano Reinforced Products:

• Lower production costs
• Environmentally friendly
• Ideal for all dry package applications and products requiring free-flowing 'Green' fiber reinforcement

REDUCTION IN CO2 GAS:

Concrete is the most widely used man-made material, and the manufacture of cement, the main ingredient of concrete, accounts for 5 to 10 percent of all anthropogenic emissions of carbon dioxide, a leading greenhouse gas involved in global warming. But now, researchers at MIT studying the nanostructure of concrete have made a discovery that could lead to lower carbon-dioxide emissions during cement production.

The researchers found that the building blocks of concrete are particles just a few nanometers in size, and that these nanoparticles are arranged in two distinct manners. They also found that the nanoparticles' packing arrangement drives the properties of concrete, such as strength, stiffness, and durability. "The mineral [that makes the nanoparticle] is not the key to achieving those properties ... Rather, it's the packing [of the particles]," says Franz-Josef Ulm, a civil- and environmental-engineering professor at MIT who led the work. "So can we not replace the original mineral with something else?" The goal is to formulate a replacement cement that maintains the nanoparticles' packing arrangement but can be manufactured with lower carbon-dioxide emissions.

Cement manufacture gives rise to carbon-dioxide emissions because it involves burning fuel to heat a powdered mixture of limestone and clay to temperatures of 1,500°C. When cement is mixed with water, a paste is formed; sand and gravel are added to the paste to make concrete. But scientists do not fully understand the structure of cement, Ulm says.

The biggest mystery is the structure and properties of the elementary building block of the cement-water paste, calcium silicate hydrate, which acts as the glue holding together all the ingredients of concrete. "All of the macroscopic properties of concrete in some way are related to what this phase is like at the nanometer level," says Jeffrey Thomas, a civil- and environmental-engineering professor at Northwestern University.

If this structure were better understood, researchers could engineer cement at the nanoscale to tailor the properties of concrete, says Hamlin Jennings, a civil- and environmental-engineering and materials-science professor at Northwestern. Because researchers have not known the behavior of cement at the nanoscale until now, "progress in concrete and cement research has largely been hit-and-miss," Jennings says. Jennings had predicted that calcium silicate hydrate is a particle with a size of about five nanometers.


Ulm and his postdoctoral researcher Georgios Constantinides have confirmed this structure using a technique called nanoindentation, which involves probing cement pastes with an ultrathin diamond needle.

Nano spray dryers:

Nano spray dryers refer to the use of spray drying to create particles in the nanometer range. Spray drying is a gentle method for producing powders with a defined particle size out of solutions, dispersions, and emulsions, and it is widely used for pharmaceuticals, food, biotechnology, and other industrial materials synthesis.

In the past, the limitations of spray drying were the particle size (minimum 2 microns), the yield (maximum around 70%) and the sample volume (minimum 50 ml for lab-scale devices). Recently, minimum particle sizes have been reduced to 300 nm, yields of up to 90% are possible, and the sample amount can be as small as 1 ml. These expanded limits are made possible by new technological developments to the spray head, the heating system, and the electrostatic particle collector. To emphasize the small particle sizes possible with this technology, it has been described as "nano" spray drying. However, the smallest particles produced are in the submicron range common to fine particles rather than the nanometer scale of ultrafine particles.

NANO SPRAY DRYER B-90

A world novelty for fine particles from minimal sample quantities at high yields: the latest generation of instruments, the Nano Spray Dryer B-90, revolutionizes today's spray drying possibilities with the unique ability to generate particle sizes in the nano range from milligram sample quantities at high yields. The Mini Spray Dryer B-190, B-191 and B-290 systems are nano scale spray dryers.

Customer benefits:

• Production of submicron or even nanoparticles with a very narrow size distribution for new breakthroughs in R&D
• Only a minimal amount of high-value product needs to be invested to obtain a dry powder
• Minimal loss of high-value products thanks to uniquely high yields
• Efficient and fast process thanks to simple assembly, easy cleaning and fast product changes
• Simple instrument adjustment for specific needs by taking advantage of the various accessories (e.g. spray drying of organic solvents, controlling particle size)

Markets:

• Pharmaceuticals
• Biotechnology
• Materials
• Nanotechnology

Application areas:

• Nanoparticle suspensions/nanoemulsions
• Micro- and nanoencapsulations/englobing
• Nanoparticle agglomerations
• Structural modifications
• Generation of nanoparticles with high recovery rates
• Spray drying of aqueous and organic solvent samples

PRESTIGIOUS AWARD:

"Cast in concrete" is not all it's cracked up to be. Concrete structures from bridges to condominium complexes are susceptible to cracks, corrosion and other forces of natural and man-made chemical assault and degradation. Aging structures can be repaired, but at significant cost.

Florence Sanchez is looking into the tantalizing world of nanoscience for ways to strengthen concrete by adding randomly oriented fibers ranging from nanometers to micrometers in length and made of carbon, steel or polymers. (A nanometer is roughly the size of four atoms.) The assistant professor of civil and environmental engineering at Vanderbilt University has won the prestigious CAREER Award from the National Science Foundation (NSF) for her research on the long-term durability of nano-structured cement-based materials.

The award, given to select junior faculty for their exceptionally promising research, will enable Sanchez and her associates to study the complex chemistry of nanofiber-reinforced cement-based materials and how these new materials will perform over time under a variety of weathering conditions.


Conclusion:

Nanotechnologies show a lot of promise for big benefits in terms of better pollution sensors and clean-up systems, cheap and portable water treatment, and more effective filters for pollutants and viruses. In addition, the potential for environmentally friendly fuels such as solar cells and hydrogen storage, better fuel efficiency and energy storage (see energy), pesticide replacements, and more efficient, lighter cars, planes and ships (see transport) all help save materials and fuel, indirectly benefiting the environment.

"Cement is an ancient material that has been used for centuries, but its chemistry is still not well understood," Sanchez says. "We mix cement with aggregate to create concrete, which we often reinforce with steel rebar. The rebar corrodes over time, leading to significant problems in our transportation and building infrastructure."

Sanchez wants to explore how new materials being developed by the nanoscience community might contribute to solving the problem. Nanofibers made of carbon, for example, might be added to a concrete bridge, making it possible to heat the structure during winter or allowing it to monitor itself for cracks because of the fibers' ability to conduct electricity.

Because nanostructured materials are new, little is known about how they will behave in concrete. "We know that nanofibers improve the strength of concrete at 28 days, but we don't know how well it resists weathering over the long term," Sanchez says. "We do not fully understand the chemistry, the mechanisms of interaction, or how molecular-level chemical phenomena at the fiber-cement interface influence the material performance during environmental weathering."


55. INTELLIGENT TRANSPORT SYSTEM

B.VIJAYAKUMAR M.RANJITH

II Year Civil, CARE School of Engineering


ABSTRACT

Nowadays all modes of transport are becoming electrified systems, so why should our transport system not become intelligent as well? This paper discusses how the transport system can be turned into an intelligent transport system. It covers wireless communication, computational technologies, GIS, sensing technologies, the induction loop system and related topics, and it also deals with video vehicle detection, electronic toll collection and emergency vehicle notification systems.

XXIX. INTRODUCTION

Interest in ITS comes from the problems caused by traffic congestion and a synergy of new information technology for simulation, real-time control, and communications networks. Traffic congestion has been increasing worldwide as a result of increased motorization, urbanization, population growth, and changes in population density. Congestion reduces efficiency of transportation infrastructure and increases travel time, air pollution, and fuel consumption.

A. Intelligent transportation technologies

Intelligent transportation systems vary in the technologies applied, from basic management systems such as car navigation; traffic signal control systems; container management systems; variable message signs; automatic number plate recognition or speed cameras to monitoring applications, such as security CCTV systems; and to more advanced applications that integrate live data and feedback from a number of other sources, such as parking guidance and information systems, weather information, bridge deicing systems and the like. Additionally, predictive techniques are being developed in order to allow advanced modeling and comparison with historical baseline data. Some of the constituent technologies typically implemented in ITS are described in the following sections.

1) Wireless communications

Various forms of wireless communications technologies have been proposed for intelligent transportation systems.

Radio modem communication on UHF and VHF frequencies is widely used for short- and long-range communication within ITS.

Short-range communications (less than 500 yards) can be accomplished using IEEE 802.11 protocols, specifically WAVE or the Dedicated Short Range Communications standard being promoted by the Intelligent Transportation Society of America and the United States Department of Transportation. Theoretically, the range of these protocols can be extended using Mobile ad-hoc networks or Mesh networking.

Longer-range communications have been proposed using infrastructure networks such as WiMAX (IEEE 802.16), Global System for Mobile Communications (GSM), or 3G. Long-range communications using these methods are well established but, unlike the short-range protocols, they require extensive and very expensive infrastructure deployment. There is a lack of consensus as to what business model should support this infrastructure.

2) Computational technologies

Recent advances in vehicle electronics have led to a move toward fewer, more capable computer processors in a vehicle. A typical vehicle in the early 2000s would have between 20 and 100 individual networked microcontroller/programmable logic controller modules with non-real-time operating systems. The current trend is toward fewer, more costly microprocessor modules with hardware memory management and real-time operating systems. The new embedded system platforms allow more sophisticated software applications to be implemented, including model-based process control, artificial intelligence, and ubiquitous computing. Perhaps the most important of these for intelligent transportation systems is artificial intelligence.

3) ITS by GIS

Mechanism

Developing an Advanced Traveler Information System (ATIS) in a Geographic Information System (GIS) is the main objective of the current project. The system includes shortest path, closest facility and city bus route functions. Besides these features, location-wise information and intercity traveler information such as bus, train and airway timings are also included. The mechanism involved in the development of the package is described in the following sections.

2.1.1 Shortest path

Route planning is a process that helps vehicle drivers plan a route prior to or during a journey. It is widely recognized as a fundamental issue in the field of transportation. A variety of route optimization or planning criteria may be used in route planning. The quality of a route depends on many factors such as distance, travel time, travel speed and number of turns; all of these factors can be referred to as travel cost. Some drivers may prefer the shortest path.

The route selection criteria can be either fixed by design or implemented via a selectable user interface; in the current project, route selection is via the user interface. For optimization of travel distance, the road segment length is stored in a digital database and the route planning algorithm is applied to it. For optimization of travel time, the road segment length and the speed limit on that road are stored in the digital database and the travel time is calculated (distance/speed limit). The calculated travel time is used as the travel cost when performing the path optimization.
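As a rough illustration of how stored segment lengths and speed limits can drive the path optimization just described, the following Python sketch runs a Dijkstra search whose edge cost is either distance or travel time (length/speed limit). The graph layout, node names and the 50 km/h fallback speed are assumptions for illustration only; the actual project was implemented in Avenue for ArcView, not Python.

import heapq

def shortest_path(graph, origin, destination, speed_limits=None):
    """Dijkstra search over a road network. graph maps a node to a list of
    (neighbour, segment_length_km) pairs. If speed_limits (km/h per edge)
    is given, the cost is travel time = length / speed; otherwise the cost
    is distance, matching the two optimization modes described above."""
    cost = {origin: 0.0}
    parent = {origin: None}
    heap = [(0.0, origin)]
    while heap:
        c, node = heapq.heappop(heap)
        if node == destination:
            break
        if c > cost.get(node, float("inf")):
            continue
        for nbr, length in graph.get(node, []):
            speed = speed_limits.get((node, nbr), 50.0) if speed_limits else None
            step = length / speed if speed else length
            if c + step < cost.get(nbr, float("inf")):
                cost[nbr] = c + step
                parent[nbr] = node
                heapq.heappush(heap, (c + step, nbr))
    # Reconstruct the route by walking back through the parent links.
    route, n = [], destination
    while n is not None:
        route.append(n)
        n = parent.get(n)
    return list(reversed(route)), cost.get(destination, float("inf"))

Calling shortest_path(graph, "A", "F") returns the distance-optimal route, while passing a speed_limits dictionary switches the same search to travel-time optimization.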

2.1.2 Closest facility

In the closest facility problem, route length and travel time (drive time) are considered as travel costs. Different facilities such as hospitals, bus stations and tourist places were taken as themes in the project. The closest facility algorithm calculates all the routes from a selected origin to the facilities based on travel cost, compares the travel costs of these routes and gives one optimal route as output [1].
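A minimal sketch of that comparison, under the assumption that a routing helper such as the hypothetical shortest_path() above is available: every candidate facility is routed from the origin and the cheapest result is returned.

def closest_facility(route_cost, origin, facilities):
    """route_cost(origin, facility) -> (route, cost), e.g. the illustrative
    shortest_path() helper. Returns (facility, route, cost) for the facility
    with the lowest travel cost from the selected origin."""
    best = None
    for facility in facilities:
        route, cost = route_cost(origin, facility)
        if best is None or cost < best[2]:
            best = (facility, route, cost)
    return best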

2.1.3 City bus routes

City buses with their service numbers were stored in a database in a compressed format, because on one road segment there will be more than one bus. A search algorithm is used to find the bus service numbers serving a selected origin and destination. According to the bus number, the road segments on the map are selected and highlighted in a different color. The schematic flow chart of the package is shown in Fig. 1.
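The bus-number search can be pictured as a lookup over a compressed route table; the dictionary layout below is an assumption for illustration and not the project's actual database schema.

def buses_between(bus_db, origin_stop, destination_stop):
    """bus_db maps a bus service number to the ordered list of stops or
    road segments it serves (the 'compressed' storage described above).
    Return every service that passes the origin before the destination;
    the matching segments can then be highlighted on the map."""
    matches = []
    for bus_no, stops in bus_db.items():
        if origin_stop in stops and destination_stop in stops:
            if stops.index(origin_stop) < stops.index(destination_stop):
                matches.append(bus_no)
    return matches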

2.3 Source Program

The source program for this package was written in the Avenue programming language. Avenue is the object-oriented scripting language for ArcView GIS, and customization of the package was done in Avenue. The source code was divided into many scripts because functions and procedures are not available in the Avenue language; each script is used for a specific purpose.

2.4 Software Development for Hyderabad City

• ArcView GIS version 3.1
• Network Analyst version 1.1b
• Avenue programming language


5) Floating car data/floating cellular data

Virtually every car contains one or more mobile phones. These mobile phones routinely transmit their location information to the network – even when no voice connection is established. This allows them to be used as anonymous traffic probes. As the car moves, so does the signal of the mobile phone. By measuring and analyzing triangulation network data – in an anonymous format – the data is converted into accurate traffic flow information. With more congestion, there are more cars, more phones, and thus, more probes. In metropolitan areas, the distance between antennas is shorter and, thus, accuracy increases. No infrastructure needs to be built along the road; only the mobile phone network is leveraged. In some metropolitan areas, RFID signals from ETC transponders are used. Floating car data technology provides great advantages over existing methods of traffic measurement:

• much less expensive than sensors or cameras

• more coverage: all locations and streets

• faster to set up (no work zones) and less maintenance

• works in all weather conditions, including heavy rain


6) Sensing technologies

Technological advances in telecommunications and information technology, coupled with state-of-the-art microchip, RFID and inexpensive intelligent beacon sensing technologies, have enhanced the technical capabilities that will facilitate motorist safety benefits for intelligent transportation systems globally. Sensing systems for ITS are vehicle- and infrastructure-based networked systems, e.g. intelligent vehicle technologies. Infrastructure sensors are indestructible devices (such as in-road reflectors) installed or embedded in the road, or around the road (on buildings, posts and signs, for example), as required. They may be placed manually during preventive road construction maintenance, or by sensor injection machinery for rapid deployment of embedded radio-frequency-powered (or RFID) in-ground road sensors. Vehicle-sensing systems include the deployment of infrastructure-to-vehicle and vehicle-to-infrastructure electronic beacons for identification communications, and may also employ CCTV automatic number plate recognition technology at desired intervals in order to increase sustained monitoring of suspect vehicles operating in critical zones.

7) Inductive loop detection

Inductive loops can be placed in a roadbed to detect vehicles as they pass over the loop by measuring the vehicle's magnetic field. The simplest detectors simply count the number of vehicles that pass over the loop during a unit of time (typically 60 seconds in the United States), while more sophisticated sensors estimate the speed, length and weight of vehicles and the distance between them. Loops can be placed in a single lane or across multiple lanes, and they work with very slow or stopped vehicles as well as vehicles moving at high speed.
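The counting and speed-estimation behaviour described above can be sketched in a few lines; the 4 m loop spacing and 60 s counting interval below are illustrative values, not a standard.

def loop_speed_kmh(t_first_loop_s, t_second_loop_s, loop_spacing_m=4.0):
    """Speed estimate from a dual-loop station: the vehicle crosses two
    loops a known distance apart, so speed = spacing / time difference.
    The 4 m spacing is an assumed, illustrative value."""
    dt = t_second_loop_s - t_first_loop_s
    if dt <= 0:
        raise ValueError("second loop must trigger after the first")
    return (loop_spacing_m / dt) * 3.6   # m/s -> km/h

def vehicles_per_interval(detection_times_s, interval_s=60.0):
    """Per-interval count, the behaviour of the simplest single-loop
    detectors (60 s intervals are typical in the United States)."""
    counts = {}
    for t in detection_times_s:
        bucket = int(t // interval_s)
        counts[bucket] = counts.get(bucket, 0) + 1
    return counts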

8) Video vehicle detection

Traffic flow measurement and automatic incident detection using video cameras is another form of vehicle detection. Since video detection systems such as those used in automatic number plate recognition do not involve installing any components directly into the road surface or roadbed, this type of system is known as a "non-intrusive" method of traffic detection. Video from black-and-white or color cameras is fed into processors that analyze the changing characteristics of the video image as vehicles pass. The cameras are typically mounted on poles or structures above or adjacent to the roadway. Most video detection systems require some initial configuration to "teach" the processor the baseline background image. This usually involves inputting known measurements such as the distance between lane lines or the height of the camera above the roadway. A single video detection processor can detect traffic simultaneously from one to eight cameras, depending on the brand and model. The typical output from a video detection system is lane-by-lane vehicle speeds, counts, and lane occupancy readings. Some systems provide additional outputs including gap, headway, stopped-vehicle detection, and wrong-way vehicle alarms.

B. Intelligent transportation applications

1) Electronic toll collection


Figure: Electronic toll collection at the "Costanera Norte" freeway, downtown Santiago, Chile.

Electronic toll collection (ETC) makes it possible for vehicles to drive through toll gates at traffic speed, reducing congestion at toll plazas and automating toll collection. Originally ETC systems were used to automate toll collection, but more recent innovations have used ETC to enforce congestion pricing through cordon zones in city centers and ETC lanes.

Until recent years, most ETC systems were based on using radio devices in vehicles that would use proprietary protocols to identify a vehicle as it passed under a gantry over the roadway. More recently there has been a move to standardize ETC protocols around the Dedicated Short Range Communications protocol that has been promoted for vehicle safety by the Intelligent Transportation Society of America, ERTICO and ITS Japan.

While communication frequencies and standards do differ around the world, there has been a broad push toward vehicle infrastructure integration around the 5.9 GHz frequency (802.11.x WAVE).

Via its National Electronic Tolling Committee, representing all jurisdictions and toll road operators, ITS Australia also facilitated interoperability of toll tags in Australia for multi-lane free-flow toll roads.

Other systems that have been used include barcode stickers, license plate recognition, infrared communication systems, and Radio Frequency Identification Tags (see M6 Toll tag).

2) Emergency vehicle notification systems

The in-vehicle eCall is an emergency call generated either manually by the vehicle occupants or automatically via activation of in-vehicle sensors after an accident. When activated, the in-vehicle eCall device establishes an emergency call carrying both voice and data directly to the nearest emergency point (normally the nearest E112 public-safety answering point, PSAP). The voice call enables the vehicle occupant to communicate with the trained eCall operator. At the same time, a minimum set of data is sent to the eCall operator receiving the voice call.

The minimum set of data contains information about the incident, including time, precise location, the direction the vehicle was traveling, and vehicle identification. The pan-European eCall aims to be operative for all new type-approved vehicles as a standard option. Depending on the manufacturer of the eCall system, it could be mobile phone based (Bluetooth connection to an in-vehicle interface), an integrated eCall device, or a functionality of a broader system like navigation, Telematics device, or tolling device. eCall is expected to be offered, at earliest, by the end of 2010, pending standardization by the European Telecommunications Standards Institute and commitment from large EU member states such as France and the United Kingdom.
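For illustration only, the minimum set of data listed above could be represented by a simple container such as the one below; the field names are assumptions and do not follow the standardized eCall MSD encoding.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MinimumSetOfData:
    """Illustrative container for the eCall minimum set of data described
    in the text: time, precise location, direction of travel and vehicle
    identification. Field names are assumptions, not the ETSI layout."""
    timestamp: datetime
    latitude: float
    longitude: float
    direction_deg: float      # direction the vehicle was traveling
    vehicle_id: str           # e.g. the VIN (placeholder value below)
    manual_trigger: bool      # True if an occupant pressed the button

msd = MinimumSetOfData(datetime.now(timezone.utc), 48.137, 11.575,
                       90.0, "VIN-PLACEHOLDER", manual_trigger=False)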


Figure: Congestion pricing gantry at North Bridge Road, Singapore.

3) Cordon zones with congestion pricing

Cordon zones have been implemented in Singapore, Stockholm, and London, where a congestion charge or fee is collected from vehicles entering a congested city center. This fee or toll is charged automatically using electronic toll collection or automatic number plate recognition, since stopping the users at conventional toll booths would cause long queues, long delays, and even gridlock. The main objective of this charge is to reduce traffic congestion within the cordon area.

4) Automatic road enforcement

Figure: Automatic speed enforcement gantry ("Lombada Eletrônica") with ground sensors at Brasília, DF.

A traffic enforcement camera system, consisting of a camera and a vehicle-monitoring device, is used to detect and identify vehicles disobeying a speed limit or some other road legal requirement and automatically ticket offenders based on the license plate number. Traffic tickets are sent by mail. Applications include:

• Speed cameras that identify vehicles traveling over the legal speed limit. Many such devices use radar to detect a vehicle's speed or electromagnetic loops buried in each lane of the road.

• Red light cameras that detect vehicles that cross a stop line or designated stopping place while a red traffic light is showing.

• Bus lane cameras that identify vehicles traveling in lanes reserved for buses. In some jurisdictions, bus lanes can also be used by taxis or vehicles engaged in car pooling.

• Level crossing cameras that identify vehicles crossing railways at grade illegally.

• Double white line cameras that identify vehicles crossing these lines.

• High-occupancy vehicle lane cameras that identify vehicles violating HOV requirements.

• Turn cameras at intersections where specific turns are prohibited on red. This type of camera is mostly used in cities or heavily populated areas.

5) Dynamic Traffic Light Sequence

Intelligent RFID traffic control has been developed for dynamic traffic light sequencing. It avoids the problems that usually arise with systems that use image processing or beam-interruption techniques. RFID technology with an appropriate algorithm and database was applied to a multi-vehicle, multi-lane, multi-road junction area to provide an efficient time-management scheme. A dynamic time schedule was worked out for the passage of each column. Simulation has shown that the dynamic sequence algorithm can intelligently adjust itself even in the presence of some extreme cases. The real-time operation of the system is able to emulate the judgment of a traffic policeman on duty by considering the


number of vehicles in each column and the routing priorities.
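A toy stand-in for the dynamic sequencing idea, not the published algorithm: the green time of one cycle is split across the approaches in proportion to the RFID vehicle counts, with a minimum green per approach. Cycle length and minimum green are assumed values.

def green_times(column_counts, cycle_s=120.0, min_green_s=10.0):
    """Allocate one cycle's green time across road columns in proportion
    to the number of vehicles detected in each column (e.g. via RFID),
    subject to a minimum green per column."""
    n = len(column_counts)
    spare = max(0.0, cycle_s - n * min_green_s)
    total = sum(column_counts.values()) or 1
    return {col: round(min_green_s + spare * count / total, 1)
            for col, count in column_counts.items()}

# Example: the busiest approach receives the longest green phase.
print(green_times({"north": 12, "south": 4, "east": 2, "west": 2}))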

C. ITS World Congress and Exhibition 2009

The World Congress and Exhibition on Intelligent Transport Systems and Services took place in Stockholm in September 2009. The 16th annual event, which rotates between Europe, the Americas and the Asia-Pacific region, came to Sweden for the first time and was held at Stockholm International Fairs (Stockholmsmässan) from 21 to 25 September 2009. The theme of this prestigious event was 'ITS in Daily Life', exploring how ITS can improve everyday mobility with a strong emphasis on co-modality.

Conclusion

"Road operators, infrastructure, vehicles, their drivers and other road users will cooperate to deliver the most efficient, safe, secure and comfortable journey. The vehicle-vehicle and vehicle-infrastructure co-operative systems will contribute to these objectives beyond the improvements achievable with stand-alone systems."

D. References

1. Monahan, Torin (2007). "War Rooms" of the Street: Surveillance Practices in Transportation Control Centers. The Communication Review 10(4): 367-389.
2. Trend in Road Accidents, Japan.
3. Dynamic Traffic Light Sequence, Science Publications.
4. 3rd eSafety Forum, 25 March 2004.


56. Optimization of Machining Parameters for Form Tolerance in Electrical Discharge Machining

for Aluminium Metal Matrix Composites

XXX. T. RAJA
M.Tech Student, PRIST University

ABSTRACT

Metal matrix composites (MMCs) are increasingly used in the automotive and aircraft industries. MMCs possess extreme hardness and temperature-resistant properties; because these properties do not change at elevated temperatures, they suit unique engineering applications. Among MMCs, aluminium metal matrix composites (AMCs) have found applications in surgical components.

The strength and high hardness imparted by the reinforcement make these materials difficult to machine by conventional processes, and conventional techniques cause serious tool wear due to the abrasive nature of the reinforcement. These materials can be machined by any of the unconventional machining processes, but Water Jet Machining and Abrasive Jet Machining are limited to linear cutting only.

Electrical Discharge Machining (EDM) is found to be the best option for MMCs. Its working is based on thermoelectric energy: a spark is created between the work and the electrode, both immersed in a dielectric fluid, with conduction of electric current. During the process a gap of 10-100 µm is maintained. Ignition of the discharge is initiated by a high voltage overcoming the dielectric breakdown strength of the small gap. Machining is controlled by process parameters, namely pulse current, pulse on time, pulse off time, intensity and flushing pressure.

The performance measures of the process are material removal rate (MRR), electrode wear ratio (EWR), surface roughness (SR) and form tolerances, all of which are affected by the above process parameters. Material removal rate increases with increasing current, but EWR also increases, and surface roughness likewise increases with current. The objective of this work is to optimize the process parameters for the optimum performance measures: a suitable material is machined and an optimization technique is applied to find the optimum process parameters.

I. Introduction

Scope and Objectives

• Identifying the set of process variables (current, pulse on time and pulse off time) that gives higher MRR, lower EWR, better surface finish and optimum form tolerance (circularity, cylindricity, flatness and straightness) for the material.
• Validating the optimized values.
• Achieving better performance measures with optimized process variables in composite materials.

• Aluminium composite materials
• Emerging material
• Applied in aerospace and automotive industries


2. Background and Justification of the Problem


• The constituent hard reinforcement makes the material difficult to machine on conventional machines, so unconventional machining is best.
• Laser cutting and WJM are limited to linear cutting, so EDM is preferred for machining composites.
• Machining on EDM aims at maximum MRR, minimum EWR, better SR and better form tolerance.
• If MRR increases, EWR increases; if MRR increases, SR increases. So an optimum setting is needed to machine the composite with better form tolerance.
• The material is to be prepared by the stir casting method: aluminium is the matrix material, SiC is the reinforcement material, and a suitable electrode material (brass or copper) is to be selected.

3. Material Preparation

Metal matrix composites (MMCs) are a range of advanced materials that can be used for a wide range of applications within the aerospace, automotive, nuclear, biotechnology, electronics and sporting goods industries. MMCs consist of a non-metallic reinforcement incorporated into a metallic matrix, which can provide advantageous properties over base metal alloys. These include improved thermal conductivity, abrasion resistance, creep resistance, dimensional stability, exceptionally good stiffness-to-weight and strength-to-weight ratios, and better high-temperature performance.

Steps in the MMC fabrication process:

1. Collection and preparation of the raw materials to be used as the charge.
2. Placing the raw materials in a graphite crucible under nitrogen gas in a furnace.
3. Heating the crucible above the liquidus temperature and allowing time for the charge to become completely liquid.
4. During cooling, stirring is started in the semi-solid condition and continued until a temperature is reached at which less than 30% of the metal has solidified.
5. Pouring into the mould.
6. Withdrawal of the composite from the mould, giving the desired fabricated MMC ingots.

4. Sinker EDM

Sinker EDM, also called cavity-type EDM or volume EDM, consists of an electrode and workpiece submerged in an insulating liquid, typically oil or, less frequently, another dielectric fluid. The electrode and workpiece are connected to a suitable power supply, which generates an electrical potential between the two parts. As the electrode approaches the workpiece, dielectric breakdown occurs in the fluid, forming a plasma channel, and a small spark jumps.

These sparks usually strike one at a time, because it is very unlikely that different locations in the inter-electrode space have identical local electrical characteristics that would enable sparks to occur simultaneously in all such locations. The sparks occur in huge numbers at seemingly random locations between the electrode and the workpiece. As the base metal is eroded and the spark gap subsequently increases, the electrode is lowered automatically by the machine so that the process can continue uninterrupted. Several hundred thousand sparks occur per second, with the actual duty cycle carefully controlled by the setup parameters. These controlling cycles are sometimes known as "on time"



and "off time", which are more formally defined in the literature. The on-time setting determines the length, or duration, of the spark; a longer on time produces a deeper cavity for that spark and all subsequent sparks in that cycle, creating a rougher finish on the workpiece. The reverse is true for a shorter on time. Off time is the period between one spark and the next. A longer off time, for example, allows the flushing of dielectric fluid through a nozzle to clean out the eroded debris, thereby avoiding a short circuit. These settings can be controlled in microseconds. The typical part geometry is a complex 3D shape, often with small or odd-shaped angles. Vertical, orbital, vectorial, directional, helical, conical, rotational, spin and indexing machining cycles are also used.

Advantages

Some of the advantages of EDM include the machining of:

• Complex shapes that would otherwise be difficult to produce with conventional cutting tools.
• Extremely hard material to very close tolerances.
• Very small workpieces, where conventional cutting tools may damage the part through excess cutting-tool pressure.
• There is no direct contact between tool and workpiece, so delicate sections and weak materials can be machined without any distortion.
• A good surface finish can be obtained.
• Very fine holes can be drilled easily.

Expected outcome

• Any combination of materials can be prepared and the effect of machining can be studied.
• Changes in the mechanical properties of composite materials can be studied.
• Surface topography and RCL formation can be studied.

Methodology and Approach

Preparing/purchasing Al-SiC composites, design of experiments, performing the EDM process, measurement of performance, and optimization and validation.

5. Facility required

• Electrical Discharge Machine (die-sinking type) – J.J.C.E.T
• Roughness tester – J.J.C.E.T
• Weight balance – J.J.C.E.T
• Coordinate measuring machine – N.I.T
• Scanning electron microscope – N.I.T
• Optimization software

XXXI. CONCLUSION:


Optimized parameters of peak current, pulse on time and pulse off time will be found. With those levels, better performance will be measured for the Al-SiC composite material with different volume percentages of SiC.

Optimization technique for machining parameters: grey relational analysis

For the Electrical Discharge Machining (EDM) process, material removal rate is a higher-the-better performance characteristic, whereas surface roughness and electrode wear ratio are lower-the-better performance characteristics. As a result, an improvement in one performance characteristic may require a degradation of another.

Hence, optimization of the multiple performance characteristics is much more complicated than optimization of a single performance characteristic. The grey relational analysis based on grey system theory can be used to solve the complicated interrelationships among the multiple performance characteristics effectively.

In grey relational analysis, a grey relational grade is obtained to evaluate the multiple performance characteristics. In this way, optimization of the complicated multiple performance characteristics can be converted into optimization of a single grey relational grade. The use of an orthogonal array together with grey relational analysis greatly simplifies the procedure for determining the optimal machining parameters with multiple performance characteristics in the EDM process. As a result, the method developed in this study is very suitable for practical use in a machine shop.

The relationship between machining parameters and performance can be found with grey relational analysis. Grey relational analysis has even been applied to face recognition in combination with other statistical methods (Chen et al., 2000). It is important to select machining parameters in Electrical Discharge Machining (EDM) to achieve optimal machining performance (Tarng et al., 1995). The Taguchi method (Taguchi, 1990; Ghani et al., 2004) is a systematic application of design and analysis of experiments for the purpose of designing and improving product quality. Dr. Taguchi proposed a quality engineering plan (the Taguchi method) with robust design based on design of experiments, to simplify a great quantity of full-factorial experimentation. It has been extensively applied in engineering design and the analysis of optimal manufacturing. Addressing multiple performance characteristics with the Taguchi method requires further research effort.

A statistical analysis of variance (ANOVA) is performed to identify the process parameters that are statistically significant, and based on the ANOVA the optimal combination of process parameters is predicted. In the grey relational analysis, the experimental results for electrode wear ratio, material removal rate and surface roughness are first normalized into the range between zero and one, which is also called grey relational generating. Next, the grey relational coefficient is calculated from the normalized experimental results to express the relationship between the desired and actual experimental results. Then, the grey relational grade is computed by averaging the grey relational coefficients corresponding to each performance characteristic. As a result, optimization of the complicated multiple performance characteristics can be converted into optimization of a single grey relational grade; the optimal level of the process parameters is the level with the highest grey relational grade. Furthermore, ANOVA is performed to see which process parameters are statistically significant. With grey relational analysis and ANOVA, the optimal combination of the process parameters can be predicted.
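A compact sketch of the grey relational computation described above, assuming a response matrix with one column per performance measure (e.g. MRR as higher-the-better, EWR and SR as lower-the-better) and using the commonly adopted distinguishing coefficient of 0.5; the matrix layout is an assumption for illustration.

import numpy as np

def grey_relational_grade(results, larger_is_better, zeta=0.5):
    """Grey relational analysis for multiple responses.
    results: (n_experiments, n_responses) array; larger_is_better: one
    bool per column; zeta: distinguishing coefficient (commonly 0.5)."""
    x = np.asarray(results, dtype=float)
    norm = np.empty_like(x)
    for j in range(x.shape[1]):
        lo, hi = x[:, j].min(), x[:, j].max()
        span = (hi - lo) or 1.0
        if larger_is_better[j]:              # higher-the-better (MRR)
            norm[:, j] = (x[:, j] - lo) / span
        else:                                # lower-the-better (EWR, SR)
            norm[:, j] = (hi - x[:, j]) / span
    delta = 1.0 - norm                       # deviation from the ideal (=1)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)                # grade = mean coefficient per run

# The experiment (row) with the highest grade is the best compromise
# between MRR, EWR and surface roughness.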


References

Narender Singh P, Raghukandan K, Pai BC (2004) Optimization by grey relational analysis of EDM parameters on machining Al–10%SiCp composites. J Mater Process Technol 155–156:1658–1661.

Narender Singh P, Raghukandan K, Rathinasabapathi M, Pai BC (2004) Electric discharge machining of Al–10%SiCp as-cast metal matrix composites. J Mater Process Technol 155–156:1653–1657.

Mohan B, Rajadurai A, Satyanarayana KG (2004) Electric discharge machining of Al–SiC metal matrix composites using rotary tube electrode. J Mater Process Technol 153–154:978–985.

Seo YW, Kim D, Ramulu M (2006) Electrical discharge machining of functionally graded 15-35 vol.% SiCp/Al composites. Materials and Manufacturing Processes 21:479–487.

Sushant D, Rajesh P, Nishant S, Akhil S, Hemath KG (2007) Mathematical modeling of electric discharge machining of cast Al–4Cu–6Si alloy–10 wt.% SiCp composites. J Mater Process Technol 194:24–29.

Akshay D, Pradeep K, Inderdeep S (2008) Experimental investigation and optimization in EDM of Al 6063 SiCp metal matrix composite. Int J Machin Machinab Mater 5(3/4):293–308.


57. OPTIMIZATION OF SQUEEZE CASTING PROCESS PARAMETER FOR WEAR USING SIMULATED ANNEALING

V. VINAYAKAMURTHY, Student of M.Tech (Manufacturing Technology), PRIST University, Thanjavur, [email protected]
Prof. P. VIJAYA KUMAR, M.E., Department of Mechanical Engineering, PRIST University, Thanjavur

ABSTRACT

Squeeze casting is a special casting process by which net or near-net-shaped components can be formed from molten metal. A number of process parameters are generally considered for the soundness of cast components. The process parameters taken into account here are squeeze pressure, die preheating temperature, pressure application duration and die material. LM6 and LM24 aluminium alloys were chosen for the squeeze casting process.

In this project, the process parameters were optimized to minimize the wear of the squeeze-cast component. A pin-on-disc wear measuring instrument was used to determine the wear of the squeeze-cast components.

From the experimental data, second-order equations were developed for the LM6 and LM24 aluminium alloys using MINITAB software. The maximum and minimum values of the process parameters were taken as constraints. The optimal values of the process parameters were obtained through a simulated annealing algorithm; MATLAB software was used for the coding.

The wear of the LM6 and LM24 aluminium alloys was compared at the optimum process parameter values, and the comparison shows that the LM6 aluminium alloy has better wear properties than the LM24 aluminium alloy. The wear of squeeze-cast and sand-cast components was also compared, and it confirms that squeeze-cast components have better wear properties than sand-cast components.

Objective of the Project

The squeeze casting process parameters, namely squeeze pressure, pressure application duration, die preheat temperature and die material, are to be optimized using a simulated annealing algorithm to minimize the wear of the cast components. Finally, the wear of the LM6 and LM24 aluminium alloys is compared at the optimum process parameter values, and the wear of a squeeze-cast material and a sand-cast material is also compared.
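The optimization step can be pictured as a generic simulated-annealing loop over the four process parameters; the wear_model callable stands in for the fitted second-order regression equation (the project used MATLAB, so this Python version, its cooling schedule and its step size are illustrative assumptions rather than the authors' code).

import math, random

def simulated_annealing(wear_model, bounds, t0=100.0, cooling=0.95,
                        iters=2000, step=0.1, seed=1):
    """Minimize predicted wear. wear_model(x) returns the wear predicted
    by the second-order regression for a parameter vector x; bounds gives
    (min, max) per parameter (squeeze pressure, die preheat temperature,
    pressure duration, die material code), used as the constraints."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    cur = best = wear_model(x)
    best_x, t = list(x), t0
    for _ in range(iters):
        # Perturb each parameter and clamp it back inside its bounds.
        cand = [min(hi, max(lo, xi + rng.gauss(0, step * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        val = wear_model(cand)
        # Accept improvements always, worse moves with a temperature-
        # dependent probability (the Metropolis criterion).
        if val < cur or rng.random() < math.exp(-(val - cur) / t):
            x, cur = cand, val
            if val < best:
                best, best_x = val, list(cand)
        t *= cooling
    return best_x, best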

About the Squeeze Casting Process

Squeeze casting, a combination of gravity die casting and closed-die forging, uses a technique in which molten metal solidifies under pressure within closed die halves. The applied pressure keeps the solidifying metal in intimate contact with the mould, which produces rapid heat-transfer conditions that yield a pore-free, fine-grained casting with mechanical properties approaching those of a wrought product.

Squeeze casting was introduced in the United States by a group technology centre, and investigations have also been carried out in Russia, Germany, Japan and India. This process, though not yet popular, is slowly picking up momentum. In recent years, squeeze casting has found some production applications in the United States, the U.K. and Japan for aluminium alloys and metal matrix composites.

The squeeze casting technique is still at the experimental stage in India. With the rapid expansion of the automotive and aerospace sectors, the demand for superior engineering components is expected to increase in course of time; squeeze casting has the potential to play an important role in improving the quality of engineering products.

Advantages of Squeeze Casting

• Ability to produce parts with complex profiles and thin sections beyond the capability of conventional casting and forging techniques.
• Substantial improvement in material yield because of the elimination of gating and feeding systems.
• Ability to use both cast and wrought compositions.
• Improvements in product quality with regard to surface finish and dimensional accuracy.
• Castings may be heat treated; complete elimination of shrinkage and gas porosity.
• Potential for using cheaper recycled material without the loss of properties that would occur with other processes.
• Potential for increased productivity in comparison with gravity die casting and low-pressure die casting.
• Suitable for production of cast composite materials.
• Improved mechanical properties.

Application of Squeeze Casting

The microstructural refinement and integrity of squeeze-cast products are desirable for many applications. As the squeeze casting process generates the highest mechanical properties attainable in a cast product, it has been successfully applied to a variety of ferrous and non-ferrous alloys in traditionally cast and wrought compositions.

Applications include aluminium alloy pistons for engines and disc brakes, automotive wheels, truck hubs, barrel heads and hubbed flanges, brass and bronze bushings and gears, steel missile components and differential piston gears, at affordable cost. The technological breakthrough of manufacturing metal-ceramic composites, along with the ability to make complex parts to near-net shape, suggests that this process will find application wherever cost considerations and the physical properties of the alloy are factors.

Benefits of the Squeeze Casting Process

By improving the squeeze casting process, a new option will be available to the casting industry for producing the complex, lightweight aluminium parts that are increasingly demanded in competitive markets such as that for automotive components. This will result in energy savings due to the lower melt temperature of aluminium compared with alternative metals. Improving the current knowledge of the squeeze casting process will also reduce scrap by an estimated 15%, in addition to a significant reduction in solid waste and dust.


Methodology

Fig 1 Flow chart

Composition and Properties

LM6 Aluminium Alloy (equivalent to AL 413.0)

Composition: aluminium 88%, silicon 12%.

Properties:
• Density: 2.65 g/cm3
• Tensile stress: 200 N/mm2
• Elongation: 13%
• Brinell hardness number: 60
• Coefficient of expansion per °C x 10-6: 20
• Thermal conductivity at 25 °C: 142 W/mK
• Electrical conductivity at 20 °C: 37% IACS
• Fluidity, hot-tear resistance, pressure tightness, corrosion resistance: –
• Strength at elevated temperature and machinability: poor

Application: water-cooled manifolds and jackets, motor car and road transport fittings, motor housings, meter cases and switch boxes.

LM24 Aluminium Alloy (equivalent to AL 380.0)


Composition: aluminium 88%, silicon 8%, copper 3%, iron 1%.

Properties:
• 0.2% proof stress: 126 MPa
• UTS: 233 MPa
• Elongation: 2.7%

Application: suitable for general-purpose die casting applications and aircraft applications.

About the Wear

Wear is material loss due to a force that slides surfaces relative to one another. Wear is a complicated area where there seem to be few predictive relations; most of the design criteria seem to be "rules of thumb". In order for these rules to be applicable, they are limited to certain special areas of wear. Thus, wear is usually divided into some sub-areas, which we discuss in this chapter.

9.0 Reference

1. S. Seshan (1996) "Squeeze casting – present status and future trends", Indian Foundry Journal.
2. Kunal Basu (1989) "Squeeze casting of aluminium and its alloys", Indian journal.
3. Kalyanmoy Deb (2000) Optimization for Engineering Design: Algorithms and Examples. Prentice Hall India Limited.
4. S. S. Rao (1998) Engineering Optimization: Theory and Practice.
5. Jukka Kohonen (1999) A brief comparison of simulated annealing and genetic algorithm approaches. Term paper for the "Three Concepts: Utility" course.


6. S. Kirkpatrick, C. D. Gelatt, Jr., M. P. Vecchi (13 May 1983) Optimization by simulated annealing. Science, Volume 220, Number 4598.

58. OPTIMIZATION OF WELDING PARAMETERS USING RADIOGRAPHY TESTING

R. BASKARAN, Student of M.Tech (Manufacturing Technology), PRIST University, Thanjavur, [email protected]
R. SASTHISKUMAR, M.E., Asst. Professor, Faculty of Mechanical Department, PRIST University, Thanjavur

ABSTRACT

Non-destructive testing of engineering materials, components and structures has increased steadily in recent years at an unprecedented rate because of the all-round thrust for improving material quality, component integrity and performance reliability. The demands for absolute safety and reliability in strategic industries such as atomic energy, aerospace and defence are aimed at obtaining almost defect-free components, and this is possible only through effective and reliable use of NDT methods. Radiography Testing (RT) is one NDT technique in which materials are examined completely without damage. The RT method is used for the detection of internal defects such as porosity and voids; with proper orientation of the X-ray beam, planar defects can also be detected. Various types of radiation are used to expose the film, generally X-rays. With this technique, defects such as porosity, slag inclusions, blow holes and cracks are identified and quality levels are maintained. RT films are exposed according to the density of the material: higher-density material gives a lighter (whiter) film and lower-density material gives a darker film. Experimentally, five cases of specimens (mild steel and stainless steel) were prepared and tested by the radiography testing method. The detection of defects is optimized using the probability of detection (POD) concept; the probability of detection is evaluated for radiography testing at the 90/95% confidence level.

The film evaluations with respect to density changes are carried out. Variations in the darkness may be interpreted to provide information concerning the internal structure of the material.
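The 90/95 criterion mentioned above can be checked with a one-sided binomial (Clopper-Pearson) lower bound on the detection rate from hit/miss trials; this sketch is only a simple hit/miss treatment, not the full POD curve-fitting procedure used in NDT standards.

from scipy.stats import beta

def pod_lower_bound(detections, trials, confidence=0.95):
    """One-sided lower Clopper-Pearson confidence bound on the
    probability of detection (POD) from hit/miss trial data."""
    if detections == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, detections, trials - detections + 1)

# Example: 29 detections out of 29 flawed specimens gives a 95% lower
# bound of about 0.902, i.e. POD >= 90% at 95% confidence ("90/95").
print(round(pod_lower_bound(29, 29), 3))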

1.INTRODUCTION

Reliability comes through improving the quality level of components. The quality of products, components or parts depends upon many factors; important among them are the design, raw material properties and fabrication techniques.

Quality is related to the presence of those defects and imperfections in the finished product which impair the performance level. Many defects are also generated during service. Knowledge of these defects with a view to detect and evaluate them and then minimizing them in the product is essential to achieve improved or acceptable level of quality. There is therefore, a need to have methods by which the defects in the products can be examined without affecting their performance.

Non-Destructive Testing and Non-Destructive Inspection are the terms used in this connection to represent the techniques that are based on the application of physical principles employed for the purpose of determining the characteristics of materials, components or systems, and for detecting and assessing inhomogeneities and harmful defects without impairing the usefulness of such materials, components or systems.

NDT plays an important role not only in the quality control of the finished product but also during various stages of manufacturing. NDT is also used for condition monitoring various items during operation to predict and assess the remaining life of the component while retaining its structural integrity.

2. OBJECTIVE OF THE PROJECT

• To optimize the radiography testing film evaluation with respect to density changes.
• During testing, a good part being rejected or a bad part being accepted both create serious problems in the manufacturing industry.
• To achieve 100% defect detection using radiography testing.

3. NON DESTRUCTIVE TESTING

Non-destructive test and evaluation is aimed at extracting information on the physical, chemical, mechanical or metallurgical state of materials or structures. This information is obtained through a process of interaction between the information-generating device and the object under test, and it can be generated using X-rays, gamma rays, neutrons, ultrasonic methods, magnetic and electromagnetic methods, or any other established physical phenomenon.

The process of interaction does not damage the test object or impair its intended utility value, and it is influenced by the physical, chemical and mechanical state of the material.

4. NON DESTRUCTIVE TECHNIQUES

NDT methods range from the simple to the intricate. Visual inspection is the simplest of all. Surface imperfections invisible to the eye may be revealed by penetrant or magnetic methods. If serious surface defects are found, there is often little point in proceeding to the more complicated examination of the interior by other methods such as ultrasonics or radiography. The principal NDT methods are visual or optical inspection, dye penetrant testing, magnetic particle testing, radiography testing and ultrasonic testing.

ASNT - American Society for Non destructive Testing

ISNT - International Society for Non destructive Testing

CWI - Certified Welding Inspector

Page 435: Keynote 2011

435

NDT - Non-Destructive Testing
NDE - Non-Destructive Evaluation
NDI - Non-Destructive Inspection

Personnel certification levels: Level I works under supervision; Level II calibrates, tests, interprets and evaluates with respect to codes and standards; Level III establishes techniques and procedures for specific processes.

APPLICATIONS OF NDT

1. Nuclear, space, aircraft, defence, automobile, chemical and fertilizer industries.
2. Heat exchangers, pressure vessels, electronic products and computer parts.
3. Highly reliable structures and thickness measurement.

5. WORK METHODOLOGY

Figure: Radiographic testing flow. Source (radio isotope) → probing medium → interaction with matter (transmission/absorption) → detector (photographic film) → display.


6. RESULTS AND CONCLUSION

Comparison of NDT methods:

Criterion            Penetrant Testing   Magnetic Particle Testing   Radiography Testing   Ultrasonic Testing
Depth of thickness   None                Fair                        Good                  Good
Cracks               Excellent           Excellent                   Excellent             Excellent
Slag                 Good                Good                        Good                  Good
Porosity             Good                Fair                        Excellent             Excellent
Undercut             Good                Excellent                   Fair                  Poor
Laminations          NA                  Poor                        Poor                  Superior
Speed                Rapid               Fairly rapid                Slow                  Fairly rapid
Accuracy             Good                Good                        Excellent             Excellent
Effectiveness        Fairly effective    Fairly effective            Very effective        Very effective
Cost                 Low cost            Low cost                    Expensive             Expensive

REFERENCES

1. G. K. Sahu, Handbook of Piping Design.
2. C. Basavaraju, Stress Analysis of Piping Systems.
3. Baldev Raj, Practical Non-Destructive Testing, Narosa Publishing.
4. J. Prasad, Non-Destructive Test and Evaluation of Materials, TMG.


5. Craig Stinchcoms, Welding Technology Today.
6. R. Hamsaw, Non-Destructive Testing Handbook.
7. R. S. Sharp, Research Techniques in NDT.
8. T. Jayakumar, Non-Destructive Testing of Welds.

59. OPTIMAL MANIPULATOR DESIGN USING INTELLIGENT

TECHNIQUES (G.A & D.E)

P.RAJESH,Student of M.E (CAD/CAM), JJ College of Engineering and Technology, [email protected]

ABSTRACT

In this paper, we discuss optimum robot design based on task specifications using evolutionary optimization approaches. The two evolutionary optimization approaches employed are a simple NSGA and DE. These approaches were used for the optimum design of a planar robot used for a grinding operation. In order to ensure the position and orientation accuracy of the robot end effector, as well as to reduce the assembly cost of the robot, it is necessary to quantify the influence of the uncertain factors and optimally allocate the tolerances. This involves a study of the direct and inverse kinematics of the robot end effector in the presence of uncertain factors. The objective function minimizes the torque required for the motion subject to deflection and physical constraints, with the design variables being the link physical characteristics (length and cross-section area parameters). In this work, we experimented with various cross-sections. The main findings of this research are that differential evolution converges quickly, requires a significantly smaller number of iterations and achieves better results. The results obtained from the various techniques are compared and analyzed.

1. Introduction

In this paper, we discuss optimum robot design based on task specifications using two evolutionary optimization approaches: the Non-dominated Sorting Genetic Algorithm (NSGA) and differential evolution (DE). These approaches were used for the optimum design of SCARA-type manipulators. The objective function minimizes the torque subject to deflection and physical constraints for a defined manipulator motion, with the design variables being the link physical characteristics (length and cross-sectional area parameters). The links of serial manipulators are usually over-designed in order to be able to support the subsequent links in the chain and the task to be manipulated. However, increasing the size of the links


unnecessarily requires the use of larger actuators resulting in higher power requirements.

Optimum robot design has been addressed by many researchers in the open literature, based on kinematic or dynamic specifications. Oral and Idler optimized a planar robotic arm for a particular motion; they described the minimum-weight design of a high-speed robot using sequential quadratic programming [1]. Yang and Ming-Chen applied the minimized degree-of-freedom concept in a modular robotic system in order to maximize the load-carrying capacity and reduce the power requirements using an evolutionary algorithm (a variation of GA) [2]. Paredis and Khosla discussed optimum kinematic design for serial link manipulators using simulated annealing based on task specifications [3]. Tang and Wang compared Lagrangian and primal-dual neural network approaches for real-time joint torque optimization of kinematically redundant manipulators, using the desired acceleration of the end effector as input for a specific task [4]. Chen and Cheng described various control schemes and explained the advantages of the "minimum velocity norm method" over other methods for local joint torque optimization in redundant manipulators [5]. Paredis used a distributed agent-based genetic algorithm to create fault-tolerant serial-chain manipulators from a small inventory of links and modules [6].

Chedmail and Ramstein used a genetic algorithm to determine the base position and type (one of several) of a manipulator to optimize workspace reachability [7]. To the best of our knowledge, the DE technique has never been investigated or applied to the optimum design of serial link manipulators.

This paper addresses the application and comparison of two evolutionary techniques for the optimum design of serial link robotic manipulators based on task specifications. The objective function minimizes the required torque for a defined motion subject to various constraints while considering kinematic, dynamic and structural conditions. The kinematic and dynamic analyses are derived based on robotic concepts, and the structural analysis is performed based on the finite element method. The design variables examined are the link parameters and the link cross-sectional characteristics. The developed environment was employed in optimizing the design variables for a SCARA-type manipulator.

This paper is organized as follows. In Section 2, the proposed NSGA and DE techniques used to obtain the minimum torque of the SCARA manipulator are presented. The optimization problem definition is given in Section 3. In Section 4, a numerical example, an industrial robot with 4 DOF (SCARA), is presented to illustrate the use of the proposed NSGA and DE techniques to evaluate the objective function. In Section 5, the results obtained by the various methods are presented and compared. The conclusions are presented in Section 6.

2. Proposed methods


In this section, the computational techniques used for obtaining minimum objective function of the manipulator in the working area are described.

2.1 Non-dominated Sorting Genetic Algorithm (NSGA)

The non-dominated sorting genetic algorithm was proposed by Srinivas and Deb and is based on several layers of classification of the individuals. Before selection is performed, the population is ranked on the basis of non-domination: all non-dominated individuals are classified into one category. To maintain the diversity of the population, these classified individuals share their dummy fitness values. This group of classified individuals is then set aside and another layer of non-dominated individuals is considered. The process continues until all individuals in the population are classified.
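As an illustration of the ranking step just described, the following minimal Python sketch performs layered non-dominated sorting for a two-objective minimisation problem; the objective values are hypothetical and the dummy-fitness sharing used by NSGA is omitted for brevity.

# Minimal sketch of layered non-dominated sorting (two minimisation objectives).
# The objective values are hypothetical; NSGA additionally shares dummy fitness
# within each front, which is omitted here.

def dominates(a, b):
    """True if solution a dominates b (no worse in all objectives, better in at least one)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objectives):
    """Return a list of fronts, each front being a list of solution indices."""
    remaining = set(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

if __name__ == "__main__":
    # Hypothetical (torque, deflection) pairs for five candidate designs.
    objs = [(3.0, 0.02), (2.5, 0.05), (4.0, 0.01), (2.4, 0.06), (3.5, 0.03)]
    print(non_dominated_sort(objs))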

Figure: Process of the Non-dominated Sorting Genetic Algorithm (NSGA) — start; initialize population; front = 1; classify the population into non-dominated fronts; reproduction according to dummy fitness; crossover; mutation; increment the generation counter and repeat until gen reaches max gen.


2.2 Differential Evolution

The DE approach contains the same processes of population initialization, mutation, crossover and selection, and it emphasizes direct use of the objective function. A cost function, C, is used to rate the individual vectors according to their capability to minimize the objective function, f. Solution vectors without constraint violations have cost functions equal to the objective function; otherwise, a constraint violation increases the cost of a vector by a value greater than the objective function. For our study of torque minimization, the lower the cost, the better or more fit the design variables are.
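A minimal sketch of this penalised-cost idea inside a basic DE loop (the common DE/rand/1/bin variant) is given below; the toy objective, bounds, penalty value and control parameters are illustrative assumptions, not the settings used in the paper.

import random

# Penalised cost: feasible vectors are rated by the objective itself, infeasible
# ones by the objective plus a large penalty term.
def cost(x, objective, constraints, penalty=1e6):
    violation = sum(max(0.0, g(x)) for g in constraints)   # g(x) <= 0 means feasible
    return objective(x) + (penalty + violation if violation > 0 else 0.0)

def differential_evolution(objective, constraints, bounds,
                           pop_size=20, F=0.8, CR=0.9, generations=100):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x, objective, constraints) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample([k for k in range(pop_size) if k != i], 3)
            j_rand = random.randrange(dim)
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])   # mutation + crossover
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))                 # keep within bounds
            c_trial = cost(trial, objective, constraints)
            if c_trial <= costs[i]:                               # greedy selection
                pop[i], costs[i] = trial, c_trial
    best = min(range(pop_size), key=lambda k: costs[k])
    return pop[best], costs[best]

if __name__ == "__main__":
    # Toy problem: minimise the sum of squares subject to x0 + x1 >= 1.
    sol, c = differential_evolution(
        objective=lambda x: sum(v * v for v in x),
        constraints=[lambda x: 1.0 - x[0] - x[1]],
        bounds=[(0.0, 2.0), (0.0, 2.0)])
    print(sol, c)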

3. Optimization Problem Definition

The optimization problem investigated is that of optimizing the link parameters of a robotic manipulator to obtain minimum power requirements or joint torques. The task specification is subdivided into two elements: the kinematic characteristics (the required position of the end effector) and the dynamic characteristics (the time required to complete the task motion while carrying the tool and considering the inertia properties of the links themselves). Constraints are imposed on the minimum and maximum (range) values of the link parameters, which include the link length and the link cross-sectional area characteristics, and on the allowable deflection of the end effector. In addition, constraints are imposed that address physical limitations such as the range of motion of the actuators of the manipulator.

Objective Function

The objective function is defined as the cumulative sum of the torques for each joint during the motion of the manipulator,

f(x) = \sqrt{\sum_{i=1}^{\mathrm{Time}} \sum_{j=1}^{\mathrm{Joints}} t_{ij}^{2}}    (1)

where t_{ij} is the torque at time step i for joint j.
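A small sketch that evaluates equation (1) from a torque history is given below; the torque values are hypothetical placeholders for the values that, in the paper, come from the dynamics evaluation.

import math

def torque_objective(torque_history):
    """Equation (1): square root of the sum of squared joint torques over the motion.

    torque_history[i][j] is the torque at time step i for joint j (hypothetical
    values here; in the paper they come from the Newton-Euler dynamics routine).
    """
    return math.sqrt(sum(t ** 2 for step in torque_history for t in step))

if __name__ == "__main__":
    # Three time steps of a two-joint arm (illustrative numbers only).
    history = [[1.2, 0.8], [1.5, 0.9], [1.1, 0.7]]
    print(round(torque_objective(history), 3))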

Constraints

The constraints for this optimization include the deflection of the end effector of the manipulator, physical constraints such as the limits on the joint values, and the structural characteristics of each link. The allowable end-effector deflection constraint is

\delta \le \delta_{\mathrm{allow}}    (2)


The deflection δ is evaluated using finite element analysis techniques with the manipulator at its maximum reach (completely stretched out), since this yields the maximum deflection; any other configuration yields a smaller deflection for the same payload. The deflection evaluation is a function of the structural and material properties of the links and of the payload. The constraints on the joint limits or range of motion of the manipulator actuators are imposed due to physical limitations. The joint constraints are defined as

\theta_{i}^{\min} \le \theta_{i} \le \theta_{i}^{\max}, \quad i = 1, \ldots, n    (3)

where θ_i is the joint value for joint i and θ ∈ ℝ^n, with n being the number of joints. The constraints on the structural characteristics are defined as

d_{j}^{\min} \le d_{j} \le d_{j}^{\max}, \quad j = 1, \ldots, m    (4)

where d_j indicates the j-th structural characteristic and d ∈ ℝ^m, with m being the number of structural characteristics. The design space for the structural characteristics always includes the link length, with the remaining parameters dependent on the type of cross-section being considered. In this work, the cross-sections considered are hollow rectangular, square and cylindrical sections and a C-channel, as shown in Figure 4. The number of structural constraints is dynamic and depends on the type of cross-section being considered in the analysis.

Analysis Procedure

The first step in the analysis process is to define the problem, define the design variables, assign values to all the parameters and define the constraint vector. The problem definition consists of defining the kinematic structure of the manipulator to be analyzed using Denavit-Hartenberg (DH) parameters, the desired initial and final positions of the manipulator in Cartesian space, the desired time for the motion, the payload, and the material property and cross-section type of the links. The schematic of this process is shown in the figure.

The analysis routines related to manipulator motion analysis are inverse kinematics, joint space trajectory generation and dynamics. These are evaluated using standard robotic analysis algorithms.

Inverse kinematics evaluates the required initial and final joint variables based on the desired Cartesian coordinates of the initial and final positions. Joint space trajectory generation produces the position θ(t) (cubic polynomial profile), velocity θ'(t) and acceleration θ''(t) profiles based on the desired motion time and joint variables. Dynamics evaluates the individual joint torques based on the structural characteristics of the links, the payload, and the position, velocity and acceleration profiles.


The dynamics are evaluated according to the Newton-Euler method.
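A minimal sketch of the cubic-polynomial joint-space profile mentioned above is given below; it assumes the usual boundary conditions of zero joint velocity at the start and end of the motion, and the numeric values are illustrative only.

def cubic_trajectory(theta0, thetaf, T, t):
    """Cubic joint-space profile with zero boundary velocities:
    returns position, velocity and acceleration at time t (0 <= t <= T)."""
    d = thetaf - theta0
    s = t / T
    pos = theta0 + d * (3 * s**2 - 2 * s**3)
    vel = 6 * d * s * (1 - s) / T
    acc = 6 * d * (1 - 2 * s) / T**2
    return pos, vel, acc

if __name__ == "__main__":
    # Move a joint from 0 rad to 1.2 rad in 2 s (illustrative values).
    for t in (0.0, 0.5, 1.0, 1.5, 2.0):
        print(t, cubic_trajectory(0.0, 1.2, 2.0, t))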


Figure: Analysis Flow-Chart

REFERENCES:

1. S. Oral and S. K. Ider, Optimum design of high-speed flexible robotic arms with dynamic behavior constraints, Computers and Structures, Vol. 65(2), 255-259, 1997.

2. G. Yang and I.-M. Chen, Task-based optimization of modular robot configurations: minimized degree-of-freedom approach, Mechanism and Machine Theory, Vol. 35, 517-540, 2000.

3. J. J. Paredis and P. K. Khosla, Kinematic design of serial link manipulators from task specifications, International Journal of Robotics Research, Vol. 12(3), 274-287, 1993.


60. STUDY OF PROJECT MANAGEMENT IN CAR

INFOTAINMENT

Mukesh R, Research Scholar, Department of Industrial Engineering and Management, Dayananda Sagar College of Engineering, Bangalore, India

Abstract— Car infotainment is going to be a growing opportunity into the next decade. There is a great need in the information systems industry for better project management skills, and the necessity of project management in technical work is growing day by day. Initially we receive the SRD (Software Requirement Description) document from the customers; the SRD is formed through repeated discussions with the clients. In the planning phase the project is scheduled and planned in a systematic manner to meet the specified objectives within the time period. Here the project is sub-divided into modules and work-packages which are subsequently assigned to the human resources. In the execution phase, the modules are implemented using suitable tools. Project tracking involves monitoring of the entire project with project-specific tools; we track and control the process so that quality and time are not impeded. Project execution or implementation works hand-in-hand with the tracking process. The software generated is tested against the required prospects and is delivered within the scheduled time period. Project delivery is the phase in which the products or the software are delivered to the clients or the end-users. We plan the releases according to the features implemented. We are committed to delivering the products through quality and innovation. This defines the value of project management in software development for improving software quality and productivity.

Keywords: Infotainment System, SRD, Work-package, Project planning, Execution, Tracking and Delivery.

I.INTRODUCTION

Project management is "the application of knowledge, skills, tools and techniques to project activities to meet the project requirements" [1]. Project management is not only about striving to meet specific scope, time, cost, and quality goals of projects; it also includes facilitating the entire process to meet the needs and expectations of people involved in or affected by project activities. To discuss project management it is important to understand the concept of a project. [2] "A project is a temporary endeavor undertaken to create a unique product, service or result". Operations, on the other hand, are work done in organizations to sustain the business. Projects are different


R V Praveena Gowda, Assistant Professor, Department of Industrial Engineering and Management, Dayananda Sagar College of Engineering, Bangalore

from operations in that they end when their objectives have been reached or the project has been terminated.

A. PROJECT CHARACTERISTICS
1. A project is temporary: every project has a definite beginning and a definite end, and always has a definite time frame.
2. A project creates unique deliverables, which are products, services, or results.
3. A project creates a capability to perform a service.
4. A project is always developed in steps and continues by increments (progressive elaboration).

B. INFOTAINMENT SYSTEM

Project management in "Infotainment Systems" is going to be a growing opportunity into the next decade. An infotainment system is a combination of information and entertainment: Infotainment = Information + Entertainment. The information system includes modules which provide information; the modules that come under this are Navigation, Climate and Traffic information. The entertainment system includes modules which provide entertainment, such as Television, Radio, Bluetooth and multimedia.

C. TRIPLE CONSTRAINT:

The triple constraint in project management is the act of organizing resources such as scope, time and cost to bring about a desired result. The triple constraint is the balance of the


project's scope, schedule (time) and cost. It is sometimes called Dempster's triangle, wherein the sides or corners represent the scope, time and cost of a project being managed by the project managers. The triple constraint is used to gauge whether a project's objectives are being met.

During the planning process of a project, the project management team defines the project scope, time, cost and quality. As the process continues, the project managers may discover that changes or adjustments have to be made to one of the project's scope, time and cost. When this happens, the other factors of the triple constraint are likely to be affected as well. For example, if the cost increases, it is logical to assume that the scope and time will increase as well; likewise, if the cost decreases, the scope and time will decrease too. It is the job of the project management team to respond to project risk, which is a possible incident or condition that can have a good or bad effect on the project.

Fig 1: Triple Constraint [6],[9]

II. DESCRIPTION OF PROJECT MANAGEMENT PROCESSES

Traditionally, project management includes a number of elements: four to five process groups and a control system. Regardless of the methodology or terminology used, the same basic project management processes are used.


Fig 3: Project Management Processes

Major process phases generally include: initiation; planning or development; execution or implementation; monitoring and controlling; delivery or closing.

Fig 4: Project Process Phases [4]

1. INITIATION:

Initiating processes include defining and authorizing a project or project phase. Initially we receive the SRD (Software Requirement Description) document from the customers; the SRD is formed through repeated discussion with the customers. The project charter and project scope are defined along with this process.

2. PLANNING AND DESIGN:

After the initiation stage, the project is planned to an appropriate level of detail. The main purpose is to plan time, cost and resources adequately to estimate the work needed and to effectively manage risk during project execution. As with the initiation process group, a failure to plan adequately greatly reduces the project's chances of successfully accomplishing its goals.

In the project planning phase the project is scheduled and planned in a systematic manner to meet the specified objectives within the time period. In this phase the requirements are kept in consideration and are planned accordingly. Here the project is subdivided into modules and work-packages which are subsequently assigned to the human resources.

Project planning generally consists of determining how to plan (e.g. by level of detail or according to the features to be implemented); developing the scope statement; selecting the planning team; identifying deliverables and creating the work breakdown structure; identifying the activities needed to complete those deliverables and networking the activities in their logical sequence; estimating the resource requirements for the activities; estimating time and cost for activities; developing the schedule; developing the budget; risk planning; and gaining formal approval to begin work. Additional processes, such as planning for communications and scope management, identifying roles and responsibilities, determining what to purchase for the project and holding a kick-off meeting, are also generally advisable. For new product development projects, conceptual design of the operation of the final product may be performed concurrently with the project planning activities, and may help to inform the planning team when identifying deliverables and planning activities.

3. EXECUTING

Executing consists of the processes used to complete


the work defined in the project management plan to accomplish the project's requirements. In this phase the modules are implemented using suitable tools. The features are implemented according to the project plan, and the work-packages are executed according to the planned release process. The execution process involves coordinating people and resources, as well as integrating and performing the activities of the project in accordance with the project management plan. The deliverables are produced as outputs from the processes performed as defined in the project management plan.

4. MONITORING AND CONTROLLING:

Monitoring and controlling consists of those

processes performed to observe project execution so that potential problems can be identified in a timely manner and corrective action can be taken, when necessary, to control the execution of the project. Tracking involves monitoring of the entire project with project-specific tools. We track and control the process such that quality and time are not impeded. The key benefit is that project performance is observed and measured regularly to identify variances from the project management plan.

Project execution or implementation works hand-in-hand with the tracking process. The software generated is tested against the required prospects and is delivered within the scheduled time period. [6] Any deviation from the requirements or the specifications is considered a bug or error, and hence a ticket is raised.

Monitoring and controlling includes: measuring the ongoing project activities ('where we are'); monitoring the project variables (cost, effort, scope, etc.) against the project management plan and the project performance baseline ('where we should be'); identifying corrective actions to address issues and risks properly ('how we can get on track again');


and influencing the factors that could circumvent integrated change control, so that only approved changes are implemented.

In multi-phase projects, the monitoring and controlling process also provides feedback between project phases, in order to implement corrective or preventive actions to bring the project into compliance with the project management plan. Project maintenance is an ongoing process, and it includes:


continuing support of end users, correction of errors, and updates of the software over time.

Fig 5: Monitoring and Controlling cycle [9]

In this stage, auditors should pay attention to how effectively and quickly user problems are resolved. Over the course of any construction project, the work scope may change. Change is a normal and expected part of the construction process. Changes can be the result of necessary design modifications, differing site conditions, material availability, contractor-requested changes, value engineering and impacts from third parties, to name a few. Beyond executing the change in the field, the change normally needs to be documented to show what was actually constructed; this is referred to as change management. Hence, the owner usually requires a final record showing all changes or, more specifically, any change that modifies the tangible portions of the finished work.

The record is made on the contract documents, usually, but not necessarily limited to, the design drawings. The end product of this effort is what the industry terms as-built drawings, or more simply, "as built". The requirement for providing them is a norm in construction contracts. When changes are introduced to the project, the viability of the project


has to be re-assessed. It is important not to lose sight of the initial goals and targets of the project. When the changes accumulate, the forecasted result may no longer justify the original proposed investment in the project.

5. DELIVERY OR CLOSING

Project delivery is the phase in which the products or the software are delivered to the clients or the end-users. We plan the releases according to the features implemented. Closing includes the formal acceptance of the project and the ending thereof. [10],[9] Administrative activities include archiving the files and documenting lessons learned. This phase consists of:

Project close: finalize all activities across all of the process groups to formally close the project or a project phase. Contract closure: complete and settle each contract (including the resolution of any open items) and close each contract applicable to the project or project phase.

"We are committed to delivering the products or deliverables through Quality and Innovation".

III. DESCRIPTION OF INFOTAINMENT SYSTEM

The infotainment system consists of various modules, as shown in the following diagram. The infotainment system is basically formed from two words, Information and Entertainment. The information module includes support for the navigation system with the help of GPS. It provides details about the possible routes to a particular destination, and voice support is also provided for the navigation system. The map of the particular country or region is displayed along with the current position of the car or any vehicle on the map.

Navigation has various features associated with it, including points of interest (POI), where the user can select a particular location of interest, and route guidance to the particular set destination is provided accordingly.


Fig 6: Infotainment System in a Car [5]

Traffic information is also provided in this system. The possible routes to a particular destination, and dynamic routes based on the traffic situation over a particular route, are provided, and a suitable route is displayed to the user. Climate information provides the temperature and weather details of any locality; it also predicts natural phenomena which are likely to occur.

The entertainment module consists of the various features supported by it, including Radio, Television, Bluetooth, Internet, telephone pairing and other multimedia facilities.

IV. CONCLUSION

When technical fields are implemented on their own, various constraints and difficulties may be associated with them; this was the case previously. The inclusion of project management sheds light on how it leads to a better implementation of the project in a much more organized and procedural manner. The blend of managerial skills and technical skills can provide better application of skills, management of resources, and meeting of the specifications according to the requirements within specific time constraints.

It also reveals that project management can moderate the effects of planning and control on the process involved in the implementation of the requirements and hence reduce the effects on the end user. In some cases, controlling project performance without managerial skill proves inefficient. The results indicate that project management makes a greater contribution to product performance when there is a high level of inherent uncertainty. Thus it helps in accurate understanding of the approach and reduces the risks or tickets, which results in the greater success of the project or product.

V. REFERENCES
1]. Kathy Schwalbe, Ph.D., PMP, Augsburg College, Information Technology Project Management, 2006.
2]. PMBOK Guide, 4th edition, Project Management Institute, Inc., 2008.
3]. PCI Group, PM Best Practices Report, October 2001.
4]. International Journal of Project Management, Elsevier, ISSN: 0263-7863.
5]. www.luxoft.com/casestudies/7782/ - United States / Developing Car Infotainment System.
6]. Project Management Technology - Software Tech Support Centre.
7]. Sebastian Nokes, The Definitive Guide to Project Management, 2nd edition, 2007.
8]. Dennis Lock, Project Management, 9th edition, Gower Publishing Ltd, 2007, ISBN 0-566-08772-3.
9]. Morgen Witzel, Fifty Key Figures in Management, Routledge, 2003, ISBN 0-415-36977-0, pp. 96-101.
10]. Booz Allen Hamilton - History of Booz Allen, 1950.


61. HEAT TRANSFER AUGMENTATION IN A PILOT SIMULATED SOLAR

POND

Mugesh Babu K, Prakash K

Students of JJ College of Engineering and Technology, Tiruchirappalli

[email protected]

Abstract: Solar ponds are pools of salt water used as thermal energy storage systems. To tap the available solar energy for various applications, thermal energy storage systems are required because the availability of solar energy is intermittent. Various methods are available to store thermal energy. The stored thermal energy must also be exchanged properly for the required applications, and during this heat exchange process better ways of transferring heat are an important aspect. In this paper we have attempted to increase the rate of heat transfer in a simulated pilot solar pond by providing twisted tapes in the flow passage.

Key words: Solar Pond, Augmentation,

Twisted tapes, Thermal energy storage.

I. INTRODUCTION

Solar ponds are low-cost, large

scale, solar thermal energy collectors with

integrated heat storage. The saltwater

naturally forms a vertical salinity gradient

also known as a "halocline", in which low-salinity

water floats on top of high-salinity water. The layer

of salt solutions increases in concentration with

depth i.e., density of water increases with depth.

When solar energy is absorbed in the water its

temperature increases, this causes thermal

expansion and reduced density of water. If the

water were fresh, the low-density warm water

would float to the surface, causing convection

current. The temperature gradient alone causes a

density gradient that decreases with depth.

However the salinity gradient forms a density

gradient that increases with depth, and this

counteracts the temperature gradient, thus

preventing heat in the lower layers from moving

upwards by convection and leaving the pond. This

means that the temperature at the bottom of the

pond will rise to over 90 °C while the temperature

at the top of the pond is usually around 30 °C. Below a certain depth, the solution has a uniformly high salt concentration. Thus these salt gradient solar ponds are of the non-convecting type.

II. VARIOUS ZONES OF A SALT GRADIENT SOLAR

POND


There are three distinct zones in a solar pond.

Fig 1: Solar pond Zones[2]

A. Upper Convecting Zone (UCZ)

This is the top layer of the pond, with less dense liquid. This zone is transparent and allows the solar radiation to pass through.

B. Non Convecting Zone (NCZ)

This is an intermediate zone which acts as a thermal insulator. The density of the water is higher than in the UCZ but lower than in the LCZ. In this zone the variation of density along the depth is predominant.

C. Lower Convecting Zone (LCZ)

The bottom layer of the solar pond has the highest salinity among the zones. The maximum storage of heat occurs in this zone, and hence the temperature is the maximum compared to the other zones. The exchange of heat from the solar pond is achieved from the lower convective zone.

III. HEAT EXCHANGE PROCESS IN A SOLAR POND



Fig 2. Heat Exchange in a Solar Pond

The conventional method adopted

for exchanging the heat from the solar pond

to any medium (usually water) is shown in

the schematic sketch. The hot water leaving

the pond is used for process heat or it can

exchange heat with low boiling point fluids

and can develop power by operating in an

organic Rankine cycle. In the conventional method of heat exchange the rate of heat transfer is low, and this provides the scope for augmentation of heat transfer in solar pond technology.


IV. EXPERIMENTAL SET UP

Fig. 3: Schematic sketch of the simulated pilot solar pond (solar pond with heat exchanger, thermocouples, sampling vents, data logger, water inlet and outlet, and water storage).

Two solar ponds were fabricated using GI sheet 1.5 mm thick with dimensions of 50 mm x 60 mm x 50 mm (height). A heat exchanger made of copper tube of 12 mm diameter was installed in the fabricated pond.

In the second solar pond twisted tapes were inserted; these twisted tapes were of 10 mm height and made of copper sheet. The solar ponds were insulated with 20 mm of glass wool followed by 25 mm of styrofoam.

The four sides and the bottom of the ponds were insulated. The fabricated ponds were filled with salt water of 100% concentration to a depth of 25 mm; this region forms the Lower Convective Zone (LCZ). Above this zone, water with 50% salt concentration was filled to a depth of 25 mm; this zone forms the Non Convective Zone (NCZ). Finally, a depth of 10 mm was filled with ordinary water to form the Upper Convective Zone (UCZ). Sampling vents were provided at every 10 mm to determine the density of the water. Thermocouples of T type were placed in all three zones to determine the temperatures.

V. EXPERIMENTS CONDUCTED

The two solar ponds after filling with

salt water were left for three days.

The presence of halocline was ensured

by checking for variation of density with

depth. Every day samples were taken from

sampling vents to confirm the existence of

halocline. The densities of the samples were determined by measuring the mass of 10 ml of each sample using an electronic chemical balance. The ponds were heated by two 500 W halogen lamps. The thermocouples were connected to a digital temperature indicator. The temperatures of the three zones were measured every 30 minutes for three days. The temperatures of the water entering and leaving the heat exchanger were also tabulated.
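The tabulated readings allow the heat gained by the circulating water to be estimated; the short sketch below does this with Q = m_dot * cp * (T_out - T_in), where the specific heat of water (about 4186 J/kg·K) is an assumed value, not one stated in the paper.

def heat_transfer_rate(m_dot, t_in, t_out, cp=4186.0):
    """Rate of heat gained by the water, Q = m_dot * cp * (T_out - T_in), in watts.

    m_dot: mass flow rate in kg/s (0.018182 kg/s in this experiment);
    cp: specific heat of water in J/(kg K), assumed here rather than given in the paper.
    """
    return m_dot * cp * (t_out - t_in)

if __name__ == "__main__":
    # One pair of tabulated readings (inlet 31 C, outlet 41 C) used purely as an illustration.
    print(round(heat_transfer_rate(0.018182, 31.0, 41.0), 1), "W")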


VI. AVERAGE READINGS FOR THREE DAYS

Heat input: 1000 W
Time taken for collection of 1 litre of water: 55 s
Volume flow rate: 0.018182 l/s
Mass flow rate: 0.018182 kg/s

Without Twisted Tape

Average Temperatures (°C)

Time Duration LCZ NCZ UCZ Twi Two Tamb

9:30 0 28 28 28 28 28 29

10:00 0:30 30 28 32 28 28 29

10:30 1 33 29 32 28 29 30

11:00 1:30 35 30 32 29 30 30

11:30 2 38 31 32 29 31 30

12:00 2:30 41 32 32 30 32 31

12:30 3 43 33 33 30 33 31

13:00 3:30 45 33 33 30 34 31

13:30 4 46 34 33 30 35 31

14:00 4:30 48 34 34 31 36 31

14:30 5 49 34 34 31 37 31

15:00 5:30 51 35 34 31 38 30

15:30 6 52 35 34 31 39 30

16:00 6:30 53 35 34 31 40 30

16:30 7 55 36 34 31 41 29

With Twisted Tape


Average Temperatures (°C)

Time Duration LCZ NCZ UCZ Twi Two Tamb

9:30 0 28 28 28 28 28 29

10:00 0:30 30 28 32 28 28 29

10:30 1 33 29 32 28 29 30

11:00 1:30 35 30 32 29 30 30

11:30 2 38 31 32 29 31 30

12:00 2:30 41 32 32 30 32 31

12:30 3 43 33 33 30 34 31

13:00 3:30 45 33 33 30 35 31

13:30 4 46 34 33 30 37 31

14:00 4:30 48 34 34 31 39 31

14:30 5 49 34 34 31 41 31

15:00 5:30 51 35 34 31 42 30

15:30 6 52 35 34 31 43 30

16:00 6:30 53 35 34 31 44 30

16:30 7 55 36 34 30 45 29

VII. RESULTS AND DISCUSSION

Without Twisted Tape

Figure: Temperature variation in the zones (LCZ, NCZ, UCZ) and of the outlet water (Two) versus time, without twisted tape; temperature axis 0-60 °C.

With Twisted Tape

Figure: Temperature variation in the zones (LCZ, NCZ, UCZ) and of the outlet water (Two) versus time, with twisted tape; temperature axis 0-60 °C.


CONCLUSION

The experiment conducted clearly indicates the increase in heat transfer in the solar pond obtained by providing a twisted tape in the flow passage. This is evident from the larger change in temperature of the fluid for the solar pond with the twisted tape compared to the solar pond without twisted tapes. A solar pond is a method of tapping solar energy that operates at low temperature, and for a low-temperature system it is necessary to have an increased rate of heat transfer. Further, various twist ratios of the tape inserts can be tried to increase the rate of heat transfer of the solar pond. A pressure drop penalty in the flow passage is incurred due to the insertion of the twisted tapes to increase the rate of heat transfer.

REFERENCES

[1] J. Srinivasan (1993), "Solar Pond Technology", Sadhana, Vol. 18, Part 1.

[2] G. R. Ramakrishna Murthy and K. P. Pandey, Department of Agricultural and Food Engineering, Indian Institute of Technology, Kharagpur, "Solar Ponds: A Perspective from Indian Agriculture".

[3] P. Sivashanmugam and S. Suresh, "Experimental studies on heat transfer and friction factor characteristics of laminar flow through a circular tube fitted with helical screw-tape inserts".

[4] O. V. Mitrofanova (2003), "Hydrodynamics and Heat Transfer in Swirling Flows in Channels with Swirlers (Analytical Review)", High Temperature, Vol. 41, No. 4, pp. 518-559.

[5] R. M. Manglik and A. E. Bergles (1992), "Heat transfer enhancement and pressure drop in viscous liquid flow in isothermal tubes with twisted tape inserts", Wärme- und Stoffübertragung.

[6] S. E. Tarasevich and A. B. Yakovlev (2003), "The Hydrodynamics of One- and Two-Phase Flows in Channels with Longitudinally Continuous Twist", High Temperature, Vol. 41, No. 2, pp. 233-242.

[7] S. D. Sharma and Kazunobu Sagara (2005), "Latent Heat Storage Materials and Systems", International Journal of Green Energy, 2: 1-56.


62. PETRI NET METHODS: A SURVEY IN

MODELING AND SIMULATION OF PROJECTS

Rashmi L. M., Research Scholar, Department of Industrial Engineering and Management, Dayananda Sagar College of Engineering, Bangalore, India

Abstract— Petri nets are a promising tool for describing and studying information processing systems that are characterized as being concurrent, asynchronous, distributed, parallel and non-deterministic. This paper surveys recent research on the application of Petri net models to project management. Petri nets have been used extensively in applications such as automated manufacturing, and there exists a large body of tools for qualitative and quantitative analysis of Petri nets. We present an overview of the various models and problems formulated in the literature.

Keywords: Project management, Petri nets, Simulation, Reachability.

I.INTRODUCTION

Project management is the discipline of planning, organizing, securing and managing resources to bring about the successful completion of specific engineering project goals and objectives. Modeling is the process of producing a model; a model is a representation of the construction and working of some system of interest. One purpose of a model is to enable the analyst to predict the effect of changes to the system. Simulation of a system is the operation of a model of the system, and it allows the randomness in a system to be analyzed.

M. B. Kiran, Professor, Department of Industrial Engineering and Management, Dayananda Sagar College of Engineering, Bangalore

Project management has various functions such as planning, scheduling and control; [11] it is a complex task to manage constraints such as resources, time and cost. Traditional project management tools and other revised tools are limited in their representation of the problem and in dealing with situations dynamically. The most popular methods for project planning and management are based on a network diagram,

such as the Program Evaluation and Review Technique (PERT) and the Critical Path Method (CPM). The availability of improved tools such as Decision CPM (DCPM), the Graphical Evaluation and Review Technique (GERT) and the Venture Evaluation and Review Technique (VERT) has not satisfied the requirements.

A. HISTORY OF CPM/PERT

Critical Path Method (CPM):

1. E. I. du Pont de Nemours & Co. (1957), for construction of a new chemical plant and maintenance shut-downs.
2. Deterministic task times.
3. Activity-on-node network construction.
4. Repetitive nature of jobs.

Project Evaluation and Review Technique (PERT):
1. U.S. Navy (1958), for the POLARIS missile program.
2. Multiple task time estimates (probabilistic nature).
3. Activity-on-arrow network construction.
4. Non-repetitive jobs (R&D work).

B. PERT (Project Evaluation Review Technique)

PERT is based on the assumption that an activity's duration follows a probability distribution instead of being a single value. Three time estimates are required to compute the


parameters of an activity's duration distribution:

Pessimistic time (tp): the time the activity would take if things did not go well.
Most likely time (tm): the consensus best estimate of the activity's duration.
Optimistic time (to): the time the activity would take if things


did go well.
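These three estimates are conventionally combined using the standard PERT (beta-approximation) relations, which the text does not state explicitly and which are added here for completeness:

t_e = \frac{t_o + 4t_m + t_p}{6}, \qquad \sigma^2 = \left(\frac{t_p - t_o}{6}\right)^2

where t_e is the expected activity duration and \sigma^2 its variance.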

C. CPM (Critical Path Method)

CPM Calculation:

Path: a connected sequence of activities leading from the starting event to the ending event.
Critical path: the longest path (in time); it determines the project duration.
Critical activities: all of the activities that make up the critical path.

1. Forward pass: earliest start time (ES) is the earliest time an activity can start: ES = maximum EF of the immediate predecessors. Earliest finish time (EF) is the earliest time an activity can finish: the earliest start time plus the activity time, EF = ES + t.

2. Backward pass: latest start time (LS) is the latest time an activity can start without delaying the critical path time: LS = LF - t. Latest finish time (LF) is the latest time an activity can be completed without delaying the critical path time: LF = minimum LS of the immediate successors.
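A minimal Python sketch of the forward and backward passes just described is given below, applied to a small hypothetical activity-on-node network; the activities, durations and dependencies are illustrative and not taken from the paper.

# Forward/backward pass on a toy activity-on-node network.
def cpm(durations, predecessors):
    order = list(durations)                      # assumes keys are listed in topological order
    es, ef = {}, {}
    for a in order:                              # forward pass: ES = max EF of predecessors
        es[a] = max((ef[p] for p in predecessors[a]), default=0)
        ef[a] = es[a] + durations[a]
    project_duration = max(ef.values())
    successors = {a: [b for b in order if a in predecessors[b]] for a in order}
    ls, lf = {}, {}
    for a in reversed(order):                    # backward pass: LF = min LS of successors
        lf[a] = min((ls[s] for s in successors[a]), default=project_duration)
        ls[a] = lf[a] - durations[a]
    critical = [a for a in order if es[a] == ls[a]]
    return es, ef, ls, lf, critical

if __name__ == "__main__":
    durations    = {"A": 3, "B": 2, "C": 4, "D": 2}
    predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
    print(cpm(durations, predecessors))          # critical path here is A, C, D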

D. PETRI NETS

Although these tools (PERT, CPM, DCPM, GERT, VERT) [1] have been successful in off-line planning and scheduling, it is difficult to dynamically monitor and control the progress of a project with them. [2] The major hurdle with these tools is the assumption of infinite numbers of resource interdependencies, no provision of information to analyze reasons for the tardy progress of activities, and no help in the study of partial allocation, mutual exclusivity and substitution of resources. In this context a new tool, the Petri net, is used. Hence the need for powerful graphical and analytical tools such as Petri nets arises in project management.

A Petri net is a directed bipartite graph in which the nodes represent transitions (i.e. events that may occur, signified by bars) and places (i.e. conditions, signified by circles). A Petri net (also known as a place/transition net or P/T net) is one of several mathematical modeling languages for the description of distributed systems. Petri nets can be used as a visual communication aid similar to flow charts, block diagrams and networks. Tokens are used in these nets to simulate the dynamic and concurrent activities of systems. Petri nets are a promising tool for describing and studying information processing systems that are characterized as being concurrent, asynchronous, distributed, parallel and non-deterministic.

1. BASIC OBJECTS OF THE PETRI NET LANGUAGE

Places: These are inactive and are analogous to inboxes in an office-based system. They are shown as circles in a Petri net diagram. Each Petri net has one start place and one end place, but any number of intermediate places.

Transitions: These are active and represent tasks to be performed. They are shown as rectangles in a Petri net diagram.

Arcs: Each of these joins a single place to a single transition. They are shown as connecting lines in a Petri net diagram. An inward arc goes from a place to a transition and an outward arc goes from a transition to a place.

Tokens: These represent the current state of a workflow process. They are shown as black dots within places in a Petri net diagram. A place can hold zero or more tokens at any moment in time.

Inadequacies of conventional management tools include non-automatic rescheduling of activities, non-suitability for resolving conflicts arising from resource priorities, and the incapability of representing the resources available for each activity of a project.

2. BASIC SYMBOLS OF PETRI NETS

Immediate transition: the transition occurs immediately after conditioning, without any delay. Such a transition always has priority over transitions of other types.
Deterministic transition: the transition occurs a constant time after conditioning.
Exponential transition: the firing time of this transition is distributed exponentially with a given mean time.
General stochastic transition: the time from the meeting of the input conditions until the transition occurrence is described by a probability distribution function.
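To make the place/transition/arc/token vocabulary above concrete, the following minimal Python sketch implements a tiny place/transition net with the usual one-token-per-arc firing rule; the example net and marking are purely illustrative.

# Minimal place/transition net: a transition is enabled when every input place
# holds at least one token; firing consumes one token per input arc and
# produces one token per output arc.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)        # place name -> token count
        self.transitions = {}               # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

if __name__ == "__main__":
    net = PetriNet({"start": 1, "resource": 1, "done": 0})
    net.add_transition("do_task", inputs=["start", "resource"], outputs=["done", "resource"])
    print(net.enabled("do_task"))   # True
    net.fire("do_task")
    print(net.marking)              # {'start': 0, 'resource': 1, 'done': 1}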

II. CLASSIFICATION OF PETRI NETS [3],[4]

Petri nets are normally classified into four main categories: i) elementary nets, ii) Petri nets, iii) higher order nets and iv) timed Petri nets, as shown in Table 1.

III.DESCRIPTION OF PETRI NET MAIN CLASSES

Fig 1.Petri Net Class Transformation Relationships.


A. Elementary Net Classes

These are a fundamentally simple class of Petri nets. Normally in ENs controlled changes take place via events that must be identical in another context. It is possible to obtain EN structures by 'reducing' other types of Petri nets; in this case information is lost or removed. For EN systems the state space is quite small and the behavior is always predictable. In the elementary net categories the Petri net structures are rather basic and restricted. Condition event nets (C/E) are structurally similar to elementary nets (EN); these structures are identified as having simple structural qualities in Petri net terminology. Input and output arcs connecting to a transition remove and output one token, which means that places represent Boolean information. Condition event net systems are pure/simple/1-live, and there is backward and forward reachability, where every event has a chance to occur. The idea is to formally or informally transform requirements directly into a simple Petri net or elementary net.

B. General Petri Nets

The second category of Petri net is still quite similar to the previous one [5],[8]; there are structural similarities. In fact, elementary net structures can be seen as Petri nets with certain restrictions, and it is still quite simple to convert this class into elementary nets via reduction. The Petri nets in this category are still simple in structure, but places can contain more than one token and arcs can have multiple values, removing more than one token at a go. In this category there are i) ordinary place/transition nets, ii) free choice nets, iii) S-system state machines, iv) T-system marked graphs, etc. It is possible to classify place/transition systems in this category. Tokens are still unstructured but they can represent integer values.

C. Higher Order Net Classes

The third category, as its name implies, has significant differences from the previous ones [6],[7],[9]. Here Petri nets are no longer simple and start to resemble programmable artifacts that are graphical and retain the basic structural and operational properties of Petri nets. Some higher order nets are: algebraic Petri nets, predicate transition nets (PrT), product nets, environmental nets, object oriented Petri nets, colored Petri nets, etc. Higher order structures are characterized mainly by token types that can represent anything, such as an object, a record, data sets or complex data types. Place token types are complex; tokens are well structured or well formed and can represent complex values or structures defined on abstract data types. Transitions can have complex firing rules that match special token data sets. The rules can be programmed using special languages such as ML, as in the case of CPNs [9],[10], and arcs can contain special inscriptions. These Petri nets are suitable for detailed modeling of system behavior. They offer a great deal of flexibility and simulations that are close to the real world or actual environment. This class of Petri nets greatly increases the modeling power of these structures; at the same time these structures become very complex and are no longer simple to create. They require special expertise, and to work with these classes programming knowledge is a prerequisite.

D. Timed Petri Net Classes

The fourth category represents timed Petri nets (TPNs). There are many different sub-categories of TPNs, but basically they are one of the three classes previously presented with the addition of the time dimension. As explained, more than a category in its own right, this class derives from the previous ones [8]. However, here they are considered as separate structures because they offer a different modeling perspective and the time dimension, which is important for system and software structures. This implies that it is possible to convert or transform an elementary net into a timed Petri net. Normally the time values are assigned to transitions; it is also possible to assign time values to arcs and places. Some common types of timed Petri nets are timed Petri nets (TPNs), deterministic timed Petri nets (DTPNs), stochastic Petri nets (SPNs), GSPNs, Q-nets, etc. TPNs are very important for performance modeling, simulation and analysis


related to bottleneck problems in systems. Some sub-classes of TPNs can become quite complex and detailed and require special expertise directed to special areas. A higher order Petri net can be created from a P/T net or vice versa. From the TPN structure it is possible to create a higher order Petri net or an EN or TPN, etc. For modeling system and software behavior the ideal starting point might be either the EN system or a place/transition net. Transformation from the P/T net into the EN is quite simple, as it is actually a reduction of the structural properties of the P/T net. Different literature exists for formalizing the possible correspondences and transformations. Formalizing the transformations might reduce the actual usefulness of this approach, because it might be better to leave it open to the user to develop the models accordingly. Hence the diagram in Fig. 1 serves as a reference map or guideline to what is possible.

A. PETRI NET CLASS USERS

Petri nets are normally classified into four categories, classes or levels, and each of these classes is used by many people for different purposes in an organization, as shown in Table 2. The table gives the users of the Petri net classes along with a description and purpose of the different classes of Petri nets.

Table 2: Petri net class users.

IV.CONCLUSION

Project management is a complex task. Project


managers are on the lookout for efficient project management tools to suit specific needs and tackle realistic problems. Though many modeling and control tools are available, they are limited in application when considering real-life situations. Hence, improved tools to tackle all kinds of situations are called for, considering the limitations of the traditional project management tools and the benefits offered by Petri nets.

It has been shown how all the different classes of Petri nets fit together and can be combined for different purposes. It has been explained how to use these classes in a semi-structured approach. Petri nets are decomposable in a top-down approach; this implies that the models, especially the more complex ones, can be refined in detail. It is possible to extract the functioning of the individual system components.

Some advantages of using Petri net classes in requirements engineering might be: i) improved system/software features, ii) reduced rework at the construction stage, iii) a final product with fewer defects, iv) better stakeholder negotiation of the product at the initial stages, prior to the design, v) accurate final system performance and timing, and vi) good requirements.

V.REFERENCES

1]. Elsayed, E. A. and N. Z. Nasr, 1986. Heuristics for resource constrained scheduling. Int. J. Production Res., 24(2): 299-310.
2]. Kumanan, S. and O. V. Krishnaiah Chetty, 2000. Project management through Petri nets. Proceedings of the 16th International Conference of CAD/CAM, Robotics and Factories of the Future, Trinidad, West Indies, 645-652.
3]. A Classification of Petri Nets, http://www.informatik.uni-hamburg.de/TGI/PetriNets/Classification
4]. L. Bernardinello, F. De Cindio, A Survey of Basic Petri Net Models and Modular Net Classes, LNCS Vol. 609, Springer-Verlag, 1992.
5]. J. Desel, W. Reisig, Place/Transition Petri Nets, LNCS Vol. 1491, Springer-Verlag, 1998.
6]. K. Jensen, G. Rozenberg, High-Level Petri Nets: Theory and Application, Springer-Verlag, Berlin, 1991.
7]. K. Jensen, L. M. Kristensen, L. Wells, "Coloured Petri Nets and CPN Tools for Modelling and Validation of Concurrent Systems", International Journal on Software Tools for Technology Transfer (STTT), Vol. 9, Springer-Verlag, 2007, pp. 213-254.
8]. M. Zhou, K. Venkatesh, Modeling, Simulation and Control of Flexible Manufacturing Systems: A Petri Net Approach, World Scientific, 1999, ISBN 981023029X.
9]. L. M. Kristensen, S. Christensen, K. Jensen, "The Practitioner's Guide to Coloured Petri Nets", International Journal on Software Tools for Technology Transfer (STTT), Vol. 2, Springer-Verlag, 1998, pp. 98-132.
10]. CPN Tools, CPN Group, Department of Computer Science, University of Aarhus, Denmark, http://www.daimi.au.dk/CPnet
11]. American Journal of Applied Sciences, Dec. 2008, S. Kumanan, K. Raja.


63. Quality sustenance on vendor and inventory products

*Vigneshwar Shankar, **R. V. Praveen Gowda, **Dr. H. K. Ramakrishna
*UG Student, Department of Industrial Engineering and Management, DSCE, Bangalore-78, India
**Department of Industrial Engineering & Management, DSCE, Bangalore-78, India
*ID: [email protected], **[email protected], [email protected]

Abstract: The rapidly changing demand for finished products without delays in today's world has made it necessary for a manufacturing company to include quality sustenance in vendor products and inventory as one of the important tools to overcome delays, or to cut the time taken for manufacturing, so as to advance in the manufacturing world. It is no surprise that a quality product produced in less time is the key to retaining customers and competing in the market. Manufacturers are forced to function in a world of change and complexity, and hence it is all the more important to have the right skills and process checks in place in order to survive the surrounding competition.

This paper presents the results of an empirical study on the factors that enhance the quality of a product with time in mind, without hindering the game plan. An experience of implant training drew attention towards quality in vendor products and the inventory line-up. It was noted that when one of the raw materials or a component was found to be defective, the whole product assembly time was affected, leading to huge delays and a backlog of working hours.

Keywords: quality sustenance, ramp up in production, reduction in delays, retention of customers, reliable assembly process

XXV. INTRODUCTION

H. Understanding current trends:

The economy is changing at a faster pace than ever before, forcing companies to meet deadlines with different product portfolio sets from one year to the next, regardless of the overall pace of economic growth. Overall expansion of production lines isn't the answer; expansions haven't decreased the backlog or delay rate. Studies show that companies that expand their lines usually run into the same problem of quality issues in raw materials or components from vendors or inventory stock, and often end up adding more backlogs to their current order book.

XXVI. GET TO KNOW YOUR INVENTORY

Today manufacturers are more aware of their intrinsic value, not just as another manufacturer in the production world. Studies show that the focus is shifting from the sales perspective to efficiency in production, which is becoming more and more important.

Even though a manufacturer may have managed to find the right process, this is not enough. A crucial factor is reducing backlogs or delays in the assembly line due to rejection of raw materials and other components. In some industries the level of assembly line turnover is naturally high, but for most industries this can damage the company in terms of, for example, quality and customer service.

The problem:

During implant training at a brake manufacturing company it was noticeable that product quality is given top priority. For one component, a "back plate" formed in the press shop, a complete batch was rejected due to rust formation on the surface, which was noticed before it was passed on to several presses; the batch was assigned to the process shop for removal of foreign particles, which caused delays in the assembly of the master boosters for the brake. In another instance, a wheel cylinder with a stiff piston also led to a delay in the final assembly line. Such events not only slow down the assembly line but also add pressure


onto the supervisors, assembly workers, and the management to meet the deadline of the customer requirements.

XXVII. THE THOUGHT STATUS

Methodology

1. Study of the situation
2. Collecting data
3. Analysis
4. Applying corrective action

Study of the situation:

This has to be done periodically or addressed whenever there is a feedback from the inventory dept. about the materials in stock. The component or raw material life time/status has to be immediately addressed so as to avoid future negative repercussion.

ECONOMIC ORDER QUANTITY (EOQ)

Economic Order Quantity (EOQ) is the most economical purchase order quantity or lot size. It keeps a balance between inventory carrying costs and ordering costs, which are opposed to one another: as the order quantity increases, inventory carrying costs increase while ordering costs decrease. Because these requirements conflict, there is a particular quantity at which the sum of the ordering and inventory carrying costs is minimum. This quantity is called the Economic Order Quantity (EOQ). The following relation is used in calculating it:

Total ordering cost = number of orders x ordering cost per order = (A/Q) x S

where A = annual consumption, Q = economic ordering quantity, and S = ordering or buying cost per order.

Fig. 4: Total costs of Inventory

The tendencies have been shown in the diagram above.
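The balance described above is usually made explicit through the classical EOQ formula, added here as the standard textbook result; the annual inventory carrying cost per unit, C, is not defined in the text and is introduced only for this expression:

Q^{*} = \sqrt{\frac{2AS}{C}}

where A is the annual consumption, S the ordering cost per order and C the annual inventory carrying cost per unit.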

A. ABC (always better control) classification

system

Classifying inventory according to some measure of importance and allocating control efforts accordingly.

A - Very important

B - Moderately important

C - Least important

Fig 5: ABC Classification System


A Class Items (High Consumption Value): these need very strict control policies, with no safety stocks or very low stocks.

B Class Items (Moderate Value): these items need moderate control; ordering is done once in 3 months, with low safety stocks and periodic follow-up of stocks.

C Class Items (Low Consumption Value): bulk-ordering policies are used, with ordering done once in six months. Production can be held up if one of the items is not available.

B. The quality of the raw material can be validated batch-wise, in which units or pieces of raw material are checked at random. If 6 pieces out of a batch of 20 are checked and approved, then the next batch can be checked for 6 pieces out of a batch of 50, and later pieces can be checked at random multiples in a million, which would have the same acceptable quality coming from the same supplier chain and could be calculated using probability. This model could help in quality sustenance without any compromise on production time and operational costs.
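One hedged way to quantify the batch-wise random checks described above is the probability that a sample drawn from a batch contains no defective piece, using a hypergeometric model; the batch sizes and defect counts in the sketch below are illustrative assumptions only.

from math import comb

# Probability that a random sample of n pieces drawn from a batch of N pieces,
# of which D are defective, contains no defective piece (hypergeometric model).
def prob_all_good(N, D, n):
    if n > N - D:
        return 0.0
    return comb(N - D, n) / comb(N, n)

if __name__ == "__main__":
    print(round(prob_all_good(N=20, D=1, n=6), 3))   # 6 pieces checked from a batch of 20 with 1 defect
    print(round(prob_all_good(N=50, D=1, n=6), 3))   # 6 pieces checked from a batch of 50 with 1 defect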

XXVIII. TECHNIQUES ADOPTED FOR

CONTROLLING INVENTORY

A. KANBAN SYSTEMS

KANBAN, a technique for work and inventory release, is a major component of Just in Time and Lean Manufacturing philosophy. It was originally developed at Toyota in the 1950s as a way of managing material flow on the assembly line.

Fig 6: KANBAN System

Kanban stands for kan (card) and ban (signal). The essence of the Kanban concept is that a supplier or the warehouse should only deliver components as and when they are needed, so that there is no excess inventory. Within this system, workstations located along production lines only produce desired components when they receive a card and an empty container, indicating that more parts will be needed in production. In case of line interruptions, each workstation will only produce enough components to fill the container and then stop. In addition, Kanban limits the amount of inventory in the process by acting as an authorization to produce more inventory. Since Kanban is a chain process in which orders flow from one process to another, the production or delivery of components is pulled to the production line, in contrast to the traditional forecast-oriented method where parts are pushed to the line. [8]
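The number of cards circulating in such a pull loop is commonly sized with the conventional kanban rule of thumb, which the text does not state and which is added here only as the standard relation:

N = \frac{D \, T \, (1 + \alpha)}{C}

where N is the number of kanban cards, D the demand rate, T the replenishment lead time, α a safety factor and C the container capacity.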

B. Advantages of Kanban Processing

1. Provides a simple and understandable process.
2. Provides quick and precise information.
3. There are low costs associated with the transfer of information.
4. Provides quick response to changes.
5. Avoids overproduction and minimizes waste.

C. JUST-IN-TIME (JIT)


Just in Time (JIT) production is a manufacturing philosophy which eliminates waste associated with time, labor, and storage space. Basics of the concept are that the company produces only what is needed, when it is needed and in the quantity that is needed. The company produces only what the customer requests, to actual orders, not to forecast. JIT can also be defined as producing the necessary units, with the required quality, in the necessary quantities, at the last safe moment.

Fig. 7: Screen Shot of JIT

D. Benefits of JIT

1. Reduced set up times in warehouse - the company can focus on other processes that might need improvement.

2. Improved flows of goods in/through/out of the warehouse - employees will be able to process goods faster.

3. Employees who possess multi-skills are utilized more efficiently – the company can use workers in situations when they are needed, when there is a shortage of workers and a high demand for a particular product.

4. Better consistency of scheduling and consistency of employee work hours -if there is no demand for a product at the time, workers don’t have to be working. This can save the company money by not having to pay workers for a job not completed or could have them focus on other jobs around the warehouse that would not necessarily be done on a normal day.

5. Increased emphasis on supplier relationships - having a trusting supplier relationship is important for the company because it is possible to rely on goods being there when they are needed.

6. Supplies continue around the clock keeping workers productive and businesses focused on turnover - employees will work hard to meet the company goals.

E. JIT Concentrates on Eliminating 7 Types of Wastes

1. Overproduction and early production – producing over customer requirements, producing unnecessary materials.

2. Waiting time – time delays, idle time (time during which value is not added to the product)

3. Transportation – multiple handling, delay in materials handling, unnecessary handling

4. Inventory – holding or purchasing unnecessary raw materials, work in process, and finished goods

5. Motion – actions of people or equipment that do not add value to the product

6. Over-processing – unnecessary steps or work elements / procedures (non added value work)

7. Defective units – production of a part that is scrapped or requires rework.

F. JIT Implementation in an Organization

How a company implements the JIT concept depends on many factors. It is obvious that large companies will spend more time. On the other hand, smaller companies have the


opportunity to implement the JIT concept much faster because their organization structure is not so complicated. But it does not mean that smaller companies are better in JIT implementation.

The following algorithm shows how a company implements the JIT concept. First of all, top management must accept the idea of JIT. Without their approval it is not possible to move on with the whole process. They are responsible for ensuring financial resources for the project. Perhaps the most difficult thing for engineers is to convince managers that the company under consideration really needs to implement the JIT concept in order to improve its business processes. Convincing managers to allow an evaluation of JIT is not the only human problem: the second step towards success is that employees also have to understand the significance of the new concept.

The third step would be the setup of Enterprise Resource Planning (ERP), a system which integrates all data and processes of an organization into a single unified system. With a centralized database it is much easier to manage all enterprise resources. The next step is to test the system itself: all preconditions of the JIT implementation are reviewed to find out whether there are any difficulties in starting the implementation. In this step one question comes up: "Is the system ready for JIT implementation?" If the answer is NO, it is advisable to go back and make changes. If the answer is YES, everything is prepared for the implementation process. The last step is testing and control. For the JIT system to survive and develop there must be continuous control. Feedback loops also exist and they are very important for the whole process. [7]

G. LEAN THINKING:

Lean thinking is sometimes called lean manufacturing. Lean focuses on the removal of waste, which is defined as anything not necessary to produce the product or service. The term "lean" is used in a business to describe a philosophy that incorporates a collection of tools and techniques into the business processes to optimize time, human resources, assets and productivity.

Lean manufacturing focuses on 3MU’S:

MURI - Strain due to

1. Overburden
2. Poor design
3. Posture
4. Non-availability of people, causing stagnation or bottlenecks resulting in failures
5. Doing work manually that should be done by machines, which also causes strain

MURA - Inconsistency due to

1. Being busy in one area and idle in another
2. Mixing of experienced and inexperienced workers
3. Variation in quality
4. Irregularity of tooling quality
5. Using equipment wastefully

MUDA - Wastes due to

1. Overproduction – producing more than is demanded, or producing it before it is needed.
2. Transportation – it does not add any value to the product; transportation costs should be kept to a minimum.
3. Motion – motion of workers, machines and transport is waste. Instead of automating wasted motion, the operation itself should be improved.
4. Waiting – waiting for a machine to process should be eliminated. The principle is to maximize the utilization of the worker instead of maximizing the utilization of the machines.
5. Processing – processing wastes should be minimized.
6. Inventory – inventory or WIP is material between operations resulting from large lot production or processes with long cycle times.
7. Defects – making defective products is pure waste. Focus on preventing the occurrence of defects instead of finding and repairing them.


There are 5 steps for implementing lean thinking in an enterprise.

1. IDENTIFY VALUE: The determination of which features create value in the product is made from the internal and external customer's standpoints. Value is expressed in terms of how the specific product meets the customer's needs at a specific time.
2. IDENTIFY THE VALUE STREAM: Once value is identified, the activities that contribute value are identified. The entire sequence of activities is called the value stream. Necessary operations are defined as being a prerequisite to other value-added activities or being an essential part of the business; all other non-value-added activities are transitioned out of the process.
3. IMPROVE FLOW: Once value-added activities and necessary non-value activities are identified, improvement efforts are directed toward making the activities flow. Flow is the uninterrupted movement of product or service through the system to the customer.
4. ALLOW CUSTOMER PULL: After waste is removed and flow is established, the company must make the process responsive, providing the product or service only when the customer needs it, not before and not after.
5. WORK TOWARD PERFECTION: This effort is the repeated and constant attempt to remove non-value activities. [5]

H. 5S

5S is a lean manufacturing housekeeping methodology to clean and organize the workplace. 5S focuses on visual order, organization, cleanliness, and standardization.

SEIRI (SORT OUT)

• Do we find items scattered in our workplace?
• Are there equipment and tools placed on the floor?
• Are all items sorted out and placed in designated spots?

SEITON (IN ORDER)

• Are passageways and storage places clearly indicated?
• Are commonly used tools and stationery separated from those seldom used?
• Are containers and boxes stacked up properly?
• Are fire extinguishers and hydrants readily accessible?

SEISO (CLEANING)

• Are the floor surfaces dirty?
• Are machines and equipment dirty?
• Are machine nozzles dirtied by lubricants?

SEIKETSU (STANDARDISING)

• Is anyone's uniform dirty or untidy?
• Are there sufficient lights?
• Is the roof leaking?
• Do people eat at designated places only?

SHITSUKE (TRAINING)

• Are regular 5S checks conducted?
• Do people follow rules and instructions?
• Do people wear their uniforms and safety gear properly? [13]

ADVANTAGES OF LEAN

• Using lean, wastes can be reduced by 80%.
• Production cost reduction by 50%.
• Leads to higher quality and profits.
• More strategic focus.
• Inventory reduction by 80% while increasing customer service levels.

XXIX. THE FUTURE OF INVENTORY MANAGEMENT

Management has added new dimensions to inventory management: reverse supply chain logistics, JIT, and lean manufacturing techniques. Environmental management has expanded the number of inventory types that firms have to coordinate. In addition to raw materials, work-in-process, finished goods, and MRO goods, firms now have to deal with post-consumer items such as scrap, returned goods, reusable or recyclable containers, and any number of items that require repair, reuse, recycling, or secondary use in another product. Retailers have the same type of problem in sustaining quality in inventory that has been returned due to defective material or manufacture, poor fit, finish, or color.

Finally, supply chain management has had a considerable impact on inventory management. Instead of managing one's inventory to maximize profit and minimize cost for the individual firm, today's firm has to make inventory decisions that benefit the entire supply chain. [1]

XXX. CONCLUSION

The probable outcome of the research is to meet the customer order by adopting an optimal inventory policy and maintaining better inventory management through:

• Reducing cycle time
• Increasing productivity
• Effective utilization of organizational resources

REFERENCES

Biederman, David, "Reversing Inventory Management," Traffic World, 12 December 2004.

Sucky, Eric, "Inventory Management in Supply Chains: A Bargaining Problem," International Journal of Production Economics, 93/94: 253.

http://www.inventorymatters.co.uk.noisegate2.webhoster.co.uk//userFiles/mk_kcurve_case_study.pdf

Levinson Productivity Systems, P.C., 2009, www.ct-yankee.com

Womack, J. P. and Jones, D. T., Lean Thinking: Banish Waste and Create Wealth in Your Corporation, Simon & Schuster, New York, 1996.

Waters, C.D.J., "Inventory Control and Management," John Wiley and Sons Ltd, Chichester, New York, 1999.

http://www.1000ventures.com/business_guide/im_jit_main.html (information about Just-In-Time systems)

Liker, J.K., The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer, McGraw-Hill, 2004.

http://www.bsieducation.org
http://www.scirus.com
http://www.slideshare.net
http://www.pdf-searchengine.com
http://www.scribd.com


64. Effect on Surface Roughness of Turning Process in CNC Lathe using Nano Fluid as a Coolant

*G. Mahesh, **Dr. Murugu Mohan Kumar

*Asst. Prof., Indra Ganesan College of Engineering, Trichy
**Dean, School of Mechanical Engineering, CARE

Abstract Cutting fluids, usually in the form of a liquid, are applied to the chip formation zone in order to improve the cutting conditions. The cutting oil must maintain a strong protective film in that portion of the area between the tool face and the metal being cut where hydrodynamic conditions can exist. Such a film assists the chip in sliding readily over the tool. Besides reducing heat, proper lubrication lowers

power requirements and reduces the rate of tool wear, particularly in machining tough, ductile metals. This paper introduces the use of a nano fluid as a coolant in a CNC machine. The nano fluid improves the machining conditions and reduces production time. In this paper, the cutting conditions for three materials are considered and the surface roughness is analyzed.

1. Introduction

A variety of machining operations are performed for different industries using a wide range of materials. As a result, machine shops face diverse operational and metallurgical challenges. Additionally, their customers often demand short production runs and fast turnarounds, and they must be able to customize and deliver high-quality end products. If a cutting fluid performs its lubricating function satisfactorily, the problem of heat removal from the cutting tool, chip, and work is minimized. But cooling still remains an important function. To perform this function effectively, a copper nano fluid should possess high thermal conductivity so that maximum heat will be absorbed and removed per unit of fluid volume.

Types of coolant:

1. Lard oil

2. Mineral oil

3. Mineral-Lard Cutting Oil Mixture

4. Sulfurized Fatty-Mineral Oil

5. Soluble Cutting Oils

6. Soda-Water Mixtures

White lead can be mixed with either lard oil or mineral oil to form a cutting oil which is especially suitable for difficult machining of very hard metals.

2. Nano fluid as a coolant

It seems that the first nanofluid heat transfer technology is the one invented by Choi and Eastman [8] in 2001. Since then, more than 20 nanofluid heat transfer technologies and their applications have been invented. Most of these coolants achieve heat transfer enhancement through improved thermal conductivities. Some are relevant to the application of nanofluids in various thermal systems such as car cooling systems, cooling of heat sinks, cooling of electronic chips, enhancement of heat transfer in nuclear engineering, etc. Choi and Eastman [8] invented a method and apparatus for enhancing heat transfer in fluids such as deionized water, ethylene glycol and


oil by dispersing nanocrystalline particles of substances such as copper, copper oxide, aluminum oxide and the like in the fluids. Nanocrystalline particles are produced and dispersed in the fluid by heating the substance to be dispersed in a vacuum while passing a thin film of the fluid near the heated substance. The fluid is cooled to control its vapor pressure. Figure (1) shows a plot of conductivity as a function of particle volume in deionized water. Fig. (2) shows a plot of conductivity as a function of volume percent of copper in oil. Compared to those of base fluids, the thermal conductivity of nanofluids is increased up to 50 percent.

Fig. (1). Conductivity ratio of nanofluid to base fluid versus particle volume fraction

Fig. (2). Conductivity ratio of nanofluid to base fluid versus particle volume fraction

Heat transfer compositions, and methods for using them to transfer heat between a heat source and a heat sink in a transformer, have also been developed; in particular, nano-particle size conductive material powders such as nano-particle size diamond powders are used to enhance the thermal capacity and thermal conductivity of heat transfer compositions such as transformer oil. The thermal conductivity was greatly improved by the addition of the same volume fraction of nano particles. Recently, Zhang et al. [21] invented a fluid medium, such as oil or water, containing a selected effective amount of carbon nanomaterials necessary to enhance the thermal conductivity of the fluid. One of the preferred carbon materials is a high thermal conductivity graphite, exceeding the neat fluid in which it is to be dispersed in thermal conductivity, and ground, milled, or naturally prepared with a mean particle size less than 500 nm, preferably less than 200 nm, and most preferably less than 100 nm. The graphite is dispersed in the fluid by one or more of various methods, including ultrasonication, milling, and chemical dispersion. Carbon nanotubes with a graphitic structure are another preferred material. The thermal conductivity enhancement, compared to the fluid without carbon material, is proportional to the amount of carbon nanomaterials (carbon nanotubes and/or graphite) added. The thermal conductivity can be increased by around 250% compared to conventional analogues.

Oldenburg [22] invented compositions comprising nanorods and methods of making and using the same; the inclusion of nanorods can enhance the thermal conductivity of the heat


transfer medium. Surprisingly, the addition of nanorods provides substantially greater improvements in thermal conductivity than the addition of other nanostructure. Figure 4 shows nanorods dispersed in a fluid. The fluid can be used in a wide range of applications such as heating and cooling of machinery, vehicles, instruments, devices and industrial processes. Such heat transfer fluids are used to transfer heat from a heat source to a heat sink.


Fig. (4). Schematic of nano rods dispersed in a fluid: 1- Vessel; 2- Base fluid; 3-Nanorod

SEM image of Cu-based nano particles

SEM image of Cu-based nano particles one day after preparation

3. Experimental results

We selected three work materials, AISI D2 steel, AISI D3 steel and AISI A2 steel, machined them on a CNC lathe, and analyzed the experimental results as shown in the tables below.

The effect of different coolants on the tool life of ceramic tools was measured when cutting AISI D2 steel, AISI D3 steel and AISI A2 steel.

Work material: AISI D2 steel / AISI D3 steel

Coolant            DOC (in)   Feed (ipr)   Speed (rpm)   MRR (in3/min)   Finish (mic-in)   Tool life (min)
Dry                0.01       0.004        500           0.24            9-14              11.02
Liquid nitrogen    0.01       0.004        500           0.24            15-26             14.52
Nano coolant       0.01       0.004        500           0.24            21-35             26.42
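Reading the tool-life column of the table above, the small sketch below expresses each coolant's tool life as a percentage gain over dry cutting for the 500 rpm condition; it simply restates the tabulated figures.

```python
# Tool life values (minutes) for the 500 rpm condition, read from the table above.
tool_life = {"dry": 11.02, "liquid nitrogen": 14.52, "nano coolant": 26.42}

baseline = tool_life["dry"]
for coolant, minutes in tool_life.items():
    gain = (minutes - baseline) / baseline * 100
    print(f"{coolant:16s}: {minutes:5.2f} min  ({gain:+5.0f}% vs dry cutting)")
```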


AISI A2 steel

Coolant            DOC (in)   Feed (ipr)   Speed (rpm)   MRR (in3/min)   Finish (mic-in)   Tool life (min)
Dry                0.015      0.006        600           0.648           11-35             4.52
Liquid nitrogen    0.015      0.006        600           0.648           19-23             10.52
Nano coolant       0.015      0.006        600           0.648           29-45             14.30

Applications of nano coolant

Researchers have created next-generation fluids that dramatically improve heat transfer by manipulating matter at the nano scale. By adding tiny, spherical particles of copper no larger than a few nanometers to a conventional fluid, researchers can improve its ability to transfer heat by up to 40%. Improving heat transfer in oils and coolants for the automotive industry, for example, would help engines increase efficiency, decrease fuel demands, and improve emissions. Another goal is the development of methods to manufacture diverse, hybrid nanofluids with polymer additives that have exceptionally high thermal conductivity while at the same time exhibiting low viscous friction. High thermal conductivity and low friction are critical design parameters in almost every technology requiring heat-transfer fluids (cooling or heating). A further goal is to develop hybrid nanofluids with enhanced lubrication properties. Applications range from cooling densely packed integrated circuits at the small scale to heat transfer in nuclear reactors at the large scale.

4. Conclusion

• Little is known about the physical and chemical surface interactions between the nanoparticles and base fluid molecules, in order to understand the mechanisms of enhanced flow and thermal behavior of nanofluids.

• Improved theoretical understanding of complex nanofluids will have an even broader impact.

• Development of new experimental methods for characterizing (and understanding) nanofluids in the lab and in nature.

• Nanoscale structure and dynamics of the fluids: using a variety of scattering methods; small-angle x-ray scattering (SAXS), small-angle neutron scattering (SANS), x-ray photon correlation spectroscopy (XPCS), laser based photon correlation spectroscopy (PCS) and static light scattering.

• Development of computer based models of nanofluid phenomena including physical and chemical interactions between nanoparticles and base-fluid molecules.

Coolant            DOC (in)   Feed (ipr)   Speed (rpm)   MRR (in3/min)   Finish (mic-in)   Tool life (min)
Dry                0.01       0.004        550           0.264           8-11              9.23
Liquid nitrogen    0.01       0.004        550           0.264           13-22             12.52
Nano coolant       0.01       0.004        550           0.264           23-37             36.32


References

[1] S. U. S. Choi, "Nanofluids: from vision to reality through research," Journal of Heat Transfer, vol. 131, no. 3, pp. 1-9, 2009.
[2] W. Yu, D. M. France, J. L. Routbort, and S. U. S. Choi, "Review and comparison of nanofluid thermal conductivity and heat transfer enhancements," Heat Transfer Engineering, vol. 29, no. 5, pp. 432-460, 2008.
[3] T. Tyler, O. Shenderova, G. Cunningham, J. Walsh, J. Drobnik, and G. McGuire, "Thermal transport properties of diamond-based nanofluids and nanocomposites," Diamond and Related Materials, vol. 15, no. 11-12, pp. 2078-2081, 2006.
[4] S. K. Das, S. U. S. Choi, and H. E. Patel, "Heat transfer in nanofluids - a review," Heat Transfer Engineering, vol. 27, no. 10, pp. 3-19, 2006.
[5] M.-S. Liu, M. C.-C. Lin, I.-T. Huang, and C.-C. Wang, "Enhancement of thermal conductivity with carbon nanotube for nanofluids," International Communications in Heat and Mass Transfer, vol. 32, no. 9, pp. 1202-1210, 2005.
[6] S. U. S. Choi, Z. G. Zhang, and P. Keblinski, "Nanofluids," in Encyclopedia of Nanoscience and Nanotechnology, H. S. Nalwa, Ed., vol. 6, pp. 757-737, American Scientific, Los Angeles, Calif, USA, 2004.
[7] S. M. S. Murshed, S.-H. Tan, and N.-T. Nguyen, "Temperature dependence of interfacial properties and viscosity of nanofluids for droplet-based microfluidics," Journal of Physics D, vol. 41, no. 8, Article ID 085502, 5 pages, 2008.
[8] Choi, S.U.S., Eastman, J.A.: US20016221275B1 (2001).
[9] K.-F. V. Wong, B. L. Bon, S. Vu, and S. Samedi, "Study of nanofluid natural convection phenomena in rectangular enclosures," in Proceedings of the ASME International Mechanical Engineering Congress and Exposition (IMECE '07), vol. 6, pp. 3-13, Seattle, Wash, USA, November 2007.
[10] Y. Ju-Nam and J. R. Lead, "Manufactured nanoparticles: an overview of their chemistry, interactions and potential environmental implications," Science of the Total Environment, vol. 400, no. 1-3, pp. 396-414, 2008.
[11] J. Routbort, et al., Argonne National Lab, Michellin North America, St. Gobain Corp., 2009, .gov/industry/nanomanufacturing/pdfs/nanofluids industrial cooling.pdf.
[12] Z. H. Han, F. Y. Cao, and B. Yang, "Synthesis and thermal characterization of phase-changeable indium/polyalphaolefin nanofluids," Applied Physics Letters, vol. 92, no. 24, Article ID 243104, 3 pages, 2008.
[13] G. Donzelli, R. Cerbino, and A. Vailati, "Bistable heat transfer in a nanofluid," Physical Review Letters, vol. 102, no. 10, Article ID 104503, 4 pages, 2009.
[14] S. J. Kim, I. C. Bang, J. Buongiorno, and L. W. Hu, "Study of pool boiling and critical heat flux enhancement in nanofluids," Bulletin of the Polish Academy of Sciences - Technical Sciences, vol. 55, no. 2, pp. 211-216, 2007.
[15] S. J. Kim, I. C. Bang, J. Buongiorno, and L. W. Hu, "Surface wettability change during pool boiling of nanofluids and its effect on critical heat flux," International Journal of Heat and Mass Transfer, vol. 50, no. 19-20, pp. 4105-4116, 2007.
[16] J. Buongiorno, L.-W. Hu, S. J. Kim, R. Hannink, B. Truong, and E. Forrest, "Nanofluids for enhanced economics and safety of nuclear reactors: an evaluation of the potential features, issues, and research gaps," Nuclear Technology, vol. 162, no. 1, pp. 80-91, 2008.
[17] E. Jackson, Investigation into the pool-boiling characteristics of gold nanofluids, M.S. thesis, University of Missouri-Columbia, Columbia, Mo, USA, 2007.
[18] J. Buongiorno, L. W. Hu, G. Apostolakis, R. Hannink, T. Lucas, and A. Chupin, "A feasibility assessment of the use of nanofluids to enhance the in-vessel retention capability in light-water reactors," Nuclear Engineering and Design, vol. 239, no. 5, pp. 941-948, 2009.
[19] "The Future of Geothermal Energy," MIT, Cambridge, Mass, USA, 2007.
[20] P. X. Tran, D. K. Lyons, et al., "Nanofluids for Use as Ultra-Deep Drilling Fluids," U.S.D.O.E., 2007, http://www.netl.doe.gov/publications/factsheets/rd/R&D108.pdf.
[21] M. Chopkar, P. K. Das, and I. Manna, "Synthesis and characterization of nanofluid for advanced heat transfer applications," Scripta Materialia, vol. 55, no. 6, pp. 549-552, 2006.

65. Various co-generation systems – A case study

S.Vignesh, S.Vignesh Kumar

J.J.College of Engineering and Technology Tiruchirappalli, 620009, Tamil Nadu


Abstract— Co-generation is a renewable energy technology for the simultaneous generation of heat and electric power from the same primary fuel. In Nellikuppam, E.I.D Parry's sugar factory effluent treatment plant treats the effluent from the distillery and the sugar factory waste water. In the effluent treatment plant there are two Methane Upflow Reactors and one Biologically Controlled Reactor. The effluent is first treated in an anaerobic treatment plant followed by aerobic treatment. The plant produces around 25,000 m3/day of bio-gas with 55% to 65% methane. At present the biogas produced in the effluent treatment plant is utilized for generating steam in the boiler at a pressure of 14 bar. This steam from the boiler is used for process work; thus only the heat energy is utilized in the process. In this paper we suggest alternate methods to produce both steam and electric power by providing a cogeneration plant, and the best possible cogeneration system is suggested after a detailed analysis to meet the total electric power demand and process steam requirement. We have considered an IC engine cogeneration plant, a back pressure (steam turbine) cogeneration plant and a gas turbine cogeneration plant for our analysis. Among these three cogeneration plants, the gas turbine cogeneration plant is found to be the most suitable to meet the electric power and process steam requirements.

Keywords— Cogeneration, Combined Heat and Power, IC Engine Cogeneration, Back Pressure Turbine, Gas Turbine Co-generation.

XXXII. INTRODUCTION

Co-generation is the simultaneous generation of heat and power from the same primary fuel. In Nellikuppam, E.I.D Parry's sugar factory effluent treatment plant treats effluent from the distillery and the sugar factory waste water. In the effluent treatment plant there are two methane reactors and one biologically controlled reactor. The effluent is first treated in an anaerobic treatment plant followed by aerobic treatment. The plant produces about 25,000 cubic meters per day of biogas with 55% to 65% methane. At present, the biogas produced in the effluent treatment plant is utilized in the boiler to produce steam at 14 bar. This steam is used for process work; thus only the heat energy is utilized in this process. This project reviews the selection of a suitable co-generation system for the distillery.

XXXIII. CO-GENERATION

Co-generation is broadly defined as the sequential production of electrical or mechanical power and process heat from the same primary fuel or energy source.

Fig 1: Conventional Power Plant


Fig 2: Cogeneration Plant

A co-generation plant can be installed in either a utility or an industrial steam plant. If the system is installed in an industrial steam plant, it is termed an "IN-PLANT POWER GENERATION SYSTEM". If the co-generation system is installed at a utility site, then it is called a "REJECT HEAT UTILISATION SYSTEM". In general, co-generation technologies are classified into three types. They are as follows:

i. TOPPING CYCLE: In this cycle, power generation takes place before the low or medium temperature processes, i.e., energy is first used to generate electrical or shaft power.

ii. BOTTOMING CYCLE: In this cycle, power generation takes place after the high temperature processes, i.e., energy is first used to produce steam for the process.

iii. COMBINED CYCLE: This is made up of two cyclic plants. The heat rejected from the "higher" (topping) plant is used as the supply to the "lower" (bottoming) plant. The two plants are closed and cyclic and may use different fluids.

XXXIV. BIO-GAS PRODUCTION

A. Schematic Diagram of Distillery and Effluent Treatment Plant:

MUR - Methane Up flow reactor

BCCR - Biologically Conditioning Control reactor.

[Schematic showing the distillery, the two MURs, the BCCR, a condenser, a water trap, a flare and the boiler.]


B. Calculation of Bio-Gas Production:

In the distillery 45 kl/day of alcohol is produced using Alfa-Laval Bio-Still process. An average of 620 m3 of spent wash per day with COD value of 1,10,000 ppm to 1,30,000 ppm is also produced

COD in terms of tons/day:

1,10,000 ppm X 620 = 68.2 tons/day

1,30,000 ppm X 620 = 80.6 tons /day

Average COD loading per day = 72 tons

Let the COD converted be 76% = 0.76 X 72

= 54.72 tons

540 m3 / day of Bio-gas is produced per ton of COD removal.

Bio-gas production = 54.72 X 540

= 29548.8 m3/day

Bio-gas production is taken as 25,000 m3/day for calculations.

The calorific value of Bio-gas produced = 4500 kcal/m3
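The sketch below retraces this estimate, treating ppm of COD as g/m3 of spent wash; it reproduces the 68.2-80.6 t/day range and the roughly 29,500 m3/day figure derived from the 72 t/day average load.

```python
def biogas_estimate(flow_m3_per_day, cod_ppm, conversion=0.76, yield_m3_per_ton=540):
    """COD load (t/day) = flow (m3/day) x COD (ppm, i.e. g/m3) / 1e6 g per ton;
    biogas (m3/day) = converted COD x yield per ton of COD removed."""
    cod_tons_per_day = flow_m3_per_day * cod_ppm / 1e6
    biogas_m3_per_day = conversion * cod_tons_per_day * yield_m3_per_ton
    return cod_tons_per_day, biogas_m3_per_day

for ppm in (110_000, 130_000):
    cod, gas = biogas_estimate(620, ppm)
    print(f"COD {ppm:,} ppm -> {cod:.1f} t/day of COD, ~{gas:,.0f} m3/day of biogas")

# The paper continues with the averaged COD load of 72 t/day:
print("average load:", 0.76 * 72 * 540, "m3/day (taken as 25,000 m3/day for design)")
```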

XXXV. POWER AND STEAM REQUIREMENT

A. POWER REQUIREMENT:

For Anaerobic treatment = 140 kWh

For Aerobic treatment = 200 kWh

For distillery process = 220 kWh

Other units (pumps) = 125 kWh

For Confectionary = 200 kWh

Total Power Demand = 885 kWh

B. STEAM REQUIREMENT:

Total amount of alcohol produced = 45 kl/day

Steam requirement/liter of alcohol production is 2.6 kg

Total requirement of steam per day = 45,000 X 2.6

= 1,17,000 kg/day

Steam requirement per hour = 4875 kg/hr

= 4.875 tons/hr


XXXVI. POSSIBLE COGENERATION SYSTEMS

Three types of possible co-generation systems may be considered for the evaluation of bio-gas based co-generation. They are,

A. IC Engine Cogeneration System

Fig 3: IC Engine Cogeneration System

It is a combination of an IC engine and a waste heat recovery system. The IC engine uses bio-gas as fuel to produce power and heat energy. The quantity of 25,000 m3/day of bio-gas with 65% methane content can generate 1635 kW. Four to six IC engines may be used to burn this amount of bio-gas.

Let the efficiency of the IC engine be 30%

Input energy = (25,000 X 4500 X 4.187) / (24 X 3600) = 5450 kW

Power produced = 0.3 X 5450

= 1635 kW

= 1.635 MW

Remaining energy = 5450 – 1635 = 3815 kW

Energy available for steam production be 45%

Energy available for steam production = 0.45 X 3815

= 1716.75 KW

Enthalpy at 1 bar [hg] = 2675 KJ/Kg

Steam produced = (1716.75 X 0.8) / 2675.4 = 0.5134 kg/s



= 1.848 tons/hr

[1] Thermal efficiency = Power output / Input energy = 1635 / 5450 X 100 = 30%

[2] Energy utilization factor = (Power + Useful heat) / Input energy = (1635 + 0.5134 X 2675.4) / 5450 = 0.552

[3] Fuel energy saving ratio = (Fuel for separate generation of the same power and heat - Fuel input to the cogeneration plant) / Fuel for separate generation

= [(1635/0.30) + (1373.5/0.80) - 5450] / [(1635/0.30) + (1373.5/0.80)]

= 0.239
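The worked example above can be retraced in a few lines; the 4.187 kJ/kcal conversion, the 0.8 steam-raising factor and the use of 30% and 80% as the reference efficiencies for separate generation are assumptions inferred from the numbers in the example rather than values stated explicitly.

```python
def ic_engine_case(biogas_m3_day=25_000, cv_kcal_m3=4500, engine_eff=0.30,
                   recovery_fraction=0.45, boiler_eff=0.80, h_steam=2675.4):
    """Retrace the IC-engine worked example (assumed factors noted in the lead-in)."""
    fuel_kw = biogas_m3_day * cv_kcal_m3 * 4.187 / (24 * 3600)     # ~5450 kW
    power_kw = engine_eff * fuel_kw                                 # ~1635 kW
    recovered_kw = recovery_fraction * (fuel_kw - power_kw)         # ~1717 kW
    steam_kg_s = recovered_kw * boiler_eff / h_steam                # ~0.51 kg/s
    useful_heat_kw = steam_kg_s * h_steam                           # ~1373 kW
    thermal_eff = power_kw / fuel_kw
    euf = (power_kw + useful_heat_kw) / fuel_kw
    fuel_separate = power_kw / engine_eff + useful_heat_kw / boiler_eff
    fesr = (fuel_separate - fuel_kw) / fuel_separate
    return power_kw, steam_kg_s * 3.6, thermal_eff, euf, fesr

power, steam_tph, eta, euf, fesr = ic_engine_case()
print(f"power {power:.0f} kW, steam {steam_tph:.2f} t/h, "
      f"thermal efficiency {eta:.0%}, EUF {euf:.3f}, FESR {fesr:.3f}")
```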

ADVANTAGES:

1. The electric power generation is higher.
2. It can be operated conveniently at part load.

DISADVANTAGES:

1. This system produces only around 1.85 tons/hr of steam but our requirement is around 5 tons/hr.

2. This system requires bio-gas with 65% methane content. If the methane content of the bio-gas falls below 65%, the output of the engine is greatly affected.

3. The initial cost of the system is very high

B. Steam Turbine Cogeneration System


Fig 4: Steam Turbine Cogeneration System

This plant uses bio-gas in the boiler to produce steam. The steam from the boiler is used to run the steam turbine. The steam turbine is connected to the alternator and hence electrical energy is produced. The steam leaving the steam turbine is used for process work.

At present the bio-gas is used in a boiler to produce saturated steam at 14 bar for process work, but in co-generation the power will be produced using a steam turbine and the exhaust steam will be used for process work.

Pressure of steam entering the turbine = 14 bar

Let the degree of super heat = 50 °C

Temperature of steam entering the turbine = 250 °C

Enthalpy of steam entering the turbine (at 14 bar, 250 °C) = 2927.6 kJ/kg

Input energy = (25,000 X 4500 X 4.187) / (24 X 3600) = 5450 kW

Amount of steam produced = (5450 X 0.8) / 2927.6 = 1.489 kg/s = 5.36 tons/hr

Let the efficiency of the turbine be 75%

Power produced

= 1.489 X 0.75 X (2927.6 – 2675.4)

= 281.64 kW

Amount of heat used for process work is m X hg

= 1.489 X 2675.4

= 3983 kW

[1] Thermal efficiency = Power output / Input energy = 281.64 / 5450 X 100 = 5.16 %

[2] Energy utilization factor = (Power + Process heat) / Input energy = (281.64 + 3983) / 5450 = 0.78

[3] Fuel energy saving ratio = 0.47

ADVANTAGES:

1. Fuel energy savings ratio is high.

2. The amount of steam produced is high.

DISADVANTAGES:

1. It does not meet the required power demand.

2. Thermal efficiency of the plant is less.

C. Gas Turbine Cogeneration System

Fig 5: Gas Turbine Cogeneration System



This plant uses the bio-gas and produces the power. A positive displacement compressor supplies the bio-gas to the combustor at 12 to 14 bar. This produces power of 913 kW. The exhaust gases of the gas turbine are utilized by a waste heat recovery boiler which produces saturated steam for the distillery. Fire tube boiler is used for waste heat recovery.

It produces around 3.33 tons/hr of steam at 3.5 bar. This plant uses only about 19,000 m3/day of bio-gas; the remaining 6,000 m3/day of bio-gas is used to produce steam by supplementary firing. This additional steam is around 1.65 tons/hr, so the total amount of steam produced is 4.954 tons/hr. Thus the required demand of steam and power is met by this co-generation system. The details of various gas turbines supplied by Kirloskar for co-generation purposes are given in Table 1.

To produce 1 MW power Saturn type is chosen.

The amount of fuel required = 14.2 million btu/hr

But 1 btu/hr = 0.293W

The amount of fuel required in MW = 14.2 X 0.293

= 4.1606 MW

The flow rate of bio-gas required for this input = (4160.6 X 24 X 3600) / (4.187 X 4500) = 19,083 m3/day

Calorific value of bio-gas = 4500 kcal / m3

Presently the bio-gas is produced at about 25,000 m3/day but only 19,083 m3/day is used in the gas turbine. Thus about 6,000 m3/day of bio-gas is available in excess.

Table 1: Gas turbines supplied by Kirloskar for co-generation

Parameter                        SATURN    T-4700    TYPE-H    TAURUS    MARS T-1200
Electrical output (kW)           913       2761      3377      3747      7274
Fuel input (million Btu/hr)      14.2      37.9      44.2      47.3      86.5
Exhaust temperature (°C)         507       452       526       503       489
Exhaust flow (kg/s)              5.8       16.9      16.8      18.4      34.7
Steam output (kg/hr)             3330      7927      10205     10351     18557
Stack temperature (°C)           152       159       149       152       154


Mass flow rate of exhaust gas = 5.8 kg/s

Specific heat of exhaust gas = 1.565 kJ/kg.K

The exhaust temperature of the gas turbine

= 507 °C

The stack temperature = 152 °C

The temperature drop available for producing steam = 507 - 152 = 355 °C

Amount of heat available for producing steam

= 5.8 X 1.565 X 355

= 3222.35 kW

The pressure of steam leaving the waste heat recovery boiler is 14 bar.

Enthalpy [hg] at 14 bar = 2787 kJ/kg

The mass flow of steam = (3222.35 X 0.8) / 2787 = 0.925 kg/s = 3330 kg/hr
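A quick cross-check of the waste-heat boiler figures above is sketched below; the 0.8 recovery factor is an assumption implied by the quoted 0.925 kg/s result rather than a value given in the text.

```python
# Figures quoted above for the Saturn machine and its waste-heat boiler.
m_exhaust = 5.8            # kg/s exhaust flow
cp_exhaust = 1.565         # kJ/kg.K specific heat of exhaust gas
t_exhaust, t_stack = 507.0, 152.0   # deg C
h_steam_14bar = 2787.0     # kJ/kg, saturated steam at 14 bar
recovery_factor = 0.8      # assumed, implied by the 0.925 kg/s result

heat_available_kw = m_exhaust * cp_exhaust * (t_exhaust - t_stack)   # ~3222 kW
steam_kg_s = recovery_factor * heat_available_kw / h_steam_14bar
print(f"heat available {heat_available_kw:.0f} kW -> "
      f"steam {steam_kg_s:.3f} kg/s = {steam_kg_s * 3600:.0f} kg/hr")
```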

[1] Thermal Efficiency

The 6,000 m3/day of bio-gas is used for producing steam by supplementary firing.

The total amount of steam produced by this supplementary firing = 1624 kg/hr

The total amount of steam produced = 3330 + 1624 = 4954 kg/hr = 4.954 tons/hr = 1.376 kg/s

Thermal efficiency = Electrical output / Fuel input to the gas turbine = 913 / 4160.6 X 100 = 22%

[2] Energy utilization factor = (Electrical output + Useful heat in the process steam) / Fuel input = 0.845

[3] Fuel energy savings ratio = 0.373

ADVANTAGES:

1. Simple in mechanical construction.
2. Operating pressure is low.
3. More silent in operation.
4. A range of fuels can be used.
5. Smokeless exhaust.
6. Space occupied is less.
7. Weight/HP of the system is less.
8. Needs less maintenance.

DISADVANTAGES:

1. Initial cost is high when compared to the steam turbine co-generation system.
2. Fuel energy savings ratio is low.

XXXVII. COMPARISON OF COGENERATION SYSTEMS

Table 2: Performance Parameters

System                   Thermal efficiency (%)   Energy utilization factor   Fuel energy savings ratio
IC Engine System         30                       0.552                       0.239
Steam Turbine System     5.16                     0.78                        0.47
Gas Turbine System       22                       0.845                       0.373

Table 3: Power and Steam Production

                   Demand    IC Engine System    Steam Turbine System    Gas Turbine System
Power (kW)         885       1635                282                     913
Steam (tons/hr)    4.875     1.58                5.36                    4.954


XXXVIII. CONCLUSION

Among the co-generation systems, the gas turbine co-generation system is chosen, because anaerobic digestion is a biological process which needs no additional input except the electrical energy for pumping, so a continuous supply of bio-gas is assured. The gas turbine produces the electric power output continuously without any disturbance; due to this, the process efficiency of the distillery will be improved. Many gas turbine systems running on landfill gas, digester gas and sewage gas have been working successfully in many countries for more than two decades. The complete system is very compact and needs little maintenance, so one operator can monitor the whole plant. The person who maintains the existing boiler may be employed for the new plant. The gas turbine co-generation system is very suitable for process industries because the gas turbine exhaust contains 15-18% oxygen; where steam is required in large quantities, extra fuel can be supplied into the exhaust stream and fired using duct burners, leading to 2-3 times higher steam production rates than are possible from a standard gas turbine exhaust.

REFERENCES

[1] M. Atmaca, "Efficiency Analysis of Combined Cogeneration Systems with Steam and Gas Turbines," Energy Sources, Part A: Recovery, Utilization and Environmental Effects, vol. 33, pp. 360-369, 2010.
[2] Arif Hepbasli and Nasrin Ozalp, "Present Status of Cogeneration," Energy Sources, Part A: Recovery, Utilization and Environmental Effects, vol. 24, pp. 169-177, 2002.
[3] Miguel A. Lozano and Jose Ramos, "Thermodynamic and Economic Analysis of Simple Cogeneration Systems," Distributed Generation and Alternative Energy Journal, vol. 25, pp. 62-80, 2010.
[4] Moisey O. Fridman, "Cogeneration: Efficiencies and Economics," Distributed Generation & Alternative Energy Journal, vol. 17, pp. 71-78, 2002.
[5] Jorge B. Wong, "Cogeneration System Design: Analysis and Synthesis: A Review of Some Relevant Procedures and Programs," Distributed Generation & Alternative Energy Journal, vol. 18, pp. 6-26, 2003.
[6] David Dietrich, "A Case Study: Managing Cogeneration Systems," Distributed Generation & Alternative Energy Journal, vol. 17, pp. 66-76, 2002.
[7] S. C. Kamate and P. B. Gangavati, "Exergetic, Thermal, and Fuel Savings Analyses of a 20.70-MW Bagasse-based Cogeneration Plant," Distributed Generation & Alternative Energy Journal, vol. 23, pp. 45-58, 2008.
[8] F. William Payne, "Cogeneration/CHP In Industry," Distributed Generation & Alternative Energy Journal, vol. 17, pp. 5-15, 2002.
[9] James A. Clark, "Combined Heat and Power (CHP): Integration with Industrial Processes," Encyclopaedia of Energy Engineering and Technology, 2007.
[10] David Hu and Gerald Hrd, "Waste Recycling and Energy Conservation," Prentice Hall, 1981.

ACKNOWLEDGEMENT

1. E.I.D Parry India, Nellikuppam, Cuddalore District.

2. Mr Murugesan, Manager Projects, E.I.D Parry India, Nellikuppam, Cuddalore District.

3. Mr N C Ananda Kumar, Deputy Manager/Distillery, E.I.D Parry India, Nellikuppam, Cuddalore District.


66. THERMAL CONDUCTIVITY IMPROVEMENT OF LOW TEMPERATURE ENERGY STORAGE SYSTEM USING NANO-PARTICLE

*K. Karunamurthy(1), K. Murugu Mohan Kumar(1), R. Suresh Isravel(2), A. L. Vignesh(2)

(1) Department of Mechanical Engineering, CARE School of Engineering, Tiruchirappalli, India.
(2) Students of J J College of Engineering and Technology, Tiruchirappalli, India.

Abstract:- A Low Temperature Energy Storage (LTES) system stores thermal energy from solar radiation, exhaust gases and waste heat from industries. To achieve this energy storage, the medium adopted is a Phase Change Material (PCM). PCMs are preferred because of their higher storage density and smaller volume. The disadvantage of using a PCM for LTES is that its thermal conductivity is low, so more time and more contact surface area are required for loading and unloading thermal energy. To overcome this problem, an attempt was made to incorporate CuO nano-particles in the paraffin PCM to improve its thermal conductivity. The thermal conductivity of the LTES medium is determined both analytically and experimentally. Incorporating nano-particles in the PCM has improved the thermal conductivity of the LTES. The Maxwell-Garnett equation is used to determine the thermal conductivity of the PCM analytically, and a transient hot-wire thermal conductivity measuring apparatus (KD2 probe) is used to determine the thermal conductivity experimentally.

Key Words: Low Temperature Energy Storage System, Thermal Energy Storage, Phase Change Materials, Nanomaterials.

*Corresponding Author:

[email protected]

I. INTRODUCTION

In industries a large amount of heat is dumped to the surroundings without being recovered properly. Solar energy is also abundantly available among the renewable energy sources. However, the availability of solar energy is not continuous, being available only during the daytime, and hence, to properly utilize solar energy, an effective thermal energy storage system is mandatory. An effective thermal energy storage system can increase the period of operation of solar energy utilization devices. There are various methods available to store thermal energy, but each method has its own advantages and disadvantages. Therefore an optimum thermal energy storage method has to be selected for the specific application.

A. Various Methods of Storing Thermal Energy

The storage of thermal energy is performed by increasing the internal energy of a material as sensible heat, latent heat, thermo-chemical heat, or a combination of these.

(i) Sensible Heat Storage (SHS) system

It uses the specific heat capacity and the temperature change of a material (in solid or liquid state). The temperature of the substance increases during charging and decreases during discharging.

Q = m cp (Tf –Ti)

(ii) Latent Heat Storage Systems

Latent Heat Storage (LHS) is based on absorption or release of heat when a storage material undergoes a phase change. The storage capacity of the LHS system with a PCM medium is given by

Q = m cp (Tm –Ti) + m L + m cp (Tf – Tm).
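The two storage-capacity expressions above can be compared with a short sketch; the paraffin-like specific heat, latent heat and melting point used below are assumed round figures, not properties reported in this paper.

```python
def sensible_heat_kj(mass, cp, t_initial, t_final):
    """Q = m * cp * (Tf - Ti) for a sensible-heat store."""
    return mass * cp * (t_final - t_initial)

def latent_heat_kj(mass, cp_solid, cp_liquid, t_initial, t_melt, t_final, latent):
    """Q = m*cp_s*(Tm - Ti) + m*L + m*cp_l*(Tf - Tm) for a PCM charged through melting."""
    return (mass * cp_solid * (t_melt - t_initial)
            + mass * latent
            + mass * cp_liquid * (t_final - t_melt))

# Assumed, paraffin-like round figures: cp ~ 2.1 kJ/kg.K, L ~ 200 kJ/kg, Tm ~ 55 C.
mass = 10.0  # kg
print("water, 30 -> 70 C :", sensible_heat_kj(mass, 4.187, 30, 70), "kJ")
print("PCM,   30 -> 70 C :", latent_heat_kj(mass, 2.1, 2.1, 30, 55, 70, 200), "kJ")
```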

(iii) Thermo – Chemical Systems

Thermo-chemical systems rely on the energy absorbed and released in breaking and reforming molecular bonds in a completely reversible chemical reaction. In this case, the stored heat depends on the amount of storage material, the endothermic heat of reaction, and the extent of conversion.

Amongst various thermal energy storage techniques, latent heat energy storage is attractive. The problems with latent heat storage are low thermal conductivity, variation of properties of PCM when used repeatedly, incongruent melting, and high cost.

B. Need for PCM as LTES.

• Melting temperature of the PCM is in the desired operating temperature range.

• High latent heat of fusion per unit volume so that the required volume of the container to store a given amount of energy is less.

• High specific heat to provide for additional significant sensible heat storage.

• High thermal conductivity of both solid and liquid phases to assist the charging and discharging of energy of the storage systems.

• Small volume changes on phase transformation and small vapor pressure at operating temperatures to reduce the containment problem.

II. WHY IMPROVE THE THERMAL CONDUCTIVITY OF PCM FOR LTES?

In a latent heat storage system, the solid-liquid interface moves away from the heat transfer surface during phase change. During this process, the surface heat flux decreases due to the increasing thermal resistance of the growing thickness of the molten / solidified medium. In the case of solidification, conduction is the only transport mechanism, and in most cases it is very poor. In the case of melting, natural convection can occur in the molten layer and this generally increases the heat transfer rate compared to the solidification process (if the layer is thick enough to allow natural convection to occur). However, the low rate of heat transfer can be increased considerably by using a suitable heat transfer enhancement technique.

III. METHODS OF INCREASING THERMAL CONDUCTIVITY OF LTES

Several methods exist to enhance the heat transfer in the latent heat storage system of an LTES. Increasing the thermal conductivity of the PCM will increase the rate of heat transfer in latent heat energy storage. The various methods to improve the thermal conductivity of a PCM are: use of finned tubes with different configurations, inserting a metal matrix in the PCM, dispersing high-conductivity particles in the PCM, micro-encapsulation of the PCM, placing metal screens or spheres inside the PCM, and employing aluminum foam to enhance the heat transfer process in the latent heat storage system. In this paper an attempt is made to increase the thermal conductivity of the paraffin PCM by dispersing CuO nano-particles. Thus the thermal conductivity of the PCM is improved and the thermal energy loading and unloading time of the PCM is also considerably reduced.

Paraffin is the most widely used PCM for LTES, because of its easy availability and its properties remain unaltered even after multiple cycles of charging and discharging.

Page 499: Keynote 2011

IV. BLENDING OF NANO PARTICLES WITH PCM

An ultrasonic stirrer is used for blending nano particles with the PCM. Ultrasonication is an advanced mixing technology providing higher shear and stirring energy without scale-up limitations. It also allows the governing parameters, such as power input, reactor design, residence time and particle or reactant concentration, to be controlled independently. The ultrasonic cavitation induces intense micro mixing and dissipates high power locally. Copper oxide is mixed with paraffin wax; the ultrasonic frequency generated by the ultrasonic stirrer is 5-10 MHz. The stirrer is run for 8 hours to obtain a stable suspension of nano particles with no precipitation. The nano particles are mixed in various proportions. The various concentrations of mixing are:

Volume concentration    Amount of nano material used (g)
0.01                    5
0.02                    10
0.03                    15
0.04                    20
0.05                    25
0.10                    50
0.15                    100

V. DETERMINATION OF THERMAL CONDUCTIVITY

A. Analytical Method

The Maxwell-Garnett equation is used to determine the thermal conductivity of the PCM for LTES analytically. In the Maxwell-Garnett equation,

kp - thermal conductivity of the dispersed particles; thermal conductivity of CuO = 6540 W/mK.

kl - thermal conductivity of the dispersion liquid; thermal conductivity of paraffin = 0.214 W/mK.

φ - particle volume concentration of the suspension.
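A small sketch of the analytical calculation is given below, using the commonly quoted form of the Maxwell-Garnett relation; it reproduces the trend of the analytical values tabulated later, though small differences may remain depending on the exact variant of the formula and the rounding used in the paper.

```python
def maxwell_garnett(k_particle, k_fluid, phi):
    """k_eff = k_l * (k_p + 2*k_l + 2*phi*(k_p - k_l)) / (k_p + 2*k_l - phi*(k_p - k_l)),
    the commonly quoted Maxwell-Garnett form for a dilute suspension of spheres."""
    diff = k_particle - k_fluid
    return k_fluid * (k_particle + 2 * k_fluid + 2 * phi * diff) / \
           (k_particle + 2 * k_fluid - phi * diff)

K_CUO, K_PARAFFIN = 6540.0, 0.214   # W/mK, values quoted in the text
for phi in (0.01, 0.02, 0.03, 0.04, 0.05, 0.15):
    k_eff = maxwell_garnett(K_CUO, K_PARAFFIN, phi)
    print(f"phi = {phi:.2f}   k_eff = {k_eff:.4f} W/mK   (+{(k_eff / K_PARAFFIN - 1) * 100:.1f}%)")
```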

B. Experimental Method

A transient hot-wire thermal conductivity measuring apparatus is used to determine the thermal conductivity of the PCM blended with nano particles in various proportions.

Volume concentration (φ)    Thermal conductivity of composite (W/mK) at 45 °C    Increase in thermal conductivity
0                           0.214                                                0
0.01                        0.220                                                2.8%
0.02                        0.2266                                               4.04%
0.03                        0.2331                                               8.925%
0.04                        0.2397                                               12.05%
0.05                        0.2465                                               15.1869%
0.15                        0.2825                                               32%
0.2                         0.3222                                               50.56%


Fig.1 Transient Hotwire Thermal Conductivity Measuring Apparatus

VI. RESULTS

Volume concentration (φ)    Experimental value (W/mK) at 45 °C    Analytical value (W/mK)    Error
0                           0.214                                 0.214                      0
0.01                        0.2291                                0.220                      4.1%
0.02                        0.2391                                0.2266                     5.51%
0.03                        0.2467                                0.2331                     5.83%
0.04                        0.2567                                0.2397                     7.09%
0.05                        0.2706                                0.2465                     9.77%
0.1                         0.3420                                0.2825                     21.06%
0.15                        0.3802                                0.3222                     18%

The thermal conductivity of the raw paraffin was 0.2 W/mK, as determined using the transient hot-wire thermal conductivity apparatus. After dispersing the nano particles in the paraffin, the thermal conductivity of the PCM increased to 0.3802 W/mK.

VII. CONCLUSION

PCMs are used in various applications, i.e. space heating/cooling, solar cooking, greenhouse heating, water heating and waste heat recovery systems. The problem with the most commonly used PCM (paraffin) is its poor thermal conductivity. From the experiments performed it is evident that dispersing nanoparticles in the paraffin improved the thermal conductivity of the PCM. This improved thermal conductivity overcomes the poor rate of heat transfer in thermal energy storage applications. There is further scope to determine the correct proportion of nano particles to mix with the paraffin, and also to determine which nano particle is best dispersed in the paraffin.

REFERENCES

[1] A. Abhat, "Low temperature latent heat thermal energy storage: heat storage materials," Solar Energy, vol. 30, pp. 313-332, 1983.
[2] Belen Zalba, Jose M. Marin, Luisa F. Cabeza and Harald Mehling, "Review on thermal energy storage with phase change: materials, heat transfer analysis and applications."
[3] Jose M. Marin, Belen Zalba, Luisa F. Cabeza and Harald Mehling, "Improvement of a thermal energy storage using plates with paraffin-graphite composite," International Journal of Heat and Mass Transfer, vol. 48, pp. 2561-2570, 2005.
[4] Belen Zalba, Jose M. Marin, Luisa F. Cabeza and Harald Mehling, "Free-cooling of buildings with phase change materials," International Journal of Refrigeration, vol. 27, pp. 839-849, 2004.
[5] Manuel Ibanez, Ana Lazaro, Belen Zalba and Luisa F. Cabeza, "An approach to the simulation of PCMs in building applications using TRNSYS," Applied Thermal Engineering, vol. 25, pp. 1796-1807, 2005.
[6] Francis Agyenim, Philip Eames and Mervyn Smyth, "A comparison of heat transfer enhancement in a medium temperature thermal energy storage heat exchanger using fins," Solar Energy, 2009.
[7] L. Syam Sundar and K. V. Sharma, "Experimental Determination of Thermal Conductivity of Fluid Containing Oxide Nanoparticles."
[8] W. Yu and S. U. S. Choi, Energy Technology Division, Argonne National Laboratory, "The role of interfacial layers in the enhanced thermal conductivity of nanofluids: a renovated Maxwell model."
[9] J. L. Zeng, L. X. Sun, F. Xu, Z. C. Tan, Z. H. Zhang, J. Zhang and T. Zhang, "Study of a PCM based energy storage system containing Ag nanoparticles," Journal of Thermal Analysis and Calorimetry, vol. 87, no. 2, pp. 369-373, 2007.
[10] Ahmet Sari and Ali Karaipekli, "Thermal conductivity and latent heat thermal energy storage characteristics of paraffin/expanded graphite composite as phase change material," Applied Thermal Engineering, vol. 27, pp. 1271-1277, 2007.

ACKNOWLEDGEMENTS

1. Dr S Suresh Assistant Professor, Department of Mechanical Engineering, National Institute of Technology, Tiruchirappalli.


67. Design of Jigs, Fixtures, Gauging and the Process Sheet Plan for the Production of Auxiliary Gear Box-Mark IV Gears and Shafts

1 Harini Reddy. S, 2 Ravindran. R, # Shakthivel. D, # Shivakumar. P

1 M.E-CAD/CAM, Dept. of Mechanical Engineering, Dr. MCET, Pollachi-642003, Tamil Nadu, India.
2 Assistant Professor, Dept. of Mechanical Engineering, Dr. MCET, Pollachi-642003, Tamil Nadu, India.
# Ashok Leyland Ltd., Plant-1, Hosur-635126, Tamil Nadu, India.

Abstract-Manufacturing of a component includes several processes to convert the raw material into an optimized finished product. This paper involves the jig and fixture design and the corresponding requirements for manufacturing the discrete components of the Auxiliary Gear Box-Mark IV used in the Stallion 4X4. The design process starts with the planning of the several machining and manufacturing processes of a particular component, with the dimensional specifications and the tooling required. Process sheets are designed using Enterprise Resource Planning, whose purpose is to facilitate the flow of information between all business functions inside the boundaries of the organization and manage the connections to outside stakeholders. Gauging is one of the most important aspects of manufacturing the component. Gauges are used after each machining operation to decide whether the component can be accepted, rejected or sent for re-work; they are designed by considering the dimensions of the component. Jigs and fixtures are generally used to guide and locate the tool and to position or constrain the work piece, respectively, during the machining operation. The 3-2-1 principle can be used to locate a work piece on the machine, which arrests 9 degrees of freedom; the remaining 3 degrees of freedom can be arrested by using clamps. While clamping, the clamping forces should be greater than the tool forces in order to avoid dislocation of the component, yet these clamping forces should not deform the surface of the work piece at any cost. For different processes, different fixtures are designed to constrain the specific areas of the work piece which undergo material removal at various points. Thus the jigs and fixtures are designed for the discrete components as per the desired requirements. Designs are made using AutoCAD for 2D drawings, whereas 3D modeling is done using Autodesk Inventor Professional.

Key words-Auxiliary gear box, process sheet, gauging, jigs & fixtures, clamping forces.

I. INTRODUCTION

Ashok Leyland is an Indian vehicle manufacturing company. Over the years, products of this company have built a reputation for reliability and ruggedness. It is the leading supplier of logistics vehicles to the Indian Army. The Mk.4 Stallion 4X4 is a further development of the previous Mk.3, further refined for desert-type environments. It is an upgraded and enhanced model of the Stallion 4x4 Mk III. This vehicle has an engine power of 177 PS @ 2400 RPM. This Ashok Leyland vehicle has a payload of 5 T for cross country and 7.5 T on the highway. The vehicle is powered by the Ashok Leyland W06DTI turbocharged diesel engine [6].

Stallion 4x4 Mk IV is equipped with integral power steering and dual-line brakes. The vehicle has a 6-speed synchromesh gearbox and runs at a maximum speed of 82 km/h. It has a number of improvements and an upgraded drive train. Currently the Indian Army uses over 40,000 Stallion trucks of all variants. The vehicle has an operating range from -40°C to +55°C. It meets EURO II emission requirements.


Fig. 1. Stallion 4X4.

The Auxiliary gear box transmits drive from the main gear box to the rear and front driving axles. It is a constant mesh two speed gear box having ratios of 1:1 and 2.105:1 in the Mark IV of the Stallion 4X4 from Ashok Leyland. This drive can be transmitted to the rear axle only, or to both the rear and front axles whenever necessary. The low ratio, however, is available only in four wheel drive.

All the shafts are mounted on taper roller bearings. Lubrication is by splash. Fig 2. shows the general layout of the shafts and gears in the auxiliary gear box and the working is described in the Fig. 3. The input shaft carries a spur gear A and a helical gear B both bush mounted. On the lay shaft the spur gear C is spline mounted while the helical gear D is integral with the shaft. The output drive consists of two shafts. The rear output shaft carries an integral gear E.

Fig. 2. Auxiliary Gear Box.

It also indicates the relative position of the selector mechanism when the high/low selector is in the neutral position. The drive to the auxiliary gear box is received by a driving flange on the front end of the input shaft. Since the sliding dog is in neutral position the bush mounted gears A and B do not transmit drive to the lay shaft. Therefore no drive is transmitted to the axles.

Fig. 3 Working of Auxiliary Gear Box.

The AGB consists of discrete components for which dimensional variations exist; the most important are 3 gears and 3 shafts. The sectional view of the AGB is shown in Fig. 4, which includes all the components; only these 6 components are considered for preparing the process sheets and the gauges.

Jigs and fixtures are designed for those components according to the requirements, and the components are then manufactured so as to satisfy the AGB Mark IV requirements.


Fig. 4. Sectional View of Auxiliary Gear Box.

II. METHODOLOGY

A. Process Sheet Generation

A process sheet is the plan containing the complete data for every process the component undergoes from the raw material stage to the finished product. It is prepared using the Enterprise Resource Planning (ERP) software, which generates the plan in the form of a report.

The process sheet contains all the details of each component, including the dimensions, tooling, gauging and the manufacturing/machining processes applied to it. These details are obtained from a detailed study of the process route of the component, a study that also helps eliminate the seven wastes of the Toyota Production System.

ERP is the software used at Ashok Leyland for process sheet generation, process validation and planning. The input is given through data-entry screens, whereas the output is a report generated for a component containing all the details required during its manufacture. The input screens are shown in Fig. 5 and Fig. 6.

Fig. 5. Input in ERP – Process Sheet Generation.

Fig. 6. Input for various details in ERP.

The output, in report format, is used by the operators during the manufacture of the component and helps avoid delays in set-up time or in the sequence of operations followed. Thus the entire process is studied for each component, and the process sheet is then prepared for it.
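To make the report structure concrete, a minimal data-model sketch is given below. It is written in Python purely for illustration and does not represent the ERP package actually used; the part number, field names and operations shown are assumed examples only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:
    """One machining/manufacturing step of a process sheet (illustrative fields)."""
    op_no: int          # sequence number of the operation
    description: str    # e.g. "Rough turn outer diameter"
    machine: str        # machine or work centre
    tooling: str        # tools required for the operation
    gauge: str          # gauge used to check the operation

@dataclass
class ProcessSheet:
    """Process sheet for one discrete component, from raw material to finish."""
    part_no: str
    part_name: str
    raw_material: str
    operations: List[Operation] = field(default_factory=list)

    def report(self) -> str:
        """Render the data-entry contents as a shop-floor report."""
        lines = [f"PROCESS SHEET  {self.part_no}  ({self.part_name})",
                 f"Raw material: {self.raw_material}"]
        for op in self.operations:
            lines.append(f"{op.op_no:>3}  {op.description:<28} {op.machine:<10} "
                         f"{op.tooling:<18} {op.gauge}")
        return "\n".join(lines)

# Assumed usage for one shaft of the auxiliary gear box
sheet = ProcessSheet("AGB-OUT-REAR", "Output shaft (rear)", "Forged alloy steel blank")
sheet.operations.append(Operation(10, "Face and centre both ends", "Lathe", "Facing tool", "Length gauge"))
sheet.operations.append(Operation(20, "Rough turn outer diameter", "Lathe", "Turning insert", "Caliper gauge"))
print(sheet.report())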

B. Gauging

Gauging is another important aspect of manufacturing a component. After each process, the component is checked with gauges in order to decide whether it can be accepted or rejected. In case of rejection, it is inspected again to decide whether it should be sent for rework or scrapped. If it is sent for rework, the inspection is repeated with the gauges and the cycle continues.
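The accept / reject / rework decision described above can be expressed as a simple rule. The Python sketch below is only an illustration of that flow for a shaft checked with a GO/NOGO caliper gauge; the assumption that an oversize shaft goes to rework while an undersize one is scrapped is the usual convention and is not taken from the process sheets themselves.

def inspect_shaft(go_passes: bool, nogo_passes: bool) -> str:
    # go_passes:   the GO end of the caliper gauge passes over the shaft
    #              (the shaft is not above its upper limit)
    # nogo_passes: the NOGO end also passes over the shaft
    #              (the shaft is below its lower limit, i.e. undersize)
    if go_passes and not nogo_passes:
        return "accept"          # within both limits
    if not go_passes:
        return "rework"          # oversize: material can still be removed, then re-inspect
    return "scrap"               # undersize: cannot be recovered by machining

# Example: GO passes and NOGO does not, so the component is accepted
print(inspect_shaft(go_passes=True, nogo_passes=False))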


The reference for the gauge dimensions is the Gaugemaker's Tolerance Chart, which gives the details for inside measurements and outside measurements. Three parameters are mainly required for fixing the gauge dimensions: the wear limit, which is the limit up to which the gauge may be used, and the GO and NOGO limits, which are used for checking the component dimensions. The formulae used to calculate the gauge dimensions for the required dimensions of a component are given in Table I [9].

TABLE I

GAUGEMAKER’S TOLERANCE CHART

From the table,

G = higher limit for the workpiece
K = lower limit for the workpiece
H = tolerance on cylindrical plug or cylindrical bar gauges
Hs = tolerance on spherical gauges
H1 = tolerance on gauges for shafts
Hp = tolerance on reference disks for gap gauges
y = margin, outside the GO workpiece limit, of the wear limit of gauges for holes
y1 = margin, outside the GO workpiece limit, of the wear limit of gauges for shafts
z = distance between the centre of the tolerance zone of new GO gauges for holes and the GO workpiece limit
z1 = distance between the centre of the tolerance zone of new GO gauges for shafts and the GO workpiece limit
α = safety zone provided for compensating measuring uncertainties of gauges for holes of nominal diameter over 180 mm
α1 = safety zone provided for compensating measuring uncertainties of gauges for shafts of nominal diameter over 180 mm
y' and y'1 = difference in absolute value between y and α, or between y1 and α1

                                 Nominal size up to 180 mm            Nominal size above 180 mm
Gauges for      Gauge size       Gauges         Reference gauges      Gauges             Reference gauges
                                 Basic   Mfg    Basic      Mfg        Basic        Mfg   Basic        Mfg
                                 size    tol.   size       tol.       size         tol.  size         tol.

Inside          NO GO            G       H/2    Not provided          G - α        Hs/2* Not provided
measurements    GO NEW           K + z   H/2    Not provided          K + z        Hs/2* Not provided
(holes)         WEAR LIMIT       K - y   -      -          -          K - y + α    -     -            -

Outside         NO GO            K       H1/2   K          Hp/2       K + α1       H1/2  K + α1       Hp/2
measurements    GO NEW           G - z1  H1/2   G - z1     Hp/2       G - z1       H1/2  G - z1       Hp/2
(shafts)        WEAR LIMIT       G + y1  -      G + y1     Hp/2       G + y1 - α1  -     G + y1 - α1  Hp/2

* Hs/2 applies to spherical gauges; H/2 should be used only when spherical gauges are not used.

Fig. 7. Caliper Gauge for the Output Shaft

As per the required dimensions, the caliper gauge for measuring the diameter of the shaft is designed by calculating the GO, NOGO and wear limits for the gauge, as shown in Fig. 7.
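As a worked illustration of Table I, the sketch below computes the GO, NOGO and wear-limit sizes of a caliper (snap) gauge for a shaft of nominal size up to 180 mm. The shaft limits and the chart values H1, z1 and y1 used in the example are assumed numbers chosen only to show the calculation; they are not the actual output-shaft dimensions.

def caliper_gauge_sizes(G: float, K: float, H1: float, z1: float, y1: float) -> dict:
    # G  : higher limit of the workpiece (mm)
    # K  : lower limit of the workpiece (mm)
    # H1 : manufacturing tolerance on gauges for shafts (mm)
    # z1 : offset of the centre of the new GO gauge tolerance zone from G (mm)
    # y1 : wear margin outside the GO workpiece limit (mm)
    return {
        "GO (new)": (G - z1 - H1 / 2, G - z1 + H1 / 2),   # basic size G - z1, tolerance H1/2
        "NOGO": (K - H1 / 2, K + H1 / 2),                 # basic size K, tolerance H1/2
        "Wear limit": G + y1,                             # GO end is discarded beyond this size
    }

# Assumed example: shaft limits 50.000/49.975 mm with illustrative chart values
for name, value in caliper_gauge_sizes(G=50.000, K=49.975,
                                        H1=0.004, z1=0.003, y1=0.003).items():
    print(name, value)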

C. Fixture Design

The 3-2-1 locating principle can be used to design the fixtures for the components [2, 4]. In this method, the 3 locators in the first plane arrest 5 degrees of freedom, the 2 locators in the second plane arrest 3 degrees of freedom, and the remaining locator arrests 1 degree of freedom; in total, 9 degrees of freedom are arrested by the 3-2-1 locating principle. The remaining 3 degrees of freedom are arrested by clamps placed at appropriate locations on the workpiece, such that they neither deform nor dislocate it. Each fixture consists of the base plate, a bung, the clamps with studs and heel pins, brackets for the locating handle, a cutter setter and a latch [3].
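A minimal bookkeeping sketch of this count, assuming the twelve-degree-of-freedom convention normally used with the 3-2-1 principle (both senses of translation and of rotation about each axis), is given below; the locator counts are those stated above.

# Degrees of freedom arrested by each element of a 3-2-1 fixture layout
# (12-DOF convention: +/- translation and +/- rotation about each of the three axes).
arrested = {
    "3 locators on the base plane": 5,
    "2 locators on the second plane": 3,
    "1 locator on the third plane": 1,
    "clamps": 3,
}

by_locators = sum(v for k, v in arrested.items() if k != "clamps")
print("Arrested by locators:", by_locators)             # 9, as required by the 3-2-1 principle
print("Arrested in total:   ", sum(arrested.values()))  # 12, the full set in this convention
assert by_locators == 9 and sum(arrested.values()) == 12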

The clamping is done using clamping levers, which make it easy to fix and locate the workpiece as well as to remove it from its location. The most important factor in selecting the clamps is the calculation of the clamping and cutting forces: the clamping forces should be greater than the cutting forces so that the workpiece is neither deformed nor displaced while the cutting action takes place.

III. CALCULATION OF MACHINING DATA

Let D = tool diameter in inches
W = width of cut in inches
d = depth of cut in inches
f = feed per tooth in inches
N = number of cutter teeth
RPM = spindle speed in rev/min
RMR = material removal rate in in3/min
Pv = unit power in hp per in3/min of material removed
Pc = spindle power, hp
E = efficiency
Vc = SFM, tool surface speed in feet per minute
Fc = cutting force, lbf
µ = coefficient of friction
Nc = clamping force, lbf

RPM = 12 * Vc / (π * D) (1)

RMR = W * d * f * N * RPM (2)

Pc = Pv * RMR (3)

Fc = 33,000 * Pc * E / Vc (4)

Nc = Fc / µ (5)

Finally, Nc > Fc.

Therefore, the clamping force is higher than the cutting force, as recommended [5].

Using the above data, the clamping forces and the cutting forces can be calculated and compared. If the clamping forces are higher than the cutting forces, the designed fixture can be taken to suit the operation with the chosen clamping arrangement. Thus, the fixtures are designed and used. For design purposes, the 2D drawings are made using AutoCAD and the 3D modeling is done in Autodesk Inventor Professional.
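The comparison can be carried out end-to-end with equations (1)-(5). The Python sketch below is illustrative only; the cutter size, cutting conditions, unit power Pv, efficiency E and friction coefficient µ are assumed values, not data taken from the AGB fixtures.

import math

def clamping_check(D, W, d, f, N, Vc, Pv, E, mu):
    # Evaluate equations (1)-(5) in inch/lbf units.
    rpm = 12 * Vc / (math.pi * D)     # (1) spindle speed from the surface speed
    rmr = W * d * f * N * rpm         # (2) material removal rate, in3/min
    Pc = Pv * rmr                     # (3) spindle power, hp
    Fc = 33000 * Pc * E / Vc          # (4) cutting force, lbf
    Nc = Fc / mu                      # (5) clamping force needed to resist Fc by friction
    return rpm, rmr, Pc, Fc, Nc

# Assumed face-milling example: 4 in cutter, 8 teeth, 0.005 in/tooth, 300 SFM
rpm, rmr, Pc, Fc, Nc = clamping_check(D=4.0, W=2.5, d=0.1, f=0.005, N=8,
                                      Vc=300.0, Pv=1.1, E=0.8, mu=0.3)
print(f"RPM = {rpm:.0f}, RMR = {rmr:.2f} in3/min, Pc = {Pc:.2f} hp")
print(f"Cutting force Fc = {Fc:.0f} lbf, required clamping force Nc = {Nc:.0f} lbf")
assert Nc > Fc   # the clamping force exceeds the cutting force, as required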

Fig. 8. 3D Modeling of Milling Fixture for Output Rear in Autodesk Inventor Professional

Fig. 9. 3D Modeling of Output Rear with a flat, in Autodesk Inventor Professional

IV. CONCLUSION

The process sheets are thus prepared for the discrete components of the Auxiliary Gear Box using ERP. Gauges are also designed for these components, and their details are included in the process sheets, which form the input to the shop floor for the step-by-step procedure. The drill jigs and milling fixtures are then designed for the components according to their dimensions using design tools such as AutoCAD and Autodesk Inventor Professional. After the design stage, the drawings are released to the tool room for manufacture, and the jigs and fixtures are then used in the machine shop for manufacturing the components.

ACKNOWLEDGMENT

The authors wish to thank Mr. S. Ravichandran (Divisional Manager-HR), Mr. R. Venkat Subramaniam (Deputy Manager-HR), Mr. E. V. Suresh (Manager-Project Neptune) and Mr. S. Dayalan (PEP). This project work was supported by Ashok Leyland Ltd., Hosur, Tamil Nadu, India.

REFERENCES

[1] Nirav P. Maniar, D. P. Vakharia, A. B. Andhare and Chetan M. Patel, “Design of 28 Operations, 4 Axis - 360º Indexing Milling Fixture for CNC”, International Journal for Recent Trends in Engineering, Vol. 1, No. 5, May 2009.

[2] Necmettin Kaya, “Machining Fixture locating and clamping position optimization using genetic algorithms”, © 2005 Elsevier B. V.

[3] Javier Echave and Jami J. Shah, “Automatic Set-up and Fixture Planning for 3-Axis Milling”, © 1999 by ASME.

[4] S. Selvakumar, K. P. Arulshri, K. P. Padmanaban and K. S. K. Sasikumar, “Clamping Force Optimization for Minimum Deformation of Workpiece by Dynamic Analysis of Workpiece-Fixture System”, World Applied Sciences Journal, 11 (7): 840-846, 2010.

[5] Tool Design (MET 3331), Dr. Nasseri, Fall 2007, Machining and Clamping Forces.

[6] http://defenceproducts.ashokleyland.com/products/products.html

[7] Machine Tool Design Handbook - CMTI, Bangalore.

[8] Design Data, PSG college of Technology, Coimbatore.

[9] Gaugemaker’s Tolerance Chart.
