
IJAET is honoured to announce the publication of Volume 7, Issue 3, July 2014. Papers published in IJAET provide a glimpse into a few of the many high-quality research activities conducted by young and dynamic researchers around the world. This issue holds outstanding papers from various disciplines submitted by scholars, researchers, engineers and scientists who have been involved in active research, scholarly and creative activities. The journal is available online at http://www.ijaet.org/.


International Journal of Advances in Engineering & Technology (IJAET)

Volume 7, Issue 3, July 2014

ISSN: 2231-1963

Smooth, Simple and Timely Publishing of Review and Research Articles!


Table of Contents (Volume 7, Issue 3, July 2014)

1. Data Warehouse Design and Implementation Based on Quality Requirements
   Khalid Ibrahim Mohammed, pp. 642-651

2. A New Mechanism in HMIPv6 to Improve Micro and Macro Mobility
   Sahar Abdul Aziz Al-Talib, pp. 652-665

3. Efficiency Optimization of Vector-Controlled Induction Motor Drive
   Hussein Sarhan, pp. 666-674

4. Flexible Differential Frequency-to-Voltage and Voltage-to-Frequency Converters using Monolithic Analogue Reconfigurable Devices
   Ivailo Milanov Pandiev, pp. 675-683

5. A Review Search Bitmap Image for Sub Image and the Padding Problem
   Omeed Kamal Khorsheed, pp. 684-691

6. Potential Use of Phase Change Materials with Reference to Thermal Energy Systems in South Africa
   Basakayi J.K., Storm C.P., pp. 692-700

7. Improving Software Quality in the Service Process Industry using Agility with Software Reusable Components as Software Product Line: An Empirical Study of Indian Service Providers
   Charles Ikerionwu, Richard Foley, Edwin Gray, pp. 701-711

8. Produce Low-Pass and High-Pass Image Filter in Java
   Omeed Kamal Khorsheed, pp. 712-722

9. The Comparative Analysis of Social Network in International and Local Corporate Business
   Mohmed Y. Mohmed AL-SABAAWI, pp. 723-732

10. Precise Calculation Unit Based on a Hardware Implementation of a Formal Neuron in a FPGA Platform
    Mohamed ATIBI, Abdelattif BENNIS, Mohamed BOUSSAA, pp. 733-742

11. Temperature Profiling at Southern Latitudes by Deploying Microwave Radiometer
    A. K. Pradhan, S. Mondal, L. A. T. Machado and P. K. Karmakar, pp. 743-755

12. Development and Evaluation of Trolley-cum-Batch Dryer for Paddy
    Mohammed Shafiq Alam and V K Sehgal, pp. 756-764

13. Joint Change Detection and Image Registration Method for Multitemporal SAR Images
    Lijinu M Thankachan, Jeny Jose, pp. 765-772

14. Load-Settlement Behaviour of Granular Pile in Black Cotton Soil
    Siddharth Arora, Rakesh Kumar and P. K. Jain, pp. 773-781

15. Harmonic Study of VFDs and Filter Design: A Case Study for Sugar Industry with Cogeneration
    V. P. Gosavi and S. M. Shinde, pp. 782-789

16. Precipitation and Kinetics of Ferrous Carbonate in Simulated Brine Solution and its Impact on CO2 Corrosion of Steel
    G. S. Das, pp. 790-797

17. Performance Comparison of Power System Stabilizer with and without FACTS Device
    Amit Kumar Vidyarthi, Subrahmanyam Tanala, Ashish Dhar Diwan, pp. 798-806

18. Hydrological Study of Man (Chandrabhaga) River
    Shirgire Anil Vasant, Talegaokar S.D., pp. 807-817

19. Crop Detection by Machine Vision for Weed Management
    Ashitosh K Shinde and Mrudang Y Shukla, pp. 818-826

20. Detection & Control of Downey Mildew Disease in Grape Field
    Vikramsinh Kadam, Mrudang Shukla, pp. 826-837

21. Ground Water Status: A Case Study of Allahabad, UP, India
    Ayush Mittal, Munesh Kumar, pp. 838-844

22. Clustering and Noise Detection for Geographic Knowledge Discovery
    Sneha N S and Pushpa, pp. 845-855

23. Proxy Driven FP Growth Based Prefetching
    Devender Banga and Sunitha Cheepurisetti, pp. 856-862

24. Search Network Future Generation Network for Information Interchange
    G. S. Satisha, pp. 863-867

25. A Brief Survey on Bio Inspired Optimization Algorithms for Molecular Docking
    Mayukh Mukhopadhyay, pp. 868-878

26. Heat Transfer Analysis of Cold Storage
    Upamanyu Bangale and Samir Deshmukh, pp. 879-886

27. Localized RGB Color Histogram Feature Descriptor for Image Retrieval
    K. Prasanthi Jasmine, P. Rajesh Kumar, pp. 887-895

28. Witricity for Wireless Sensor Nodes
    M. Karthika and C. Venkatesh, pp. 896-904

29. Study of Swelling Behaviour of Black Cotton Soil Improved with Sand Column
    Aparna, P.K. Jain and Rakesh Kumar, pp. 905-910

30. Effective Fault Handling Algorithm for Load Balancing using Ant Colony Optimization in Cloud Computing
    Divya Rastogi and Farhat Ullah Khan, pp. 911-916

31. Handling Selfishness over Mobile Ad Hoc Network
    Madhuri D. Mane and B. M. Patil, pp. 917-922

32. A New Approach to Design Low Power CMOS Flash A/D Converter
    C Mohan and T Ravisekhar, pp. 923-929

33. Optimization and Comparative Analysis of Non-Renewable and Renewable System
    Swati Negi and Lini Mathew, pp. 930-937

34. A Feed Forward Artificial Neural Network based System to Minimize DOS Attack in Wireless Network
    Tapasya Pandit & Anil Dudy, pp. 938-947

35. Improving Performance of Delay Aware Data Collection Using Sleep and Wake Up Approach in Wireless Sensor Network
    Paralkar S. S. and B. M. Patil, pp. 948-956

36. Improved New Visual Cryptographic Scheme Using One Shared Image
    Gowramma B.H, Shyla M.G, Vivekananda, pp. 957-966

37. Segmentation of Brain Tumour from MRI Images by Improved Fuzzy System
    Sumitharaj.R, Shanthi.K, pp. 967-973

38. Implementation of Classroom Attendance System Based on Face Recognition in Class
    Ajinkya Patil, Mrudang Shukla, pp. 974-979

39. A Tune-In Optimization Process of AISI 4140 in Raw Turning Operation using CVD Coated Insert
    C. Rajesh, pp. 980-990

40. A Modified Single-frame Learning based Super-Resolution and its Observations
    Vaishali R. Bagul and Varsha M. Jain, pp. 991-997

41. Design Modification and Analysis of Two Wheeler Cooling Fins: A Review
    Mohsin A. Ali and S.M Kherde, pp. 998-1002

42. Virtual Wireless Keyboard System with Co-ordinate Mapping
    Souvik Roy, Ajay Kumar Singh, Aman Mittal, Kunal Thakral, pp. 1003-1008

43. Secure Key Management in Ad-Hoc Network: A Review
    Anju Chahal and Anuj Kumar, Auradha, pp. 1009-1017

44. Prediction of Study Track by Aptitude Test using Java
    Deepali Joshi and Priyanka Desai, pp. 1018-1026

45. Unsteady MHD Three Dimensional Flow of Maxwell Fluid Through a Porous Medium in a Parallel Plate Channel under the Influence of Inclined Magnetic Field
    L. Sreekala, M. Veera Krishna, L. Hari Krishna and E. Kesava Reddy, pp. 1027-1037

46. Video Streaming Adaptivity and Efficiency in Social Networking Sites
    G. Divya and E R Aruna, pp. 1038-1043

47. Intrusion Detection System using Dynamic Agent Selection and Configuration
    Manish Kumar, M. Hanumanthappa, pp. 1044-1052

48. Evaluation of Characteristic Properties of Red Mud for Possible Use as a Geotechnical Material in Civil Construction
    Kusum Deelwal, Kishan Dharavath, Mukul Kulshreshtha, pp. 1053-1059

49. Performance Analysis of IEEE 802.11e EDCA with QoS Enhancements Through Adapting AIFSN Parameter
    Vandita Grover and Vidusha Madan, pp. 1060-1066

50. Data Derivation Investigation
    S. S. Kadam, P.B. Kumbharkar, pp. 1067-1074

51. Design and Implementation of Online Patient Monitoring System
    Harsha G S, pp. 1075-1081

52. Comparison Between Classical and Modern Methods of Direction of Arrival (DOA) Estimation
    Mujahid F. Al-Azzo, Khalaf I. Al-Sabaaw, pp. 1082-1090

53. Modelling Lean, Agile, Leagile Manufacturing Strategies: A Fuzzy Analytical Hierarchy Process Approach for Ready Made Ware (Clothing) Industry in Mosul, Iraq
    Thaeir Ahmed Saadoon Al Samman, pp. 1091-1108

Members of IJAET Fraternity A - N


DATA WAREHOUSE DESIGN AND IMPLEMENTATION BASED

ON QUALITY REQUIREMENTS

Khalid Ibrahim Mohammed
Department of Computer Science, College of Computer, University of Anbar, Iraq.

ABSTRACT

Data warehouses are a modern take on an ancient technique: since the early days of relational databases, the idea of keeping historical data for later reference has existed, initially in the primitive form of archives created to preserve the historical data, despite the special techniques required to recover these data from the different storage modes. This research applies structured databases to a trading company operating across the continents; the company has a set of branches, each with its own stores and showrooms, and each branch groups sections with specific activities, such as stores management, showrooms management, accounting management, contracts and other departments. It is also assumed that the company's centre distributes the software that manages the databases of all branches, to ensure safe performance, standardize processing and prevent possible errors and bottleneck problems. The research also identifies the requirements best suited to the implementation of the data warehouse (DW): the information managed by such an implementation must be highly accurate, and the consistency and security of the information must be ensured. In the schema domain, a comparison is applied between the two schemas (star and snowflake) under the concepts of the multidimensional database. It turns out that the star schema is better than the snowflake schema in query complexity, query performance and foreign key joins. Finally, it has been concluded that the star schema centres on the fact table with denormalized dimensions, while the snowflake schema centres on the fact table with normalized dimensions.

KEYWORDS: Data Warehouses, OLAP Operation, ETL, DSS, Data Quality.

I. INTRODUCTION

A data warehouse is a subject-oriented, integrated, nonvolatile, and time-variant collection of data in

support of management’s decisions. The data warehouse contains granular corporate data. Data in the

data warehouse is able to be used for many different purposes, including sitting and waiting for future

requirements which are unknown today [1]. Data warehouse provides the primary support for

Decision Support Systems (DSS) and Business Intelligence (BI) systems. Data warehouse, combined

with On-Line Analytical Processing (OLAP) operations, has become more and more popular in Decision

Support Systems and Business Intelligence systems. The most popular data model of Data warehouse

is multidimensional model, which consists of a group of dimension tables and one fact table

according to the functional requirements [2]. The purpose of a data warehouse is to ensure the

appropriate data is available to the appropriate end user at the appropriate time [3]. Data warehouses

are based on multidimensional modeling. Using On-Line Analytical Processing tools, decision

makers navigate through and analyze multidimensional data [4].

Data warehouse uses a data model that is based on multidimensional data model. This model is also

known as a data cube, which allows data to be modeled and viewed in multiple dimensions [5]. The schema of a data warehouse relies on two kinds of elements: facts and dimensions. Facts are used to record measures about situations or events. Dimensions are used to analyze these measures, particularly through aggregation operations (counting, summation, average, etc.) [6, 7]. Data

Quality (DQ) is the crucial factor in data warehouse creation and data integration. Without insightful analysis of data problems, the data warehouse is bound to fail, causing great economic loss and faulty decisions [8]. The quality of data is often evaluated to determine usability and to establish the

processes necessary for improving data quality. Data quality may be measured objectively or

subjectively. Data quality is a state of completeness, validity, consistency, timeliness and accuracy

that makes data appropriate for a specific use [9]. The paper is divided into seven sections. Section 1 introduces the paper, defines the data warehouse and discusses the quality of the data warehouse. Section 2 presents related work. Section 3 presents the data warehouse creation; the main idea is that a data warehouse database gathers data from the databases of an overseas trading company. Section 4 describes the data warehouse design: for this study, we suppose a hypothetical company with many branches around the world, each branch with many stores and showrooms scattered within the branch location and a database to manage branch information. Section 5 describes our evaluation study of quality criteria for the DW, which covers aspects related both to the quality and the performance of our approach and the obtained results, and compares the star schema with the snowflake schema. Section 6 provides conclusions. Finally, Section 7 describes open issues and our planned future work.

1.1 Definition of Data Warehouse

A data warehouse is a relational database that is designed for query and analysis rather than for

transaction processing. It usually contains historical data derived from transaction data, but it can

include data from other sources. It separates analysis workload from transaction workload and

enables an organization to consolidate data from several sources. In addition to a relational database,

a data warehouse environment can include an extraction, transportation, transformation, and loading

(ETL) solution, an online analytical processing (OLAP) engine, client analysis tools, and other

applications that manage the process of gathering data and delivering it to business users. A common

way of introducing data warehousing is to refer to the characteristics of a data warehouse as set forth

by William Inmon [10]:

1. Subject Oriented.

2. Integrated.

3. Nonvolatile.

4. Time Variant.

1.2 The Quality of Data Warehouse

Data quality has been defined as the fraction of performance over expectancy, or as the loss imparted

to society from the time a product is shipped [11]. We believe the best definition is the one found in [12, 13, 14]: data quality is defined as "fitness for use". This definition directly implies that the concept of data quality is relative; for example, data semantics differ for each distinct user. Data quality is mainly concerned with bad data: data which is missing, incorrect or invalid in some respect. Broadly, data quality is attained when a business uses data that is comprehensive, understandable and consistent, and grasping the main data quality dimensions is the first step towards data quality improvement. To be usable in an effective and efficient manner, data has to satisfy a set of quality criteria, and data satisfying these criteria is said to be of high quality [9].

II. RELATED WORK

In this section we review related work on data warehouse design and implementation based on quality requirements. The paper introduced by Panos Vassiladis, Mokrane Bouzegeghoub and Christoph Quix (2000) proposed an approach that covers the full lifecycle of the data warehouse, allows capturing the interrelationships between different quality factors, and helps the interested user to organize them in order to fulfill specific quality goals. Furthermore, they show how the quality management of the data warehouse can guide the process of data warehouse evolution by tracking the interrelationships between the components of the data warehouse. Finally, they presented a case study as a proof of concept for the proposed methodology [15]. The paper introduced by Leo Willyanto Santoso and Kartika Gunadi (2006) describes a study which explores modeling of the dynamic parts of the data warehouse. Their metamodel enables data warehouse management, design and evolution based on a high-level conceptual perspective, which can be linked to the actual structural and physical aspects of the data warehouse architecture. Moreover, this metamodel is capable of modeling complex activities, their interrelationships, the relationship of activities with data sources, and execution details [16]. The aim of the paper introduced by Amer Nizar Abu Ali and Haifa Yousef Abu-Addose (2010) is to discover the main critical success factors (CSFs) that lead to an efficient implementation of a DW in different organizations, by comparing two organizations, namely First American Corporation (FAC) and Whirlpool, to come up with more general CSFs to guide other organizations in implementing a DW efficiently. The results of this study showed that FAC had greater returns from data warehousing than Whirlpool. Based on their extensive study of these organizations and other related resources, they categorized these CSFs into five main categories to help other organizations implement a DW efficiently and avoid data warehouse killers [17]. The model proposed by Manjunath T.N. and Ravindra S. Hegadi (2013) evaluates the data quality of decision databases at different dimensions, such as accuracy, derivation integrity, consistency, timeliness, completeness, validity, precision and interpretability, on various data sets after migration. The proposed data quality assessment model evaluates the data at different dimensions to give end users confidence to rely on it for their businesses, and the authors extended it to classify various data sets which are suitable for decision making. The results reveal that the proposed model achieves an average improvement of 12.8 percent in the evaluation criteria dimensions with respect to the selected case study [18].

III. DATA WAREHOUSE CREATION

The main idea is that a data warehouse database gathers data from the databases of an overseas trading company. For each branch of the supposed company we have a database consisting of the following schemas:

- A contracting schema holding contract and contractor data.
- A stores schema managing storage information.
- A showrooms schema managing showroom information for any branch of the supposed company.
- On top of the above schemas, an accounting schema is installed which manages all accounting operations for any branch or for the whole company.

All information is stored in fully relational tables according to the well-known third normal form. Data integrity is maintained using foreign key relationships between related tables, non-null constraints and check constraints, and Oracle database triggers are used for the same purpose. Many indexes are created for use by the Oracle optimizer to minimize DML and query response times. Security constraints are maintained using Oracle privileges, and the Oracle OLAP policy is taken into consideration.
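As an illustration of these integrity devices, here is a minimal hedged DDL sketch; the columns loosely follow the customers dimension described later in Section 5.1.1, but all table and column names are hypothetical, not taken from the paper's actual schemas:

```sql
-- Hypothetical 3NF dimensional table showing the integrity devices named
-- above: primary key, foreign key, non-null and check constraints, plus an
-- index created for the Oracle optimizer.
CREATE TABLE customers (
    customer_id     NUMBER         PRIMARY KEY,
    customer_name   VARCHAR2(100)  NOT NULL,
    mobile          VARCHAR2(20),
    email           VARCHAR2(100),
    location_id     NUMBER         NOT NULL
                    REFERENCES locations (location_id),   -- foreign key
    last_purchase   DATE,
    total_last_year NUMBER(12,2)   CHECK (total_last_year >= 0)
);

-- Index for the Oracle optimizer, to minimize query response time on joins.
CREATE INDEX customers_loc_idx ON customers (location_id);
```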

IV. DATA WAREHOUSE DESIGN

As mentioned above, a warehouse home is installed on the same machine. The data warehouse is stored in separate Oracle tablespaces and configured to use the above relational online tables as source data, so the mentioned schemas are treated as data locations. Oracle Warehouse Builder, a Java program, is used for warehouse management. The locations of the data sources are:

1. Accounting schema.

2. Stores schema.

3. Contracting schema.

4. Showrooms schema.

For this study, we suppose a hypothetical company with many branches around the world; each branch has many stores and showrooms scattered within the branch location, and each branch has a database to manage branch information. Within each supposed branch database there are the following schemas, which work according to OLAP policies and maintain security and data integrity: the accounting schema, contracting schema, stores schema and showrooms schema. All branch databases are connected to each other over a WAN.


Figure 1. Distributed database for hypothetical company.

The figure above depicts the distributed database of the overseas company, as a base (or source) for the warehouse database. This paper supposes that each node belongs to one company branch, so each branch controls its own data, while the main office of the company controls the whole data through the central node. The warehouse database could be placed at the central node, as the company needs. We suppose that all nodes use the same programs, which operate the database(s). Within each node, each activity is represented by a database schema, i.e. the stores, showrooms, contracting and other schemas; the core of all schemas is the accounting schema. According to the job load, each schema could be installed on a separate database or on the same database. All related databases across the company branches cooperate within the same WAN.

V. STUDY OF QUALITY CRITERIA FOR DW

In this study we examine some of the quality criteria, as follows:

5.1. Data warehouse using snowflake schema

Using Oracle warehouse policies, each database has the following snowflake modules:

1. Sales module.

2. Supplying module.

5.1.1. Sales module
It consists of the following relational tables:

Table 1. Relational tables of the sales module

Table name Table type Oracle schema (owner)

Sales_sh Fact table Showrooms

showrooms Dimensional table Showrooms

Items Dimensional table Accounting

Currencies Dimensional table Accounting

Customers Dimensional table Showrooms

Locations Dimensional table Accounting

The following diagram depicts the relations between the above dimensional and fact tables:

Figure 2. Sales module.


Figure 2 above represents all entities within the sales module. Every entity is designed using the Third Normal Form (3NF) rule, so it has a primary key, and the most important tools used to implement integrity and validation are Oracle constraints. After supplying data to the above module and transferring it to the Oracle warehouse design center, the data retrieved from the sales fact table (557,441 rows) is shown in Figure 3 below, which gives the detailed information for each single sale within each showroom and location: the voucher (document) number and date, the sold item, the sold quantity and the price. This data is available at the corresponding node (branch) and at the center; of course, the same data is also transferred to the warehouse database for historical purposes. A hedged example of a query over this module follows Figure 3.

Figure 3. Sales data.
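To make the module concrete, here is a minimal query sketch over the sales module of Table 1; the fact and dimension table names come from Table 1, while all join-key and measure column names are assumptions for illustration:

```sql
-- Hypothetical report over the snowflake sales module: the fact table
-- sales_sh joins its dimensions through foreign keys; note that locations
-- is reached through showrooms, a dimension-of-a-dimension join that is
-- characteristic of snowflaking. Column names are illustrative only.
SELECT l.location_name,
       s.showroom_name,
       i.item_name,
       SUM(f.sold_qty)           AS total_quantity,
       SUM(f.sold_qty * f.price) AS total_sales
FROM   sales_sh  f
JOIN   showrooms s ON s.showroom_id = f.showroom_id
JOIN   items     i ON i.item_id     = f.item_id
JOIN   locations l ON l.location_id = s.location_id   -- snowflaked join
GROUP BY l.location_name, s.showroom_name, i.item_name;
```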

The customers dimensional table holds some personal data about the customer, like the mobile number, email and location, which are useful for contacting him. It also records the date of the last purchase and the total amount purchased over the last year. This data is available at the corresponding node and the center, and it is also carried over to the warehouse database. The customers dimensional table is shown in Figure 4 below.

Figure 4. Customers data.

5.1.2. Supplying module
Supplying the company with materials according to the company's usual contracts is managed by this module, which follows the snowflake design. It consists of the following relational tables.

Table 2. Relational tables within the supplying module

Table name Table type Oracle schema (owner)

STR_RECIEVING Fact table Stores

Contracts Dimensional table contracting

Items Dimensional table Accounting

Currencies Dimensional table Accounting

Stores Dimensional table Stores

Locations Dimensional table Accounting

Daily_headers Dimensional table Accounting


The diagram in Figure 5 depicts the relations between the above dimensional and fact tables. They obey the 3NF rule, so they have their primary key constraints and are constrained to each other using foreign key constraints. The fact table STR_RECIEVING holds the information of all charges received at the company stores (described by the stores table owned by the stores schema) according to the contracts (described by the contracts table owned by the contracting schema). The daily headers dimensional table represents the accounting information for each contract: using Oracle triggers, when a new record is inserted into the STR_RECIEVING fact table, the corresponding accounting data is created in a details table row related (through a foreign key) to the daily headers dimensional table. Also, any charge value can be converted to the wanted currency using the data maintained by the currencies dimensional table owned by the accounting schema.

Figure 5. Supplying module.

For security reasons, direct access to the fact table object is not allowed; instead, an imaginary view (named str_recieving_v) is created, and all users are allowed to issue DML (data manipulation language) instructions against this view. A certain piece of code (an Oracle trigger) is written to manipulate the data according to the server policies (data integrity and consistency) as the user supplies data to the imaginary view; a hedged sketch of this pattern follows Figure 6. After supplying data to the above module and transferring it to the Oracle warehouse design center, the data retrieved from the str_recieving fact table (415,511 rows) is shown in the following figure.

Figure 6. Received charges on str_recieving fact table.
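Here is a minimal sketch of the view-plus-trigger pattern just described; only the view name str_recieving_v appears in the text, so the column list and the trigger body are assumptions:

```sql
-- The fact table is hidden behind a view; users issue DML against the view.
-- Column names are hypothetical.
CREATE OR REPLACE VIEW str_recieving_v AS
    SELECT doc_no, doc_date, contract_id, item_id, store_id, qty, charge_value
    FROM   str_recieving;

-- An INSTEAD OF trigger enforces the server policies (integrity and
-- consistency) before the row reaches the underlying fact table.
CREATE OR REPLACE TRIGGER str_recieving_v_trg
    INSTEAD OF INSERT ON str_recieving_v
    FOR EACH ROW
BEGIN
    -- hypothetical consistency check before accepting the received charge
    IF :NEW.qty <= 0 THEN
        RAISE_APPLICATION_ERROR(-20001, 'Received quantity must be positive');
    END IF;
    INSERT INTO str_recieving
        (doc_no, doc_date, contract_id, item_id, store_id, qty, charge_value)
    VALUES
        (:NEW.doc_no, :NEW.doc_date, :NEW.contract_id, :NEW.item_id,
         :NEW.store_id, :NEW.qty, :NEW.charge_value);
END;
/
```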

During charge insertion, a background process (an Oracle trigger) updates the stock dimension data to reflect the latest information about the quantities in stock at each node and at the center. The stock data contains the quantity balance and the quantities in and out for the current year; it is available at the corresponding node and the center in the online database, and in the warehouse database for previous years. The stock data could look like Figure 7, as viewed in Oracle SQL Developer.

Figure 7. Stock dimensional table data.

5.2. Data warehouse using star schema

As a case study using the warehouse star schema, we have:

1. Stocktaking module.

2. Accounting module.

5.2.1. Stocktaking module

It manages the current stock within any store of the company; its data is determined by the following table.

Table 3. Tables cooperating within the stocktaking module

Table name Table type Oracle schema (owner)

Stock Fact table Stores

Items Dimensional table Accounting

Stores Dimensional table Stores

Currencies Dimensional table Accounting

showrooms Dimensional table Showrooms

Locations Dimensional table Accounting

contracts Dimensional table Contracting

Str_recieving Fact table Stores

To_show_t Fact table Stores

The stock fact table holds the actual stock balances within each store belonging to each branch, and for the whole company at the center. The following diagram depicts the relations between the tables listed above.

Figure 8. Stocktaking module as a warehouse star schema.


DML (data manipulation language) on the stock fact table is performed through Oracle triggers, which are the most trusted programs for maintaining the highest level of integrity and security; hence the imaginary view (named stock_v) was created, users are allowed to supply data to that view, and the server processes the supplied data using an Oracle trigger, following the same pattern sketched above for str_recieving_v. Querying the denormalized stock fact table within the star schema module using the Oracle design center is depicted below (the number of rows in the stock table in our case study is 15,150). The query execution of Figure 9 is allowed for all users (public); a hedged star-style query is sketched after the figure.

Figure 9. Stocktaking on oracle warehouse design center.
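For contrast with the earlier snowflake query, a minimal star-style sketch over the stocktaking module of Table 3: each dimension joins the fact table directly, which is where the smaller number of foreign-key joins comes from. Join-key and measure column names are assumptions:

```sql
-- Hypothetical star query: every dimension is joined straight to the stock
-- fact table, with no dimension-to-dimension hops as in the snowflake case.
SELECT st.store_name,
       i.item_name,
       l.location_name,
       SUM(f.qty_balance) AS stock_balance
FROM   stock     f
JOIN   stores    st ON st.store_id   = f.store_id
JOIN   items     i  ON i.item_id     = f.item_id
JOIN   locations l  ON l.location_id = f.location_id   -- direct join
GROUP BY st.store_name, i.item_name, l.location_name;
```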

5.2.2. Accounting module

One of the most important aspects of the accounting functions is the calculation of the daily cash within each showroom belonging to the company. The daily totals for each branch and the grand total can be calculated, and time-based cash figures can be accumulated later on demand.

Table 4. The tables needed for this activity

Table name Type Oracle schema (owner)

Daily cash Fact table Accounting

Show_sales Fact table Accounting

Showrooms Dimensional table Showroom

Currencies_tab Dimensional table Accounting

Locations Dimensional table Accounting

Customers Dimensional table showrooms

Daily cash is a view used to reflect the actual cash within each showroom on a daily basis.

Figure 10. Daily cash using warehouse star schema.


Using inner SQL joins, one could retrieve data about daily cash as follows.

Figure 11. Grand daily cash as depicted by Oracle warehouse design center.
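A minimal sketch of such an inner-join retrieval over the accounting module of Table 4; the show_sales fact table and the dimension names come from Table 4, while the join keys and measure columns are assumptions:

```sql
-- Hypothetical daily-cash retrieval using inner joins, grouping showroom
-- sales per location and per day. Column names are illustrative only.
SELECT l.location_name,
       sh.showroom_name,
       TRUNC(f.sale_date) AS cash_day,
       SUM(f.amount)      AS daily_cash
FROM   show_sales f
JOIN   showrooms  sh ON sh.showroom_id = f.showroom_id
JOIN   locations  l  ON l.location_id  = f.location_id
GROUP BY l.location_name, sh.showroom_name, TRUNC(f.sale_date);
```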

VI. CONCLUSIONS

The following conclusions have been drawn:

1. Query and DML response times are reduced by the many indexes created for use by the Oracle optimizer.

2. The star schema outperforms the snowflake schema, and the following points are reached:

- Query complexity: in the star schema the query is very simple and easy to understand, while in the snowflake schema the query is more complex due to the multiple foreign key joins between dimension tables.
- Query performance: the star schema gives high performance, since the database engine can optimize and boost query performance based on a predictable framework, while the snowflake schema needs more foreign key joins and therefore longer query execution times compared with the star schema.
- Foreign key joins: the star schema has fewer joins, while the snowflake schema has a higher number of joins.

Finally, it has been concluded that the star schema centres on the fact table with denormalized dimensions, while the snowflake schema centres on the fact table with normalized dimensions.

VII. FUTURE WORKS

1. Using other criteria in developing the implementation of the proposed system.
2. Using statistical methods to implement other criteria of the data warehouse.
3. Applying metadata algorithms and comparing the bitmap index with the B-tree index.
4. Applying this work to a real organization rather than a prototype warehouse.
5. Taking advantage of the above standards to improve the performance of data warehouse use in institutions according to their environment.

ACKNOWLEDGEMENTS

I would like to thank my supervisor Assist. Prof. Dr. Murtadha Mohammed Hamad for his

guidance, encouragement and assistance throughout the preparation of this study. I would also like to

extend thanks and gratitude to all teaching and administrative staff in the College of Computer –

University of Anbar, and all those who lent me a helping hand and assistance. Finally, special thanks

are due to members of my family for their patience, sacrifice and encouragement.


REFERENCES

[1] H. William Inmon, “Building the Data Warehouse”. Fourth Edition Published by Wiley Publishing, Inc.,

Indianapolis, Indiana.2005.

[2] R. Kimball, “The data warehouse lifecycle toolkit”. 1st Edition ed.: New York, John Wiley and Sons,

1998.

[3] K.W. Chau, Y. Cao, M. Anson and J. Zhang, “Application of Data Warehouse and Decision Support

System in Construction Management”. Automation in Construction 12 (2) (2002) 213-224.

[4] Nicolas Prat, Isabelle Comyn-Wattiau and Jacky Akoka , “Combining objects with rules to represent

aggregation knowledge in data warehouse and OLAP systems”. Data & Knowledge Engineering 70 (2011)

732–752.

[5] Singhal, Anoop, “Data Warehousing and Data Mining Techniques for Cyber Security”. Springer. United

states of America.2007.

[6] Bhanali, Neera,”Strategic Data Warehousing: Achieving Alignment with Business”. CRC Prees .

United State of America. 2010.

[7] Wang, John. “Encyclopedia of Data Warehousing and Mining “.Second Edition. Published by

Information Science Reference. United States of America. 2009.

[8] Yu. Huang, Xiao-yi. Zhang, Yuan Zhen, Guo-quan. Jiang, "A Universal Data Cleaning Framework Based on User Model", IEEE, ISECS International Computing, Communication, Control, and Management, Sanya, China, Aug 2009.

[9] T.N. Manjunath, S. Hegadi Ravindra, G.K. Ravikumar, “Analysis of Data Quality Aspects in Data

Warehouse Systems”. International Journal of Computer Science and Information Technologies, Vol. 2 (1),

2011, 477-485.

[10] Paul Lane, “Oracle Database Data Warehousing Guide”. 11g Release 2 (11.2), September 2011.

[11] D.H. Besterfield, C. Besterfield-Michna, G. Besterfield, and M. Besterfield-Sacre, ” Total Quality

Management”. Prentice Hall, 1995.

[12] G.K. Tayi, D.P. Ballou, “Examining Data Quality”. In Communications of the ACM, 41(2), 1998. pp.

54-57.

[13] K. Orr,” Data quality and systems theory”. Communications of the ACM, 41(2), 1998. pp. 66-71.

[14] R.Y. Wang, D. Strong, L.M. Guarascio, “Beyond Accuracy: What Data Quality Means to Data

Consumers”. Technical Report TDQM-94-10, Total Data Quality Management Research Program, MIT Sloan

School of Management, Cambridge, Mass., 1994.

[15] Panos Vassiladis, Mokrane Bouzegeghoub and Christoph Quix. "Towards quality-oriented data warehouse usage and evolution". Information Systems, Vol. 25, No. 2, pp. 89-115, 2000.

[16] Leo Willyanto Santoso and Kartika Gunadi. “A proposal of data quality for data warehouses

environment”. Journal Information VOL. 7, NO. 2, November 2006: 143 – 148

[17] Amer Nizar AbuAli and Haifa Yousef Abu-Addose. “Data Warehouse Critical Success Factors”.

European Journal of Scientific Research ISSN 1450-216X Vol.42 No.2 (2010), pp.326-335.

[18] T.N. Manjunath, S. Hegadi Ravindra, “Data Quality Assessment Model for Data Migration Business

Enterprise”. International Journal of Engineering and Technology (IJET) ISSN: 0975-4024, Vol 5 No 1 Feb-

Mar 2013.

AUTHOR PROFILE

Khalid Ibrahim Mohammed was born in Baghdad, Iraq, in 1981. He received his Bachelor's degree in Computer Science from Al-Mamon University College, Baghdad, Iraq, in 2006, and his M.Sc. in Computer Science from the College of Computer, Anbar University, Anbar, Iraq, in 2013. He has a total of 8 years of industry and teaching experience. His areas of interest are Data Warehouse & Business Intelligence, multimedia and databases.


A NEW MECHANISM IN HMIPV6 TO IMPROVE MICRO AND

MACRO MOBILITY

Sahar Abdul Aziz Al-Talib
Computer and Information Engineering, Electronics Engineering, University of Mosul, Iraq

ABSTRACT

The paper proposes a new mechanism in the Hierarchical Mobile IPv6 (HMIPv6) implementation for utilization in an IPv6 Enterprise Gateway. This mechanism provides seamless mobility and fast handover of a mobile node in HMIPv6 networks; besides, it provides continuous communication for the mobile node while it roams among the HMIPv6 components. The mechanism anticipates the future movement of the mobile node and accordingly provides an effective updating mechanism. A bitmap has been proposed to help in applying the new mechanism, as will be explained throughout this paper. Instead of scanning the prefix bits, which might be up to 64 bits, for subnet discovery, about 10% of this length is sufficient to determine the hierarchical topology. This enables fast micro and macro HMIPv6 mobility in the Enterprise Gateway.

KEYWORDS: Hierarchical Mobile IPv6, bitmap, Enterprise Gateway, Mobility Anchor Point.

I. INTRODUCTION

An IPv6 Enterprise Gateway is an apparatus or device which interconnects the public network with the enterprise network. The enterprise network includes many sub-networks with different access routers or home anchors. For flexibility, HMIPv6 is used to organize these enterprise networks in a hierarchical way. HMIPv6 stands for Hierarchical Mobile Internet Protocol version 6; it provisions the different access areas by assigning a mobility anchor point (MAP) to each area. The MAP stands in for the enterprise gateway (home agent) in its particular coverage area. Besides, it reduces the control messages exchanged and decreases the time taken when the mobile node roams to another network: the mobile node can use the local MAP to keep the communication going without the need to communicate with the enterprise gateway.

It is known that Hierarchical Mobile IPv6 (HMIPv6) provides a flexible mechanism for local mobility

management within visited networks. The main problem in hierarchical mobility is that the

communication between the correspondent node (CN) and the mobile node (MN) suffers from

significant delay when the mobile node roams between different MAPs or between different access routers (ARs) in the same MAP coverage area. This happens because the MN in the visited network acquires a local care-of address (LCoA): it receives the router advertisement (RA) and uses its prefix to build its new LCoA. This process causes a communication delay between the CN and the MN for a while, especially when the roaming happens between different MAPs, and it involves more binding update messages being sent, not just to the local MAP but also to the enterprise gateway. As shown in Figure 1, the message flow to acquire a new CoA starts after the MN has reached the foreign network.

The patent in [1] anticipates the probable CoAs based on the number of Access Routers in the vicinity, which determines the number of probable CoAs in each single movement, and the MAP should then multicast the traffic to all of these addresses. In this paper, by contrast, the Home Anchor creates the anticipated CoA based on the bitmap received from the MN; this bitmap refers to the strongest signal with which the MN was triggered by a foreign router, which means the Home Anchor has a single probable address to tunnel the packets to, besides the current CoA.

RFC 4260 [2] proposed the idea of anticipating the new CoA and tunneling the packets between the previous Access Router and the new Access Router. The difference between the work in this paper and [2] is that in [2] packet loss is likely to occur if the process is performed too late or too early with respect to the time at which the mobile node detaches from the previous access router and attaches to the new one, or in case the BU cache in the home anchor is updated but the layer 2 handover has not completed yet. The work in [3] proposed a scheme that reduces the total cost of the macro mobility handover of the original HMIPv6 by 82%.

Kumar et al. in [4] proposed an analytical model which shows the performance and applicability of

MIPv6 and HMIPv6 against some key parameters in terms of cost.

An adaptive MAP selection based on active overload prevention (MAP-AOP) is proposed in [5]. The

MAP periodically evaluates the load status by using a dynamic weighted load evaluation algorithm, and

then sends the load information to the covered access routers (AR) by using the expanded routing

advertisement message in a dynamic manner.

In this paper, a solution to the explained problem is proposed, in which the mobile node sends a binding update request at the moment it initiates the movement to the foreign network, and a bitmap field is added to the binding update request to recognize the different components of the HMIPv6 network. The paper is organized as follows. Section 2 describes the methodology of the proposed mechanism, supported by figures; Section 3 illustrates the new operation of micro and macro mobility. The binding update message format, from which the bitmap idea comes, is introduced in Section 4. Finally, Section 5 concludes the work and presents the future work.

II. METHODOLOGY

The objective of this paper is achieved by a method for multi-cell cooperative communication comprising: detecting a beacon from a new access router; translating the beacon received from the new access router into a bitmap; sending a binding update request message that contains the bitmap to the current mobility anchor point; assessing the bitmap; tunneling data packets from a correspondent node through the enterprise gateway to the current destination and to the new destination simultaneously; sending an update message once the mobile node reaches the new destination; and refreshing the binding cache tables and tunneling the data packets only to the new destination according to the new address of the mobile node.


Figure 1 (a): Flow Control of Micro and Macro Mobility Process (part 1)


Figure 1 (b): Flow Control of Micro and Macro Mobility Process (part 2)

Some of the most important key points in this paper are:

- A bitmap, a small number of binary bits, is proposed to be used instead of the subnet prefix to specify the subnet.
- The binding update message is updated to include a bitmap field; the lowest part of this field refers to the router's prefix and the highest part refers to the home anchor's address. The proposed message can be called a binding update request, and it should be sent as soon as the MN initiates the movement to the anticipated foreign network.
- The mobile node has the ability to measure the beacon strength in order to decide which anchor point to join as the future home agent.
- Binding Update message: mentioned in the HMIPv6 standard [6, 7], this message is sent by the mobile node to the home anchor or the home agent (in some cases it is sent to the correspondent node). It is used to register the MN's Care of Address with its home anchor or home agent, but it is sent only after the MN acquires a CoA in its foreign network.
- Binding cache table: a mapping table that resides in the Enterprise Gateway (home agent) and the home anchor points to translate the value of the bitmap to its equivalent prefix; it is also used to track the MN's movement.
- The proposed solution contributes to both inter-domain (macro) and intra-domain (micro) mobility in terms of handover delay, which improves multimedia applications.

1. The New Operation of Micro and Macro-mobility:

a) Operation of Micro Mobility: As shown in Figure 1, the proposed solution suggests sending a binding update request (BU req.) when the MN initiates its movement to the foreign network. The MN senses the strong signal from the foreign network router and translates it into the bitmap field by setting the corresponding bit. The bitmap contributed in this paper requires updating the present binding update message by using the reserved bits that already exist for future use.
Then the MN sends the updated BU req. message to the upstream Home Anchor. When the home anchor receives the updated BU req., it checks the highest bits in the bitmap, as can be seen in the decision box (Figure 1); if these bits refer to the home anchor itself, the home anchor learns that this is micro mobility and creates a new address (CoA) for the MN according to the lowest bits of the bitmap.

Now the Home Anchor cache table has two CoAs for this particular MN: one for the MN's previous location and the other for the new location. It starts to tunnel the packets to both CoAs for a while, until the handover preparation is finalized. Therefore the MN has seamless mobility and continuous communication, even before obtaining the new CoA from the future foreign router.
When the MN reaches the foreign router area and obtains the CoA, it sends a binding update message which includes the HoA and the CoA to the upstream Home Anchor; the Home Anchor then refreshes its binding cache and deletes the old CoA.
Finally, the Home Anchor tunnels the packets only to the new CoA.

b) Operation of Macro Mobility

This section explains the mobility between two routers, each belonging to a different Home Anchor or a different domain. The process flow is shown in Figure 1.

Assume the MN initiates the movement to a router that is connected to a different Home Anchor. The MN receives the beacon from the foreign router and recognizes from the prefix that it differs from the old one. It reflects this in the higher bits of the bitmap, updates the BU request message accordingly with the new bitmap, and then sends the message to the upstream Home Anchor. When the

Home Anchor receives the BU req., it checks the highest bits in the bitmap; in this case it recognizes that these bits refer to another Anchor, so it forwards the BU req. to the Enterprise Gateway (home agent). At the same time, the home anchor translates the lowest part of the bitmap to its prefix and adds it to the MAC address to form the new CoA. The Home Anchor then tunnels the packets to the current CoA and the new CoA at the same time. On the other side, the Enterprise Gateway receives the BU req. and translates the highest bits of the bitmap to the equivalent Home Anchor address. Besides, it refreshes the cache table with this new value and starts to tunnel the packets to both the current Home Anchor and the new Home Anchor, but the traffic destined to the new Anchor is not forwarded to the MN until the MN reaches the particular foreign router and obtains a new CoA. At this time the MN sends a BU message to the Enterprise Gateway. As a result, the Home Agent refreshes its cache table to delete the previous Anchor and include the updated one, then tunnels the traffic to the new Home Anchor only. The new home anchor in turn tunnels the packets to the new CoA of the mobile node in the new location.

2. Binding update message format

The main contribution in this work is based on the idea of using the reserved bits in the BU message

shown in Figure 2.

As can be seen in Figure 2, the binding update message has reserved bits starting from bit 5 up to bit 15, so some or all of these bits can be used to implement the mechanism proposed in this paper. For example, if 6 bits are used, the highest 2 bits can be assigned to the Home Anchors, with the possibilities (00, 01, 10, 11), and the lowest 4 bits can be assigned to the access routers (ARs), which can handle 2^4 = 16 routers, from 0000 to 1111. A hedged sketch of the corresponding binding cache lookup follows Figure 2.

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
                                    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                    |          Sequence #           |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |A|H|L|K|M|      Reserved       |           Lifetime            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                                                               |
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 2: Binding update message format [3]
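To make the bitmap and the binding cache table concrete, here is a minimal Oracle-style sketch of the mapping described above; the table and column names are assumptions, and the 6-bit split (highest 2 bits for the Home Anchor, lowest 4 bits for the access router) follows the example just given:

```sql
-- Hypothetical binding cache mapping table, as kept by the Enterprise
-- Gateway and the Home Anchors: it translates a received 6-bit bitmap to
-- its equivalent prefix and tracks the MN's movement.
CREATE TABLE binding_cache (
    home_address VARCHAR2(45) NOT NULL,  -- the MN's HoA
    bitmap_value NUMBER(2)    NOT NULL,  -- 6-bit bitmap, 0..63
    anchor_bits  NUMBER(1)    GENERATED ALWAYS AS
                 (TRUNC(bitmap_value / 16)) VIRTUAL,  -- highest 2 bits
    router_bits  NUMBER(2)    GENERATED ALWAYS AS
                 (MOD(bitmap_value, 16)) VIRTUAL,     -- lowest 4 bits
    care_of_addr VARCHAR2(45)            -- CoA built from the mapped prefix
);

-- Example taken from Figure 3: bitmap 000010 (decimal 2) splits into anchor
-- bits 00 (Home Anchor1) and router bits 0010 (R2, prefix 2407:4020::/64),
-- so the CoA for HoA 2407:3000:200d:312::b6:d2 becomes 2407:4020::b6:d2.
INSERT INTO binding_cache (home_address, bitmap_value, care_of_addr)
VALUES ('2407:3000:200d:312::b6:d2', 2, '2407:4020::b6:d2');
```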

Figure 3 shows the system architecture for the proposed mechanism, here are some abbreviations:

CN: Correspondent Node.

MN: Mobile Node.

BU req.: Binding Update request message.

ACK: Acknowledgement message

HoA: MN’s Home Address.

CoA: MN’s Care of Address in the foreign network.

Based on the architecture, the MN initiates its movement from the home network to Home Anchor-1. As shown in Figure 3, the MN receives the beacon from R2 in Home Anchor-1's coverage area and translates this signal to the bitmap by setting the corresponding bit. The MN then sends a BU request message containing the bitmap to Home Anchor-1. Home Anchor-1 builds its bitmap mapping table by translating the received bitmap to CoA2, then replies with an acknowledgment message to the MN. The MN receives the acknowledgment and sends another BU request message to the Enterprise Gateway (home agent). The Enterprise Gateway receives the BU req., checks the highest bits in the bitmap field and translates them to the equivalent Home Anchor's address. In this architecture, the highest bits refer to Home Anchor-1, so the Home Agent refreshes its bitmap mapping table to add the Home Anchor's address to the table.


Figure 3: MN moves from Home Network to the R2's Network
(The original diagram shows the Enterprise Gateway/Home Agent (2407:3000:200d:312::2/64), Home Anchor1 (2407:4000::2/64) and Home Anchor2 (2407:5000::2/64) above routers R1-R6; the MN, with HoA 2407:3000:200d:312::b6:d2, moves and sends a BU req. with bitmap 000010, Home Anchor1's binding cache maps this bitmap to CoA2 2407:4020::b6:d2 and an acknowledgement is returned, and the forwarded BU lets the Home Agent's cache record Home Anchor1 against the highest bits.)

Assume there is a correspondent node (CN) in the home network vicinity, as shown in Figure 4. The CN sends the packets to the MN's home address (HoA); the Home Agent receives the traffic, checks the binding cache table, finds that the HoA refers to Home Anchor-1, and therefore tunnels the packets to Home Anchor-1. Home Anchor-1 in turn tunnels the packets to CoA2 according to its cache table. Finally the MN receives the packets through R2, as shown in Figure 4.

Figure 5 shows the micro-mobility scenario in the proposed mechanism. The MN notices the increase of the signal received from R3 while it initiates its movement towards R3, so it translates the beacon received from R3 to the equivalent bitmap. Then the MN sends a BU req. message including the bitmap field to Home Anchor-1, which checks the highest bits of the received bitmap; in this case the highest bits refer to the same Home Anchor-1. Therefore Home Anchor-1 translates the received bitmap to the equivalent Care of Address, which is CoA3 (the new CoA), and adds it to Home Anchor-1's binding cache table.


Figure 4: CN sends Packets to the MN in its Foreign Network (R2)
(The original diagram traces four steps: 1. the packet from the CN is destined to the HoA; 2. the Home Agent tunnels the packet to HA1; 3. HA1 tunnels it to CoA2; 4. the MN receives and decapsulates the packet.)

So far, Home Anchor-1 tunnels the packets to both CoA2 and CoA3 at the same time; this provides continuous communication between the CN and the MN, which can still receive the traffic even before sending a BU message to Home Anchor-1 through R3.

Figure 6 shows that when the MN obtains CoA3 from R3, it sends a BU message to Home Anchor-1 through R3; Home Anchor-1 then checks the binding cache table again and refreshes it, excluding CoA2 from the table and tunneling the packets only to CoA3, according to the new address of the MN.
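In terms of the hypothetical binding_cache sketch given after Figure 2, this refresh step might look as follows; during the handover two rows coexist (CoA2 and the probable CoA3), and the confirmed BU removes the stale one:

```sql
-- Hypothetical refresh of Home Anchor-1's cache once CoA3 is confirmed:
-- the old CoA2 row is excluded so packets are tunneled only to CoA3.
DELETE FROM binding_cache
WHERE  home_address = '2407:3000:200d:312::b6:d2'
  AND  care_of_addr = '2407:4020::b6:d2';   -- stale CoA2 entry
```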


Figure 5: MN starts to move to R3
(The original diagram shows the MN initiating its movement toward R3 and sending a BU req. with expected bitmap 000011; Home Anchor1 adds the probable CoA3 2407:4030::b6:d2 to its cache and tunnels the packets to CoA2 and to CoA3 simultaneously, while the MN receives and decapsulates the packets.)

Figure 7 shows the macro mobility between two different home anchors. The MN initiates its movement toward R4; the same procedure explained previously is followed: the MN detects the increase of R4's signal and translates it to the equivalent bitmap.
The MN sends a BU req. message to Home Anchor-1; the difference here is that when the Home Anchor checks the received bitmap, it finds that the highest bits of the bitmap refer to another home anchor. Therefore, Home Anchor-1 forwards the BU req. to the upstream Enterprise Gateway (Home Agent) and at the same time translates the lowest bits to the equivalent CoA (in this case CoA4). Now Home Anchor-1 tunnels the packets to CoA3 and CoA4, as shown in Figure 7. The Enterprise Gateway receives the BU req. from Home Anchor-1 and translates the highest bits to the equivalent home anchor's address, as shown in Figure 7; it adds the address of Home Anchor-2 and then starts to tunnel the packets to Home Anchor-2. Home Anchor-2 still does not have a record in its binding cache table to track the new MN's CoA.


Figure 6: MN obtained CoA3 from R3
(The original diagram shows the MN sending a BU with HoA 2407:3000:200d:312:b6:d2 and CoA3 2407:4030::b6:d2 to Home Anchor1, which refreshes its binding cache table; packets from the CN destined to the HoA are tunneled to HA1 and then only to CoA3, where the MN receives and decapsulates them.)

As shown in Figure 8, the MN acquires CoA4 and sends a BU message to Home Anchor-2 and Home Anchor-1 simultaneously.

Home Anchor-1 updates its table and sends packets to CoA4 only, while at the same time Home Anchor-2 starts to forward the traffic to CoA4. The MN then sends a BU message to the Enterprise Gateway (Home Agent) informing it of CoA4.

The Home Agent receives the BU message, checks the MN's CoA, then excludes Home Anchor-1 from its table and sends the traffic only to Home Anchor-2, which is the next hop to reach CoA4.


Figure 7: MN initiates movement to Home Anchor-2 (R4)


Figure 8: Home Anchor-2 forwards the traffic to CoA4


Figure 9: Updating the Home Agent Table

III. CONCLUSIONS AND FUTURE WORK

In the proposed scenarios, the MN achieves seamless mobility by anticipating its new location and new CoA and taking action as soon as it initiates a movement. On the other hand, the mechanism saves network resources by updating the cache table of the server: the traffic is not multicast to several probable addresses but only to one probable address for each single movement. As a result, real-time (multimedia) applications will experience better service flow.

Future work will study how multimedia applications are affected when the proposed mechanism is applied.

ACKNOWLEDGEMENT

The author would like to thank Mr. Aus S. Matooq for his help in reviewing and commenting on this paper and for his valuable feedback.



AUTHORS BIOGRAPHY

Sahar A. Al-Talib received her BSc. in Electrical and Communication Engineering from

University of Mosul in 1976, her MSc, and PhD in Computer and Communications

Engineering from University Putra Malaysia (UPM), in 2003 and 2006 respectively. She

served at MIMOS Berhad, Technology Park Malaysia, for about 4 years as Staff Researcher

where she enrolled in infrastructure mesh network project (P000091) adopting IEEE

802.16j/e/d wireless technology. She became a lecturer at the Department of Computer and

Information Engineering, Electronics Engineering, University of Mosul, Iraq in 2011. She

was the main inventor of 10 patents and published more than 20 papers in local, regional, and international

journals and conferences. She holds certifications in CISCO-CCNA, WiMAX RF Network Engineer, WiMAX CORE Network Engineer, the IBM "Basic Hardware Training Course" (UK), the Cii-Honeywell Bull "Software Training Course on Time Sharing System and Communication Concept" (France), Six-Sigma white belt, the Harvard Blended Leadership Program by Harvard Business School Publishing (USA), and Protocol Design and Development for Mesh Networks by ComNets (Germany).


EFFICIENCY OPTIMIZATION OF VECTOR-CONTROLLED

INDUCTION MOTOR DRIVE

Hussein Sarhan

Department of Mechatronics Engineering, Faculty of Engineering Technology, Amman, Jordan

ABSTRACT

This paper presents a new approach that minimizes total losses and optimizes the efficiency of a variable-speed, three-phase, squirrel-cage, vector-controlled induction motor drive through optimal control, based on combined loss-model control and search control. The motor power factor is used as the main control variable. An improvement of efficiency is obtained by online adjustment of the stator current according to the value of power factor corresponding to the minimum total losses at a given operating point. The drive system is simulated using Matlab SIMULINK models. A PIC microcontroller is used as the minimum-loss power factor controller. Simulation results show that the proposed approach significantly improves the efficiency and dynamic performance of the drive system under all operating conditions.

KEYWORDS: Efficiency Optimization, Induction Motor Drive, Optimal Power Factor, Simulation, Vector

Control.

I. INTRODUCTION

Squirrel-cage three-phase induction motors (IMs) are the workhorse of industry for variable speed applications in a wide power range. However, the torque and speed control of these motors is difficult because of their nonlinear and complex structure. In general there are two strategies to control these drives: scalar control and vector control. Scalar control varies only the magnitude of the control variable. The stator voltage can be used to control the flux, and the frequency or slip can be adjusted to control the torque. Different schemes for scalar control are used, such as constant V/f ratio, constant slip, and constant air-gap flux control. Scalar-controlled drives have been widely used in industry, but the inherent coupling effect (both torque and flux are functions of stator voltage or current and frequency) gives a sluggish response, and the system is easily prone to instability [1-3]. To improve the

performance of scalar-controlled drives, feedback of the angular rotational speed is used. However, this is expensive and reduces the mechanical robustness of the drive system. Performance analysis of scalar-controlled drives shows that scalar control can produce adequate performance in variable speed drives where precise control is not required. These limitations of scalar control can be overcome by implementing vector (field-oriented) control.

Vector control was introduced in 1972 to realize the characteristics of a separately-excited DC motor in induction motor drives by decoupling the control of torque and flux in the motor. This type of control

is applicable to both induction and synchronous motors. Vector control is widely used in drive

systems requiring high dynamic and static performance. The principle of vector control is to control

independently the two Park components of the motor current, responsible for producing the torque

and flux respectively. In that way, the IM drive operates like a separately-excited DC motor drive

(where the torque and the flux are controlled by two independent orthogonal variables: the armature

and field currents, respectively) [4-8].

Vector control schemes are classified according to how the field angle is acquired. If the field angle is calculated by using stator voltages and currents, Hall sensors or a flux sensing winding, then the scheme is known as direct vector control (DVC). The field angle can also be obtained by using rotor position measurement and partial estimation with only machine parameters, but not any other variables, such


as voltages or currents. Using this field angle leads to a class of control schemes known as indirect vector control (IVC) [1, 9].

From the energy point of view, it is well known that three-phase induction motors, especially the squirrel-cage type, are responsible for most of the energy consumed by electric motors in industry. Therefore, motor energy saving by increasing efficiency has received considerable attention during the last few decades due to the increase in energy cost. Energy saving can be achieved by proper motor selection and design, improvement of power supply parameters and a suitable optimal control technique [3-5, 10, 11].

Induction motor operation under rated conditions is highly efficient. However, in many applications the motor works at variable speed, has more losses and less efficiency, and thus operates far from the rated point. Under these circumstances it is not possible to improve the motor efficiency by motor design or by supply waveform shaping; instead, a suitable control algorithm that minimizes the motor losses must be applied.

Minimum-loss control schemes can be classified into three categories: search methods, loss-model methods and power factor control. The power factor control scheme has the advantage that the controller can be stabilized easily and motor parameter information is not required. However, analytical generation of the optimal power factor commands remains tedious and restrictive, because empirical, trial-and-error methods are generally used [4, 5]. For this reason, search control using digital controllers is preferable.

In this paper, a combined minimum-loss control and search control approach is used to find the power factor corresponding to the minimum losses in the drive system at a specified operating point. A PIC microcontroller is used as an optimal power factor controller to adjust the stator current (voltage) online and achieve the maximum efficiency of the drive system.

II. EFFICIENCY OPTIMIZATION OF VECTOR-CONTROLLED DRIVE SYSTEM

The generalized block diagram of vector-controlled induction motor drive is shown in Figure 1.

Figure 1. Generalized block diagram of vector-controlled induction motor drive.

To perform vector control, the following steps are required [1]:

1. Measurements of motor phase voltages and currents.

2. Transformation of the motor phase voltages and currents to the 2-phase ($\alpha,\beta$) system using the Clarke transformation, according to the following equation (a code sketch of this step and of step 4 follows the list):


$$i_{s\alpha} = k\left(i_{sa} - \frac{1}{2}\,i_{sb} - \frac{1}{2}\,i_{sc}\right), \qquad i_{s\beta} = k\,\frac{\sqrt{3}}{2}\left(i_{sb} - i_{sc}\right) \qquad (1)$$

where:

$i_{sa}$ — actual stator current of motor phase A
$i_{sb}$ — actual stator current of motor phase B
$i_{sc}$ — actual stator current of motor phase C
$k$ — a constant

3. Calculation of rotor flux space vector magnitude and position angle through the components of

rotor flux.

4. Transformation of the stator currents to the $dq$ coordinate system using the Park transformation, according to:

$$i_{sd} = i_{s\alpha}\cos\theta_{field} + i_{s\beta}\sin\theta_{field}, \qquad i_{sq} = -i_{s\alpha}\sin\theta_{field} + i_{s\beta}\cos\theta_{field} \qquad (2)$$

where $\theta_{field}$ is the rotor flux position.

The component $i_{sd}$ is called the direct-axis component (the flux-producing component) and $i_{sq}$ is called the quadrature-axis component (the torque-producing component). They are time-invariant, so flux and torque control with them is easy.

The values of $\sin\theta_{field}$ and $\cos\theta_{field}$ can be calculated by:

$$\sin\theta_{field} = \frac{\psi_{r\beta}}{\psi_{rd}}, \qquad \cos\theta_{field} = \frac{\psi_{r\alpha}}{\psi_{rd}} \qquad (3)$$

where:

$$\psi_{rd} = \sqrt{\psi_{r\alpha}^{2} + \psi_{r\beta}^{2}} \qquad (4)$$

The rotor flux linkages can be expressed as:

$$\psi_{r\alpha} = L_{r} i_{r\alpha} + L_{m} i_{s\alpha}, \qquad \psi_{r\beta} = L_{r} i_{r\beta} + L_{m} i_{s\beta} \qquad (5)$$

5. The torque-producing ($i_{sq}$) and flux-producing ($i_{sd}$) components of the stator current are separately controlled.

6. Calculation of the output stator voltage space vector using the decoupling block.

7. Transformation of the stator voltage space vector back from the $dq$ coordinate system to the 2-phase system fixed to the stator, using the inverse Park transformation:

$$i_{s\alpha} = i_{sd}\cos\theta_{field} - i_{sq}\sin\theta_{field}, \qquad i_{s\beta} = i_{sd}\sin\theta_{field} + i_{sq}\cos\theta_{field} \qquad (6)$$

8. Generation of the output 3-phase voltage using space modulation.
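As referenced in step 2, the following short Java sketch illustrates the Clarke (step 2) and Park (step 4) transformations numerically. It is an illustration only, not the paper's Simulink implementation; the amplitude-invariant scaling constant k = 2/3 is an assumption of this example.

/** Minimal sketch of the Clarke (abc -> alpha-beta, Eq. (1)) and Park
 *  (alpha-beta -> dq, Eq. (2)) transforms. k = 2/3 is an assumed
 *  amplitude-invariant scaling, not taken from the paper. */
public final class VectorControlTransforms {

    static double[] clarke(double ia, double ib, double ic) {
        double k = 2.0 / 3.0;
        double alpha = k * (ia - 0.5 * ib - 0.5 * ic);
        double beta  = k * (Math.sqrt(3.0) / 2.0) * (ib - ic);
        return new double[] { alpha, beta };
    }

    static double[] park(double alpha, double beta, double thetaField) {
        double sin = Math.sin(thetaField), cos = Math.cos(thetaField);
        return new double[] {
            alpha * cos + beta * sin,   // i_sd, the flux-producing component
            -alpha * sin + beta * cos   // i_sq, the torque-producing component
        };
    }

    public static void main(String[] args) {
        double[] ab = clarke(10.0, -4.0, -6.0);                 // example phase currents [A]
        double[] dq = park(ab[0], ab[1], Math.toRadians(30.0)); // example rotor flux angle
        System.out.printf("i_sd = %.3f A, i_sq = %.3f A%n", dq[0], dq[1]);
    }
}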

The developed electromagnetic torque of the motor $T$ can be defined as:

$$T = \frac{3P}{4}\,\frac{L_{m}}{L_{r}}\,\psi_{dr}\, i_{qs} \qquad (7)$$

where $P$ is the number of poles of the motor.

From Equation (7) it is clear that the torque is proportional to the product of the rotor flux linkages and the q-component of the stator current. This resembles the developed torque expression of the DC motor, which is proportional to the product of the field flux linkages and the armature current. If the


rotor flux linkage is maintained constant, then the torque is simply proportional to the torque

producing component of the stator current, as in the case of the separately excited DC machine with

armature current control, where the torque is proportional to the armature current when the field is

constant.

The power factor $p.f.$ is also a function of the developed torque, motor speed and rotor flux, and can be calculated as the ratio of input power to apparent power [3-4]:

$$p.f. = \frac{v_{qs} i_{qs} + v_{ds} i_{ds}}{\sqrt{v_{qs}^{2} + v_{ds}^{2}}\,\sqrt{i_{qs}^{2} + i_{ds}^{2}}} \qquad (8)$$

The power factor can be used as a criterion for efficiency optimization. The optimal power factor corresponding to minimum power losses in the drive system can be found analytically or by a search control method. To avoid tedious analytical calculation of the power factor, search control is implemented in this paper: the stator voltage is incrementally adjusted until the controller detects the minimum total losses at a given operating point. The total power losses $\Delta P$ can be calculated as the difference between the input power $P_{in}$ and the output mechanical power $P_{out}$:

$$\Delta P = P_{in} - P_{out} = \frac{3}{2}\left(v_{qs} i_{qs} + v_{ds} i_{ds}\right) - T\omega \qquad (9)$$
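The search step itself can be summarized by a short sketch. This is an illustration only, not the PIC firmware used in the paper: the quadratic stand-in for the measured total losses, the perturbation step and the stop threshold are all assumptions of this example.

/** Illustrative perturb-and-observe search for the minimum-loss stator voltage.
 *  losses(u) is a convex stand-in for the measured total losses of Eq. (9);
 *  the step size and stop threshold are assumptions of this sketch. */
public final class MinimumLossSearch {

    // Stand-in for the measured total losses at stator voltage u [p.u.].
    static double losses(double u) {
        return 40.0 + 8.0 * (u - 0.7) * (u - 0.7); // minimum near u = 0.7 p.u.
    }

    public static void main(String[] args) {
        double u = 1.0, step = 0.05;               // start at rated voltage [p.u.]
        double p = losses(u);
        while (Math.abs(step) > 1e-4) {
            double pNew = losses(u + step);
            if (pNew < p) { u += step; p = pNew; } // losses still falling: keep going
            else { step = -step / 2.0; }           // overshoot: reverse and shrink step
        }
        System.out.printf("minimum-loss voltage = %.3f p.u., losses = %.2f W%n", u, p);
    }
}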

The block diagram of the proposed efficiency optimization system for the vector-controlled induction motor is shown in Figure 2. The system consists of a 400 V, 4 kW, 1430 rpm, 50 Hz three-phase, squirrel-cage induction motor. The motor is fed from a three-phase AC-to-AC energy converter based on a voltage-controlled, pulse-width-modulated inverter. A PIC 16F877A microcontroller is used as the controller in the system.

Figure 2. Block diagram of vector-controlled optimized system.

To investigate the proposed optimized system, two Matlab SIMULINK models were constructed. The

first model, which is shown in Figure 3, represents the original vector-controlled drive system without

efficiency optimization. The second model, Figure 4, illustrates the efficiency-optimized, vector-

controlled system.


Figure 3. Matlab SIMULINK model of vector–controlled drive system.

Figure 4. Matlab SIMULINK model of efficiency-optimized, vector-controlled drive system.

The PIC microcontroller was linked to the Matlab model by the interface circuit shown in Figure 5.

Figure 5. Block diagram of interface circuit between Matlab model and PIC microcontroller model.


III. SIMULATION RESULTS

To show the effect of the proposed efficiency optimization technique based on power factor optimization, a comparative analysis between the original drive system and the optimized one has been carried out. The investigation was performed under the same operating conditions: frequency (speed) and load torque. Figures 6-8 show the relationship between efficiency and load torque at constant frequency.

Figure 6. Relationship between efficiency and load torque at constant frequency f = 50 Hz.

It is clear that the implemented efficiency optimization technique improves the efficiency of the drive system over the whole frequency range. The improvement is significant at light loads.

Figure 7. Relationship between efficiency and load torque at constant frequency f = 35 Hz.

Figure 8. Relationship between efficiency and load torque at constant frequency f = 15 Hz.


The relationship between efficiency and frequency at constant load torque is presented in Figures 9-10. At light load torques the efficiency improvement is very significant over the whole frequency range.

Figure 9. Relationship between efficiency and frequency at constant load torque T = 12 N·m.

Figure 10. Relationship between power factor and frequency at constant load torque T = 2 N·m.

Figure 11 is an example of the relationship between power factor and frequency at constant load. It is clear that the optimized system has a better power factor at all frequencies.

Figure 11. Relationship between power factor and frequency at constant load torque T = 10 N·m.

It was noticed that the dynamic response of the optimized system is improved: the oscillations in electromagnetic torque and angular speed disappear and the response becomes faster. Examples


of the dynamic response at load torque T = 5 N·m and frequency f = 30 Hz are shown in Figures 12 and 13.

Figure 12. Dynamic response of electromagnetic torque at load torque T = 5 N·m and frequency f = 30 Hz.

Figure 13. Dynamic response of angular speed at load torque T = 5 N·m and frequency f = 30 Hz.


IV. CONCLUSIONS

An online efficiency optimization control technique based on detecting the optimal power factor, which minimizes the total operational power losses in a vector-controlled induction motor drive, is proposed in this paper. The power factor is used as the main control variable and manipulates the stator current so that the motor operates at its minimum-loss point. Simulation results show that the implemented method significantly improves the efficiency and dynamic performance of the drive system, especially when the system operates at light loads and low frequencies.

V. FUTURE WORK

The obtained simulation (theoretical) results should be experimentally tested and validated on industrial drive systems operating at variable speeds with different load types. The results should also be compared with those for a scalar-controlled drive system to confirm that the vector control approach gives better results and is easy to implement.

REFERENCES

[1] Feng-Chieh Lin and Sheng-Ming Yang (2003), “On-line tuning of an efficiency-optimized vector

controlled induction motor drive”, Tamkang Journal of Science and Engineering, Vol. 6, No. 2, pp. 103-110.

[2] C. Thanga Raj, S. P. Srivastava and Pramod Agarwal (2009), “Energy efficient control of three-phase

induction motor- a review”, International Journal of Computer and Electrical Engineering, Vol. 1, No. 1, pp.

1793-8198.

[3] Hussein Sarhan (2011), "Energy efficient control of three-phase induction motor drive", Energy and

Power Engineering, Vol. 3, pp. 107-112.

[4] Hussein Sarhan (2011), "Online energy efficient control of three-phase induction motor drive using

PIC microcontroller", International Review on Modeling and Simulation (I.RE.MO.S), Vol. 4, No. 5, pp. 2278-

2284.

[5] Hussein Sarhan, (2014) "Effect of high-order harmonics on efficiency-optimized three-phase induction

motor drive system performance", International Journal of Enhanced Research in Science Technology and

Engineering, Vol. 3, No. 4, pp. 15-20.

[6] Seena Thomas and Rinu Alice Koshy (2013), “Efficiency optimization with improved transient

performance of indirect vector controlled induction motor drive”, International Journal of Advanced Research

in Electrical, Electronics and Instrumentation Engineering, Vol. 2, Special Issue 1, pp. 374-385.

[7] K. Ranjith Kumar, D. Sakthibala and Dr. S. Palaniswami (2010), “Efficiency optimization of induction

motor drive using soft computing techniques”, International Journal of Computer Applications, Vol. 3, No. 1,

pp. 8875-8887.

[8] Branko D. Blanusa, Branko L. Dokic and Slobodan N. Vukosavic (2009), “Efficiency optimized

control of high performance induction motor drive” Electronics, Vol. 13, No. 2, pp. 8-13.

[9] G. Kohlrusz and D. Fodor (2011), “Comparison of scalar and vector control strategies of induction motors”, Hungarian Journal of Industrial Chemistry, Vol. 39, No. 2, pp. 265-270.

[10] Rateb H. Issa (2013), “Optimal efficiency controller of AC drive system”, International Journal of

Computer Applications, Vol. 62, No. 12, pp. 40-46.

[11] A. Taheri and H. Al-Jallad (2012), “Induction motor efficient optimization control based on neural networks”, International Journal on Technical and Physical Problems of Engineering, Vol. 4, No. 2, pp. 140-144.

AUTHORS

Hussein S. Sarhan was born in Amman, Jordan, in 1952. He received the Master and Ph.D. degrees in Electric Drive and Automation from Moscow Power Engineering Institute, USSR, in 1978 and 1981, respectively. His research areas are induction motor optimization techniques and energy-efficient control of electric drives. Dr. Sarhan is a faculty member/associate professor in the Mechatronics Engineering Department, Faculty of Engineering Technology. He is a member of the Jordanian Engineering Association.


FLEXIBLE DIFFERENTIAL FREQUENCY-TO-VOLTAGE AND

VOLTAGE-TO-FREQUENCY CONVERTERS USING

MONOLITHIC ANALOGUE RECONFIGURABLE DEVICES

Ivailo Milanov Pandiev

Department of Electronics, Technical University of Sofia, Sofia, Bulgaria

ABSTRACT

This paper presents differential frequency-to-voltage (F/V) and voltage-to-frequency (V/F) converters employing a single CMOS Field Programmable Analog Array (FPAA) device. The proposed electronic circuits are based on the charge-balance conversion method. The F/V converter basically consists of an analogue comparator, a differentiator that detects the rising and falling edges of the shaped waveform, a voltage-controlled switch and an output low-pass filter working as an averaging circuit. The V/F conversion circuit includes the same modules. The input signal of the V/F circuit is applied to an integrator (realized with a low-pass filter) through a two-position voltage-controlled switch, and the positive feedback is closed by external electrical connections. The proposed converters can be redesigned during operation, depending on the amplitude, frequency range and noise level of the input signal, without using external components or applying external currents or voltages. The functional elements of the converters are realized by employing the available configurable analogue modules (CAMs) of the FPAA AN231E04 from Anadigm. The converters have a wide-band frequency response and can operate with a single supply voltage of 3.3V. The experimental results show that the linearity error is less than 0.5% in the frequency range of 0 to 20…25kHz with a differential input range of 0 - 3V.

KEYWORDS: Mixed-signal circuits, Charge-balance voltage-to-frequency conversion method, Frequency-to-

voltage converter (FVC), Voltage-to-frequency converter (VFC), FPAA, System prototyping.

I. INTRODUCTION

The voltage-to-frequency converters (VFC) are voltage-controlled oscillators whose frequency is linearly proportional to a control input voltage. VFCs are useful devices for a variety of mixed-signal (analogue and digital) processing systems, such as A/D conversion systems with a microcontroller, temperature meters, long-term integrators with infinite hold, and telemetry systems [1-4]. The dual process of voltage-to-frequency (V/F) conversion is frequency-to-voltage (F/V) conversion. Besides its independent applications in speed indicators and motor speed control systems (tachometers), the FVC paired with the VFC is used for high noise immunity data transmission systems and FSK generation and decoding.

The basic electrical parameters that define the quality of the F/V and V/F converters are the maximum working frequency, the sensitivity and the linearity error. For the commercial monolithic F/V and V/F converters, such as the AD650 (from Analog Devices), LM2907 (from Texas Instruments) and LM331 (from Texas Instruments), the transfer function and the related electrical parameters are formed by the internal active building blocks and a group of external passive components. Moreover, the accuracy of the electrical parameters is largely determined by the manufacturing tolerances and the temperature drift of the values of the external passive components. A variety of F/V and V/F converter prototype circuits for mixed-signal processing are available in the literature [5-15]. Furthermore, some of them realize only V/F conversion or only F/V conversion, while others implement both conversion functions [5]. The majority of the published F/V and V/F converter circuits have a fixed value of the sensitivity, and the possibility of modifying this value is not demonstrated. In many cases the amplification of small signals (such as bio-signals) requires modification of the


sensitivity according to the variation of the amplitude, the bandwidth or the noise level. To solve this problem, Field Programmable Analog Array (FPAA) devices can be used. FPAAs are programmable CMOS analogue integrated circuits based on SC technology, which can be configured not only during the design process of a device, but also during operation. Like FPGAs, the programmable analogue arrays provide cost-efficient and relatively fast design of complex electronic circuits.

The use of these reconfigurable analogue devices provides the F/V and V/F converters with the ability to modify their electrical parameters according to the parameters of the input signals. Furthermore, by using FPAAs a significantly lower noise level can be obtained due to the differential inputs and outputs. Among the different devices and technologies in the market [16-19], Anadigm offers dynamically programmed FPAAs (or dynamically programmed analogue signal processors - dpASPs), in particular the AN231E04, which is a reconfigurable programmable analogue device based on switched-capacitor technology, containing configurable analogue blocks (CABs) surrounded by programmable interconnect resources and analogue input/output cells.

The VFC in [10] is realized using a single FPAA AN221E04 and converts an input voltage from 0 to 3V into square waves within a frequency range from 0 to 7kHz. Thus, the sensitivity of [10] is 2kHz/V and the achieved relative linearity error is approximately 2%. As a flexible differential building block, however, it is desirable to realize the V/F converter in a form compatible with the F/V converter. With this in mind, novel FPAA-based circuits are developed for F/V and V/F conversion.

The organization of the paper is as follows. The structures and principles of operation of the F/V and V/F converters are described in Section II, where the transfer functions and the methods for modification of the transmission coefficients of the proposed circuits are also given. Section III illustrates the experimental tests of the proposed converters, based on the AN231K04-DVLP3 development board [20], which is built around the AN231E04 device. Finally, Section IV discusses the concluding remarks and directions for future work.

Nomenclature

$t_{os}$ — pulse duration, determined by the master clock frequency for all CAMs
$f_{in}$ — frequency of the input signal
$f_c$ — corner frequency of the low-pass filter
$K_2$ — transmission coefficient of the SumFilter1 CAM
$U_{ref}$ — reference voltage
$K_V$ — sensitivity of the F/V converter
$f_{out}$ — frequency of the output signal
$U_{in}$ — input voltage
$t_{int} = 1/f_{out} - t_{os}$ — amount of time required to reach the comparator threshold
$K_f$ — sensitivity of the V/F converter
dc — direct current

II. CIRCUITS DESCRIPTION

2.1. F/V converter

The circuit diagram of the F/V converter is shown in Fig. 1. It is built by selection among the available configurable analogue modules (CAMs) of the AN231E04, shown in Table 1. The differential input signal is applied between terminals 11 and 12 and the average output voltage is obtained between terminals 15 and 16. After shaping of the input signal by Comparator1, the Differentiator1 detects the rising and falling edges of the shaped waveform in synchronism with the non-overlapping phases of the quartz-stabilized master clock signal used for the proper work of the whole FPAA. The


differentiator produces short pulses with duration $t_{os}$, determined by the master clock frequency for all CAMs, and with polarity depending on the direction of change of the comparator's output voltage. When the output of the differentiator is equal to zero, the voltage steering switch GainSwitch1 diverts a differential voltage equal to 0V to the output of the gain stage; this is called the integration period. When the differentiator produces a short pulse with negative value, the switch GainSwitch1 diverts the voltage $K_2 U_{ref}$ ($U_{ref}$ is produced by the voltage source Voltage1 CAM with a fixed value equal to +2V) from the SumFilter1 CAM to the output of the gain stage; this is called the reset period. The output low-pass filter implemented by the FilterBilinear1 CAM works as an averaging circuit ($f_{in} \gg f_c$, where $f_c$ is the corner frequency). As the frequency $f_{in}$ of the input signal increases, the amount of charge injected into the averaging circuit increases proportionally.

The average output voltage $U_{o,av}$, obtained between terminals 17 and 18, which is proportional to the input frequency, is given as

$$U_{o,av} = \frac{1}{T_{in}} \int_{0}^{t_{os}} K_2 U_{ref}\, dt = K_2\, t_{os}\, U_{ref}\, f_{in} \qquad (1)$$

where $T_{in} = 1/f_{in}$ is the period of the input signal.

Therefore, based on formula (1), the sensitivity of the proposed F/V converter is obtained as

$$K_V = K_2\, t_{os}\, U_{ref} \qquad (2)$$

Figure 1. FPAA configuration of F/V converter.

The analysis of equation (2) shows that the change of the F/V converter's sensitivity can be accomplished in two ways. First, for a fixed value of $t_{os}$, for example 20 µs (at a master clock frequency of 50 kHz), modifying the transmission coefficient $K_2$ from 0.1 to 1 varies the sensitivity from 4 mV/kHz to 40 mV/kHz. Second, for a fixed value of $K_2$, for example 0.5, changing the clock frequency from 10 kHz ($t_{os}$ = 100 µs) to 100 kHz ($t_{os}$ = 10 µs) varies the sensitivity from 100 mV/kHz to 10 mV/kHz. Moreover, the change of the coefficients of the CAMs can be carried out while the F/V converter is operating.
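A few lines of Java make the two tuning paths of equation (2) concrete. This is a numerical illustration only; the parameter values are the ones quoted above.

/** Numerical illustration of Eq. (2): K_V = K2 * t_os * U_ref.
 *  1 V/Hz corresponds to 1e6 mV/kHz. */
public final class FvSensitivity {

    static double kV(double k2, double tOs, double uRef) {
        return k2 * tOs * uRef;                        // [V/Hz]
    }

    public static void main(String[] args) {
        double uRef = 2.0;                             // Voltage1 CAM output [V]
        // Path 1: fixed t_os = 20 us (50 kHz clock), K2 swept from 0.1 to 1.
        System.out.printf("K2 = 0.1 -> %.0f mV/kHz%n", kV(0.1, 20e-6, uRef) * 1e6);      // 4
        System.out.printf("K2 = 1.0 -> %.0f mV/kHz%n", kV(1.0, 20e-6, uRef) * 1e6);      // 40
        // Path 2: fixed K2 = 0.5, clock swept from 10 kHz to 100 kHz.
        System.out.printf("t_os = 100 us -> %.0f mV/kHz%n", kV(0.5, 100e-6, uRef) * 1e6); // 100
        System.out.printf("t_os = 10 us  -> %.0f mV/kHz%n", kV(0.5, 10e-6, uRef) * 1e6);  // 10
    }
}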


Table 1. FPAA CAMs for F/V and V/F conversion.

SumFilter1
  Options: Output changes on: Phase 1; Input 1: non-inverting; Input 2: inverting
  Parameters: Corner frequency [kHz]: 1; Gain 1 (upper input): K1 = 1; Gain 2 (lower input): K2
  Clocks: Clock A: 50kHz (Clock 3)

Voltage1
  Options: Polarity: Positive (+2V)
  Parameters: -
  Clocks: -

GainSwitch1
  Options: Compare control to: Signal ground; Select input 1 when: Control low; Comparator sampling phase: Phase 2; Gain stage: half cycle
  Parameters: Gain 1 (upper input): 1.00; Gain 2 (lower input): 1.00
  Clocks: Clock A: 50kHz (Clock 3)

Comparator1
  Options: Compare to: Dual input; Input sampling: Phase 1; Output polarity: Non-inverted; Hysteresis: 0mV
  Parameters: -
  Clocks: Clock A: 50kHz (Clock 3)

Differentiator1
  Options: -
  Parameters: Differentiation constant [µs]: 1
  Clocks: Clock A: 50kHz (Clock 3)

FilterBilinear1
  Options: Filter type: Low pass; Input sampling phase: Phase 2; Polarity: Non-inverted
  Parameters: Corner frequency [kHz]: 0.055; Gain: 1.00; Cint1 = Cint2 = Cint = 7.9pF
  Clocks: Clock A: 50kHz (Clock 3)

Comparator2
  Options: Compare to: Signal ground; Input sampling: Phase 2; Output polarity: Non-inverted; Hysteresis: 0mV
  Parameters: -
  Clocks: Clock A: 50kHz (Clock 3)

The stability of $K_V$ in equation (2) is determined by the stability of the master clock frequency, which is derived from the crystal oscillator, and by the accuracy of the parameters of the CAMs, whose values are derived from the same clock frequency. Therefore, varying the sensitivity $K_V$ of the proposed F/V converter will not change the nonlinearity error.

For normal operation of the F/V converter, the inverting input (terminals 23 and 24) of the comparator and the input (terminals 01 and 02) of the SumFilter1 are connected to the voltage mid-rail (VMR), equal to 1.5 V.

2.2. V/F converter

The circuit diagram of the V/F converter is shown in Fig. 2. As can be seen, the implementation of the V/F converter reuses the F/V converter shown in Fig. 1. For this purpose, the input voltage $U_{in}$ is applied between terminals 01 and 02, terminals 11 and 12 are connected to the VMR, and the voltage between terminals 17 and 18 is applied directly to terminals 24 and 23. This closes the positive feedback between the output of the averaging circuit and the inverting input of Comparator1. The output differential signal with frequency $f_{out}$ is formed by Comparator2 and can be obtained between terminals 19 and 20.

When the circuit operates as a V/F converter, the transformation from voltage to frequency is based on a comparison of the input voltage $U_{in}$ with the internal voltage equal to 2V, implemented by the Voltage1 CAM.

The principle of operation of the circuit shown in Fig. 2 can be described as follows. At the beginning of a cycle, the input voltage is applied to the input of the FilterBilinear1 and its output voltage decreases. This is the integration period. When the FilterBilinear1 output voltage (terminals 17 and 18) crosses signal ground, Comparator1 triggers a one-shot whose time period $t_{os}$ is determined by the clock frequency. During this period, a voltage of ($U_{in} - K_2 U_{ref}$) is applied to the input of the FilterBilinear1 and the output voltage starts to increase. This is the reset period of the circuit. After the reset period has ended, the circuit begins another integration period and starts ramping downward again. The amount of time required to reach the comparator threshold is given as


$$t_{int} = \frac{U_{17\text{-}18}}{dU_{17\text{-}18}/dt} = \frac{C_{int}\, t_{os}\left(K_2 U_{ref} - U_{in}\right)}{C_{int}\, U_{in}} = \frac{t_{os}\left(K_2 U_{ref} - U_{in}\right)}{U_{in}} \qquad (3)$$

where $C_{int}$ = 7.9pF is the internal capacitor of the FilterBilinear1.

The output frequency can be found as

$$f_{out} = \frac{1}{t_{os} + t_{int}} = \frac{U_{in}}{t_{os}\, K_2\, U_{ref}} \qquad (4)$$

The integration capacitor $C_{int}$ has no effect on the transfer function; it merely determines the amplitude of the sawtooth signal at the output of the averaging circuit.

Figure 2. FPAA configuration of V/F converter.

From (4), the instability of the output frequency is found to be

$$\frac{\Delta f_{out}}{f_{out}} = \frac{\Delta U_{in}}{U_{in}} - \frac{\Delta t_{os}}{t_{os}} - \frac{\Delta K_2}{K_2} - \frac{\Delta U_{ref}}{U_{ref}} \qquad (5)$$

The first term in formula (5) is the regulating effect; the second, third and fourth terms reflect disturbing influences in the circuit. The output frequency is linearly dependent on the input voltage. The transmission coefficient $K_2$ is interrelated with the corner frequency value of the CAM and the ratio of the two switched capacitors with values $C_{in} = C_{out}$ = 7.9pF. The voltage $U_{ref}$ is produced by a dc voltage source connected to the on-chip voltage references. There are no capacitors or dynamic switches, so the source produces a dc output that is continuous and valid during both phases of the clock signals [20].

According to formula (4), the sensitivity of the proposed V/F converter is governed by the following equation:

$$K_f = \frac{1}{t_{os}\, K_2\, U_{ref}} \qquad (6)$$

Similar to equation (2), the analysis of equation (6) shows that the variation of the sensitivity can be achieved by changing the value of $K_2$ or by changing the master clock frequency of the circuit.
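For a quick numerical check of equations (4) and (6), the following Java sketch (an illustration only; the parameter values are those used in the text) reproduces the sensitivities and one measured operating point reported in Section III.

/** Numerical check of Eq. (4) and Eq. (6):
 *  K_f = 1 / (t_os * K2 * U_ref), f_out = K_f * U_in. */
public final class VfConverterModel {

    static double kf(double tOs, double k2, double uRef) {
        return 1.0 / (tOs * k2 * uRef);                // [Hz/V]
    }

    public static void main(String[] args) {
        double tOs = 20e-6, uRef = 2.0;                // 50 kHz clock, Voltage1 = +2 V
        System.out.printf("K2 = 1.0 -> K_f = %.0f kHz/V%n", kf(tOs, 1.0, uRef) / 1e3); // 25
        System.out.printf("K2 = 0.5 -> K_f = %.0f kHz/V%n", kf(tOs, 0.5, uRef) / 1e3); // 50
        double fOut = kf(tOs, 1.0, uRef) * 0.4;        // Eq. (4) at U_in = 400 mV
        System.out.printf("U_in = 400 mV -> f_out = %.0f kHz%n", fOut / 1e3);          // 10
    }
}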

III. EXPERIMENTAL RESULTS AND DISCUSSION

The experimental tests used for validation of the proposed converters are based on the AN231K04-DVLP3 development board [20], which is built around the AN231E04 device. The power supply voltage of the AN231E04 is +3.3V.


The measured average output voltages of the prototype F/V converter are plotted in Fig. 3 against the frequency of the input signal at two values of the conversion sensitivity. The conversion sensitivities $K_{V1}$ and $K_{V2}$ are obtained for gain $K_2$ equal to 0.5 and 1, respectively. The clock frequency of the circuit is chosen to be 50kHz. For a frequency change from 0 to 25kHz, the average output voltage varies from 0 to 0.5V (or 1V). In this operating range an error of less than 0.5% is achieved. In comparison with the F/V converters presented in [5] and [7], the proposed converter has a higher operating frequency and greater sensitivity. The F/V converters reported in [6] and [8] have a wider bandwidth (> 1MHz) and good linearity, but their sensitivity is fixed, and modification of the electrical parameters requires redesigning the electronic circuits. Also, the F/V converter presented in [6] can use only digital input signals, while the created F/V converter can operate with both analogue and digital signals.

Figure 3. Average output voltage versus the frequency of the input signal at two values of the conversion sensitivity ($K_{V1}$ = 20 mV/kHz, $K_{V2}$ = 40 mV/kHz), obtained by the F/V converter.

The oscillation frequency $f_{out}$ of the prototype V/F converter against the input voltage is plotted in Fig. 4. The gain $K_2$ is set to 0.5 and 1, respectively. The clock frequency of the circuit is set to 50kHz. When the input voltage changes from 0 to 0.4V (or 0.8V), the oscillation frequency varies from 0 to 20kHz; the linearity error is again not higher than 0.5%. Thus, a figure of merit (FOM = sensitivity (MHz) / error (%)) equal to 0.1 is obtained. The result is in good agreement with the values calculated by (4). Figs. 5 and 6 present the measured output waveform at input voltages equal to 20 mV and 400 mV, respectively. The clock frequency is set to 50 kHz and the sensitivity is $K_f$ = 25 kHz/V. The output frequencies at these input voltages are 500.572Hz and 10.0013kHz, respectively. For both waveforms, the signal from the inverting output (terminal 19 of the FPAA) is applied to channel 2 (CH2) and the signal from the non-inverting output (terminal 20 of the FPAA) is applied to channel 1 (CH1). The differential output signal (M) is the difference of the voltages applied to the two channels. Notably, as the input voltage varies the output frequency varies proportionally, while the pulse duration is unaffected ($t_{os}$ = 20 µs), and the error is less than 0.2%.


Figure 4. The oscillation frequency versus the input voltage at two values of the conversion sensitivity ($K_{f1}$ = 25 kHz/V, $K_{f2}$ = 50 kHz/V), obtained by the V/F converter.

Figure 5. The output waveform of the V/F converter at $K_f$ = 25 kHz/V and $U_{in}$ = 20 mV.

Figure 6. The output waveform of the V/F converter at $K_f$ = 25 kHz/V and $U_{in}$ = 400 mV.


Table 2. Comparison with prior designs of V/F converters.

| Parameter | This work | [5] | [9] | [10] | [11] | [13] | [14] | [15] |
| Technology | CMOS | CMOS | CMOS | CMOS | N/A | CMOS | CMOS | CMOS |
| Max. f_out | 20kHz | 20kHz | 8kHz | 7kHz | 3.5kHz | 52.95MHz | 260kHz | 100MHz |
| U_in | 0-3V | 0-1V | 0.1-10V | 0-3V | 0-5V | 0-0.9V | 0-2V | 0-10V in 100mV steps |
| Sensitivity | 25…250kHz/V | 1kHz/V | 0.8kHz/V | 2kHz/V | 0.7kHz/V | 58MHz/V | 130kHz/V | - |
| Error | 0.5% | N/A | 0.02% | 1.95% | < 1% | < 8.5% | < 7% | within ±5ppm |
| FOM | 0.05…0.5 | N/A | 0.04 | 0.00102 | 0.0007 | 6.28 | 0.019 | - |
| Power at max f_out | 65mW | N/A | N/A | 95mW | N/A | 0.218mW | N/A | N/A |
| Year | 2014 | 1986 | 1997 | 2004 | 2005 | 2007 | 2010 | 2013 |

The measured results, compared with several prior works, are summarized in Table 2. In comparison with the V/F converter presented in [9], the proposed V/F converter has a higher error (0.5%), but its maximum frequency is higher and a significantly greater sensitivity can be obtained. The V/F converter presented in [13] is characterized by higher operating frequency and sensitivity, but its error takes a relatively large value (8.5%), which may affect the conversion of weak signals from various sensors. Obviously, the errors of the converters reported in [5], [10], [11] and [14] are higher than that of the proposed V/F converter, while the input voltage range for most circuits is from 0 to several volts. For the V/F converter reported in [15] the operating frequency bandwidth is higher and the error is relatively small (within ±5ppm), but the sensitivity is fixed, and with a step of 100mV only 100 discrete values are obtained for the frequency of the output signal in the range up to 100MHz.

IV. CONCLUSION AND FUTURE WORK

In this paper differential F/V and V/F converters using a charge-balance method were proposed. The selected FPAA is an Anadigm AN231E04, in which the mixed-signal processing is implemented. For the created V/F converter a linearity error not exceeding 0.5% was achieved while the sensitivity changed from 25kHz/V to 250kHz/V. In comparison with prior works, the variation of the sensitivity depending on the parameters of the input signal can be done by changing the parameters of the configurable analogue blocks, without changing the structure of the circuit or adding external passive components. The stability of the sensitivity is determined by the stability of the master clock frequency, which is derived from the external quartz crystal oscillator, and by the accuracy of the parameters of the configurable blocks, whose values are derived from the same clock frequency. Therefore, these converters can be used as flexible mixed-signal building blocks for extracting weak differential signals from transducers and other signal sources.

Future work will focus on implementation of the proposed F/V and V/F converters in programmable systems-on-chip (PSoCs), which combine analogue and digital functions. Furthermore, the development of the proposed electronic circuits will be oriented toward modelling and design of PLL-based frequency synthesizers and synchronizers.

REFERENCES

[1]. Kester, W., & Bryant J. (2004). Analog-digital conversion, USA Analog Devices.

[2]. Paiva, M. (2002) “Voltage-to-frequency/frequency-to-voltage converter (AN795)”. Retrieved from

Microchip website: http://ww1.microchip.com/downloads/ en/appnotes/00795a.pdf.

[3]. Bryant J. (1989) “Voltage-to-frequency converters (AN-361)”. Retrieved from Analog Devices

website: http://www.analog.com/static/imported-files/application_notes/84860375AN361.pdf


[4]. Klonowski, P. (1990) “Analog-to-digital conversion using voltage-to-frequency converters (AN-276)”.

Retrieved from Analog Devices website: http://www.analog.com/static/imported-

files/application_notes/185321581AN276.pdf.

[5]. Matsumoto, H., & Fujiwara, K. (1986) “Switched-capacitor frequency-to-voltage and voltage-to-frequency converters”, IEEE Transactions on Circuits and Systems, Vol. 33 (8), pp 836–838.

[6]. Djemouai, A., Sawan, M., & Slamani, M. (2001) “New frequency-locked loop based on CMOS

frequency-to-voltage converter: design and implementation”, IEEE Transactions on Circuits and

Systems II: Analog and Digital Signal Processing, Vol. 48 (5), pp 441–449.

[7]. Gupta, K., Bhardwaj, M., & Singh, B. (2012) “Design and analysis of CMOS frequency to voltage

converter using 0.35μm technology”, International Journal of Scientific & Engineering Research, Vol.

3 (5), pp 1–9. Retrieved from www.ijser.org/researchpaper%5CDesign-and-Analysis-of-CMOS-

Frequency- to-Voltage-Converter.pdf.

[8]. Singh, M., & Sahu P.P. (2013) “A wideband linear sinusoidal frequency to voltage converter with fast

response time”, Procedia Engineering, Vol. 64, pp 26-35.

[9]. Trofimenkoff, F.N., Sabouri, F., Jichang, Qin & Haslett, J.W. (1997) “A square-rooting voltage-to-

frequency converter”, IEEE Transactions on Instrumentation and Measurement, Vol. 46 (5), pp 1208–

1211.

[10]. Yakimov, P., Manolov, E., & Hristov, M. (2004) “Design and implementation of a V-f converter using

FPAA”, 27th International Spring Seminar on Electronics Technology: Meeting the Challenges of

Electronics Technology Progress, Vol. 1, pp 126-129.

[11]. Stork, M. (2005) “Sigma-delta voltage to frequency converter with phase modulation possibility”, Turkish Journal of Electrical Engineering & Computer Sciences, Vol. 13 (1), pp 61–67.

[12]. Wang, C.-C, Lee, T.-Je, Li C.-C. & Hu, R. (2006) “An all-MOS high-linearity voltage-to-frequency

converter chip with 520-kHz/V sensitivity”, IEEE Transactions on Circuits and Systems II: Express

Briefs, Vol. 53 (8), pp 744–747.

[13]. Wang, C.-C, Lee, T.-Je, Li C.-C. & Hu, R. (2007) “Voltage-to-frequency converter with high

sensitivity using all-MOS voltage window comparator”, Microelectronics Journal, Vol. 38 (2), pp

197–202.

[14]. Valero, M.R., Aldea, C., Celma, S. Calvo, B., & Medrano, N. (2010) “A CMOS differential voltage-to-

frequency converter with temperature drift compensation”, 2010 European Workshop on Smart

Objects: Systems, Technologies and Applications (RFID Sys Tech), Ciudad, Spain.

[15]. Hino, R., Clement, J. M., & Fajardo, P. (2013) “A 100MHz voltage to frequency converter”, Journal of

Physics: Conference Series, Vol. 425 (21), pp 1-4.

[16]. Lattice ispPAC-POWR1014/A (2012). Retrieved from http://www.latticesemi.com/

documents/DS1018.pdf.

[17]. Cypress PSoC® Mixed-Signal Array (2012). Retrieved from http://www.cypress.com

[18]. Zetex TRAC-S2 (2012), Retrieved from http://www.datasheetcatalog.org/datasheets/

208/306841_DS.pdf.

[19]. Anadigm dpASP (2013). Retrieved from http://www.anadigm.com/dpasp.asp

[20]. AN231K04-DVLP3 – AnadigmApex Development Board (2013). Retrieved from

http://www.anadigm.com/_doc/UM231000-K001.pdf.

AUTHOR

Ivailo Pandiev received his M.Sc and Ph.D. degrees in Electronic Engineering from

Technical University of Sofia (TU-Sofia), Bulgaria, in 1996 and 2000, respectively. He is

currently an associate professor in the Department of Electronics at the TU-Sofia. His

research interests include analysis and design of analogue and mixed-signal circuits and

systems.


A REVIEW: SEARCH BITMAP IMAGE FOR SUB IMAGE AND THE PADDING PROBLEM

Omeed Kamal Khorsheed

School of Engineering, Dept. of Software Engineering, Koya University, Erbil, Iraq

ABSTRACT

In this paper we discuss an algorithm for searching a Bitmap image for a sub image or duplicate images. It is difficult for people to find or recognize sub images inside a large image. This algorithm is part of an image processing technique that helps people solve the search problem. The algorithm can convert the image to a binary matrix and size the reading buffer individually according to the padding at the row edge. It is the first step toward creating an image search engine that can find a matching image without looking at the image name text or the alternative text tag.

KEYWORDS: Sub Image, Image Search Engine, Image Processing, Bitmap, Padding, algorithm, Duplicate,

Matrix, Bitmap file header, Java, Looping, Coordinate.

I. INTRODUCTION

After the huge evolution in the multimedia world, image search and image manipulation processes must grow to be powerful and effective, especially for image-sharing web sites. Millions of images are published on the web every day. How can we make searching for an image, or a part of it, as easy as searching text in Google?

Images are not usually named with convenient text, and the alternative tag is often empty, so images need to be searched by their pixels.

It is now important to compare images with images or sub images and to locate the images that contain the pattern we are looking for. The pattern can be a picture of anything: a number, a character, a person, a building, or just a duplicate of the same image; the search operation retrieves the matching images. This retrieval is called sub image searching.

In this paper we propose an algorithm for searching for a sub image inside a large image and locating the indexes of all matching patterns. This algorithm can handle any image search for the 24-bit Bitmap image format, and we will discuss the padding problem in the Bitmap file. The search process needs big loops four levels deep, and the inner condition that finds any difference in a pixel must reset the looping and exit from the inner loop to the main loop, which starts with a new line [7].

II. SURVEY OF BITMAP IMAGE

The Bitmap format is very strictly bound to the hardware platform architecture. Bitmap was created by Microsoft and IBM. The Bitmap file format is compatible with the Intel file format, which is called the little-endian format [1, 2, 3, 5, 8] because of the byte order that an Intel processor uses internally to store values. A Bitmap file has four types of headers; those types are differentiated by the size member, which is the first dword in each of the structures:

1. BITMAP CORE HEADER
2. BITMAP INFO HEADER
3. BITMAP V4 HEADER
4. BITMAP V5 HEADER


In the “BITMAP INFO HEADER” there is a color table that describes how pixel values correspond to RGB color values; RGB is a model for describing colors that are produced by emitting light [4]. Bitmap pixels are stored as binary integers in a defined channel order; this order can be (red, green, blue), but in the 24-bit format the ordering is reversed (blue, green, red) [5], so each image format has a different bit order, known as the color mask. Bitmap pixel rows are stored upside down (bottom to top), which means the first line in the file is the bottom line of the image and the last line in the file is the top line of the image. [6]
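To make these two storage quirks concrete (bottom-up row order and blue-green-red byte order), the following Java sketch reads one pixel from the raw pixel data of a 24-bit Bitmap. The class and method names, and the synthetic test image, are ours, not part of any Bitmap API.

/** Illustrative sketch: reading a pixel from the raw data area of a 24-bit
 *  Bitmap, honouring bottom-up row order and blue-green-red byte order. */
public final class BmpPixelReader {

    static int[] rgbAt(byte[] data, int x, int y, int height, int stride) {
        int fileRow = height - 1 - y;              // rows are stored bottom-up
        int off = fileRow * stride + x * 3;        // 3 bytes per 24-bit pixel
        return new int[] {
            data[off + 2] & 0xFF,                  // red   (stored last)
            data[off + 1] & 0xFF,                  // green
            data[off]     & 0xFF                   // blue  (stored first)
        };
    }

    public static void main(String[] args) {
        // Synthetic 1x2 image, stride 4 (3 pixel bytes + 1 padding byte),
        // stored bottom-up: file row 0 is the bottom (red), file row 1 the top (blue).
        byte[] data = { 0, 0, (byte) 255, 0,
                        (byte) 255, 0, 0, 0 };
        int[] top = rgbAt(data, 0, 0, 2, 4);
        System.out.printf("top pixel RGB = (%d, %d, %d)%n", top[0], top[1], top[2]); // (0, 0, 255)
    }
}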

2.2. Most Popular Bitmap Image Formats:

An 8-bit image uses 1 byte per pixel for color information.
A 16-bit image uses 2 bytes per pixel for color information.
A 24-bit image uses 3 bytes per pixel for color information.
A 32-bit image uses 4 bytes per pixel for color information.

2.3. The 24-bit Bitmap Image Padding Problem:

The Bitmap BITMAPINFOHEADER variant has two parts (a 14-byte file header and a 40-byte information header). These two parts contain information about the data stored in the file and the order of that data. If we look at the 24-bit Bitmap header, we find a description of the image size, row length (width), column length (height) and the RGB (Red/Green/Blue) color information values (pixels) used in the file to present the image details; RGB needs 3 bytes per pixel in the 24-bit Bitmap image. [1, 2, 3, 5, 6, 8]

Each pixel is represented by an RGB triple. Depending on the width of the image, a pad may be used at the end of each row. If the row length in bytes is a multiple of 4, no pad is required; otherwise, at the end of each line we need to add enough bytes to make the row length divisible by 4. These padding bytes are a gap with the value zero.

When the OS loads an image into memory there is a scan operation, and each scan line is padded out to an even 4-byte boundary, even though each pixel occupies an odd 3 bytes of RGB. That means every line in the 24-bit image file has a point that marks the end of the line, called the pad, which makes the stored row a multiple of 4 bytes. We can calculate the scan line length by counting the 3-byte RGB triples according to this equation:

Scan row length = scan row width * 3

Depending on this equation the result may not be divisible by 4, while the boundary must be; this means the row can have a gap between the row length and the padded row length. This gap is known as the padding problem. We can get the image size, width and height from the Bitmap header.

2.4. Example for padding:

Suppose we have a bitmap image with a depth of 24 bits whose area (X * Y) is 15 * 15. We can calculate the color pixel information as (15 * 3 = 45 bytes) in each row. 45 bytes is the length of every row, so now we have to check whether the row needs padding. Since a 24-bit image uses an odd number of color bytes (3) per pixel, we find the modulo of the row length by 4, like this: (45 MOD 4 = 1). If the modulo result were zero, the image would need no padding and there would be no gap. In our example the modulo gives the result 1, so now we have to find out how many bytes of padding this image needs.

Image_row_length = image_width * 3;
Image_row_length = 15 * 3 = 45;
Padding = 4 - (Image_row_length % 4);
Padding = 4 - 1;
Padding = 3;
Image_real_Row_Length = Image_row_length + Padding;
Image_real_Row_Length = 45 + 3 = 48;

Figure 1: 24-bit Image Row with Padding Gap


The padding result tells us that we need 3 bytes to fill the gap between the image row color information and the end of each row; those gap bytes have the value zero [9]. So we can find the image length with this equation (note that Image_row_length already includes the 3 bytes per pixel, so no further factor of 3 is needed):

Image_Length = (Image_row_length + Padding) * Image_column_length

For our example this gives (45 + 3) * 15 = 720 bytes of pixel data.

2.5. Another Example:

In this example we consider a 24-bit bitmap image with an area of 12 * 12. When we calculate the color pixel information it will be (12 * 3 = 36 bytes) in each row. 36 bytes is the length of each row, so now we check whether the row needs padding, like this: (36 MOD 4 = 0). The modulo result is zero, which means the padding is zero and there is no gap between the image row color information and the end of the row [9].

Image_real_Row_Length = Image_row_length + 0.

Image_real_Row_Length = 36 + 0 = 36.

Figure 2: 24-bit Image Row with No Padding Gap

The Image Length:

Image_Length = Image_row_length * Image_column_length = 36 * 12 = 432 bytes

2.6. Algorithm to Solve the 24-bit Padding Problem:

// Find the real length of a 24-bit image, including padding
Image_row_length = image_width * 3;
if (Image_row_length % 4 != 0) {
    Padding = 4 - (Image_row_length % 4);
}
else {
    Padding = 0;
}
Image_real_Length = (Image_row_length + Padding) * Image_column_length;
// End

Having explained the padding problem for the 24-bit image, we can suggest a way to calculate the padding for many different bitmap image formats, such as 8-bit, 16-bit, 24-bit and 32-bit, depending on the bit count:

2.7. Algorithm to solve 8-bit, 16-bit, 24-bit and 32-bit Padding:

Image_row_length = image_width * (image_Bit_Count / 8);
if (Image_row_length % 4 != 0) {
    Padding = 4 - (Image_row_length % 4);
}
else {
    Padding = 0;
}
Image_real_Length = (Image_row_length + Padding) * Image_column_length;
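The algorithm above can be wrapped in a small self-contained Java helper. The following is a minimal sketch; the class and method names are ours for illustration and are not part of the paper's implementation:

// Illustrative helper for the generalized padding algorithm above.
public class BitmapPadding {

    // Number of zero-valued pad bytes appended to each scan line.
    static int padding(int imageWidth, int bitCount) {
        int rowLength = imageWidth * (bitCount / 8); // pixel bytes per row
        int remainder = rowLength % 4;
        return (remainder != 0) ? 4 - remainder : 0; // pad up to a 4-byte boundary
    }

    // Total size in bytes of the pixel data, including padding.
    static int realLength(int imageWidth, int imageHeight, int bitCount) {
        int rowLength = imageWidth * (bitCount / 8);
        return (rowLength + padding(imageWidth, bitCount)) * imageHeight;
    }

    public static void main(String[] args) {
        // The 15 * 15, 24-bit example from section 2.4: padding 3, real length 720.
        System.out.println(padding(15, 24));        // prints 3
        System.out.println(realLength(15, 15, 24)); // prints 720
    }
}

Running the main method reproduces the numbers of the worked example in section 2.4.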

III. IMPLEMENTATION

In this section we describe the implementation details of our algorithm. The algorithm has two classes: a SubImageSearch class and a ConvertImageToMatrix class. The SubImageSearch class implements the search operation to find the matching indexes; the ConvertImageToMatrix class has the responsibility of reading the bitmap image file header and determining the padding and file buffer size. After that, it converts the image RGB color data into a matrix of integers.

3.1. Test Media:

We have an image of the numbers from 1 to 20; this image is a 24-bit bitmap image.


Figure 3: LargeImageNumbers.bmp

The sub-image can be any single number image; let us take the number five as an example.

Figure 4: SubImageN5.bmp

3.2 Java Classes:

3.2.1 ConvertImageToMatrix Class:

Our first step is converting the images into matrices of integers. In order to create the matrix we must read the image header to get the file size, image width, image height and bit count; after that we have to check whether it is a 24-bit image or not [9]. There is Java code designed to read the image header and get the bitmap details [9]; this code can be found at:

http://www.javaworld.com/javaworld/javatips/jw-javatip43.html

3.2.1.2 Read Bitmap Header:

In order to get the bitmap details we have to read the two parts of the bitmap header: the BITMAPFILEHEADER (14 bytes) and the BITMAPINFOHEADER (40 bytes). All the data in the header is stored in binary, which means we have to use the binary operators {|, &, <<} [6, 7, 9].

Get File Size:

try {
    FileInputStream fs = new FileInputStream(imageFile);
    int bflen = 14; // 14-byte BITMAPFILEHEADER
    byte bf[] = new byte[bflen];
    fs.read(bf, 0, bflen); // Read the file header
    // The file size is stored little endian in bytes 2..5 of the file header
    int nsize = (((int)bf[5] & 0xff) << 24)
              | (((int)bf[4] & 0xff) << 16)
              | (((int)bf[3] & 0xff) << 8)
              | ((int)bf[2] & 0xff);
}
catch (Exception e) {
    System.out.println(e.getMessage());
}

Get image width, height and bit count:

try {
    // fs is the same stream as above, already positioned just past the
    // 14-byte file header, so bi[0] is the first byte of the info header
    int bilen = 40; // 40-byte BITMAPINFOHEADER
    byte bi[] = new byte[bilen];
    fs.read(bi, 0, bilen); // Read the bitmap info header
    int nbisize = (((int)bi[3] & 0xff) << 24)
                | (((int)bi[2] & 0xff) << 16)
                | (((int)bi[1] & 0xff) << 8)
                | ((int)bi[0] & 0xff);
    int nwidth = (((int)bi[7] & 0xff) << 24)
               | (((int)bi[6] & 0xff) << 16)
               | (((int)bi[5] & 0xff) << 8)
               | ((int)bi[4] & 0xff);
    int nheight = (((int)bi[11] & 0xff) << 24)
                | (((int)bi[10] & 0xff) << 16)
                | (((int)bi[9] & 0xff) << 8)
                | ((int)bi[8] & 0xff);
    int nbitcount = (((int)bi[15] & 0xff) << 8) | ((int)bi[14] & 0xff);
    int nsizeimage = (((int)bi[23] & 0xff) << 24)
                   | (((int)bi[22] & 0xff) << 16)
                   | (((int)bi[21] & 0xff) << 8)
                   | ((int)bi[20] & 0xff);
    if (nbitcount != 24) {
        System.err.println("Use only 24bit color .bmp files");
        System.exit(1);
    }
}
catch (Exception e) {
    System.out.println(e.getMessage());
}

In this step we check whether the image is 24-bit; if not, the code prints an error message and exits.

3.2.1.3 Handling the Padding Problem and Buffer Size:

After we get the image details, we must handle the padding problem in order to set the buffer reader size:

int padding = (nwidth * 3) % 4;
int npad = 0;
if (padding != 0) {
    npad = 4 - ((nwidth * 3) % 4); // the gap size in bytes
}
else {
    npad = 0; // no gap
}
//----- Set the buffer size: each row holds nwidth * 3 pixel bytes plus npad pad bytes
int imageBufferSize = (nwidth * 3 + npad) * nheight;

For the 15 * 15 example of section 2.4 this gives (45 + 3) * 15 = 720 bytes, matching the equation of section 2.6.

3.2.1.4 Reading the Image Data and Storing It in a Matrix:

Now we can store the image data into the imageFileRGBData matrix, using the buffer reader size imageBufferSize. The declaration int[][] imageFileRGBData = null; defines the matrix representing the image pixels.

try {
    int ndata[] = new int[nheight * nwidth];
    this.imageFileRGBData = new int[nheight][nwidth]; // [line][column]
    byte brgb[] = new byte[imageBufferSize];
    fs.read(brgb, 0, imageBufferSize); // Read the bitmap pixel data
    int nindex = 0; // current position in brgb
    for (int j = 0; j < nheight; j++) { // by lines, from bottom to top
        for (int i = 0; i < nwidth; i++) { // by columns
            // Pixels are stored as (blue, green, red); repack as 0xAARRGGBB
            int rgbValue = (255 & 0xff) << 24
                         | (((int)brgb[nindex + 2] & 0xff) << 16)
                         | (((int)brgb[nindex + 1] & 0xff) << 8)
                         | ((int)brgb[nindex] & 0xff);
            ndata[nwidth * (nheight - j - 1) + i] = rgbValue;
            this.imageFileRGBData[nheight - j - 1][i] = rgbValue;
            nindex += 3;
        }
        nindex += npad; // skip the zero-valued pad bytes at the end of each row
    }
}
catch (Exception ee) {
    System.out.println(ee.getMessage());
}

The conversion step for both images (the large image and the sub-image) is done through objects of the ConvertImageToMatrix class.

3.2.2 SubImageSearch Class:

In this class the first step is to create the two image matrix objects, like this:

ConvertImageToMatrix largeImage = new ConvertImageToMatrix("LargeImageNumbers.bmp");
ConvertImageToMatrix subImage = new ConvertImageToMatrix("SubImageN5.bmp");
int[][] large = largeImage.imageFileRGBData; // the matrix of the large image
int[][] sub = subImage.imageFileRGBData;     // the matrix of the sub-image

Now we have two matrices of integers, the large matrix and the sub matrix, so we can search for the sub matrix within the large matrix. To start searching for a match, we take the first pixel value of the sub matrix as the starting point of the search.

int pixelIndex = sub[0][0]; // the search start point is the first pixel (x, y) of the sub-image

3.2.2.1 SubImageSearch Four-Level Looping:

We start our search loop with a row-line scan. In each row we scan all columns with a nested loop to find a matching point (x, y). From the matching point we then search for any difference between sub and large over the sub-image size, which means we need two more nested loops looking for differences. If we find the first difference we have to abandon the current index and start over with the next one. In this case we must use a branching statement (a labeled continue) [7] between the row loop and the column loop, so we can exit the innermost (fourth) loop back to the outer loop.

If no difference is found we print out the matching point (x, y) and continue searching for another matching point, in case the sub-image occurs more than once.

We use foundCount as a matching counter, rowIndex to save the matching row index, and columnIndex to save the matching column index. Both indexes have the initial value -1, to indicate a position outside the large image.

public static int foundCount = 0;   // counter for matches found
public static int rowIndex = -1;    // -1 means outside the image
public static int columnIndex = -1; // -1 means outside the image

//----------------- Row line scan
for (int row = 0; row <= large.length - sub.length; row++) {
    // Column scan
    continuescan: // column scan label
    for (int column = 0; column <= large[0].length - sub[0].length; column++) {
        if (large[row][column] != pixelIndex) continue; // no match with the start point, go to the next column
        // The start point matches, so check every pixel of sub against large from this point
        for (int r2 = 0; r2 < sub.length; r2++)
            for (int c2 = 0; c2 < sub[0].length; c2++)
                // Now we are looking for any different pixel
                if (large[row + r2][column + c2] != sub[r2][c2]) {
                    continue continuescan; // no full match, abandon this start point and scan on
                } // end if any different
        //------- Save the matching point index (printed as (x, y), i.e. (column, row))
        rowIndex = column;
        columnIndex = row;
        //-------- Check that the index is positive and the point is inside the image
        if (rowIndex != -1 && columnIndex != -1) {
            System.out.println(++foundCount + " : The Sub Image Match Found at (" + rowIndex + " , " + columnIndex + ")");
        } // end if positive
    } // end for column
} // end for row
//-------- Check whether the index is negative and the point is outside the image
if (rowIndex == -1 || columnIndex == -1) {
    System.out.println("No Match Found");
} // end if negative

3.2.2.2 The Test Result:

When we run the algorithm with the two images, the result is:

Figure 5: Result of searching for the sub-image

Now we can see the coordinates of the matching points, like this:

Figure 6: Matching found in two places


IV. LIMITATIONS

Our algorithm works only with the 24-bit bitmap image format, but as we know there are many ways to convert non-24-bit images to this format.

V. CONCLUSION

In this paper, we handled the problem of searching for a sub-image and showed how to find the padding and the real buffer size when reading a bitmap image. Our proposed algorithm can detect the matching indexes of the sub-image in the large image wherever it occurs in the target large image. For the implementation of this algorithm we used the Java language.

VI. FUTURE WORK

Our future work will be to create and develop an image search engine which can search image content instead of text or HTML metadata. This search engine can be helpful for finding a target image on the internet or in any image storage. After that, we will try to handle video frame search to look up a still image. The next research will be about low-pass and high-pass image filters.

REFERENCES

[1] Dr. William T. Verts, "An Essay on Endian Order", April 19, 1996.

[2] Dr. David Kirkby (G8WRB), atlc, "bmp format", http://atlc.sourceforge.net/bmp.html

[3] Microsoft MSDN, "Bitmap Storage".

[4] Microsoft MSDN, "Bitmap Header Types".

[5] John Miano, "Compressed Image File Formats: JPEG, PNG, GIF, XBM, BMP".

[6] G. Scott Owen, "24-bit BMP FILES", Hypermedia and Visualization Laboratory, Georgia State University (DUE 9816443).

[7] The Java Tutorials, "Branching Statements".

[8] Charlap, David, "The BMP File Format: Part I", Dr. Dobb's Journal, March 01, 1995.

[9] John D. Mitchell, "How to read 8- and 24-bit Microsoft Windows bitmaps in Java", JavaWorld, Dec 1, 1997.

AUTHORS BIOGRAPHY

O. K. Khorsheed was born in Baghdad in 1974. He holds a BSc in computer science from Al-Mustansiriya University, a higher diploma from the Informatics Institute of Higher Studies, and a Master's degree in computer information systems from the school of IT of the Arab Academy for Banking and Financial Sciences, Jordan. He has been a lecturer at Koya University since 2005, in the Software Engineering Department of the School of Engineering.


POTENTIAL USE OF PHASE CHANGE MATERIALS WITH

REFERENCE TO THERMAL ENERGY SYSTEMS IN SOUTH

AFRICA

Basakayi J.K.1, Storm C.P.2

1Department of Mechanical Engineering, Vaal University of Technology

2School of Mechanical and Nuclear Engineering, North West University, Potchefstroom

ABSTRACT

The supply of fossil energy sources such as coal is becoming less reliable, CO2 emissions have to be reduced, and at the same time energy consumption in South Africa is increasing. One of the solutions is to make use of phase change materials in thermal systems. Phase change materials absorb and release thermal energy at a constant temperature and, in addition, offer high-density energy storage. Integrating phase change materials into thermal systems can improve the efficiency and reliability of those systems. This paper presents different potential applications of phase change materials that can be valuable in the South African context for energy storage and temperature control purposes.

KEYWORDS: phase change material, thermal energy & latent heat storage.

I. INTRODUCTION

Lack of reliable and cost-effective energy is one of the major problems facing the social and economic development of every country. South Africa has a growing population, and the demand for energy keeps increasing along with it. Industries and production companies also require abundant and reliable energy to run their activities.

In South Africa, electricity is essentially generated from coal [1]. The disadvantages associated with this traditional type of energy are numerous: pollution of the atmosphere, depletion of the ozone layer, and the limited amount of coal.

To deal with the high demand for energy and the problems linked to the use of energy produced from fossil fuels, there are two possible ways: investigating other forms of energy, such as solar energy, and managing the existing energy well.

Since the energy crisis of the 1970s, other types of energy have been considered. Renewable energy is energy that is constantly and regularly replenished and will not run out anytime soon. Among the types of renewable energy investigated are biomass, wind, solar energy and geothermal energy.

Solar energy is one of the most promising types of renewable energy and offers a better alternative to energy generated from fossil fuels. It is available, clean and abundant [2]. Excess solar energy can also be stored when it is available, for later use. This is part of managing the existing energy well: making use of some form of storage while the sun is shining and more solar radiation is accessible.

The different techniques used for storing thermal energy are sensible heat, latent heat and thermo-chemical reaction. Among these three methods, latent heat storage emerges as the best option, thanks to the advantages of releasing and absorbing energy at a constant temperature for the storage material used, and to the possibility of a high energy density storage compared to sensible heat storage. Thermo-chemical storage systems can provide high energy density but are still under development [3].

Latent heat storage systems make use of materials called Phase Change Materials (PCMs), owing to the change of phase that occurs during the charge and discharge processes. In addition to storing and releasing latent thermal energy during charging and discharging, PCMs can also be used for thermal control in different applications.

Since solar energy is not available at night or on cloudy days, integrating a latent heat storage system into a solar thermal energy system can increase the reliability and performance of thermal systems. In applications for temperature control the focus is on temperature regulation, whereas for storage of heat or cold with high storage density the emphasis is on the amount of heat supplied.

The applications of PCMs found in the literature include solar hot water heating, solar cooking, solar absorption, solar greenhouses, thermal control of instruments and the heating of buildings. Some applications for thermal control are spacecraft thermal control and the integration of PCMs into the thermal control of delicate and precise instruments.

Many investigations have been carried out elsewhere and successful applications are increasing in different developed countries. No practical applications are found on the market here in South Africa, except for a few studies done in academic institutions for research purposes.

The immediate objective of this paper is to provide a basic understanding of the behaviour of PCMs, indicate how such a material can be selected for a given application, and finally present different potential applications of PCMs with reference to thermal energy systems in South Africa, so that the integration of PCMs can contribute to solving the energy problem in this country.

This paper focuses on the potential applications of PCMs in South Africa. The second section gives some constructive ideas for using PCMs. The basic concept of latent heat storage systems is explained in the third section. In the fourth section, the selection of PCMs for different applications is presented. The following section covers the potential applications of PCMs in South Africa. The last section outlines the future work, which will be on the modelling of a latent heat storage system.

II. WHY IS THE APPLICATION OF PCM RELEVANT IN SOUTH AFRICA?

More energy from sunlight strikes the Earth in one hour (4.3 × 10^20 J) than all the energy consumed on the planet in a year (4.1 × 10^20 J) [1]. Most areas in South Africa average more than 2 500 hours of sunshine per year, and average solar radiation levels range between 4.5 and 6.5 kWh/m² per day [4]. South Africa therefore has some of the highest solar radiation levels in the world.

Today, many solar energy systems able to convert solar radiation directly into thermal energy have been

developed for low, medium and high temperature heating applications in different developed countries.

Solar thermal power generation and solar space heating are examples of industrial applications; domestic applications include solar water heating and solar absorption refrigeration. These examples are only a small sample of the applications of solar energy.

Although solar energy has its advantages, it remains, however, intermittent and unpredictable. Its total

available value is seasonal and often dependent on meteorological conditions of a location. Therefore,

solar energy cannot, for example, be trusted to produce cooling during periods of low solar energy

irradiation for a solar absorption machine. Some form of thermal energy storage is necessary for

effective utilisation of such an energy source, to meet demand on cloudy days and at night. Thermal

energy storage, therefore, plays an important role in conserving thermal energy, leading to an

improvement in the performance and reliability of a range of energy systems.

Thermal energy storage refers to a number of technologies which store energy in a thermal accumulator for later re-use. Integrating thermal energy storage into a thermal system has the potential to increase the effective use of thermal energy equipment and can facilitate large-scale switching [5]. This energy storage is used for correcting the mismatch between the supply and the demand of energy.

The use of PCMs as an energy storage medium is now considered worldwide as an option with a number of advantages. The combination of solar energy and PCMs in a thermal energy system may alleviate the pollution caused by the use of fossil energy sources, and may assist the management of an increasing energy demand by storing energy when it is available for use later, during cloudy periods or times of low solar radiation. It is also worthwhile to make use of PCMs wherever there is a need to control the temperature of a space, equipment or the human body.

III. LATENT HEAT STORAGE SYSTEMS

Latent Heat Thermal Energy Storage (LHS) is based on the heat absorbed or released when a storage material undergoes a phase change from solid to liquid, solid to solid or liquid to gas, or vice versa. Figure 1 summarises the classification of LHS materials.

Fig.1 Classification of the LHS materials (modified after Sharma et al., 2004)

Latent heat storage can be accomplished through solid-liquid, liquid-gas, solid-gas and solid-solid phase transformations, but only two are of practical interest: solid-liquid and solid-solid [6]. Solid-gas and liquid-gas transitions have a higher latent heat of fusion, but their large volume changes on phase transition are associated with containment problems and rule out their potential utility in thermal storage systems. Solid-liquid PCMs present the advantages of a smaller volume change during the phase change process and a longer lifespan [6].

The storage capacity of a LHS system with a PCM is given by:

Q = m [Csp (Tm - Ti) + am H + Clp (Tf - Tm)]

Where Q = quantity of heat stored;

m = mass of the PCM;

am = fraction melted;

H = latent heat of fusion;

Csp = average specific heat between temperatures Ti and Tm;

Clp = average specific heat between temperatures Tm and Tf;

Ti = Initial temperature;

Tf = Final temperature;

Tm = Melting point
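As an illustrative check of this equation (the mass and specific heats here are assumed values for a worked example; the melting point and heat of fusion match the SP 25 A8 entry in Table 1 below): take m = 10 kg of a PCM with Csp = 2.0 kJ/kg·K, H = 180 kJ/kg and Clp = 2.2 kJ/kg·K, heated from Ti = 20 °C through Tm = 25 °C to Tf = 30 °C with am = 1 (fully melted). Then

Q = 10 [2.0 (25 - 20) + 1 x 180 + 2.2 (30 - 25)] = 10 [10 + 180 + 11] = 2 010 kJ,

of which 1 800 kJ, about 90% of the total, is stored as latent heat at the constant melting temperature.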

[Figure 1 classifies latent heat storage materials into: Organic (Paraffins, Non-Paraffins); Inorganic (Salt Hydrates, Metallic); Eutectics (Inorganic-Inorganic, Organic-Organic); and Miscellaneous.]



Fig.2 Thermal energy stored in a PCM as a function of temperature (modified after Rubitherm, 2008)

Figure 2 illustrates the change in stored energy (internal energy) as a function of temperature. At the beginning of the heating process the material is in a solid state. Before it reaches the melting point Tm, the heat absorbed is sensible heat. Starting at the melting point, the material undergoes a change of state from solid to liquid. During this process the material absorbs heat, known as the enthalpy of melting, while the temperature remains constant. If the material is heated further after this process, sensible heat is added again on completion of melting.

IV. SELECTION CRITERIA OF A PCM

A PCM is selected depending on the application. The following criteria are considered for the selection

of a PCM [7]:

1. Thermo-physical properties: a melting temperature in the desired operating temperature range; high latent heat of fusion per unit volume; high specific heat and high thermal conductivity (low thermal conductivity is one of the major problems in the use of PCMs); small volume changes on phase transformation; and congruent melting for a constant storage capacity.

2. Kinetic properties: among the kinetic properties to be considered are a high nucleation rate and a high rate of crystal growth.

3. Chemical properties: chemical stability; a completely reversible freeze/melt cycle; no degradation after a large number of freeze/melt cycles; non-corrosiveness to the construction materials; and non-toxic, non-flammable and non-explosive behaviour for safety.

4. Economic properties: PCMs should be cost effective, abundant and available on a large scale.

There are different ranges of PCMs. Table 1 presents some commercial PCMs available on the market.

Table 1: Some commercial PCMs available on the market (Zalba et al., 2003)

Commercial Name   Type of product   Melting point (°C)   Heat of fusion (kJ/kg)   Source (Producer)
RT 20             Paraffin          22                   125                      Rubitherm
SP 22 A4          Eutectic          22                   165                      Rubitherm
SP 25 A8          Eutectic          25                   180                      Rubitherm
ClimSEL C22       Salt hydrate      22                   144                      Climator
ClimSEL C28       Salt hydrate      28                   126                      Climator

In Table 2, the advantages and disadvantages of organic and inorganic materials are presented. From

the behaviour of the PCM shown in Table 2, an appropriate PCM can be selected for a given application.

Table 2: Comparisons of organic and inorganic PCMs (Zalba et al., 2003)

ADVANTAGES
  Organic: non-corrosive; non-toxic; little or no supercooling; chemically and thermally stable
  Inorganic: greater phase change enthalpy; good thermal conductivity; cheap and non-flammable

DISADVANTAGES
  Organic: lower phase change enthalpy; low thermal conductivity
  Inorganic: undercooling; corrosion to most metals; phase separation; phase segregation; lack of thermal stability

V. APPLICATIONS OF PHASE CHANGE MATERIALS

In this section, some potential applications of PCM that can be useful in South Africa to address the

issue of energy are presented.

5.1. Solar water heating and Space heating

Water heating accounts for up to 40% of household energy consumption in South Africa [8]. Up to now, solar water heating systems used in South Africa have mostly employed sensible heat technology. It has been shown that the integration of PCM into a solar water heating system can improve the efficiency of the system, by decreasing the volume of the geyser and by providing thermal energy stored at an almost constant temperature for a long period compared to a sensible heating system; 5 to 14 times more energy can be stored using latent heat than sensible heat [9]. Using latent heat storage with PCMs in places like hospitals, restaurants and residential areas will tremendously improve the life of the population through an efficient energy system [10].

In a latent heat storage solar water heating system, water is heated during the sunshine period and the heat is transferred to the PCM. During cloudy periods or periods of low solar radiation, the hot water is drawn off and replaced by cold water, which absorbs energy from the PCM. The energy is released by the PCM during the solidification process (changing from liquid to solid).

A study by Mehling et al. [11] showed that a PCM module at the top of a stratified water tank increased the energy storage and improved the performance of the tank. Paraffin wax, stearic acid and Na2SO4·10H2O have been tested and good results were obtained for solar heating systems [12]. Hasan et al. [13] recommended myristic acid, palmitic acid and stearic acid, with melting temperatures between 50 °C and 70 °C, as the most promising PCMs for water heating.

In South Africa, space heating is one of the largest energy loads in a typical house. Using a latent thermal energy (LTE) storage system can reduce the amount of energy consumed for these loads by storing excess thermal energy that is either available during the day (diurnal storage) or available in the summer (seasonal storage).

A solar air heater integrated with PCM can be used for space heating. Strith et al. [14] used transparent insulation material and a translucent PCM in the wall to heat the air for the ventilation of the house. Paraffin wax was used as the PCM. The efficiency of solar energy absorbed into the PCM and transferred to the ventilation air was 45% on average.

5.2 Solar cookers

The use of solar stoves can have great significance in South Africa if they are used in combination with PCMs. The utilization of PCMs in solar cookers would help to conserve the environment by avoiding the use of fuelwood for cooking, as is currently done. According to the CSIR, rural households in South Africa use between 4.5 and 6.7 million tonnes of fuelwood per year [15].

Box-type solar cookers with PCM have been tested and proved to be efficient enough to cook food even during the evening (Buddhi and Sahoo [16]; Sharma et al. [17]). Commercial grade stearic acid, acetamide and erythritol are some of the PCMs that can be used for this application.

5.3 Solar cooling

Often during summer there is a need to cool buildings and houses. Cooling TES (thermal energy storage) systems can be used to reduce the amount of energy consumed for space cooling, with PCMs used as the storage medium. Examples of PCMs used for space cooling are inorganic salt hydrates, organic paraffin waxes, mixtures of these, and Rubitherm RT 5.


Typically, building services that include lighting, electronic devices and appliances account for 57% of electricity consumption, and thermal comfort systems (Heating, Ventilation and Air Conditioning - HVAC) make up the remaining 43% [18].

Agyenim et al. [19] investigated the possibility of integrating latent thermal energy storage on the hot side of a LiBr absorption cooling system to cover 100% of the peak cooling load for a three-bedroom house on the hottest summer day in Cardiff, Wales. 100 l of erythritol was required to provide about 4.4 hours of cooling at peak load, based on an optimum Coefficient Of Performance (COP) of 0.7. PCMs can also be used to store coolness: in this application, the cold stored from the ambient air during the night is released to the indoor environment during the hottest hours of the day [7].

5.4 Intermittent Power Plants

Integrating PCM into a latent heat store in a power plant, for example to smooth out the supply of electricity from intermittent power plants to the utility grid, can be beneficial. This can result in generating electricity more effectively during distribution and in controlling and managing the energy more adequately. Electricity can be produced from solar energy that was stored several days earlier, by storing excess energy in large thermal reservoirs. Nitrate salts represent possible PCMs for applications beyond 100 °C [19].

In his investigation, Hunold [20] proved that phase change storage is technically feasible and proposed a storage design for a power plant. South Africa generates electricity from different power plants; by incorporating PCMs into these power plants, it is possible to improve the distribution of electricity in South Africa.

5.5 Waste heat recovery systems

Air conditioning systems eject heat, and the exhaust temperature of the compressor is relatively high when Freon is used as the refrigerant. This heat can therefore be recovered using an accumulator, yielding heat at a higher temperature.

Zhu et al. [21] developed a heat recovery system using PCM to recover the heat ejected by an air conditioning system and produce low temperature hot water for washing and bathing. It was observed that the efficiency ratio of the system improves effectively when all the heat rejected from the air conditioning system is recovered. Erythritol and stearic acid have been used and tested as PCMs for this application.

5.6 Greenhouse heating

Greenhouses require control of temperature, humidity, solar irradiance and internal gas composition with rational consumption of energy. Some PCMs used for this application are CaCl2·6H2O, Na2SO4·10H2O and paraffins. Nishina and Takakura [22] compared a conventional greenhouse with a PCM storage type. The efficiency of the greenhouse with PCM storage integrated with a solar collector was 59%, and it was able to maintain 8 °C inside the greenhouse at night.

5.7 Electrical Power Devices

PCMs with high melt temperatures can be used in conjunction with electronic power-producing

systems. Radiators used to collect solar energy can be packed with PCM to store the energy via phase

change at the melt temperature.

This stored energy can then be converted into electrical power by using the large temperature difference

between the radiator and deep space in either thermionic or thermoelectric devices.

Preliminary analytical and experimental studies reported in [23] indicate the feasibility of PCM

application, and materials have been found with suitable properties for such PCM systems.

To maintain high photovoltaic (PV) efficiency and increase operating PV life by keeping the panels at a lower temperature, PCMs are integrated into PV panels to absorb excess heat through the latent heat absorption mechanism and regulate the PV temperature. The results show that such systems are financially viable in higher temperature, high solar radiation environments [24].


5.8 Transport

Containers with PCMs can be used to transport film, food, waste products, biological samples, etc.

Such containers represent isothermal protection systems for perishable cargo.

Applications in transport and storage containers are in most cases basically an insulated environment

with heat capacity increased by PCM.

When food or medications are transported, they must be kept within a certain temperature range, and PCMs are very suitable for the transportation of these products.

Companies such as Rubitherm GmbH, Sofrigam, PCM thermal Solutions design transport boxes for

sensitive materials [7].

5.9 Delicate instrument thermal control

For delicate, highly temperature-sensitive instruments, PCM can be used to maintain these instruments

within extremely small temperature ranges.

Temperature-sensitive instruments required to deliver highly accurate responses have been protected by PCM thermal control systems. Russian investigators have studied the feasibility of using PCM to precisely control the temperature of gravity meters, which require a relative accuracy of 10^-8 [24].

5.10. Applications for the human body

The common approach to using PCM to control and stabilize the temperature of the human body is to integrate the PCM into clothes. Approaches include macro-encapsulation, microencapsulation and composite materials.

Examples include pocket heaters (used to release heat when a person is freezing) and vests for different applications, developed by Climator [25]. These vests are designed to cool the bodies of people who work in hot environments (such as mines) or undergo extreme physical exercise. Clothes and underwear are used to reduce sweating. Other applications include kidney belts, plumeaus (duvets), sleeping bags and shoe inlays. Another important application can be found in the medical field: the use of ice for cooling as a treatment for different sports injuries.

In textiles, PCMs make it possible to engineer fabrics that help regulate human body temperature.

Microencapsulation makes phase change materials light and portable enough to incorporate into

textiles, either as a top-coat or as an addition to the fabric fibers [26].

VI. CONCLUSION AND FUTURE WORK

Making use of PCMs can be useful and relevant in South Africa if these materials are used in conjunction with solar energy and waste thermal energy systems. They can significantly contribute to solving several problems related to energy, such as pollution of the environment, load shedding, management of electrical power, deforestation and energy consumption. There is a need to turn to PCMs; with some of the applications pointed out in this paper, the lives of millions of South Africans can be improved. To attain this result, more research is needed in the field of energy, to inform people about other possible solutions to the energy problem and to apply those solutions, such as the integration of PCMs into different thermal systems.

In the next paper, the focus will be on modelling a latent heat storage system for a solar refrigeration system. A mathematical model will be proposed for a latent shell-and-tube heat exchanger in order to understand the mechanics of latent heat storage.

REFERENCES

[1] Energy Department, Republic of South Africa, n.d., "Electricity basics", viewed 14 May 2014, from http://www.energy.gov.za/files/electricity_frame.html.

[2] "Basic research needs for solar energy utilization", n.d., Report on the Basic Energy Sciences Workshop on Solar Energy Utilization, viewed 15 May 2014, from http://www.sc.doe.gov/bes/reports/files/SEU_rpt.pdf.

[3] Stine, W.B. & Harrigan, R.W., (1986) Solar Energy Systems. John Wiley and Sons Inc.

[4] Naidoo, N., (2013) "Light the way for solar", viewed 14 January 2014, from http://www.cncafrica.com/special-report/2013/09/12/africa-well-positioned-to-light-the-way-for-solar/.

[5] Dincer, I. & Rosen, M.A., (2002) Thermal Energy Storage: Systems and Applications. John Wiley & Sons, Ltd, Baffins Lane, Chichester, West Sussex PO19 1UD, England.

[6] Sharma, S. & Kazunobu, S., "Latent heat storage materials and systems: a review", International Journal of Green Energy 2, 1-56, 2005.

[7] Zalba, B., Marin, J.M., Cabeza, L.F. & Mehling, H., "Review on thermal energy storage with phase change: materials, heat transfer analysis and applications", Applied Thermal Engineering 23, 251-283, 2003.

[8] "It's going to be a bright, sunshiny day. Solar solutions", n.d., viewed 18 May 2014, from http://www.nedbank.co.za/website/content/nedbank_solar/Solar-Solutions.aspx.

[9] Sharma, A., Tyagi, V.V. & Buddhi, D., "Review on thermal energy storage with phase change materials and applications", Renewable and Sustainable Energy Reviews 13, 318-345, 2009.

[10] Basakayi, J.K., (2012) Modelling and design of a latent heat storage system with reference to solar refrigeration. Master's thesis, University of Johannesburg. https://ujdigispace.uj.za/handle/10210/7883/Kantole.pdf.

[11] Mehling, H., Cabeza, L.F., Hippeli, S. & Hiebler, J., "Improvement of stratified hot water heat stores using a PCM-module", Proceedings of EuroSun 2002, Bologna (Italy), 2002.

[12] Gowtham, M., Chander, M.S., Mallikarujanan, S.S. & Karthikeyan, N., "Concentrated parabolic solar distiller with latent heat storage capacity", International Journal of Chemical Engineering and Applications 2(3), 2011.

[13] Hasan, A. & Sayig, A.A., "Some fatty acids as phase change thermal energy storage materials", Renewable Energy 4, 69-76, Elsevier Science Ltd, 1994.

[14] Strith, U. & Novak, P., "Thermal storage of solar energy in the wall for building ventilation", IEA, ECES IA Annex 17, Advanced Thermal Energy Storage Techniques - Feasibility Studies and Demonstration Projects, 2nd Workshop, Ljubljana, Slovenia, 2002.

[15] "Sustaining the wood for the trees", viewed 15 May 2014, from http://www.csir.co.za/enews/2013_mar/07.html.

[16] Buddhi, D. & Sahoo, L.K., "Solar cooker with latent heat storage: design and experimental testing", Energy Convers. Manage. 38(5), 493-501, 1997.

[17] Sharma, S.D., Iwata, T., Kitano, H. & Sagara, K., "Thermal performance of a solar cooker based on an evacuated tube solar collector with a PCM storage unit", Solar Energy 78, 416-426, 2005.

[18] Agyenim, F.B., n.d., "Solar air conditioning and the need for energy storage for meeting domestic cooling", viewed 15 April 2014, from www.freebs.com/agyenim-aboffourwebsite.

[19] Doerte, L., (2013) "Thermal energy storage for concentrated solar thermal power plants", Seminars in Engineering Science, Lehigh University, viewed 16 May 2014, from http://elib.dlr.de/86888/1/Laing_Lehigh-University_13-09-2013.pdf.

[20] Hunold, D., (2014) "Parabolic trough solar power plants - The largest thermal oil plants in the world", viewed 16 May 2014, from http://www.heat11.com/fileadmin/uploads/Whitepaper/heat11-CSP_HTF-Plants_whitepaper.pdf?PHPSESSID=cd921472a801413eOaO253a14b513545.

[21] Zhu, N. & Wang, S., "Dynamic characteristics and energy performance of buildings using phase change materials: A review", Energy Conversion and Management 50(12), 3169-3181, 2009.

[22] Nishina, H. & Takakura, T., "Greenhouse heating by means of latent heat storage units", Acta Hortic. 148, 751-754, 1984.

[23] Kiziroglou, M.E., n.d., "Design and fabrication of heat storage thermoelectric harvesting devices", viewed 18 May 2014, from https://spiral.imperial.ac.uk/bitstream/10044/1/11116/6/13_IEEE_TIE_FinalSubmission_v4%20(EPSfigures).pdf.

[24] Hasan, A., McCormack, S.J., Huang, M.J. & Norton, B., "Energy and cost saving of a photovoltaic phase change materials (PV-PCM) system through temperature regulation and performance enhancement of photovoltaics", Energies 7, 1318-1331, 2014.

[25] Hale, D.V., Hoover, M.J. & O'Neill, M.J., (1971) Phase Change Materials Handbook. Alabama: NASA-CR-61313, Lockheed Missiles and Space Company.

[26] Mehling, H. & Cabeza, L.F., (2008) Heat and Cold Storage with PCM. Berlin-Heidelberg: Springer.

[27] "Phase Change Materials in Textiles", StudyMode.com, retrieved April 2012, from http://www.studymode.com/essays/Phase-Change-Materials-In-Textiles-958775.htm.


AUTHORS

Basakayi, J.K. graduated with a BEng degree from the Université de Mbujimayi (U.M), Democratic Republic of Congo, in 2001. He then completed a Master of Engineering degree in 2012 at the University of Johannesburg in South Africa. In 2007, Basakayi worked at the University of Johannesburg as a part-time lecturer in Mechanical Engineering before joining the Cape Peninsula University of Technology from 2008 until 2013. He is currently a lecturer in the Mechanical Engineering Department at the Vaal University of Technology (VUT). He is a member of the South African Institute of Mechanical Engineering (SaiMech).

Storm, C.P. received the B.Eng degree from the University of Pretoria and the M.Eng degree from the North-West University, South Africa, in 1982 and 1994 respectively, and the PhD degree from the North-West University in 1998. He was Director of the School of Mechanical and Nuclear Engineering at the North-West University from 2007 until 2013 and is currently Professor of Engineering. Since 1997, he has been actively involved as a researcher and lecturer in the area of thermodynamics and the optimization of thermal systems, and has published a number of papers in the fields of thermodynamics and power plants.

Prof Storm worked at Eskom from 1982 until 1999 as assistant engineer, manager and senior engineer respectively.


IMPROVING SOFTWARE QUALITY IN THE SERVICE PROCESS

INDUSTRY USING AGILITY WITH SOFTWARE REUSABLE

COMPONENTS AS SOFTWARE PRODUCT LINE: AN

EMPIRICAL STUDY OF INDIAN SERVICE PROVIDERS

Charles Ikerionwu, Richard Foley, Edwin Gray

School of Engineering and Built Environment, Glasgow Caledonian University,
Cowcaddens Road, Glasgow, G4 0BA, Scotland, UK

ABSTRACT

In a software-based business process outsourcing (BPO) environment, the quality of services delivered to clients hinges on the quality of the software used in processing the service. Software quality attributes have been defined by ISO/IEC standards, but different organisations give priority to specific attributes based on client requirements and the prevailing environment. The aim of this study is to identify and demonstrate the particular software development process that guarantees an acceptable level of software quality within a specific domain that would translate into the desired quality of services delivered to clients. Therefore, this study investigated, through a mixed method approach, BPO service providers in India to ascertain what software quality means to their respective organisations, which software quality attributes are given priority, and how quality could be improved. The findings suggest that software quality is highly dependent on the software development process. The vast majority of successful organisations operated in-house software development through the establishment of a software product line as a platform for embedding reusable software components within an agile framework. Through this process there is a significant reduction in defect density, thereby improving the software quality. This software quality also translates into the quality of services delivered to clients.

KEYWORDS: software quality, reusable components, agile framework, software product line engineering,

service provider.

I. INTRODUCTION

Over the years, we have witnessed organisations outsourcing business processes, knowledge processes, and most recently business process management. Given the increasing demand for these services, and the increased reliance on IT platforms for their processing and delivery, service providers need to respond by providing quality services at a reduced cost [1]. In this way, a client's competitiveness is increased through the good quality of the services provided. However, [2] demonstrated through their research findings that quality services that guarantee client satisfaction must evolve from the quality of the application software used in processing the client's services.

In recent times there have been reported incidents of clients bearing the brunt of poor services from service providers; such poor services have been traced to unreliable application software [3]. For example, the Royal Bank of Scotland outsources its IT and business processes to Indian service providers and recently experienced a setback when its customers across the UK could not access many services, due to poor service emanating from the unreliable application software used by its service provider in processing services [4]. According to [2], a significant number of service providers in India integrated IT into their business process organisation in order to be competitive by providing better services through in-house development of application software with an acceptable level of software quality. This entails that the service provider develops the application software it requires to process clients' services.

However, there has been no indicator within a specific domain, such as business process service providers, of what software quality means, nor any way to ascertain the level of reliability of the application software required to process clients' services. An earlier study [5] demonstrated that there is little evidence that conformance to process standards guarantees good software product quality. [6] extended the study by empirically studying life cycle productivity and conformance quality in software products, but concluded by treating all the software quality attributes defined by the ISO standards as applicable to every domain. The key problem with this finding is that generalising software quality to every domain is not feasible, because each organisation indicates its expectations of the software product based on specific requirements and the prevailing environment. Therefore, this study specifically limits its scope to BPO service providers that provide services to clients within the financial domain. The study comprises the following specific research questions:

Research question (RQ) 1: Which development paradigm guarantees an acceptable level of software

quality? Primarily this question identifies and demonstrates the development paradigm applied by the

dedicated software development team within the BPO service provider that ensures quality of

application software.

Research question (RQ) 2: What does software quality mean to a service provider? The motivation for this question is to identify how organisations prioritise the specific software quality attribute(s) that suit their organisation's requirements.

Research question (RQ) 3: Does the organisational model play any role in determining the quality of

the application software? In answering this question, the researchers could ascertain the effect of

integrating a dedicated software development team into a service provider on the quality of services

provided to clients.

II. SOFTWARE QUALITY

To address the issue of software product quality, the International Organization for Standardization and the International Electrotechnical Commission recently published a set of software product quality standards known as ISO/IEC 25010. According to this standard, software product quality is the degree to which a set of static attributes of a software product satisfies stated and implied needs when the software product is used under specified conditions [7]. The standard goes on to define eight characteristics of the product quality model and five key characteristics of the quality in use model: effectiveness, efficiency, satisfaction, safety and usability. Usability is defined as the extent to which a product can be used by specific users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. However, merely building standards is not sufficient to guarantee software quality, because what is considered a quality attribute in one domain might differ in another. This suggests that, for a specific domain, the software development team should apply a development process that ensures the realisation of the organisation's perception of software quality.

[8] is of the view that "software quality is subjective; i.e., the quality of every software product is based on its non-functional characteristics". This implies that software quality is measured not only by the implementation of the functional attributes but also by the non-functional system attributes. Among the 15 software quality attributes presented by [9], [8] contends that it would be very difficult for any software to attain an optimal level in all of them. For this reason, [8] suggested that the management of the software development team should, at the earliest opportunity, identify which attributes best answer the client's requirements. For example, service providers that develop the application software required for service processing would recommend attributes like dependability, reliability, usability, efficiency and maintainability to the development team, to be built into the software. The identification and inclusion of these attributes ensure that the service provider's perception of software quality is built into the software and that the desired level of service quality is delivered to clients.

Every software system must have evolved from a development process. According to [2], the development process of an application software has a significant effect on the quality of the ensuing software. [10] introduced the agile software development process with the release of its manifesto, which paid more attention to increasing software quality. Since then, many research findings [11], [12], [13] have suggested the use of agile software development methods for optimal software quality. The most significant attribute of the agile method that ensures an acceptable level of software quality is its test-driven approach [14]. It is also noteworthy that service providers with several clients will have a variety of client requirements, which are subject to change even while a requirement is being processed. The agile method provides agility, which ensures that the issues arising from the variability of client requirements are significantly contained. For these reasons, many organisations have adopted the agile development approach.

III. RESEARCH METHOD

In order to understand the underlying attributes behind the development of application software with an acceptable level of quality that would guarantee the fulfilment of client requirements within the financial domain, we adopted a study of Indian business process service providers through an exploratory sequential mixed method approach. In this approach, qualitative data collection was first done through a case study of seven appropriate medium-to-large-scale service providers representing service providers in India. This was followed by quantitative data collection (informed by the results of the case studies) through a larger-scale survey of a significant number of Indian providers.

Mixed method research has become the most popular term for mixing qualitative and quantitative data in a single study [15]. It has been successfully applied in the information systems domain [16] and in software engineering [17], [18]. In the words of [19], "mixed method research is the type of research in which qualitative and quantitative research approaches are used (e.g. use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purpose of breadth and depth of understanding and corroboration". [20] went further to suggest that the aim of mixing methods is not necessarily corroboration but rather to expand one's understanding of the phenomenon. According to [21], the use of results from a qualitative study to inform a survey enhances the sensitivity and accuracy of the survey questions. Then, as suggested by [22], the findings of these two phases are integrated to provide the final analysis and conclusion. For these very reasons, the selection of a mixed method in this study legitimised the use of multiple approaches in achieving the research objectives, rather than restricting or constraining the researchers' choices.

In this study, at the qualitative level, interviews were tape-recorded, manually transcribed by the researchers, and imported into the qualitative data analysis software NVivo 10.0 for coding and subsequent analysis. [23] suggests that in exploratory analysis the researcher's primary concern is what emerges from the interaction between the researcher and the respondent; subsequently, the content of this interaction drives the development of codes and the identification of themes. Patterns identified from the themes and sub-themes emerging from the coding were later subjected to thematic analysis in order to "locate meaning" in the data. Thematic analysis is a method for identifying, analysing and reporting themes and patterns within data [24]. In the process, the researcher examines commonality, differences and relationships [25]. One of the main reasons for applying thematic analysis is that it is not attached to any predetermined theoretical framework. It is flexible, organises data into the smallest possible form and reveals the richness of the data.

3.1 Case study
The researchers identified seven business process service providers in India through purposeful sampling.

The sampling was based on service provider organisations with onshore and offshore clients and a minimum of 5 years in operation. The data collection was carried out through face-to-face semi-structured interviews with senior software development team members at their respective offices in India. Most of these service provider organisations were identified as third-party organisations (owned by Indian companies), whilst only one is a captive service provider organisation (owned by a large multinational IT company based in Silicon Valley, USA). As part of the research study, participants were asked questions relating to areas such as: the nature of the application software required to process clients' services; the software development paradigm used by the development team; what software quality means to their organisation; how software quality is maintained at the development level; and its effect on the services delivered to clients. After transcription, the interviews were imported into the NVivo software


to assist in reducing the text to themes and subthemes. To identify the fine-grained answers to questions

raised during the interviews, the researchers applied thematic analysis. Most of the service providers offer services such as analytics, financial services, back-end processing, insurance, animation and e-commerce to their respective clients.

3.2 Questionnaire
The outcomes of the thematic analysis from the case study formed the basis for the development of the

subsequent online questionnaire. In all, the questionnaire study had 27 Likert scale questions which

specifically dwelt on the software development process and software quality attributes. This aimed to

reach a larger population of service providers and validate the main thematic issues identified in the

case study. Through NASSCOM and the BPO Association of India, the questionnaire was distributed to 358 potential participants. A significant number of the questions centered on the quality of the application software required for processing clients' services. The questionnaire was aimed at high-level individuals within IT management, application software development team members and service process team leaders. Specific questions relating to software quality issues identified in the thematic analysis were asked, such as the service provider's perception of software quality attributes with regard to their organisation and the software development paradigm that guarantees software quality. A total of 156 valid responses were received. The data collected through

the questionnaire was subjected to statistical and descriptive analysis and the outcome was combined

with the results from the thematic analysis to arrive at the research findings.

IV. RESEARCH ANALYSIS

4.1 Case study
Dedicated software development team members: Although the research participants are service

providers, all but one of them indicated that their organisations develop the application software required for service processing. The lone exception either obtains application software from its client, outsources development to an IT company, or acquires it from a vendor. The reason for developing the software in-house was summed up by one interviewee: "Our primary objective is

to provide quality service to our numerous clients and we can only achieve this through the quality of

the application software we use for service processing. And the best way to achieve an optimum level

of software quality is to develop it in-house. We can only trust our product. For every client, our

reputation is at stake and we cannot compromise". The study indicates that both medium- and large-scale service providers have dedicated software development teams responsible for developing the specific application software required for processing clients' services. These individuals have, over the years, acquired significant experience within the domain and can also complement the client's participation in the software development process.

Software development paradigm: In table 1, column 2 below, respondents indicated the development paradigms they have applied in developing the application software. With the exception of the BB4 organisation, which does not develop software, all indicated that their development teams have used a combination of paradigms. It has been shown that software product line engineering (SPLE) is a composition of reusable components and a software development life cycle [26], and in this study, across the cases, respondents indicated the use of reusable software components either contained in SPLE or explicitly applied. Four of the research participants (BB1, BB2, BB3, and BB5) indicated the combination of SPLE and Agile with reuse, while service provider BB7 combined Agile with reusable components as its development paradigm.

Through the thematic analysis presented in table 1, column 3 below, respondents were of the view that the combined paradigms provided the development team with appropriate measures that guaranteed software quality. In order to ensure an acceptable level of software quality, the development teams in their respective service provider organisations adopted software quality measures that include: the test-driven approach provided by the agile framework; reusable components that ensure the use of tested and trusted components; and overall rigorous testing within the software product line.


Table 1: Software development paradigm and adopted quality measures

Service provider | Software development paradigm | Adopted software quality measures
BB1 | SPLE, Agile | Test-driven process and SPLE quality measures
BB2 | Our SPLE provides the required platform for application development through reusable components in an Agile framework | Rigorous testing, reused tested components
BB3 | SPLE, Agile and reusable software components | Reused components, SPLE quality features
BB4 | NA | NA
BB5 | Agile, SPLE, reusable software components | SPLE quality features, test-driven Agile, tested components
BB6 | SPLE | SPLE quality features
BB7 | Agile + reusable software components | Test-driven, reused components

Software quality perception: Table 2 below summarises the key software quality attributes identified by the respective organisations. Respondents indicated what software quality means with specific reference to their organisations. As evidenced in table 2 below, these perceptions include: the software must be absolutely reliable in delivering the client's requirements; it must satisfy the core attribute of usability; it must deliver services within a timescale; it must be secure and safe; and it must guarantee an expected level of service quality provided to clients. Although the service provider BB4 does not develop software, it sources software that provides the optimal quality required to deliver quality service to clients.

This means that they would rely on outsiders for successful implementation of clients' requirements.

Table 2: Software quality perception

Service provider | Software quality perception
BB1 | Absolutely reliable software that would deliver the client's requirements
BB2 | Usability is a core factor within our service process team. It must deliver within the shortest possible time.
BB3 | Must guarantee quality services to our numerous clients. Provide answers to clients' requirements.
BB4 | Reliable and able to reduce service processing and delivery turnaround time
BB5 | Must provide security for clients' data and safety for users. The software must be reliable.
BB6 | Client satisfaction and absolutely reliable
BB7 | The software must be reliable and easily used by our processing team members

4.2 Questionnaire

The aim of this questionnaire is twofold: first, to validate the key thematic findings in the case study,

and secondly, to reach a larger number of service providers for a broader study. The questionnaire

returned 156 valid responses, which represents a 44% response rate. This value is consistent with the assertion in [27] that the average response rate for studies at the organisational level is 37.2%. In relation to the key attributes identified in the case study, the following analyses are presented below.

Dedicated software development team members: 156 medium- and large-scale service providers participated in the questionnaire study, with a combined 9856 members within their respective software development teams. These members are specifically integrated into a service provider organisation for the very purpose of developing the application software required to process client services. Their organisations have a total of 12890 clients. This is in line with the case study, where medium- and large-scale service organisations have large numbers of development team members and clients.



Software development paradigm: Figure 1 below presents the software development paradigms that service providers' development teams use in application software development. In the question, respondents were specifically asked to select the paradigm(s) that guarantee software quality during the development process. A total of 76 organisations indicated that they have established a software

product line where they always embed software reusable components into an agile framework. Through

this process, at different times, the development team develops reliable software that provides quality

services to their numerous customers. To the same question, 57 research participants indicated that they

only embed reusable software components into an agile framework to develop quality software.

The number of service providers that indicated the use of SPLE without a fixed development paradigm is 21, whilst 2 said that they use reusable software components only. A significant majority of the service providers indicated an initial establishment of a software product line that embeds reusable software components into an agile framework. This is a key characteristic of software development by service providers that develop the application software in-house, and it validates the same attribute found in the case study.

Software quality attributes: In order to establish each organisation's perception of software quality, respondents were asked to select the software quality attributes that define the quality of the application software their respective organisations use in service processing. In figure 2 below, respondents indicate how they view each software quality attribute with regard to service processing within their organisation. Attributes like usability, reliability and maintainability are given the highest priority (132, 136 and 126 respondents respectively) at the development level. This implies that the presence of these attributes in the software developed in-house provides a guarantee that the services processed will also be of good quality.

Figure 1: Software development paradigm

Figure 2: Service providers' perception of software quality attributes


According to ISO/IEC 25010, safety is an attribute of software quality. However, in figure 2 above, the vast majority of respondents were of the view that safety is not considered, or is the least considered, software quality attribute in their respective organisations. This implies that different domains have different software quality attributes depending on the prevailing environment and, specifically, the client's requirements. This validates the varied software quality attributes indicated by interviewees in the case study.

V. RESEARCH FINDINGS AND DISCUSSION

5.1 Software development paradigm that guarantees software quality

This finding provides the answer to RQ1. Figure 3 below shows the software product line platform for

the application software development identified in this research. Service providers within a specific

domain with numerous clients use a common platform for quick and reliable software development.

For example, within the financial sector, a service provider would define domain requirements, domain

architecture and the software development life cycle (SDLC) that would be generally applied in developing several application software products suitable for different clients. But different clients have different requirements. Thus, the problem of variability caused by differing requirements, and by changes in a specific client's requirements, can be detected during the client-specific requirements engineering in the SDLC. Although the software product line has defined domain requirements and an architectural

design, carrying out specific client’s requirement engineering and architectural design within the agile

framework is a significant approach to reducing problems caused by variability. In figure 3 below, the arrow labeled 1 starts the agile framework, where variability is detected and addressed during the client-specific requirements engineering. In this way, the negative effect caused by variable client requirements is reduced, which also reduces the defect density of the ensuing software product.

The software product line in figure 3 comprises three specific components: the domain requirements engineering, the domain architecture design and the software development life cycle. The combined activities and processes within the SDLC are demonstrated in detail in figure 4 below. The SPLE platform in figure 3 indicates that from the third element of the software product line (labeled 1, the beginning of the SDLC) there is continuous testing of the components until the final software product is produced. The software product line also demonstrates that reusable software components are embedded into the software development life cycle.

Figure 4 presents in greater detail the SDLC within the software product line. The component labeled 1 in figure 4 below corresponds to the client requirements engineering within the software product line. The development team embeds reusable software components into an agile framework purposely to increase software quality and reduce the turnaround time for development and

processing of client services. Changes in client’s requirement are detected during the normal client’s

requirements engineering and architectural design in agile framework. In the normal sprint cycle of

Figure 3: Software product line engineering (domain requirements engineering and domain architectural design feed a software development life cycle that embeds domain reusable software components into the agile framework; components are assembled, integrated and repeatedly tested to produce the application software)

In the normal sprint cycle of Scrum or the release cycle of XP, instead of the normal coding witnessed in agile after the initial meetings,

specific software components are identified and selected from an established repository. These

components are retested, assembled, integrated and again tested as the software product is produced.

There is a noticeable flexibility in this process model: if the required component is not available, the

development team goes back and develops it (or produces it via the enhancement of an existing

component) before proceeding to the next level of assembling. The element labeled 2 (application

software) is the final product, which the service processing team of the service provider would use to

process client services.
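As a purely hypothetical illustration of this select-or-develop flow (all names and types are ours, not taken from the studied organisations), the repository lookup sketched in figure 4 could look like this in Java:

import java.util.HashMap;
import java.util.Map;

class ComponentRepository {
    private final Map<String, String> components = new HashMap<>();

    // Fetch a tested, reusable component for a requirement; develop and
    // store it first if no such component exists yet.
    String fetch(String requirement) {
        return components.computeIfAbsent(requirement, this::develop);
    }

    private String develop(String requirement) {
        // Placeholder for building (or enhancing) a component, after which
        // it would be tested and kept for future reuse.
        return "component-for-" + requirement;
    }
}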

Although the application software could have been developed through the agile method alone or simply with reusable components, for the very essence of software and service quality service providers tend to combine different development paradigms. Researchers [28], [29], [30] have also identified variability as a bottleneck in the application of software product line engineering. However, SPLE has been successful in domains producing physical products like mobile phones. For example, Nokia popularised software product line engineering when it successfully implemented SPLE and churned out thousands of products within a very short time [31]. In contrast, the service industry is

bedeviled with varying client requirements and frequent changes even while the processing of the

service is ongoing. This particular variation in clients' requirements brought variability to the fore when applying the software product line in the service industry. In order to solve this problem emanating from

variability, agile with reuse is introduced in the software product line. First, the second level of client

requirement engineering undertaken at the beginning of the agile framework detects any variation from

the established domain requirements. Once detected, it is immediately addressed and incorporated in

the design phase. During the sprint cycle of the agile framework, any variation that might have mistakenly filtered in is also corrected before the identification and subsequent integration of relevant reusable components. Through this process, varying client requirements are promptly addressed and software with an acceptable level of quality is produced.

5.2 Domain specific software quality attributes

RQ2: What does software quality mean to a service provider?

As indicated in table 2 and figure 2 above, respondents specifically identified four major software quality attributes that service providers use to measure software quality: usability, reliability, maintainability and reusability. Most importantly, the software must provide an acceptable level of service quality to their respective clients, who are within the financial domain.

Figure 4: Expanded SDLC for embedded software reusable components in agile framework (client's requirements feed planning and architectural design; a sprint cycle of select, build/test, review and assessment draws on a reusable software component repository; components are assembled and integrated, with missing components developed, to produce the application software used by the process team)

Thus, during the development process, these specific quality attributes are identified and incorporated into the

software.

This entails that different users or organisations have different software quality measurement attributes. [32] were of the view that there is no universal set of quality concepts and measurements, because different organisations or systems will require different attributes and measurements. In this case, the attributes considered very important, owing to the organisations' specific needs (the services they deliver to their clients), are usability, reliability and maintainability. Although safety and reusability are also mentioned by some respondents, the combined five attributes are in line with the ISO/IEC 25010 standard and its characteristics, but not all are relevant within this financial domain. The quality of service delivered to clients depends on the software quality, which in turn depends on the software development paradigm. This implies that the SDLC directly affects both the quality of the software and the quality of the processed services delivered to clients.

5.3 Organisational model

RQ3: Does the organisational model play any role in determining the quality of the application

software?

In answering this question, the research findings indicate the significant role of organisational modelling in ensuring that quality services are delivered to clients. This is achieved through the integration of IT into the BPO service provider organisation to form a BPO-IT organisational model. Through this model, the organisation initially establishes a dedicated software development team purposely to develop application software for the service provider. In table 1, with the exception of BB4, all the service providers implemented BPO-IT in order to determine the quality of services delivered to clients. The organisation BB4 therefore depends on vendors or IT companies to provide the application software required to process client services. This integration ensures that an acceptable level of service quality is delivered to clients through the provision of quality application software used in processing client services.

VI. CONCLUSIONS AND FUTURE WORK

In the course of this study, it was found that successful business process service providers specifically indicated that software quality means delivering quality services to their numerous clients. The only way this level of service quality can be achieved is to develop the software in-house through the establishment of a software product line as a development platform. Through this process, the development team embeds reusable software components into an agile framework as the chosen development paradigm. Software quality is enhanced through the reused components, the agile framework and the software product line. [2] asserts that an agile framework, with its numerous advantages, provides the platform for the use of reusable components in order to quickly develop reliable software for service processing.

Firstly, the presence of tested and retested components available in the repository during the

development cycle ensures that the end product produces the level of quality that would provide quality

service to clients. Through these components there is a significant reduction in defect density, because these are trusted components that have been used many times. As shown in Figure 4 above, at the time

of build/test within a sprint cycle of agile, the developers would usually search for the required

components in the repository.

Secondly, the agile framework, with its test-driven approach, ensures that the level of software quality is significantly improved by producing software with fewer defects. The iterative and incremental principle found in agile makes it easier to get feedback from customers and users, who are also within the service provider organisation. Thirdly, the introduction of the software product line ensures that the software is developed quickly to process different client requirements. The element of agility ensures that problems of variability arising from varying client requirements are detected early and controlled.

This whole process of in-house software development guarantees full involvement of all the stakeholders, especially the users. Clients' requirements are recognised and built in during the development process, and users are involved at every stage of development. Issues arising from maintainability are also treated in-house. The organisational model where application software is


developed in-house is a key component of achieving an acceptable level of software quality. It ensures that the required level of software quality is produced, which translates to the delivery of the expected quality of service to clients. Clients' requirements are delivered on time and the service provider's competitiveness is significantly enhanced.

Future work could be in the area of developing a central repository of reusable software components for a specific domain. Through this repository, service providers within the same domain, like the financial sector, could quickly obtain required components and develop application software for processing client services.

ACKNOWLEDGEMENT

The authors gratefully acknowledge the full funding of this study by the National Information

Technology Development Agency (NITDA) of Nigeria.

REFERENCES

[1] Javalgi, R. R. G., Joseph, W. B., Granot, E., and Gross, A. C. (2013). Strategies for sustaining the edge in

offshore outsourcing of services: the case of India. Journal of Business & Industrial Marketing, 28(6):475–

486.

[2] Ikerionwu, C., Edwin, G., and Foley, R. (2013). Embedded software reusable components in agile framework:

The puzzle link between an outsourcing client and a service provider. In

Quality Comes of Age, SQM XXI, pages 63–78. BCS, London

[3] Willcocks, L. (2010). The next step for the CEO: Moving it-enabled services outsourcing to the strategic

agenda. Strategic Outsourcing: An International Journal, 3(1):62–66.

[4] Allen, V., Barrow, B. and Salmon, J. (2012). Did one high-tech worker bring RBS to its knees? Daily Mail, 28th June. http://www.dailymail.co.uk/news/article-2165202/Did-one-high-tech-worker-bring-RBS-knees-Junior-technician-blamed-meltdown-froze-millions-accounts.html

[5] Kitchenham, B. A., Pfleeger, S. L., Pickard, L. M., Jones, P. W., Hoaglin, D. C., El Emam, K., and Rosenberg,

J. (2002). Preliminary guidelines for empirical research in software engineering. Software Engineering,

IEEE Transactions on, 28(8):721–734.

[6] Krishnan, M. S., Kriebel, C. H., Kekre, S., and Mukhopadhyay, T. (2000). An empirical analysis of

productivity and quality in software products. Management science, 46(6):745–759.

[7] ISO/IEC25010. (2010). Systems and software engineering — systems and software quality requirements and

evaluation (square) — system and software quality models.

http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=35733

[8] Sommerville, I. (2011). Software Engineering. Pearson Education, UK, 9th edition.

[9] Boehm, B. W., Brown, J. R., Kaspar, H., Lipow, M., MacLeod, G. J., and Merrit, M. J. (1978). Characteristics

of software quality, volume 1. North-Holland Publishing Company.

[10] Beck, K., Beedle, M., Van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., Grenning, J.,

Highsmith, J., Hunt, A., Jeffries, R., et al. (2001). Manifesto for agile software development.

[11] Dybå, T. and Dingsøyr, T. (2008). Empirical studies of agile software development: A systematic review.

Information and software technology, 50(9):833–859.

[12] Erickson, J., Lyytinen, K., and Siau, K. (2005). Agile modeling, agile software development, and extreme

programming: the state of research. Journal of Database Management (JDM), 16(4):88–100.

[13] Kumar, K., Gupta, P., and Upadhyay, D. (2011). Change-oriented adaptive software engineering by using

agile methodology: In Electronics Computer Technology (ICECT), 2011 3rd International Conference on,

volume 5, pages 11–14. IEEE.

[14] Chow, T. and Cao, D.-B. (2008). A survey study of critical success factors in agile software projects. Journal

of Systems and Software, 81(6):961–971.

[15] Onwuegbuzie, A. J. and Johnson, R. B. (2006). The validity issue in mixed research. Research in the Schools, 13(1):48–63.
[16] Themistocleous, M., Irani, Z., and Love, P. E. (2004). Evaluating the integration of supply chain information systems: A case study. European Journal of Operational Research, 159(2):393–405.

[17] Sjoberg, D. I., Dyba, T., and Jorgensen, M. (2007). The future of empirical methods in software engineering

research. In Future of Software Engineering, 2007. FOSE’07, pages 358–378. IEEE.

[18] Easterbrook, S., Singer, J., Storey, M.-A., and Damian, D. (2008). Selecting empirical methods for software

engineering research. In Guide to advanced empirical software engineering, pages 285–311. Springer.

[19] Johnson, R. B., Onwuegbuzie, A. J., and Turner, L. A. (2007). Toward a definition of mixed methods

research. Journal of mixed methods research, 1(2):112–133.


[20b] Onwuegbuzie, A. J. and Leech, N. L. (2004). Enhancing the interpretation of “significant” findings: The

role of mixed methods research. The Qualitative Report, 9(4):770–792.

[21] Malterud, K. (2001). Qualitative research: standards, challenges, and guidelines. The lancet, 358(9280):483–

488.

[22] Creswell, J. W. (2013). Research design: Qualitative, quantitative, and mixed methods approaches. Sage.

[23] Guest, G., MacQueen, K. M., and Namey, E. E. (2011). Applied thematic analysis. Sage.

[24] Braun, V. and Clarke, V. (2006). Using thematic analysis in psychology. Qualitative research in psychology,

3(2):77–101.

[25] Gibson, W. and Brown, A. (2009). Working with qualitative data. Sage

[26] Northrop, L., Clements, P., Bachmann, F., Bergey, J., Chastek, G., Cohen, S., Donohoe, P., Jones, L., Krut,

R., Little, R., et al. (2007). A framework for software product line practice, version 5.0. SEI, 2007. http://www.sei.cmu.edu/productlines/index.html

[27] Baruch, Y. and Holtom, B. C. (2008). Survey response rate levels and trends in organizational research.

Human Relations, 61(8):1139–1160.

[28] Mannion, M. (2002). Using first-order logic for product line model validation. In Software Product Lines,

pages 176–187. Springer

[29] Metzger, A. and Pohl, K. (2007). Variability management in software product line engineering. In

Companion to the proceedings of the 29th International Conference on Software Engineering, pages 186–

187. IEEE Computer Society

[30] Ziadi, T. and Jézéquel, J.-M. (2006). Software product line engineering with the UML: Deriving products.

In Software Product Lines, pages 557–588. Springer

[31] Clements, P. and McGregor, J. (2012). Better, faster, cheaper: Pick any three. Business Horizons, 55(2):201–

208.

[32] Gilb, T. and Finzi, S. (1988). Principles of software engineering management, volume 4. Addison-Wesley

Reading, MA.

AUTHORS

Charles Ikerionwu is a PhD candidate in the Department of Computer, Communications and

Interactive Systems at Glasgow Caledonian University. He received his MSc (IT) in 2002

from Punjab Technical University and MCA from Indira Gandhi National Open University

in 2004, both in India. His research interests include, but are not limited to, Software Product Line

Engineering, Software Quality, Agile development and big data analytics. He is a member of

British Computer Society (MBCS) and International Council on System Engineering

(INCOSE).

Richard Foley is a Senior Lecturer in the Dept. of Computer, Communications and

Interactive Systems at Glasgow Caledonian University. His current research interests are in

Software Quality and Process Improvement. He has been a member of academic staff at the

University for 30 years and received his PhD from the same institution in 1994. He can be

contacted on [email protected].

Edwin M Gray is Senior Lecturer in the School of Engineering and Built Environment at

Glasgow Caledonian University, United Kingdom. He is a national and international

consultant and lecturer in software engineering. Since 1981, Gray has given seminars on

software engineering to a worldwide audience of engineers and managers in Europe, Asia,

Africa and the Americas. His research interests are in software engineering, particularly

software quality and configuration management systems. Gray has 33 years’ experience in

applied research in the area of Software Engineering and is recognised nationally for his work

in this area and has a long international standing in software engineering, especially software quality.

His specific research expertise lies in the area of software quality and software process improvement and most

recently in the use of agile methods to support these processes. He has been directly involved with a number of

organisations responsible for developments and advancements in this area over the past 33 years, e.g. the Software

Engineering Institute (SEI), British Computer Society (BCS). He is also the author of several books and over 35

papers on systems analysis and software engineering.


PRODUCE LOW-PASS AND HIGH-PASS IMAGE FILTER IN

JAVA

Omeed Kamal Khorsheed
School of Engineering, Dept. of Software Engineering, Koya University, Erbil, Iraq

ABSTRACT
Image processing is a set of computer procedures that convert an image to digital data and perform many operations on the converted data, either to enhance the image details or to extract some information from it. Low-pass and high-pass are the two best-known filters in image enhancement processing. In this paper we review and compare these two types of image filtering algorithm, the low-pass filter and the high-pass filter, and show how to implement them using Java. Low-pass and high-pass are contrasting image filters with the same execution steps but opposite results: a low-pass filter produces a Gaussian-smoothed, blurred image, while a high-pass filter increases the contrast between bright and dark pixels to produce a sharpened image.

KEYWORDS: High-Pass, Low-Pass, Image Processing, Pixel, Gaussian, Convolution Kernel, Smoothing,

Blur, Sharpening, Java.

I. INTRODUCTION

In Image processing and filtering the most important technique is convolution, convolution is an

expression which means to convolve digital Image original information with convolution kernel as

dimensional convolution. [1]

When we take a digital photo with camera or scanner, often images will be noisy. Noise always

changes rapidly from pixel to pixel because each pixel generates its own independent noise. So we

need to improve the image detail, this improve require many image filter effects [2], Filtering effects

can be used to:

Sharpen an image.
Blur or smooth an image.
Reduce image noise.
Detect and enhance edges.
Alter the contrast of the image.
Alter the brightness of the image.

In this paper we discuss two types of image filter, the low-pass filter and the high-pass filter; the two filters use the same convolution technique with different convolution kernels.

II. SURVEY OF LOW-PASS FILTER

The low-pass filter is known as the (smoothing or blurring) filter; it is used for image smoothing and noise reduction. Its effect is to calculate the average of a pixel and all of its eight neighbors [3]. Low-pass filtering is a convolution that attenuates the high frequencies of an image while allowing the low frequencies to pass [4]. High frequencies introduce noise that decreases image quality; the low-pass filter smooths the image and reduces the noise [2].

2.1 Low-Pass Types

1. Ideal low-pass filters (ILPF)


ILPF is the simplest low-pass filter: it removes all frequencies higher than the cut-off frequency (a specified distance) and leaves smaller frequencies unchanged.
The ILPF transfer response equation is:

H(u, v) = 1 if D(u, v) <= D0, and H(u, v) = 0 if D(u, v) > D0

with D(u, v) = sqrt((u - M/2)^2 + (v - N/2)^2), where:

M is the number of rows in the image
N is the number of columns in the image
(M/2, N/2) is the center of the frequency rectangle
D(u, v) is the distance of the transform point (u, v) from the center
D0 is the cut-off frequency

ILPF frequency curve:

Figure 1: Ideal low-pass filters frequency

The ILPF transfer response places one inside the cut-off radius and zero outside; in this case we get a blurred, smoothed image, while at the same time the image edge content is reduced.

2. Butterworth low-pass filters (BLPF)

BLPF keeps frequencies inside the cut-off radius and attenuates values outside it. The transfer equation of order n with cut-off frequency D0 is defined as:

H(u, v) = 1 / (1 + [D(u, v) / D0]^(2n))

Where:
M is the number of rows in the image
N is the number of columns in the image
(M/2, N/2) is the center of the frequency rectangle
D(u, v) is the distance of the transform point (u, v) from the center
n is the transfer order

BLPF frequency curve:


Figure 2: Butterworth low-pass filter frequency

3. Gaussian low-pass filters (GLPF)

GLPF is highly effective in blurring and removing Gaussian noise from the image. The transfer equation of the GLPF is defined as:

H(u, v) = e^(-D^2(u, v) / (2 * D0^2))

Where:
M is the number of rows in the image
N is the number of columns in the image
(M/2, N/2) is the center of the frequency rectangle
D(u, v) is the distance of the transform point (u, v) from the center
D0 is the cut-off frequency

GLPF frequency curve:

Figure 3: Gaussian low-pass filter frequency
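To make the three transfer functions above concrete, the following is a small illustrative Java sketch (the helper names are ours, not library calls) that evaluates each low-pass response at a frequency-domain point (u, v) of an M x N image with cut-off frequency d0:

// D(u, v): distance of the point (u, v) from the center of the
// frequency rectangle of an M x N image.
static double distance(int u, int v, int m, int n) {
    double du = u - m / 2.0, dv = v - n / 2.0;
    return Math.sqrt(du * du + dv * dv);
}
static double ilpf(int u, int v, int m, int n, double d0) {
    return distance(u, v, m, n) <= d0 ? 1.0 : 0.0;                        // ideal
}
static double blpf(int u, int v, int m, int n, double d0, int order) {
    return 1.0 / (1.0 + Math.pow(distance(u, v, m, n) / d0, 2.0 * order)); // Butterworth
}
static double glpf(int u, int v, int m, int n, double d0) {
    double d = distance(u, v, m, n);
    return Math.exp(-(d * d) / (2.0 * d0 * d0));                          // Gaussian
}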

III. SURVEY OF HIGH-PASS FILTER

The high-pass filter is known as the (sharpening) filter; it is used for image sharpening and emphasising image details. Like the low-pass filter, it calculates the average of a pixel and all of its eight neighbors [3], but high-pass convolution attenuates the low frequencies of an image, allowing the high frequencies to pass and replacing the low frequencies with the convolution average [4].

3.1 High-Pass Types:

1. Ideal High-pass filters (IHPF)


IHPF is the simplest high-pass filter: it removes all frequencies below the cut-off frequency and leaves higher frequencies unchanged.
The IHPF transfer response equation is:

H(u, v) = 0 if D(u, v) <= D0, and H(u, v) = 1 if D(u, v) > D0

Where:
M is the number of rows in the image
N is the number of columns in the image
(M/2, N/2) is the center of the frequency rectangle
D(u, v) is the distance of the transform point (u, v) from the center
D0 is the cut-off frequency

IHPF frequency curve:

Figure 4: Ideal high-pass filters frequency

2. Butterworth High-pass filters (BHPF)

BHPF keeps frequencies outside the cut-off radius and attenuates values inside it. The transfer equation of order n with cut-off frequency D0 is defined as:

H(u, v) = 1 / (1 + [D0 / D(u, v)]^(2n))

Where:
M is the number of rows in the image
N is the number of columns in the image
(M/2, N/2) is the center of the frequency rectangle


D(u, v) is the distance of the transform point (u, v) from the center
n is the transfer order
D0 is the cut-off frequency

BHPF frequency curve:

Figure 5: Butterworth High-pass filters frequency

3. Gaussian High-pass filters (GHPF)

GHPF has the same concept as the ideal high-pass filter, but the transition is smoother. The GHPF transfer response equation is:

H(u, v) = 1 - e^(-D^2(u, v) / (2 * D0^2))

Where:
M is the number of rows in the image
N is the number of columns in the image
(M/2, N/2) is the center of the frequency rectangle
D(u, v) is the distance of the transform point (u, v) from the center
D0 is the cut-off frequency

GHPF frequency curve:

Figure 6: Gaussian high-pass filter frequency
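Each high-pass response is the complement of its low-pass counterpart, so the illustrative helpers sketched in Section II can be reused directly (again a sketch under that assumption, not library code):

// H_hp(u, v) = 1 - H_lp(u, v) holds for the ideal, Butterworth and
// Gaussian pairs defined by the equations in these two sections.
static double ihpf(int u, int v, int m, int n, double d0) {
    return 1.0 - ilpf(u, v, m, n, d0);
}
static double bhpf(int u, int v, int m, int n, double d0, int order) {
    return 1.0 - blpf(u, v, m, n, d0, order);
}
static double ghpf(int u, int v, int m, int n, double d0) {
    return 1.0 - glpf(u, v, m, n, d0);
}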

4. Convolution Kernel Method


Convolution is a mathematical operation used by many image processing operators. It is a way of multiplying together two arrays of numbers, of the same or different sizes, to produce a third array of the same dimensionality. In image processing it can be used to implement operators whose output pixel values are simple linear combinations of certain input pixel values [5, 4, 8].
Kernel is a synonym for the matrix of numbers used in an image convolution; kernels can have matrices of different sizes to represent different convolution patterns. The kernel multiplies the neighborhood pixels to obtain the average, and each pixel is then replaced by that average.

To understand the convolution kernel, let us look at this example. We have a 3*3 kernel h and f(x, y) as the original image element:

Figure 7: Convolution Kernel

Now the convolution equation will be:

H[f(x, y)] = k1.f(x-1, y-1) + k2.f(x-1, y) + k3.f(x-1, y+1) +
             k4.f(x, y-1)   + k5.f(x, y)   + k6.f(x, y+1) +
             k7.f(x+1, y-1) + k8.f(x+1, y) + k9.f(x+1, y+1)

H[f(x, y)] is the convolution of the original image element (pixel). We can then obtain the pixel average by dividing H[f(x, y)] by 9.

The most general convolution average equation is:

average = ( Σi Σj m(i, j) . p(x+i, y+j) ) / N

(m is the kernel, p is the image element, i is the row, j is the column, and N is the number of kernel elements).

4.1 Convolution Algorithm Steps (a minimal Java sketch of these steps follows below):
1. Convert the image to an array.
2. Read the (x, y) value of each pixel in the array.
3. Multiply each (x, y) value by the appropriate weight in the kernel.
4. Sum the products ((x, y) value x weight) for the nine pixels, and divide the sum by 9 (3*3 kernel).
5. Apply the derived value to the center cell of the array.
6. Move the filter one pixel to the right and repeat the operation, pixel by pixel, line by line.
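The following is a minimal Java sketch of these steps, assuming a grayscale image held as a 2D int array (the method name and the border-clamping choice are illustrative, not taken from the paper). With a kernel of 1/9 weights the division by 9 is baked into the weights themselves:

// Convolve a grayscale image with a kernel, step by step as listed above.
public static int[][] convolve(int[][] pixels, float[][] kernel) {
    int h = pixels.length, w = pixels[0].length;
    int kh = kernel.length, kw = kernel[0].length;
    int[][] out = new int[h][w];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float sum = 0f;
            for (int i = 0; i < kh; i++) {
                for (int j = 0; j < kw; j++) {
                    // Clamp at the borders so every pixel has nine neighbors.
                    int py = Math.max(0, Math.min(h - 1, y + i - kh / 2));
                    int px = Math.max(0, Math.min(w - 1, x + j - kw / 2));
                    sum += kernel[i][j] * pixels[py][px];   // weight x pixel value
                }
            }
            // Clamp the derived value to 0..255 and store it at the center cell.
            out[y][x] = Math.max(0, Math.min(255, Math.round(sum)));
        }
    }
    return out;
}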

4.2 Convolution Kernel Values:

The kernel values are not fixed; we can change them depending on the purpose of the convolution. In fact, we can use the same kernel shape either to smooth or to sharpen an image [8].
For example, this kernel detects isolated points against the background:


Table 1: detect isolated Kernel

-1 -1 -1

-1 8 -1

-1 -1 -1

And this kernel is for smoothing an image:

Table 2: smoothing Kernel

1 1 1

1 1 1

1 1 1

And this kernel is for harshly smoothing an image:
Table 3: harshly smooth Kernel

1/9 1/9 1/9

1/9 1/9 1/9

1/9 1/9 1/9

If we want a subtle smoothing effect we can use this kernel:
Table 4: subtle smoothing Kernel

0 1/8 0

1/8 1/2 1/8

0 1/8 0

Briefly, by choosing a different kernel we can create a filter that smooths enough noise without blurring the image too much. If any one of the pixels in the neighborhood has a noisy value, this noise will be smeared over nine pixels and we will get a blurred image; therefore we must use a median convolution kernel.

5. Convolution Implementation in Java:

5.1 Java Kernel Class

The Java Kernel class defines a matrix that describes how a spatial domain pixel and its neighborhood

pixels affect the average value computed for the pixel's position in the output image of a filtering

operation. The X origin and Y origin indicate the kernel matrix element that corresponds to the pixel

position for which an output value is being computed [6, 4].

5.1.1 Kernel Constructors:
Kernel(int width, int height, float[] data)
Where:
width is the convolution kernel width
height is the convolution kernel height
data is the convolution kernel matrix of float numbers
* To use the image kernel we must import java.awt.image.Kernel.

5.2 Java ConvolveOp Class
The Java ConvolveOp class executes the image convolution from the original image to the filtered output image. Convolution applies a convolution kernel in a spatial-domain operation that computes the average value for each output pixel: in repeated steps an input pixel and its neighbors are multiplied by the kernel matrix, so each output pixel is mathematically affected by its immediate neighborhood according to the kernel values.
The ConvolveOp class operates on a BufferedImage object that presents the image pixel data. Pixel data consists of the RGB color components together with an alpha component. If the source image has an alpha component and the color components are not pre-multiplied with it, the data are pre-multiplied before being convolved. If the destination has color components which are not pre-multiplied, alpha is divided out before storing into the destination (if alpha is 0, the color


components are set to 0). If the destination has no alpha component, the resulting alpha is discarded after first dividing it out of the color components [7, 4].

5.2.1 filter method

The filter method performs a convolution on BufferedImages. Each component of the source image will be convolved (including the alpha component, if present). If the color model in the source image is not the same as that in the destination image, the pixels will be converted in the destination. If the destination image is null, a BufferedImage will be created with the source ColorModel. An IllegalArgumentException may be thrown if the source is the same as the destination [7, 4].

5.3 Convolution Coding Example

In this example we will work on a noisy, low-quality image like the following baby image:

Figure 8: noisy low quality image

Now we will create the convolution kernel to smooth the baby image:

Table 5: smoothing Kernel

1/9 1/9 1/9

1/9 1/9 1/9

1/9 1/9 1/9

float val = 1f / 9f;
float[] data = { val, val, val, val, val, val, val, val, val };
Kernel kernel = new Kernel(3, 3, data);

Now we can load the original image into a BufferedImage like this:

BufferedImage buff_original;

buff_original = ImageIO.read(new File("Baby.jpg"));

Now we use the BufferedImageOp interface and create the ConvolveOp object which holds the convolution kernel:

BufferedImageOp ConOp = new ConvolveOp(kernel);
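By default this constructor zero-fills the edge pixels of the destination. If darkened borders are undesirable, ConvolveOp also accepts an explicit edge condition, shown here as an illustrative alternative:

// Copy the border pixels unchanged instead of zero-filling them.
BufferedImageOp edgeSafeOp = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);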

The last step is to call the filter method:

buff_original = ConOp.filter(buff_original, null);

The convolution result will be:


Figure 9: Convolution smooth image

If we change the convolution kernel to detect isolated points from the background, like this kernel:

Table 6: detect isolated Kernel

-1 -1 -1

-1 8 -1

-1 -1 -1

We will get this result

Figure 10: Convolution Detect isolated point image
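The same pipeline can produce a sharpening (high-pass) effect simply by swapping in a different kernel. The following kernel is a classic 3*3 sharpening matrix, given here as an illustrative example (it is not one of the kernels listed above):

// Hedged example: a common 3x3 sharpening kernel; the weights sum to 1,
// so overall brightness is preserved while edges are emphasised.
float[] sharpen = { 0f, -1f, 0f, -1f, 5f, -1f, 0f, -1f, 0f };
BufferedImageOp sharpenOp = new ConvolveOp(new Kernel(3, 3, sharpen));
BufferedImage sharpened = sharpenOp.filter(buff_original, null);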

5.4 Convolution Test Class:

import java.awt.FlowLayout;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import java.awt.image.BufferedImage;
import java.awt.image.BufferedImageOp;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;

public class Convolution {
    public static void main(String[] argv) {
        try {
            // Load the original image.
            BufferedImage buff_original = ImageIO.read(new File("Baby.jpg"));

            // Build the 3x3 smoothing kernel (each weight is 1/9).
            float val = 1f / 9f;
            float[] data = { val, val, val, val, val, val, val, val, val };
            Kernel kernel = new Kernel(3, 3, data);

            // Convolve the image with the kernel.
            BufferedImageOp conOp = new ConvolveOp(kernel);
            buff_original = conOp.filter(buff_original, null);

            // Display the filtered image in a simple Swing frame.
            JPanel content = new JPanel();
            content.setLayout(new FlowLayout());
            content.add(new JLabel(new ImageIcon(buff_original)));

            JFrame f = new JFrame("Convolution Image");
            f.addWindowListener(new WindowAdapter() {
                public void windowClosing(WindowEvent e) {
                    System.exit(0);
                }
            });
            f.setContentPane(content);
            f.pack();
            f.setVisible(true);
        } catch (IOException e) {
            e.printStackTrace();   // report a failed image load instead of ignoring it
        }
    }
} // end class

IV. CONCLUSION

In this paper, we discussed low-pass and high-pass image filtering, and then explained the convolution kernel and how to use Java utilities to implement the convolution technique in an easy way. With the convolution operation we can produce both filters (low and high) with the same steps, but there is a substantial difference in the kernel: each type of filter, and each of its sub-types, needs different kernel data. In fact, even within the same type of filter we need various kernel data to suit the frequency-domain content of the original image; in other words, the kernel must be easily adjustable to fit the filtering purpose for the image at hand. Smoothing an image without a low-pass filter, or sharpening an image without a high-pass filter, is not easy. Fortunately, Java provides image filtering utilities which can smooth or sharpen an image in a very simple way: all that is required from us is to create the kernel matrix and let the Java ConvolveOp complete the complex convolution looping over the image and replace each pixel with the average.

V. FUTURE WORK

For future work we will implement the Java convolution technique in a tangible application with a user-friendly interface which helps the user to suggest and modify the kernel array data easily to fit


the filtering purpose; with this application the user can select the original image from his PC and save the filtered image. The next research will be about image encryption and decryption algorithms.

REFERENCES

[1] Smith, S. W., The Scientist and Engineer's Guide to Digital Signal Processing.
[2] Smith, R. B. (2012). Filtering Images. MicroImages, Inc., 1997-2012.
[3] Image Processing - Laboratory 9: Image filtering in the spatial and frequency domains. Technical University of Cluj-Napoca.
[4] Hunt, K., The Art of Digital Image Processing.
[5] Fisher, R., Perkins, S., Walker, A., and Wolfart, E. (2004). The Hypermedia Image Processing Reference. http://homepages.inf.ed.ac.uk/rbf/HIPR2/hipr_top.htm
[6] Java SE Documentation, java.awt.image.Kernel. http://docs.oracle.com/javase/7/docs/api/java/awt/image/Kernel.html
[7] Java SE Documentation, Class ConvolveOp. http://docs.oracle.com/javase/7/docs/api/java/awt/image/ConvolveOp.html
[8] Russ, J. C. (1998). The Image Processing Handbook, Third Edition.

AUTHORS BIOGRAPHY

O. K. Khorsheed was born in Baghdad in 1974. He received a BSc in computer science from Al-Mustansiriya University, a higher diploma from the Informatics Institute of Higher Studies, and a Master's degree from the Arab Academy for Banking and Financial Sciences, Jordan, school of IT, computer information systems. He has been a lecturer at Koya University since 2005 in the School of Engineering, Software Engineering department.


THE COMPARATIVE ANALYSIS OF SOCIAL NETWORK IN

INTERNATIONAL AND LOCAL CORPORATE BUSINESS

Mohmed Y. Mohmed AL-SABAAWI
College of Administration & Economics, University of Mosul, Iraq

ABSTRACT
Social networks are used by various corporate companies all over the world as tools for enhancing their marketing strategy through online communication. The range of application of social networks by different corporate organisations is wide, but common uses are for management to communicate effectively with employees and customers, and to keep an eye on the competition. The main objective of this research is to compare the utilisation of social network tools between international and local corporate businesses, with a view to identifying how to improve the latter. In this paper we conducted a comparative study on how selected international and local corporate companies use social networks. Specific items compared are the social network tools used and the market strategy adopted by the different companies. The study identified potential benefits of using social networks for corporate businesses, and established that the international companies make more extensive use of diverse social networks than the local companies. This study is among the few of its kind that tries to make a comparison between well-established international and emerging local corporate businesses. The implication of this study is that the local companies can benefit by adopting some of the strategies used by the international corporate companies.

KEYWORDS: social network; corporate business; comparative analysis.

I. INTRODUCTION

People share similar social network groups. Career interests, social interests, religious subdivisions, common friends and shared beliefs are among the typical bonds which members of a community share and live with. Social networks facilitate links among different people with similar interests, enabling them to become friends. Social networks have been studied by many scholars in various fields and across a wide variety of topics, such as privacy and identity, and the social capital of communities.

Social networks are used mostly by young people; adolescents use them mainly to link up with friends. Apart from making contact with friends, social networks provide links between network-makers, business owners and employees. The most popular online social network sites bring together more than 20 million users and more than 150 different crafts. A social network user can write his autobiography in the fields of education and work, and can invite friends to recommend it to others or to collaborate in starting new areas of work. These networks are therefore one of the areas in which the future of large social networks, and the major competition, will be decided.

Initially, social networks served as media linking business websites, which is considered one of the ultimate search engine optimization techniques. Most social network sites have now made some adjustments so that links enhance ranking; as a result, an increase in web traffic can occur.

Current social networks are predominantly constituted by users who might otherwise never meet face-to-face, interacting on mostly online networks such as hi5, Netlog, MySpace, LinkedIn, Facebook and others. With the increased use of enhanced mobile phone technology and popular mobile phones, Twitter can also be considered a social network. This has the advantage of allowing users to find out what their friends and relatives are doing at a certain time of the day. The social networks are virtually free for


everyone to join. This has made them popular among other networks. Nowadays, senior executives in many parts of the world are considering and implementing strategies to create business

value for their organisations through the use of social networking tools. It is evident that very large

organizations are at various stages of considering and grappling with issues related to the use of

online social networking. While some companies are deliberating on whether to restrict external

social networks, others are proactively exploring the tools. They are adopting these tools for

applications such as project collaboration, recruitment, learning and development, and other business

applications. In this paper we compare the use of social networks between selected international and local corporate businesses. Specific items compared are the social network tools used and the market strategy adopted by the different companies. The study identified potential benefits of using social networks for corporate businesses. We limited our research to four international companies (Coca-Cola, Ford, IBM and Sears, selected manufacturing and services companies) and five local companies in Malaysia (Proton, Celcom, TM, J-Biotech and Inter-Pacific Securities). The research was conducted based on six online social networks: Facebook, LinkedIn, MySpace, Twitter, Flickr and YouTube.

II. LITERATURE REVIEW

Definition of Social Network

Social networking is defined as the bringing together of individuals into specific groups, often like a small community or a neighborhood. Although social networking is possible in person, especially in schools or in the workplace, it is most popular on the internet. This is because, unlike most high schools, colleges, or workplaces, the internet is filled with billions of individuals who are looking to meet other internet users and develop friendships (Directory Submissions, 2009).

Social network has also been defined as "individuals or groups linked by some common bond, shared

social status, similar or shared functions, or geographic or cultural connection. Social networks form

and discontinue on an ad hoc basis depending on specific need and interest." (Barker, 1999).

Web-based social networking occurs through a variety of websites that allow users to share content, interact and develop communities around similar interests. Examples include websites such as Facebook and LinkedIn (Susan Gunelius, 2011).

Social Networking Sites Growth in Marketing

As the use of social networks continues to expand, marketers view this medium as a potential tool for marketing communications. The number of people visiting networking Web sites keeps increasing. A survey shows that social networking Web sites have witnessed a substantial increase in the number of visitors (Marshall Kirkpatrick, 2006).

As more time is being spent on social networking Web sites, many companies are now giving much attention to marketing on these social media. In a survey, Suzanne Vranica (2007) predicted that between 2007 and 2011, spending on social networks in the U.S. would grow by 180%, from $900 million to $2.5 billion. Danah (2007) stated that there are over a hundred social network sites, with various technological affordances, supporting a wide range of interests and practices. While their principal technological features are fairly consistent, different cultures emerge around social network sites. Most sites support the maintenance of pre-existing social networks, but others help new users to connect based on shared interests, political views, or activities. Krista (2009) itemized some of the key social networking sites. However, because this is such a dynamic field, there are bound to be new social networks evolving, merging or vanishing all the time. Trust is one of the key attributes expected of every social networking site.

Trust and Privacy in Social Network Sites

Mayer, Davis, and Schoorman (1995) defined trust as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party".

Electronic commerce research has established that there is a strong relationship between trust and

information disclosure (Metzger, 2004).


Social Network Sites at Work

Social network sites used at work fall into two categories. The first category comprises general social network sites for which registration is open to members of the public; an example is Facebook. The second category is the enterprise social network site that is internal to a particular corporate organisation and therefore only accessible to its employees, e.g., IBM Beehive (DiMicco et al., 2008).

Social Network Services

Boyd & Ellison (2007) defined social network services as "internet- or mobile-based social spaces designed to facilitate communication, collaboration, and content sharing across networks of contacts."

Online Social Networking

Christopher (2008) asserted that "Online social networking has been around in various forms for over a decade, and has continued to be widely noticed in recent years". Online social networks take various forms, and are created for several purposes.

Online groups or online study clubs provide a way to share knowledge, meet other dentists with similar interests, access a calendar of events, share files, and upload pictures and videos. For dentists, this means we can now practice solo but have the benefit of knowledge sharing and connectivity to a specific group that is of interest to us (Dan Marut, 2009).

Tools for Online Corporate Business

The sharing of information and professional work is important for the communication and productivity of an organisation. Using some tools on the internet, employees can create, share and edit work files (e.g. calendars, Word files, Excel) in real time. These web applications are not difficult to use. They enable people to improve internal communication within an organisation (Kioskea.net, 2009).

According to Faith (2009), the best features of using social networking to promote a business are its free and easy business-building tools. Most social networking sites are free to register with. This may be important for a business that is working with a limited advertising budget. They are also suitable for businesses without an internet specialist or marketing expert, who can use them to build customer and/or supplier contact lists. It is worthwhile for any business that is not yet using social networking to explore it and promote their enterprise.

Social Network Tools for Online Corporate Business

There are several online social network tools, more than this paper can cover. The most common social networking tools adopted for business applications include the following: LinkedIn, Facebook, MySpace, Twitter, Flickr and YouTube.

i. Facebook
Nicole S. (2008) stated that "Facebook is structured around linked personal profiles based on geographic, educational, or corporate networks. Profiles of members can show a range of personal information, including favorite books, films, and music; e-mail and street addresses; phone numbers; education and employment histories; relationship status (including a link to the profile of the person with whom one is involved); political views; and religion. According to Facebook, an average of 20 minutes per day is spent by members once they log in. Tasks performed by members include linking to friends' profiles, uploading and "tagging" (or labeling) friends in photos, creating and joining groups, posting events, website links, and videos, sending messages, and writing public notes for each other". Ads can be posted in either available or customized format. The marketplace is available for all Facebook users and is currently free (Facebook Adds, 2007).

Facebook has a variety of resources to use. One can take the approach of learning it step by step: create an interesting profile, start engaging with friends, make frequent updates and work toward building the profile. It also permits creating one's own sub-groups within the community, where one can promote events to a specified inner circle of entrepreneurs and professionals (Pagiamtzis, 2009).


ii. LinkedIn
This is a business-oriented social networking site, created in December 2002 and operational since May 2003, mainly used for professional networking. The network consists of more than 43 million registered users across 170 industries. The goal of the site is to allow registered users to maintain a list of contact details of persons they know and trust in business; the persons on the list are referred to as Connections. The network permits users to invite anyone (whether a site user or not) to become a connection. "LinkedIn takes your personal business network online and gives you access to people, jobs and opportunities like never before" (Sonja, 2008). It is mainly designed for and targeted at professionals. Because of this, most of the users are above the age of 24; many have been in employment for a relatively long period and are mostly well-educated.

iii. Twitter
This is a Web 2.0 micro-blogging tool that asks users from the general public to answer a simple question: "What are you doing?" The web-based software has been described as a combination of blogging and instant messaging. It encourages brevity by limiting postings to 140 characters (Steven, 2008). Twitter is a deceptively simple system in which users create an account and post messages, or "tweets," of not more than 140 characters, which can be viewed by a network of associates.

Twitter was created in March 2006. While Twitter does not publish statistics on usage, Forrester Research estimated that there were about 5 million Twitter users in March 2009. It is one of the recent communication tools paralleling growing social and cultural changes (in most parts of the world) toward greater transparency and democratization. These changes are also emerging and becoming visible within individual businesses and non-profit organizations.

Tweeting is not a difficult task. One can follow or be followed by others on the social networking journey. People from various backgrounds can be found tweeting on a daily basis about their happenings (Pagiamtzis, 2009). Pagiamtzis also stated that "Recently many corporations have used Twitter to keep their customers informed on company news and product and service launches". One can also post links to other networking sites that have posted information for followers to learn more about him in the social networking arena.

iv. Flickr

Flickr is a web site owned by Yahoo! that offers photo sharing and a host of related services for users

of any level, from beginners to professionals. It is tremendously easy to use, and when used with

friends or associates it can be great fun.

According to crunchbase.com (2010), Flickr.com offers users a number of useful features.

Unlike many photo sharing sites, no fee is required to start loading and sharing pictures. Users have

the option to make their pictures private or public. However, around 80% of the pictures on Flickr are

public. Other features of Flickr include maps to track where and when photos were taken, tagging and

commenting on pictures, and the ability to create postcards and other products.

Flickr is absolutely the most exciting place for pictures of cute puppies, slightly out-of-focus sunsets and wedding cakes. It is not well known as a business hub, but there are more business-oriented ways to use Flickr, for example through Perl. A common business use for Flickr has been to show a product. This works well enough for companies with a physical product to sell, especially one that benefits from visual exposure (Teodor Zlatanov, 2009). This is achieved by creating a group, inviting fans to join, and sharing photos.

v. YouTube
YouTube is a video sharing website on which users can upload and share videos. Everyone can watch videos on YouTube. People can see first-hand accounts of current events, find videos about their hobbies and interests, and discover the quirky and unusual. As more people capture special moments on video, YouTube is empowering them to become the broadcasters of tomorrow (www.crunchbase.com, 2010). Thus, YouTube can be of great value in the business setting, as it could help facilitate a new mode of business communication.

Executives at many companies provide regular presentations to department groups or even the entire

workforce, to discuss financial performance, share information, recognize outstanding workers and so

on. While the bulk of the audience may be at headquarters, many others may be in remote offices. A


lot of workers may be absent on the day of the presentation or unable to attend for other reasons. By videotaping the highlights of the presentation and putting them on YouTube, the presentation can be viewed by everyone who is authorized to do so (Ramel, 2007).

Moreover, Ramel (2007) mentioned that a picture is worth a thousand words, and thousands of pictures streamed together at 29.97 frames per second are worth a lot more. YouTube can be used to show off shiny new hardware to prospective customers, or to demonstrate new software and provide product data, statistics or any other information.

III. SOCIAL NETWORK STRATEGY

Having a good social networking strategy is helpful for a corporate organisation that wants to become successful in its business and, consequently, to be able to develop truly complex business strategies using online social networking. It is very important to develop a real, verifiable presence on all the key organization networking sites and all the major business networking sites. A lot of people spend so much time ego-networking, trying to become known by everyone, that they rarely leverage their position and do not get to know anyone well enough to actually transact any business.

According to Pagiamtzis (2009), there have been many webinars and websites discussing the value-added reasons for combining communications as a key social strategy for an online presence. He further offered practical advice on how to use social networking as a key tool to build organizational resources and promote expertise or marketing ideas in an easy and effortless way.

The bottom line is that most people have experienced social networking sites like LinkedIn and MySpace and understand the value of having specific groups of people with similar interests (in this case, how others may be using the software) communicate with each other. The business relationship expert Bob (2008) stated that "If these social networking concepts are not on your radar, you are ignoring a dynamic trend that could have a profound impact on key areas of your business such as profitable revenue growth, talent acquisition and development, and operational efficiency and effectiveness".

Benefits of Social Network Tools for Corporate Businesses
One of the many ways to achieve industrial competitiveness is to manage and share efficiently the knowledge built inside an organization. In this context, social networks have shown signs of being an efficient tool to proliferate individual and explicit knowledge. They can also improve tacit knowledge dissemination, helping to capture organizational knowledge based on the knowledge of each of its employees (Ricardo, 2009).

A company that has a good early warning system would not miss opportunities or fail to meet

challenges quickly enough. Clueless organisations in many industries were surprised when PCs turned

out to be a big thing, because they had no way of absorbing that knowledge systemically, through

their own employees. Even though some people inside the organization undoubtedly knew the shape

of the future and were talking about it, they had no way to get it to decision-makers.

According to an executive briefing from Social Networking for Businesses & Associations (2009), the top ten ways businesses, associations and organizations can use social networking are highlighted. These include:

I. Customer and Member Relationship Development

Customer satisfaction is at an all-time low, perhaps as a result of reduced business focus on actual relationships and an increased focus on "customer relationship management" systems emphasizing management of data rather than personal connections. Online social networks allow a prospective customer or prospective member to easily make a real, human-level connection with individuals within an organisation. This enables genuine business relationships to form and puts an authentic human face on the interaction, changing the external perception of an organization from a sterile, faceless behemoth into a collection of individuals who are ready to help.

II. Customer support (connecting the customer with the right resource)

Successful customer support achieves a number of goals. Basic customer service includes, of course,

assisting customers when they have problems or questions about an organization’s products.

However, online networks enable exceptional customer support that goes beyond the basics, allowing

customers to connect with experts in an organization who have deep knowledge in a particular area.

III. Provide the “whole product”


By creating a strong network of complementary providers with similar philosophies and business

practices, a single service provider can provide a much greater value proposition to a prospective

customer than an individual working without the benefit of the network.

IV. Supercharge Meeting Facilitation and Preparation

The unfortunate part of meetings and conferences is that it always seems that one does not connect with the people he really wants to meet until the final day of the event, when he meets them randomly in the buffet line. A dedicated online social network created before the event enables attendees to use their time at the event more efficiently, by determining with whom they want to connect before leaving home.

V. Increasing the Value and Extending the "Shelf Life" of Conferences

Similar to the above point, creating an online social network of event attendees extends the “shelf

life” of a conference, enabling the attendees to remain connected and take action on the items

discussed at the event. This can evolve a meeting, event or conference from a “one time” occurrence

into the catalyst of a community that more effectively achieves its goals.

VI. Share knowledge

By connecting a social network with basic subscription technologies (such as RSS, or “Really Simple

Syndication”), an individual can easily “subscribe” to updates from customers and colleagues. This

enables a straightforward way to stay abreast of the goings-on in projects of interest, as well as a way

to share knowledge within an organisation without additional effort.

IV. METHODOLOGY

Four international manufacturing and services companies were considered in the study: Coca-Cola, Ford, IBM and Sears. Five manufacturing and services companies in Malaysia were targeted for the research: Celcom, Proton, J-Biotech, TM and Inter-Pacific Securities. The selection of manufacturing and services companies was done in such a way that it would cut across major corporate companies that make use of social platforms in their business activities. For the international companies, the study was conducted by accessing information regarding their use of social networks as evidenced on their websites. For the local companies, a combination of questionnaires and interviews was adopted. Priority was placed on respondents who were part of management, in order to give the researchers a broader and better understanding of the use of social networks in the companies.

V. DATA ANALYSIS AND FINDINGS

Social Network Tools for Corporate Businesses

The website analysis of the international corporate companies shows the kinds of social networking tools used for their business activities. Most of them use their own social networking sites to promote their business activities, apart from the known social platforms. Ultimately, the selected companies that made successful use of social networks in their business activities introduced the practice of publicity when marketing products on social platforms.

Meanwhile, the analysis of the local corporate companies, obtained from the questionnaire, shows that all of them employed only some public social platforms. Nonetheless, the local companies are now motivated to use their own social networks for business activities.

Therefore, the study found that, apart from the known social networking sites, all the international and local corporate companies under study use social platforms in promoting their products and services. The study also established that the international companies make more extensive use of diverse social networks than the local companies, as shown in Figure 1.


Figure 1. Frequency of social network tools used by the international and local companies (tools compared: Facebook, Twitter, MySpace, Lotus® Connect, Mykmant, Mysears, Flickr, LinkedIn and YouTube)


Social Network Marketing Strategy

The researchers identified and digested some social network strategies that could serve as a challenge to some corporate companies. Many companies are now committed to improving the lives of their customers by providing quality services and product solutions that earn their trust and build lifetime relationships. Ultimately, they are now strategizing to become an even bigger integral part of the online community, with new changes and developments that will likely place them as front runners of social networking and involvement. According to the research findings, these companies developed a marketing-based strategy that gives them a competitive advantage. The social network marketing strategies used by the companies are indicated in Table 1.

Table 1: Social Network Market Strategies

INTERNATIONAL CORPORATE COMPANIES
Coca-Cola: Creates games and reinforces its brand image in the target market's mind.
Ford: Creates online quiz applications, making customers challenge their colleagues about the company's products and services.
IBM: Creates demand-pull video advertising, web advertising and viral marketing.
Sears: Launched an open ID platform to directly connect users.

LOCAL CORPORATE COMPANIES
J-Biotech: Organizes online quizzes and product tags, so that customers contend with their friends.
TM: Promotes products that clients can then recommend to their friends.
Celcom: Sends out information to potential customers through the network.
Inter-Pacific Securities: Makes clients challenge their friends about the company's products.
Proton: Makes an application about the company's products available to clients.

Social Network Usage Suitable For Corporate Business

As highlighted earlier, the use of social networking for corporate business indeed has tremendous advantages. It may improve a company's performance in terms of promoting products and services. Most of the respondents were of the view that corporate companies need to be on a social platform (their own or a public one) in order to have a competitive advantage. This is a great challenge to corporate companies.

Using social networks, a company can benefit from striking ways of doing business which could be of great competitive advantage. Suitable social networks are the ones that enable corporate businesses to achieve the following:

i. Finding buyers
ii. Finding manufacturers
iii. Finding potential job candidates
iv. Creating personal friends or business connection groups
v. Hiring people
vi. Marketing products

VI. CONCLUSIONS

In this paper we compared the use of social networks between selected international and local corporate businesses in Malaysia. By connecting on a social network, individuals can easily get updates from customers and colleagues regarding a company's products and services. This enables a


direct way to stay well-informed about what is going on in a particular company. Therefore, a company can diversify the means of sharing knowledge within the organization without additional effort, especially on company research related issues. Social media, if used effectively, can lead corporate companies to find potential job candidates who are skilled social networkers. The implication of this study is that the local companies can benefit more by adopting some of the strategies used by the international corporate companies. The companies post the skills they want on social platforms, while the candidates have to be very active within the social platforms before they even come across them. This helps in getting the right potential employees needed by the companies.

ACKNOWLEDGMENT

This work was sponsored by the Research Management Center, Universiti Teknologi Malaysia. My

thanks to the anonymous referees whose comments helped considerably in the preparation of this

paper for publication.

REFERENCES

[1]. Boyd, D., & Ellison, N. (2007). Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), pp. 1-11. Retrieved August 22, 2009 from http://jcmc.indiana.edu/vol13/issue1/boyd.ellison.htm
[2]. Christopher, C. (2008). Executive briefing: Social networking for businesses & associations, pp. 1-8. Retrieved August 20, 2009 from http://haystack.cerado.com/html/haystack_directory.php
[3]. Dan, M. (2009). Social networking for dentists - made easy. Retrieved August 22, 2009 from http://www.dental-tribune.com/articles/content/id/315/scope/news/region/usa
[4]. DiMicco, J., Millen, D., Geyer, W., Dugan, C., Brownholtz, B., & Muller, M. (2008). Motivations for social networking at work. In Proceedings of the 2008 ACM Conference on Computer Supported Cooperative Work, San Diego, CA, USA, November 08-12, 2008, pp. 711-720.
[5]. David, R. (2007). YouTube for your business. Computerworld. Retrieved August 20, 2009 from http://www.pcworld.com/article/133278/youtube_for_your_business.html
[6]. Emin, D., & Cüneyt, B. (2007). Web 2.0 - an editor's perspective: New media for knowledge co-creation. International Conference on Web Based Communities (2007), pp. 27-34.
[7]. Facebook Adds Marketplace of Classified Ads (2007-05-12). Retrieved August 24, 2009 from www.physorg.com/news98196557.html
[8]. Faith, D. (2009). Writing Level Star: Promote your business using social networking tools.
[9]. Pagiamtzis, J. (2009). Social networking strategy: Communication is the key. Retrieved August 12, 2009 from http://www.articlesbase.com/social-marketing-articles/social-networking-strategy-communication-is-the-key-1112082.html
[10]. Kioskea.net (2009). Corporate tools for online business. Retrieved August 24, 2009 from http://en.kioskea.net/faq/sujet-1904-corporate-tools-for-onlinebusiness
[11]. Krista (2009). 18 Using social networking sites: You are hired! CV. 2009.
[12]. Marshall Kirkpatrick (2006). "Top ten social networking sites see 47% growth," the socialsoftwareweblog. http://socialsoftware.weblogsinc.com/2006/05/17/top-10-social-networking-sites-see-47-growtw
[13]. Mayer, R., Davis, J., & Schoorman, F. (1995). "An integrative model of organizational trust," The Academy of Management Review (20)3, pp. 709-734.
[14]. Metzger, M. (2004). "Privacy, trust, and disclosure: Exploring barriers to electronic commerce," Journal of Computer-Mediated Communication (9)4.
[15]. Nicole, S. (2008). The valorization of surveillance: Towards a political economy of Facebook.
[16]. Sonja, J. (2008). How to use LinkedIn to bring in business. Retrieved August 24, 2009 from http://www.sonjajefferson.co.uk/
[17]. Steven, C. (2008). Twitter as KM technology.
[18]. Suzanne, V. (2007). Marketing Leadership Council 2008. Retrieved August 14, 2009 from www.mlc.executiveboard.com
[19]. Teodor, Z. (2009). Cultured Perl: Flickr, a business's best friend. Create charts and upload them to Flickr using CPAN modules. IBM developerWorks.
[20]. Directory Submissions (2009). Social networking definition. Retrieved May 01, 2011 from http://directorysubmissions.eu/news/2009/04/22/social-networking-definition/
[21]. Barker, R. L. (Ed.). (1999). The social work dictionary (4th ed.). Washington, DC: NASW Press.
[22]. Susan Gunelius (2011). Social networking. Retrieved May 01, 2011 from http://weblogs.about.com/od/bloggingglossary/g/SocialNetwork.htm


ABOUT THE AUTHORS

Mohmed Y. Mohmed AL-SABAAWI is a Lecturer at the University of Mosul, in the College of Administration & Economics, Department of Management Information Systems.

Address: University of Mosul, College of Administration & Economics.


PRECISE CALCULATION UNIT BASED ON A HARDWARE

IMPLEMENTATION OF A FORMAL NEURON IN A FPGA

PLATFORM

Mohamed ATIBI, Abdelattif BENNIS, Mohamed BOUSSAA
Hassan II - Mohammedia - Casablanca University, Laboratory of Information Processing,
Cdt Driss El Harti, BP 7955 Sidi Othman Casablanca, 20702, Morocco

ABSTRACT
The formal neuron is a processing unit that performs a number of complex mathematical operations on real-format data. These calculation units require hardware architectures capable of providing extremely accurate calculations. To arrive at a hardware architecture that is more accurate in terms of calculation, the proposed method codes data in single-precision floating point. This allows the handling of infinitely small and infinitely large data and, consequently, a diverse field of application. The formal neuron implementation requires an embedded platform whose implementation must be flexible, efficient and fast. This article presents in detail a new, precise method to implement this calculation unit. It uses a number of specific blocks described in the VHDL hardware description language on an embedded FPGA platform. The data handled by these blocks are coded in 32-bit floating point. The implementation of this new method has been developed and tested on the Altera DE2-70 embedded FPGA platform. The calculation results on the platform and those obtained by simulation are very conclusive.

KEYWORDS: FPGA, Precision, Formal Neuron, Floating Point, Hardware Implementation.

I. INTRODUCTION

Artificial neural networks (ANNs) are heuristic models whose role is to imitate two basic skills of the human brain:
- Learning from examples.
- Generalization of the knowledge and skills learned through examples to others which are unseen in the learning phase [1].
The ANN is configured through a learning process for a specific application; this process involves adjusting the synaptic connections between neurons. These models are used in a wide range of applications such as pattern recognition, classification, robotics, and signal and image processing. For example, in the field of information processing, these models simulate the way biological nervous systems process information.

ANNs are networks based on a simplified model of the neuron called the formal neuron. This model can perform a number of functions of the human brain, like associative memory, supervised or unsupervised learning, and parallel functioning. Despite all these features, the formal neuron is far from having all the capabilities of the biological neurons that human beings possess, like synapse sharing and membrane activation [2].

A major problem in the use of formal neurons in ANNs is the lack of a hardware method to implement them on embedded platforms [3][4]. Respecting, on the one hand, the neuron architecture and, on the other hand, the format of the neuron's manipulated data, which often takes the form of a real number, has a great impact on the calculation results of this neuron and their precision. This is especially true in the case of an application that requires an architecture consisting of a large number of neurons.


Several attempts have been made to implement formal neurons as integrated circuits. The field-programmable gate array (FPGA) is the preferred reconfigurable hardware platform. It has proven its capacity through several applications in various fields [5]: implementing complex control algorithms for high-speed robot movements [6], efficient generation of multi-point distributed random numbers [7], the design of hardware/software platforms for the car industry [8], and applications in energy production [9]. However, the design of the neuron presents several challenges, the most important of which is to choose the most effective arithmetic representation format to ensure both good precision and processing speed.

The article examines in detail the precision of a formal neuron design with a sigmoid activation function on FPGA; the architecture is tested with a floating-point arithmetic format using the VHDL hardware description language.

The article is organized as follows. Section II provides a global overview of different hardware architectures. Section III presents a theoretical study of the formal neuron with its different activation functions and the existing data formats. Section IV is dedicated to the details of the hardware implementation of the formal neuron. Section V presents the tests of the efficiency of this implementation. Finally, Section VI presents the conclusion.

II. RELATED WORK

Several architectural approaches have been proposed for the hardware implementation of the formal neuron on a platform such as an FPGA, as shown in Figure 1. In 2007, Antony W. Savich made a detailed study of FXP and FLP representations and the effect of the accuracy on the implementation of the multilayer perceptron. The obstacle found in this study was related to the implementation of the formal neuron with the sigmoid activation function, which requires complex operations such as the exponential and division [3].

Figure 1. Neuron structure

In 2011, Horacio Rostro-Gonzalez presented [4] a numerical analysis of the role of asymptotic dynamics in the design of hardware implementations of neural models like GIF (generalized integrate-and-fire). The implementation of these models was carried out on an FPGA platform with fixed-point representation (Figure 2).

In 2012, A. Tisan introduced an implementation method for the learning algorithm of artificial neural networks on an FPGA platform. The method aims at constructing a network of specific neurons using generic blocks designed in the MathWorks Simulink environment. The main features of this solution are the implementation of the learning algorithm on a high-capacity reconfigurable chip and operation under real-time constraints [5].

In 2011, Cheng-Jian Lin presented the hardware implementation of neurons and neural networks, with a representation of real numbers in fixed-point format, using the perturbation method as the network learning method [2].


Figure 2. Architecture for a single neuron

III. THE FORMAL NEURON THEORY

3.1. History

In 1943, McCulloch and Pitts have proposed a model that simulates the functioning of biological

neuron. This model is based on a neurobiological inspiration, which is a very rudimentary modeling

the neurons functioning, in which the accumulation the neuron synaptic activities are ensured by a

simple weighted summation [1]. The interconnections of a set of such units provide a connectionist

neural system, also referred to as neural network.

These networks can perform logical functions, complexes arithmetic and symbolic. Figure 3 shows

the schema of a formal neuron:

Figure 3. Schema of a formal neuron

With:

X1……Xn: the neuron object vector.
W1……Wn: the synaptic weights contained in the neuron.
∑: a function which calculates the sum of the multiplications between the object vector and the synaptic weights, according to equation (1).

b: the bias of the summation function.

F(V): the neuron activation function.

Y: the formal neuron output.

A "formal neuron" (or simply "neuron") is a nonlinear algebraic and bounded function. In fact the

neuron receives at its input an object vector of which each object parameter is multiplied by a synaptic

weight. The sum of these multiplications and the bias constitute internal activation:

(1)

V is then transformed into an output through an activation function. There are many activation functions for a formal neuron; the most used are:


The threshold function (Figure 4): In 1943, McCulloch and Pitts used this function as the activation function in their formal neuron model. Mathematically, this model of neuron generates a mapping from Rn to {0,1}:

Y(X) = f(V(X)) = 1 if V(X) ≥ Ѳ, and 0 otherwise    (2)

Figure 4. Threshold function

The sigmoid function (Figure 5): this function, proposed by Rosenblatt in 1962 and also called the logistic function, is defined by:

Y(X) = f(V(X)) = 1 / (1 + e^(−V(X)))    (3)

It takes values in the interval [0,1], which allows the output of the neuron to be interpreted as a probability. In addition, it is not polynomial and is infinitely continuously differentiable.

Figure 5. Sigmoid function

The Gaussian function (Figure 6): a function proposed in 1989 by Moody and Darken for use in specific networks called radial basis function networks (RBF). It is defined by:

Y(X) = f(V(X)) = exp(−‖X − c‖² / (2σ²))    (4)

where c is a centre point of the input space and σ its width. It is a continuous and differentiable function.

Figure 6. Gaussian function
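For illustration, the three activation functions above can be written as a short software reference model. The sketch below is in Python (an illustration only; the paper's design itself is in VHDL), and equation (4) is taken in the common RBF form with a centre c and width σ as described above:

import numpy as np

def threshold(v, theta=0.0):
    # Equation (2): 1 if V(X) >= theta, 0 otherwise
    return 1.0 if v >= theta else 0.0

def sigmoid(v):
    # Equation (3): logistic function, values in the interval [0, 1]
    return 1.0 / (1.0 + np.exp(-v))

def gaussian(x, c, sigma):
    # Equation (4): depends on a centre c of the input space and a width sigma
    return float(np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2)))

print(threshold(0.5), sigmoid(0.0), gaussian(np.array([1.0, 1.0]), np.array([1.0, 1.0]), 1.0))
# -> 1.0 0.5 1.0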

3.2. Data formats handled by a formal neuron

The formal neuron, in most cases, performs its calculations with real numbers. To represent a real number, only a finite number of bits is available, and one can imagine different ways of representing it with this bit set. The two well-known methods of representation are fixed-point representation and floating-point representation.


3.2.1. Fixed-point representation

This is the usual representation, as done on paper, except for the sign and the decimal point. One bit is kept for the sign, in general the leftmost bit. The point itself is not represented; one works with an implicit point located at a definite place.

This representation is not widely used because it has low accuracy, owing to the digits lost to the right of the decimal point. Another problem with this representation is that it cannot represent very large numbers.

3.2.2. The floating point representation

This representation is inspired by scientific notation (e.g., +1.05 × 10^-6), in which a number is represented by the product of a mantissa and a power of 10 (decimal) or a power of 2 (binary). To normalize the floating-point representation of real numbers [3], the IEEE-754 standard is recommended by the Institute of Electrical and Electronics Engineers and is widely used to represent real numbers. Each number is represented by:
- 1 bit for the sign (s).
- Ne bits for the signed exponent (E).
- Nm bits for the absolute value of the mantissa (M).
Real numbers are represented either in 32 bits (single precision), 64 bits (double precision) or 80 bits (extended precision).
An example of a real number represented in 32 bits:

Table 1. Floating point representation

Bits (31 down to 0):   31                            30 - 23                  22 - 0
Contents (s, E, M):    Sign (s)                      Exponent (E)             Mantissa (M)
                       0 = positive, 1 = negative    an 8-bit integer         a 23-bit integer

X = (−1)^S × 2^(E−127) × 1.M, with 0 < E < 255

IV. IMPLEMENTATION DETAILS

This section reviews the different steps and modules necessary for the design of a formal neuron with a sigmoid activation function. The formal neuron module consists of several multipliers, adders, and a block for the sigmoid activation function.

One of the problems in designing a formal neuron on an FPGA platform with the VHDL language is that real numbers are not synthesizable in this language. The solution adopted in this implementation is to design a formal neuron that manipulates these data by representing them in floating point. This provides efficiency in terms of calculation precision.

To achieve this accuracy, blocks called megafunctions, which are blocks offered by the FPGA manufacturers, have been used. These blocks are written in VHDL to handle complex arithmetic operations in floating-point representation (32 or 64 bits); they are useful for the calculation accuracy of the formal neuron.

4.1. Megafunctions

As design complexity increases rapidly, the use of specific blocks has become an effective design method for achieving complex applications in different domains such as robotics, and signal and image processing. The Quartus simulation software offers a number of IPs (« Intellectual Properties ») synthesizing complex functions (memories, multipliers, comparators, etc.) optimized for Altera circuits. These IPs, designated by the term « megafunction », are grouped into libraries, including the « Library of Parameterized Modules » (LPM), which contains the most complex functions useful for the design of the formal neuron.

Using a megafunction in place of coding a new logic block saves precious design time. In addition, the functions provided by Altera allow a more efficient logic synthesis for the realization of the application. It is also possible to resize these


megafunctions by adjusting their parameters, and to access architecture-specific features such as memory, DSP blocks, shift registers, and other simple and complex functions [10].

The design of the formal neuron was based on the sigmoid activation function, an infinitely differentiable function.

The design detail is divided into two parts (Figure 7):
- The first part: the design of the internal activation.
- The second part: the design of the sigmoid activation function.

Figure 7. Design of the formal neuron

4.2. Design detail of the internal activation

The formal neuron receives as input an object vector X = (X1, X2, ….., Xn), which represents the forms to be recognized in, for example, a pattern recognition application, and a vector of synaptic weights W = (W1, W2, …, Wn) representing the connections between the neuron and its inputs (Figure 8). The function of the neuron consists of first calculating the weighted sum of its inputs. The output of this sum is called the internal neuron activation (1).

Figure 8. Internal activation

This module implements this operation using the multiplication and addition megafunctions, according to equation (1).
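A minimal software reference model of this internal activation may help fix ideas. The following Python sketch is an illustration only (the hardware uses the megafunction blocks described below); the example weights are those used later in the test phase (Table 4), and the bias is assumed to be zero there:

def internal_activation(x, w, b=0.0):
    # Equation (1): V = sum of (Wi * Xi) plus the bias b
    return sum(xi * wi for xi, wi in zip(x, w)) + b

# Example with the synaptic weights of Table 4:
print(internal_activation([1, 1, 0.5, 0.5], [1, 0.5, -0.5, -1]))   # -> 0.75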

4.2.1. Multiplication:

The multiplication block used is a megafunction block that implements multiplier functions. It follows the IEEE-754 standard for the representation of floating-point numbers in single precision, double precision and single-extended precision. Moreover, it allows the representation of special values like zero and infinity.

The representation followed in this paper is the 32-bit single-precision representation below; it is a high-precision representation which consumes less space than 64 bits:

X = (−1)^S × 2^(E−127) × 1.M

The result (R) of the multiplication algorithm for two real inputs (A and B) represented in floating point, as used by this megafunction, is calculated as follows:

R = (Ma × 2^Ea) × (Mb × 2^Eb) = (Ma × Mb) × 2^(Ea+Eb)

Where:
R: the multiplication result.
Ma: the mantissa of number A.
Mb: the mantissa of number B.
Ea: the exponent of number A.
Eb: the exponent of number B.
Sign: (sign of A) XOR (sign of B).
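This multiplication rule can be emulated in software as follows. The Python sketch is an illustration, not the megafunction itself; math.frexp decomposes a float into a signed mantissa and an exponent, so the sign handling (the XOR above) is carried implicitly by the signed mantissas:

import math

def fp_multiply(a, b):
    ma, ea = math.frexp(a)               # a = ma * 2^ea, with 0.5 <= |ma| < 1
    mb, eb = math.frexp(b)               # b = mb * 2^eb
    return math.ldexp(ma * mb, ea + eb)  # (ma * mb) * 2^(ea + eb)

print(fp_multiply(1.5, -0.5))   # -> -0.75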

4.2.2. Addition:


The addition block used is a megafunction block that implements the addition and subtraction functions; it follows the IEEE-754 standard for the representation of floating-point numbers in single precision, double precision and single-extended precision, with selection of the operation between addition and subtraction.

The result (R) of the addition algorithm for two real inputs (A and B) represented in floating point, as used by this megafunction, is calculated as follows:

R = (−1)^Sa × 2^Ea × 1.Ma + (−1)^Sb × 2^Eb × 1.Mb

Where:
R: the addition result.
Ma: the mantissa of number A.
Mb: the mantissa of number B.
Ea: the exponent of number A.
Eb: the exponent of number B.
Sa: the sign of number A.
Sb: the sign of number B.
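The addition rule can be emulated in the same illustrative way. In hardware, the operand with the smaller exponent is first aligned to the common exponent before the mantissas are added; the Python sketch below (a software analogue for illustration, not the megafunction) makes that alignment step explicit:

import math

def fp_add(a, b):
    ma, ea = math.frexp(a)
    mb, eb = math.frexp(b)
    e = max(ea, eb)
    # Align both mantissas to the common exponent e, then add them
    return math.ldexp(math.ldexp(ma, ea - e) + math.ldexp(mb, eb - e), e)

print(fp_add(1.5, -0.25))   # -> 1.25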

These two blocks are the basis for designing the internal activation of the formal neuron. Figure 9 shows an example of the implementation of this internal activation.

4.3. Design detail of the sigmoid function

The second block is a transfer function called the activation function. It limits the output of the neuron to the range [0,1]. The most used function is the sigmoid function (Figure 10).

Figure 10. Sigmoid function

The implementation of this function requires a number of complex operations, such as division and the exponential; it therefore requires the use of the exponential and division megafunctions.

4.3.1. Exponential and division

The exponential and division blocks used are megafunction blocks that implement the exponential and division functions. These blocks require a number of resources for their design; the following two tables show these resources:


Table 2. Exponential megafunction resources

Precision   Output latency   ALUTs   Registers   18-bit DSP   Memory   Fmax (MHz)
Single      17               527     900         19           0        274.07
Double      25               2905    2285        58           0        205.32

Table 3. Division megafunction resources

Precision   Output latency   ALUTs   Registers   18-bit DSP   Memory   Fmax (MHz)
Single      20               314     1056        16           0        408.8
Double      27               799     2725        48           0        190.68

4.3.2. Sigmoid function implementation

The implementation of the sigmoid function uses, in addition to these two blocks, the already mentioned multiplication and addition blocks, as shown in the following diagram (Figure 11):

Figure 11. Design of sigmoid function

V. TESTS AND RESULTS

This test is designed to evaluate the calculation precision of the formal neuron described in VHDL by comparing it with software results. Before performing this test, it is necessary to initialize the synaptic weights. The following table summarizes the initialization (for the case of 4 inputs), with the data represented in floating point:

Table 4. Values of the synaptic weights

Weight   Value   32-bit floating point representation (hexadecimal)
W1       1       3F800000
W2       0.5     3F000000
W3       -0.5    BF000000
W4       -1      BF800000
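These hexadecimal codes can be verified with a few lines of Python (an illustrative check, not part of the design flow; struct.pack produces the IEEE-754 single-precision image of each weight):

import struct

for w in (1.0, 0.5, -0.5, -1.0):
    print(w, struct.pack('>f', w).hex().upper())
# -> 1.0 3F800000, 0.5 3F000000, -0.5 BF000000, -1.0 BF800000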

These values of the synaptic weights are used during the entire test phase of the design.

The simulation of the complete implementation of a formal neuron using the sigmoid activation function, on an FPGA platform of the Cyclone II family (EP2C70F896C6), requires a number of steps in the Quartus II 9.1 simulation software:

1. Create a new project by specifying the reference of the chosen platform.
2. Choose the Block Diagram/Schematic file.
3. Draw the model of the formal neuron by combining the required megafunction blocks (Figure 12).


Figure 12. Design of the complete formal neuron

4. Compile the project by clicking START COMPILER.
5. Choose VECTOR WAVEFORM FILE, specifying the values of the manipulated inputs (Table 4) and outputs.
6. Start SIMULATOR TOOL to simulate the formal neuron module.
7. View the simulation result (Figure 13).

Figure 13. Test result

The following table shows the results obtained from the test of the formal neuron based on the sigmoid activation function.

Table 5. Hardware and software results

Object vector (X1, X2, X3, X4)   Hardware result   Software result
(1, 1, 0.5, 0.5)                 0.67917           0.679178
(1, 0.5, -0.5, -0.5)             0.88              0.88
(0.5, 0.5, 0.5, 0.5)             0.5               0.5
(0.5, 0.5, -0.5, -0.5)           0.817             0.817574

Four test vectors were applied to this neuron. The table shows the output results for these four inputs with the synaptic weights of Table 4. These data are represented in the FPGA in 32-bit floating-point format, leading to good precision in the neuron's calculations. Moreover, the table also shows a comparison with the same neuron calculations carried out in software. The results of these calculations show great precision, thanks to the floating-point representation of the data. This precision is due to the use of the megafunction blocks (multiplier, adder, exponential, etc.).
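The software results of Table 5 can be reproduced with the short Python reference model below (a sketch assuming a zero bias b, which is consistent with the reported values; the last digit of the first output differs from Table 5 only by rounding):

import math

W = (1.0, 0.5, -0.5, -1.0)   # synaptic weights of Table 4

def neuron(x, w=W, b=0.0):
    v = sum(xi * wi for xi, wi in zip(x, w)) + b   # equation (1)
    return 1.0 / (1.0 + math.exp(-v))              # sigmoid activation

for x in [(1, 1, 0.5, 0.5), (1, 0.5, -0.5, -0.5),
          (0.5, 0.5, 0.5, 0.5), (0.5, 0.5, -0.5, -0.5)]:
    print(x, round(neuron(x), 6))
# -> 0.679179, 0.880797, 0.5, 0.817574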

VI. CONCLUSION AND FUTURE WORKS

This article has examined techniques for the hardware implementation of the formal neuron with a sigmoid activation function on an FPGA platform, using a 32-bit floating-point format for the data processed by the neuron. The objective of this hardware implementation is to materialize the formal neuron as a component that is precise in its calculations and can, therefore, be added to the library of the Quartus software.


In future work, this formal neuron module will be the basis for the design of artificial neural network architectures, such as the multilayer perceptron (MLP), and for applying such networks to various applications such as image processing, signal processing and pattern recognition.

REFERENCES

[1]. Sibanda, W., & Pretorius, P. (2011). Novel Application of Multi-Layer Perceptrons (MLP) Neural

Networks to Model HIV in South Africa using Seroprevalence Data from Antenatal Clinics. International

Journal of Computer Applications, 35.

[2]. Lin, C. J., & Lee, C. Y. (2011). Implementation of a neuro-fuzzy network with on-chip learning and its

applications. Expert Systems with Applications, 38(1), 673-681.

[3]. Savich, A. W., Moussa, M., & Areibi, S. (2007). The impact of arithmetic representation on implementing

MLP-BP on FPGAs: A study. Neural Networks, IEEE Transactions on, 18(1), 240-252.

[4]. Rostro-Gonzalez, H., Cessac, B., Girau, B., & Torres-Huitzil, C. (2011). The role of the asymptotic

dynamics in the design of FPGA-based hardware implementations of gIF-type neural networks. Journal of

Physiology-Paris, 105(1), 91-97.

[5]. Tisan, A., & Cirstea, M. (2013). SOM neural network design–A new Simulink library based approach

targeting FPGA implementation. Mathematics and Computers in Simulation, 91, 134-149.

[6]. Shao, X., & Sun, D. (2007). Development of a new robot controller architecture with FPGA-based IC

design for improved high-speed performance. Industrial Informatics, IEEE Transactions on, 3(4), 312-321.

[7]. Bruti-Liberati, N., Martini, F., Piccardi, M., & Platen, E. (2008). A hardware generator of multi-point

distributed random numbers for Monte Carlo simulation. Mathematics and Computers in Simulation, 77(1),

45-56.

[8]. Salewski, F., & Kowalewski, S. (2008). Hardware/software design considerations for automotive embedded

systems. Industrial Informatics, IEEE Transactions on, 4(3), 156-163.

[9]. Bueno, E. J., Hernandez, A., Rodriguez, F. J., Girón, C., Mateos, R., & Cobreces, S. (2009). A DSP-and

FPGA-based industrial control with high-speed communication interfaces for grid converters applied to

distributed power generation systems. Industrial Electronics, IEEE Transactions on, 56(3), 654-669.

[10]. Online in: http://www.altera.com.

AUTHORS

ATIBI Mohamed received his master degree in information processing from the Faculty of Sciences Ben M'sik, Hassan II University Mohammedia-Casablanca, in 2013. He is preparing his PhD thesis at the same university, and his areas of interest include the application of image processing and artificial neural networks to road safety.

BENNIS Abdellatif is a professor of higher education at the Laboratory of Information Processing, Faculty of Sciences Ben M'sik, Hassan II University Mohammedia-Casablanca, and is responsible for the software engineering and telecommunications team in the same laboratory.

BOUSSAA Mohamed received his master degree in information processing from the Faculty of Sciences Ben M'sik, Hassan II University Mohammedia-Casablanca, in 2013. He is preparing his PhD thesis at the same university, and his areas of interest include the application of signal processing and artificial neural networks to cardiac signals.


TEMPERATURE PROFILING AT SOUTHERN LATITUDES BY

DEPLOYING MICROWAVE RADIOMETER

A. K. Pradhan1, S. Mondal2, L. A. T. Machado3 and P. K. Karmakar4
1Department of Electronic Science, Acharya Prafulla Chandra College, New Barrackpore, North 24 Parganas, Kolkata 700 131, India.
2Dumkal Institute of Engineering and Technology, P.O. Basantapur, Dist. Murshidabad, India.
3Instituto Nacional de Pesquisas Espaciais (INPE), Previsão de Tempo e Estudos Climáticos, CPTEC Road, Cachoeira Paulista, SP, 12630000, Brazil.
4Institute of Radiophysics and Electronics, University of Calcutta, Kolkata 700 009, India.

ABSTRACT
A multifrequency microwave radiometer (MP3000A, Radiometrics Corporation) was deployed at different places at southern latitudes for the profiling of thermodynamic variables such as temperature. The radiative intensity down-welling from the atmosphere is measured and expressed as an equivalent brightness temperature. The radiation is a nonlinear function of the required quantities, so we linearise the expression around a suitably chosen first guess, such as a climatological mean. We describe changes in the brightness temperature around the first guess by means of a weighting function which expresses the sensitivity of the brightness temperature to the variation of the humidity or the temperature around their initial values. A variation of brightness temperature with height is observed at 51 - 53 GHz, but on the other hand the constancy of brightness temperature with height at 56 - 57 GHz is noticeable. This suggests that the measurement of temperature at a certain place by a ground-based radiometer may provide good results by exploiting the 56 - 57 GHz band; in this band we have used four frequencies for the purpose. To extend our study, we have also made an attempt to retrieve the temperature profiles in the 51 - 53 GHz band. The retrieval process starts with calculations performed for two sub-ensembles of radiometric observations separately. The measured brightness temperatures at eight specified channel frequencies at Fortaleza, Brazil, on 11th April 2011 at 05:39 and 17:36 UTC are shown in Table 2, together with a summary of the measured brightness temperatures in the oxygen band at Belem, Brazil, on 26th June 2011 at 05:30 UTC and on 17th June 2011 at 17:33 UTC, and at Alcantara, Brazil, on 12th March 2010 at 06:03 UTC and on 15th March 2010 at 17:57 UTC.

KEYWORDS: Microwave Radiometer, Temperature profile, Optimal Estimation, Inversion, Brightness

Temperature.

I. INTRODUCTION

To date, radiosonde observations (RAOBs) remain the fundamental method for atmospheric temperature, wind, and water vapour measurement, in spite of their inaccuracies, cost, sparse temporal sampling and logistic difficulties [1]. A better technology has been sought for decades but, until now, no accurate, continuous, all-weather technology has been demonstrated. The highly stable multichannel radiometer (MP3000A, Radiometrics Corporation, USA) is capable of producing temperature and water vapour profiles ([2]; [3]; [4]; [1]; [5]; [6]; [7]; [8]) within admissible accuracies.

Applications for this passive radiometric profiling include: weather forecasting and nowcasting;

detection of aircraft icing and other aviation related meteorological hazards; refractivity profiles;

corrections needed for radio-astronomical studies; satellite positioning and GPS measurements;

atmospheric radiation fluxes; measurement of water vapour density and temperature as they affect

hygroscopic aerosols and smokes.


The present studies of temperature profiling are within the scope of the CHUVA project (Cloud processes of

the main precipitation systems in Brazil: A contribution to cloud resolving modelling and to the

Global Precipitation Measurement). It aims to investigate the different precipitation regimes in Brazil

in order to improve remote sensing precipitation estimation, rainfall ground validation and

microphysical parameterizations of the tri-dimensional characteristics of the precipitating clouds.

Clouds play a critical role in Earth's weather and climate, but the lack of understanding of clouds has long limited scientists' ability to make accurate predictions about weather and climate change.

Climate simulations are sensitive to parameterizations of deep convective processes in the

atmosphere. Weather and climate modelling is improving in space and time resolution, and to accomplish this it is necessary to move from cloud parameterization to an explicit microphysical description inside the cloud. Therefore, to improve the space-time resolution and reduce climate-model uncertainties it is necessary to better understand cloud processes. A 3-D description of the microphysical processes of the main precipitating systems in Brazil can strongly contribute to this matter and will thus be one of the tasks of the project. Keeping this in view, the present authors intend to obtain the temperature profiles at three different locations, namely: a) Fortaleza (3.0°S; 38.0°W), b) Belem (1.46°S; 48.48°W) and c) Alcantara (2.4°S; 44.4°W) in Brazil, by exploiting the ground-based

radiometric brightness temperatures at the desired frequencies. It is to be mentioned here that these locations are urban coastal cities in the north-east of Brazil characterized by a tropical climate which, according to the Köppen classification, is of the Equatorial, summer-dry type. The rainfall and wind regimes are governed mainly by the meridional shift of the Inter-tropical Convergence Zone (ITCZ). The ITCZ is

located in its northernmost position, normally from August to October, and intense south-easterly

winds and low rainfall dominate in the area (dry season). On the other hand, when the ITCZ is in its

southernmost position, from March to April, weak south-easterly winds and high rainfall prevail (wet

season).

Temperature profiles can be obtained by measuring the radiometric brightness temperature around 60 GHz. Centred on this frequency lies the continuum which we call the oxygen complex band. The opacity is larger near the centre of the oxygen feature, limiting emission observation to several metres in height. Away from the centre of the oxygen feature the opacity is smaller and emission can be observed from increasing heights. Since the local temperature contributes to the emission intensity, temperature profiles can be obtained. In this context it is to be mentioned that in this band the emission depends almost entirely on the ambient pressure and temperature.

Section III contains the physical principle behind the profiling technique, which ultimately culminates in the temperature weighting functions at the three locations. The next section, Section IV, summarises the basic technique for inverting temperature from the measured radiometric brightness temperatures at or near the oxygen complex. Here we have chosen the Optimal Estimation Method, in spite of the in-built facilities available in the radiometer. Incidentally, the inversion depends on the historical background of the chosen parameter at the given location. As the campaign was performed at three different locations in Brazil, it is reasonable to reconstruct a new background with the help of data from the British Atmospheric Data Centre (BADC). This is elaborated in Section V. The results obtained are summarised in Section VI, and the present work ends with discussions and conclusions in Section VII.

II. INSTRUMENT

The Radiometrics ground-based WVP-3000A portable water vapour and temperature profiling

radiometer measures the calibrated brightness temperature from which one can derive profiles of

temperature, water vapor, and limited resolution profiles of cloud liquid water from the surface to

approximately 10 km. The detailed descriptions of the system are given by [1]. A short summary of

the instrument characteristics is given here. The noticeable characteristics of the system include a

very stable local oscillator, an economical way to generate multiple frequencies and the multi-

frequency scanning capability ([5]). The radiometer system consists of two separate subsystems in the

same cabinet which share the same antenna and antenna-pointing system. A highly stable synthesizer acts as the local oscillator and allows tuning to a large number of frequencies within the receiver

bandwidth. The water vapour profiling subsystem receives thermal emission at five selected

frequencies within 22 – 30 GHz [9]. The temperature profiling subsystem measures sky brightness


temperature within 51-59 GHz. An inbuilt infrared thermometer is provided with the said radiometer to observe the presence of cloud and also to measure the cloud-base height [10]. In this paper we will restrict ourselves to deriving the temperature profiles at different places of southern latitude, as mentioned earlier, under the collaborative CHUVA Project implemented at the Instituto Nacional de Pesquisas Espaciais (INPE), Brazil, at an initial stage. The salient characteristics of the radiometer are

shown in Table 1.

Table 1. Characteristics of the Radiometer

Frequencies (GHz) for water vapour and liquid water profiling: 22.234, 23.035, 23.835, 26.235, 30.00
Frequencies (GHz) for temperature profiling: 51.248, 51.76, 52.28, 52.804, 56.02, 56.66, 57.288, 57.964
Absolute accuracy (K): 0.5
Sensitivity (K): 0.25
FWHP beamwidth (deg): 2.2 - 2.4
Gain (dB): 36 - 37
Side lobes (dB): < -26

III. GENERAL PHYSICAL PRINCIPLE

The scalar form of the Radiative Transfer Equation is remarkably simple in the Rayleigh-Jeans limit and is considered sufficient for the large majority of microwave applications. The radiative intensity down-welling from the atmosphere, expressed as an equivalent brightness temperature T_B, can be written as ([11])

T_B(ν) = T_c exp[−τ(0,∞)] + ∫_0^∞ T(z) α(ν,z) exp[−τ(0,z)] dz,  with τ(0,z) = ∫_0^z α(ν,z′) dz′   (1)

Here, T_c is the cosmic background radiation. The attenuation coefficient α is a function of different meteorological parameters.

The radiation is a nonlinear function of the required quantities, and we linearise the expression around a suitably chosen first guess, such as a climatological mean. We describe changes in the brightness temperature around the first guess by means of weighting functions which express the sensitivity of T_B to the variation of the humidity or the temperature around their initial values,

ΔT_B(ν,θ) = ∫_0^∞ [W_V(z) ΔV(z) + W_T(z) ΔT(z) + W_L(z) ΔL(z) + W_P(z) ΔP(z)] dz   (2)

for a certain frequency ν and elevation angle θ. Here, V stands for water vapour, T for temperature, L for liquid water and P for ambient atmospheric pressure, respectively. However, the weighting-function analyses for humidity and temperature by ([12]) showed that the temperature weighting function is given by

W_T(ν,θ,z) = ∂T_B(ν,θ) / ∂T(z)   (3)

To explain the weighting function more clearly, take the temperature weighting function W_T (km⁻¹). If we have a change ΔT in T over a height interval Δz (km), the brightness-temperature response to this change is ΔT_B = W̄_T ΔT Δz, where W̄_T is called the height average of W_T over the height interval Δz. The weighting functions are determined from the height profile of the attenuation

coefficients at different frequencies. However equation (1) and its Rayleigh-Jeans approximation are

well discussed by [13] and its more general form including scattering is discussed by [14].
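To make the relationship between the absorption profile, the weighting function and the down-welling brightness temperature concrete, the following minimal Python sketch evaluates a discretised version of equation (1); the exponential absorption profile used here is an assumed, idealised stand-in, not a real 60-GHz oxygen absorption model.

```python
import numpy as np

# Minimal numerical sketch of equation (1): an assumed exponential
# absorption profile stands in for a real 60-GHz oxygen model.
z = np.linspace(0.0, 10.0, 501)          # height grid (km)
dz = z[1] - z[0]
T = 300.0 - 6.5 * z                      # temperature with a 6.5 K/km lapse rate
alpha = 0.5 * np.exp(-z / 2.0)           # assumed absorption coefficient (km^-1)

tau = np.cumsum(alpha) * dz              # opacity tau(0, z)
W_T = alpha * np.exp(-tau)               # temperature weighting function (km^-1)

T_c = 2.7                                # cosmic background radiation (K)
T_B = T_c * np.exp(-tau[-1]) + np.sum(W_T * T) * dz
print(f"T_B = {T_B:.1f} K, total opacity = {tau[-1]:.2f}")
```

Near an opaque channel the weighting function peaks close to the ground, which is exactly why the 56-57 GHz channels respond mainly to the lowest atmospheric layers.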

Information on meteorological variables may be obtained from measurements of the radiometric brightness temperature T_B as a function of ν and/or θ. Equation (1) is used: a) in forward-model studies, in which the relevant meteorological variables are obtained by radiosonde sounding; b) in inverse-problem and parameter-retrieval applications, in which meteorological information is inferred from measurements of the radiometric brightness temperature T_B; c) in system-modelling studies for


determining the effects of instrument noise on retrieval and optimum measurement ordinates such as ν and/or θ ([15]).

However, for the sake of clarity, the temperature weighting functions at Fortaleza, Belem, and Alcantara, Brazil are shown in Figures 1a-1c.

Figure 1a. Temperature weighting function at Fortaleza at 17:56 UTC. The derived median values of the daily radiosonde data during the month of April 2011 are taken into consideration for the purpose.

Figure 1b. Temperature weighting function at Belem at 17:33 UTC. The derived median values of the daily radiosonde data during the month of June 2011 are taken into consideration for the purpose.

Figure 1c. Temperature weighting function at Alcantara at 17:57 UTC. The derived median values of the daily radiosonde data during the month of March 2010 are taken into consideration for the purpose.

These figures show that the variation of brightness temperature with height occurs at 51-53 GHz, whereas the constancy of brightness temperature with height at 56-57 GHz is noticeable. This suggests that the measurement of temperature at a certain place by a ground-based radiometer may provide good results by exploiting the 56-57 GHz band. In this band we have used four frequencies for the purpose. To extend our study we have also made an attempt to retrieve the temperature profiles in the 51-53 GHz band, which will be presented in the subsequent sections.

IV. INVERSION TECHNIQUE

For the purpose of formulating the inverse problem in this particular context, i.e., retrieving the temperature profile at three specified locations of southern latitude, we have purposely pointed the radiometer towards the zenith direction. Hence, equation (2) can be rewritten, retaining only the temperature term, as

T_B(ν) = ∫_0^∞ W_T(ν,z) T(z) dz   (4)

Here (refer to equation 4) the integral equation is linear in T(z), assuming W_T is independent


of T. The non-linear form is not commonly encountered in the microwave region as it is in the

infrared region. The infrared equivalent of the linear integral equation, for example, involves the

Planck brightness function, which is strongly nonlinear in temperature in the infrared region but is approximately linear for most of the microwave region. However, in practice T_B is usually measured at a discrete number of frequencies, and the objective of the inversion technique is to find a function T(z) that, when substituted in equation (4), will give values of T_B which might

be approximately equal to the measured values [16]. If the integral of equation (4) is approximated as a summation over n layers, each of height Δz, the radiometric temperature at m frequencies can be written as

T_B(ν_i) = Σ_{j=1}^{n} W_T(ν_i, z_j) T(z_j) Δz,  i = 1, …, m   (5)

Here, in our study, we have chosen four closely spaced frequencies in the 56-58 GHz range on the lower shoulder of the 60-GHz oxygen spectrum and, thereafter, four frequencies in the 51-53 GHz range. For the sake of simplicity we write equation (5) in the more compact form

T_B = W T   (6)

where T_B and T are vectors of dimension m and n respectively, and W is an (m × n) matrix. The vector T_B represents the observations; W is a weighting matrix, also presumed known; and T is the unknown atmospheric temperature profile. To get good vertical resolution we have considered n > m. But then the system is under-determined and admits an infinite number of solutions. To get rid

of this issue, i.e., to make the problem solvable, we need a priori information about the character of the atmosphere for a given geographic location and a given time of year. This information includes statistics about the temperature profile, constraints imposed by atmospheric physics, and any other information that, if integrated into the inversion algorithm, would narrow the range of values that T can have. However, the degree of accuracy to which this information is incorporated in the

inversion algorithm depends on the underlying structure of the particular inversion method. Here, we have purposely chosen the Optimal Estimation Method for the retrieval.

The a priori (statistical) information in the present context comprises monthly averages of the vertical profiles of temperature and the constraints imposed by the atmospheres of particular places in Argentina, Brazil, China, New Zealand and India for the months July through August, 2005.
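A small sketch (with arbitrary illustrative numbers, not the actual weighting functions of this study) of why the discrete system of equation (6) is under-determined when n > m, and hence why the a priori statistics above are needed:

```python
import numpy as np

# Sketch of the under-determination of equation (6): m = 8 channels and
# n = 20 levels are illustrative numbers, and W is a random stand-in for
# the real weighting matrix. Any null-space component of W can be added
# to a solution without changing the simulated observations.
rng = np.random.default_rng(0)
m, n = 8, 20
W = 0.1 * rng.random((m, n))
T_true = 290.0 - 5.0 * np.linspace(0.0, 2.0, n)    # a smooth "true" profile
y = W @ T_true                                     # noise-free observations

T_min_norm = np.linalg.lstsq(W, y, rcond=None)[0]  # one particular solution
P_null = np.eye(n) - np.linalg.pinv(W) @ W         # projector onto null(W)
T_other = T_min_norm + P_null @ rng.normal(0.0, 10.0, n)
print(np.allclose(W @ T_min_norm, y), np.allclose(W @ T_other, y))  # True True
```

Both profiles reproduce the observations exactly, so only the a priori constraint can select the physically meaningful one.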

V. OPTIMAL ESTIMATION METHOD

We consider that the brightness temperature as measured by the radiometer is linearly related to the unknown or sought function x. Then equation (6) can be written as

y = W x + ε   (7)

where x is the vector of order n, y is of order m, and W is an (m × n) matrix commonly known as the kernel or weighting function of the sought function x. In practice we cannot measure the true y exactly because of the experimental error ε, which may include both measurement error and

modelling error. However, our main purpose is to achieve a successful retrieval of the unknown vector x using m observations with m < n. The key factor is to supplement the observations with sufficient a priori information for regularizing the ill-posed problem. This a priori information is the mean profile x̄ and its covariance matrix S_x, where

S_x = ⟨(x − x̄)(x − x̄)^T⟩   (8)

It is to be mentioned here that if x represents the atmospheric temperature profile T(z), then a representative ensemble of radiosonde-measured temperature profiles can provide x̄ and its covariance S_x. We also assume that the error vector ε has zero mean and is statistically independent of x, but that its error covariance matrix S_ε is known.


This optimal estimation method has the advantage of finding the most likely value of x based on the combination of the a priori information and a real measurement y through equation (7), along with the associated covariance matrices. The optimal estimate of x may be obtained by generalizing the scalar method to vectors as x̂ = D y, where D is an exact inverse such that D W = I, the identity matrix ([17]; [18]; [19]; [20]).

The solution of equation (6) is given by ([16])

x̂ = x̄ + S_x W^T (W S_x W^T + S_ε)^{−1} (y − W x̄)   (9)
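A minimal sketch of equation (9); the function name is hypothetical, and in practice x̄, S_x and S_ε would come from the radiosonde ensemble and the instrument characterisation described above.

```python
import numpy as np

def optimal_estimate(y, W, x_mean, S_x, S_eps):
    """Optimal estimation solution of equation (9):
    x_hat = x_mean + S_x W^T (W S_x W^T + S_eps)^-1 (y - W x_mean)."""
    gain = S_x @ W.T @ np.linalg.inv(W @ S_x @ W.T + S_eps)  # retrieval gain
    return x_mean + gain @ (y - W @ x_mean)
```

When S_ε is large relative to W S_x W^T the estimate collapses towards the a priori mean, which is the behaviour that the Lagrangian-multiplier variant in equation (11) below tunes explicitly.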

VI. ANALYSES AND RESULTS

The relationship between the measurements, represented by the m-dimensional measurement vector y, and the quantities to be retrieved, represented by the n-dimensional profile vector x, may be expressed as

y = W x   (10)

This relationship is satisfied by an infinite number of profile vectors x for a particular finite set of y. Thus, for the inversion method to yield a suitable solution x, it must be constrained by the statistical ensemble of a large number of historic radiosonde profiles, as mentioned earlier. Now, equation (9) is modified to obtain a better approximation as well as a minimization of the experimental error, and is expressed as

x̂ = x̄ + S_x W^T (W S_x W^T + γ S_ε)^{−1} (y − W x̄)   (11)

Here, ε contains both the measurement errors and the errors originating from the assumptions and

approximations associated with the models. Three n × 1 vectors denote the a priori data set, the retrieved temperature profile vector and the radiosonde measurements respectively, these measurements being carried

out at Fortaleza, Brazil on 11th April, 2011 at 05:39 and 17:36 UTC. A similar approach (as discussed

earlier) was carried out using the a priori data set, radiometric measurements at the specified frequencies and radiosonde measurements, respectively, at Belem, Brazil on 26th June, 2011 at 05:30 UTC and on 17th June, 2011 at 17:33 UTC. Then, following the modified equation (equation 11) of the optimal estimation method, the entire set of measurements was repeated for Alcantara, Brazil on 12th March, 2010 at 06:03 UTC and on 15th March, 2010 at 17:57 UTC, so as to approximate and validate the model.

Here, x̄ is the mean and S_x the covariance matrix of x, and S_ε represents the m × m error covariance matrix associated with the m-dimensional radiometric observations. The Lagrangian multiplier γ is basically a positive real quantity to be determined empirically. Depending upon the band of frequencies and the time at which the observations were carried out, it can take values ranging from 1 to 200.

The profile vector representing the statistical ensemble of temperature is a 6 × 1 matrix having elements exactly at the vertical coordinates of 0.351 km, 0.6866 km, 1.4327 km, 3.1005 km, 5.862 km and 7.613 km, while the observation vector has 8 elements (a 1 × 8 matrix), one at each of the eight specified frequencies.

For faithful analysis this 1 × 8 matrix is subdivided into two sub-ensembles, each of dimension 1 × 4. The weighting functions associated with the observations were calculated analytically. These weighting functions were then normalized to unit maxima for the different frequencies separately. The experimental errors allied to the radiometric observations form 1 × 4 matrices for each segment, assuming zero-mean error. These errors can be estimated rather easily by calculating the elements corresponding to each radiometric observation and then finding their lowest possible ratio. The range of these errors lies within -1 to +1. The retrieval process starts with the calculations

for the two sub-ensembles of radiometric observations separately, with m = 4 observation channels per sub-ensemble and n = 6 profile levels. The measured brightness temperatures at the eight specified channel frequencies at

Fortaleza, Brazil on 11th April, 2011 at 05:39 and 17:36 UTC are shown in Table 2. The summaries of the measured brightness temperatures in the oxygen band at Belem, Brazil on 26th June, 2011 at 05:30 UTC


and on 17th June, 2011 at 17:33 UTC and also at Alcantara, Brazil on 12th March, 2010 at 06:03 UTC

and on 15th March, 2010 at 17:57 UTC are tabulated in Tables 3 and 4, respectively. The results of the temperature retrieval are shown in Figures 2 to 7, separately for the three places of choice.

Table 2. Summary of Brightness Temperature Measured in Oxygen Band at Fortaleza

Date & Time        Brightness temperature (K) at frequency (GHz)
                   51.248   51.76    52.28    52.804   56.02    56.66    57.288   57.964
11/4/2011 05:39    124.092  140.774  166.392  199.965  293.843  294.943  295.669  296.053
11/4/2011 17:36    137.786  152.597  176.91   207.676  294.534  296.245  296.477  297.553

Table 3. Summary of Brightness Temperature Measured in Oxygen Band at Belem

Date & Time        Brightness temperature (K) at frequency (GHz)
                   51.248   51.76    52.28    52.804   56.02    56.66    57.288   57.964
26/6/2011 05:30    115.899  133.291  160.164  195.107  294.612  296.093  295.933  296.324
17/6/2011 17:33    124.305  141.799  167.661  200.014  296.673  297.739  299.324  299.718

Table 4. Summary of Brightness Temperature Measured in Oxygen Band at Alcantara

Date & Time        Brightness temperature (K) at frequency (GHz)
                   51.248   51.76    52.28    52.804   56.02    56.66    57.288   57.964
12/3/2010 06:03    123.084  136.897  162.706  198.142  296.006  296.065  296.852  297.793
15/3/2010 17:57    124.421  138.322  163.987  197.55   296.63   297.695  298.022  299.08

In Figures 2-7 we plot the statistical difference between the air temperature profiles retrieved using the brightness temperatures measured by the scanning radiometer and the air temperature profiles measured by RAOBs, together with the mean value (BIAS) and the root mean square (RMS) of the difference between the retrieved estimates and the RAOB measurements at the specified dates and times. Figure 3 shows that the highest air-temperature retrieval accuracy relative to the radiosonde observations is obtained for Fortaleza, Brazil on 11th April, 2011 at 17:36 UTC. The RMS is smaller than 1.0 K up to 8 km, the only exception being around 6 km (1.5 K), while the BIAS does not exceed 0.7 K except around 6 km. At Belem, Brazil, Figure 5 clearly suggests that the highest air-temperature retrieval accuracy relative to the radiosonde observations is found on 17th June, 2011 at 17:33 UTC. The RMS is smaller than 1.0 K up to 5 km and beyond this it is around 1.6 K, while the BIAS does not exceed ±1.0 K, except above 5 km (±1.3 K). A similar result is observed for Alcantara, Brazil on 15th March, 2010 at 17:57 UTC (Figure 7). The RMS is smaller than 1.0 K up to 8 km, the only exception being around 6 km (1.2 K), while the BIAS does not exceed ±0.6 K except around 6 km. On the other hand, in the case of Fortaleza particularly, the retrieval accuracy is slightly degraded, with observable RMS errors varying up to a maximum of 2.5 K, while the BIAS does not exceed ±2.5 K, on 11th April, 2011 at 17:36 UTC for the profile retrieved with the brightness temperatures of the lower-frequency sub-ensemble. The retrieval is affected by fairly high RMS (2.0 K to 6.0 K) and BIAS (-4.0 to 5.0 K) on 11th April, 2011 at 05:39 UTC (Figure 2). This might be related to the temperature inversion around 0.7 km, but also to the lower surface temperature during local dawn time. At Belem, Brazil, we observed that the retrieval accuracy is affected by observably high RMS errors varying up to a maximum of 2.0 K, while the BIAS does not exceed ±1.6 K, on 26th June, 2011 at 05:30 UTC (Figure 4). This might be related to the fairly low temperature difference between two subsequent layers near the surface during the morning UTC hours. We noticed a similar result for Alcantara, where the retrieval accuracy is affected by fairly high RMS (1.0 K to 2.0 K) and BIAS (-1.5 to 1.5 K) on 12th March, 2010 at 06:03 UTC (Figure 6). This observation is also valid for the profile retrieved using the brightness temperatures of the



lower-frequency sub-ensemble. The figures show that the major inaccuracies occur at 05:39 UTC. The statistics confirm that at Fortaleza, as well as at Alcantara, the results are in good agreement during the afternoon when the retrieval is carried out with the brightness temperatures of the upper-frequency sub-ensemble. This is partly supported by the results at Belem: there, statistically fairly high accuracy is also observed during the afternoon in Coordinated Universal Time (UTC) when the retrieval is carried out not only with the brightness temperatures of the upper-frequency sub-ensemble but also, separately, with the BTs of the lower-frequency channels in the oxygen band.

(a) (b)

Figure 2. Statistical difference between retrieved air temperature profiles when retrieved with BTs of two

frequency sub-ensembles separately and temperature profile measured by RAOBs for Fortaleza, Brazil on 11th

April, 2011 at 05:39 UTC are shown in red (upper frequencies), blue (lower channels) and black respectively

(a). The corresponding measured BIAS is shown in black and blue; while the RMS is shown in red and magenta,

while retrieved using BTs measured for both upper and lower frequency sub-ensembles respectively (b).

(a) (b)

Figure 3. Statistical difference between retrieved air temperature profiles when retrieved with BTs of two

frequency sub-ensembles separately and temperature profile measured by RAOBs for Fortaleza, Brazil on 11th

April, 2011 at 17:36 UTC are shown in red (upper frequencies), blue (lower channels) and black respectively

(a). The corresponding measured BIAS is shown in black and blue; while the RMS is shown in red and magenta,

while retrieved using BTs measured for both upper and lower frequency sub-ensembles respectively (b).

(a) (b)

Figure 4. Statistical difference between retrieved air temperature profiles when retrieved with BTs of two

frequency sub-ensembles separately and temperature profile measured by RAOBs for Belem, Brazil on 26th

June, 2011 at 05:30 UTC are shown in red (upper frequencies), blue (lower channels) and black respectively (a).

The corresponding measured BIAS is shown in black and blue; while the RMS is shown in red and magenta,

while retrieved using BTs measured for both upper and lower frequency sub-ensembles respectively (b).


(a) (b)

Figure 5. Statistical difference between retrieved air temperature profiles when retrieved with BTs of two

frequency sub-ensembles separately and temperature profile measured by RAOBs for Belem, Brazil on 17th

June, 2011 at 17:33 UTC are shown in red (upper frequencies), blue (lower channels) and black respectively (a).

The corresponding measured BIAS is shown in black and blue; while the RMS is shown in red and magenta,

while retrieved using BTs measured for both upper and lower frequency sub-ensembles respectively (b).

(a) (b)

Figure 6. Statistical difference between retrieved air temperature profiles when retrieved with BTs of two

frequency sub-ensembles separately and temperature profile measured by RAOBs for Alcantara, Brazil on 12th

March, 2010 at 06:03 UTC are shown in red (upper frequencies), blue (lower channels) and black respectively

(a). The corresponding measured BIAS is shown in black and blue; while the RMS is shown in red and magenta,

while retrieved using BTs measured for both upper and lower frequency sub-ensembles respectively (b).

(a) (b)

Figure 7. Statistical difference between retrieved air temperature profiles when retrieved with BTs of two

frequency sub-ensembles separately and temperature profile measured by RAOBs for Alcantara, Brazil on 15th

March, 2010 at 17:57 UTC are shown in red (upper frequencies), blue (lower channels) and black respectively

(a). The corresponding measured BIAS is shown in black and blue; while the RMS is shown in red and magenta,

while retrieved using BTs measured for both upper and lower frequency sub-ensembles respectively (b).

VII. DISCUSSIONS AND CONCLUSIONS

The optimal estimation method is one that combines the observations with a background taken from numerical weather prediction (NWP) model outputs; the assumed error characteristics of both are taken into account ([21]). The 1DVAR approach (One-Dimensional Variational technique) was demonstrated to be advantageous over methods using a background from statistical climatology ([22]). In fact, as background information, 1DVAR uses a forecast state vector, which is usually more representative of the actual state than a climatological mean. A comparative analysis


between a variety of retrieval methods applied to ground-based observations from conventional

microwave radiometers (such as MWRP) indicated that the 1DVAR technique outperforms the other

considered methods, these being based on various kinds of multiple regression and neural networks.

Thus, it seemed convenient to couple the sensitivity of millimetre-wave radiometry with the

advantages of the 1DVAR technique for the retrieval of temperature and humidity profiles [23]. While

developing this technique the standard notation is used as used by [24] and indicated with B and R the

error covariance matrices of the background and observation vector y, respectively. In addition, the

forward-model operator (i.e., radiative transfer model) with F(x) was used. Thus, the technique

adjusts the state vector x from the background state vector to minimize the following cost function

(12)

Here, the superscripts T and −1 represent the matrix transpose and inverse, respectively. The radiometric noise, representativeness, and forward-model errors all contribute to the observation-error covariance R. The minimization is achieved using the Levenberg-Marquardt method; this method was found to improve the convergence rate with respect to the classic Gauss-Newton method ([21]) by introducing a factor γ that is adjusted after each iteration depending on how the cost function J has changed. Thus, calling K the Jacobian matrix of the observation vector with respect to the state vector, the solution

x_{i+1} = x_i + [(1 + γ) B^{−1} + K_i^T R^{−1} K_i]^{−1} { K_i^T R^{−1} [y − F(x_i)] − B^{−1} (x_i − x_b) }   (13)

is iterated until the following convergence criterion is satisfied:

[F(x_{i+1}) − F(x_i)]^T S_{δy}^{−1} [F(x_{i+1}) − F(x_i)] < m (Observations)   (14)

Here, S_{δy} is the covariance of the difference between successive forward-model estimates, and m (Obs.) indicates the number of observations (i.e., the dimension of y).
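The iteration of equations (12)-(14) can be sketched as below; `forward` and `jacobian` are placeholder callables standing in for the radiative-transfer model F(x) and its Jacobian K (the paper uses the NOAA code of [27]), and the convergence test is a simplified stand-in for the criterion of equation (14).

```python
import numpy as np

def one_dvar(y, x_b, B, R, forward, jacobian, gamma=1.0, max_iter=20, tol=1e-3):
    """Sketch of the Levenberg-Marquardt 1DVAR iteration of equation (13).
    `forward` (F) and `jacobian` (K) are placeholder callables; gamma is the
    LM factor, held fixed here although the full method adapts it after each
    iteration depending on how the cost function J changes."""
    B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)
    x = x_b.copy()
    for _ in range(max_iter):
        K = jacobian(x)                              # m x n Jacobian at x_i
        lhs = (1.0 + gamma) * B_inv + K.T @ R_inv @ K
        rhs = K.T @ R_inv @ (y - forward(x)) - B_inv @ (x - x_b)
        step = np.linalg.solve(lhs, rhs)
        x = x + step
        if float(np.max(np.abs(step))) < tol:        # simplified stand-in for eq (14)
            break
    return x
```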

The ground-based scanning radiometer (GSR) was first deployed during the Water Vapour Intensive Operational Period (WVIOP, March-April 2004) and, later, during the Radiative Heating in Underexplored Bands Campaign (RHUBC, February-March 2007), both held at the Atmospheric Radiation Measurement (ARM) Program's North Slope of Alaska (NSA) site in Barrow, Alaska ([25]).

The state vectors used by [23] are profiles of temperature and total water (i.e., the total of specific humidity and condensed-water content ([26])). The choice of total water has the advantages of reducing the dimension of the state vector, enforcing an implicit correlation between humidity and condensed water, and including a supersaturation constraint. Moreover, the introduction of the natural

logarithm of total water creates error characteristics that are more closely Gaussian and prevents

unphysical retrieval of negative humidity. The background-error covariance matrices for both

temperature and humidity profiles may be computed from a set of simultaneous and co-located

forecast-RAOB data (both in clear and cloudy conditions). This calculation of B inherently includes

forecast errors as well as instrumental and representativeness errors from the radiosondes. The

radiosonde instrumental error is assumed to be negligible compared with the representativeness error,

which consists of the error associated with the representation of volume data (model) with point

measurements (radiosondes). The B matrix including these terms seems appropriate for the

radiometric retrieval minimization; since the grid cell of the NWP model is much larger than the

radiometer observation volume, the latter can be assumed as a point measurement compared with the

model cell, similar to radiosondes. It may be assumed that the B matrix estimated for humidity is valid for the control variable total water, since no information on the background cloud-water error

covariance was available. This assumption is strictly valid during clear sky conditions only, while it

underestimates the background error in cloudy conditions. The implications are that, under cloudy

conditions, humidity retrieval would rely more on the background and less on measurements than

would be possible by adopting a B matrix that includes both humidity and liquid-water errors.

However, considering the infrequent and optically thin cloudy conditions encountered during

RHUBC, it is understood that this assumption does not affect results significantly.
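A sketch of how the background-error covariance B described above might be estimated from simultaneous, co-located forecast/RAOB pairs (hypothetical inputs; rows are cases, columns are vertical levels):

```python
import numpy as np

def background_covariance(forecast_profiles, raob_profiles):
    """Estimate the background-error covariance B from simultaneous,
    co-located forecast/RAOB profile pairs (hypothetical arrays with
    rows = cases and columns = vertical levels)."""
    departures = forecast_profiles - raob_profiles   # forecast-minus-RAOB
    return np.cov(departures, rowvar=False)          # (levels x levels) matrix B
```

As noted above, a B estimated this way folds together forecast errors and the instrumental and representativeness errors of the radiosondes.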

The observation vector is defined as the vector of brightness temperatures measured by the GSR at a number of elevation

angles, plus the surface temperature and humidity given by the sensors mounted on the lowest level


(2 m) of the meteorological tower. The observation-error covariance matrix may be estimated using the GSR data taken from the WVIOP, following the approach of [21]. The forward model F(x) is provided by the NOAA microwave radiative-transfer code ([27]), which also provides the weighting functions that were used to compute the Jacobians with respect to temperature, humidity, and liquid water. The typical errors with respect to the band-averaged brightness temperatures are within 0.1 K and were accounted for in the forward-modelling component of the observation error.

The 1DVAR retrieval technique and the settings described above were applied to GSR data collected during the three-week duration of RHUBC. These observations were found to be consistent with simultaneous and co-located observations from the other two independent 183-GHz radiometers and with simulations obtained from RAOBs ([28]), generally within the expected accuracy. As a comparison, the background NWP profiles used as a first guess and the in situ observations from the radiosondes were also shown.

Concerning the temperature profile, it is noted that in this case, the NWP forecast is in good

agreement with the RAOB, particularly in the atmospheric layer from 0.5 to 3.0 km. Conversely, in

the upper part of the vertical domain (3–5 km), the NWP forecast shows about 1–2-K bias with

respect to the RAOB, while in the very first layer (0–0.5 km), it differs from RAOB by more than 10

K. Conversely, the 1DVAR retrieval agrees better with the RAOB at the lowest levels, while at the upper levels the retrieved temperature tends to lie close to the NWP background.

As for the humidity, again it is noted that the NWP forecast captures the vertical structure well, although with lower resolution, except for the first 500 m, where the 1DVAR retrieval shows a much better agreement with the RAOB.

The analyses show that at Fortaleza the upper frequency band, i.e., the 56-58 GHz band consisting of four frequency channels built into the said radiometer, provides good agreement with the RAOBs regarding temperature profiling. On the other hand, at Belem the lower frequency band, i.e., the 51-53 GHz band consisting of four frequency channels built into the said radiometer, provides good agreement with the RAOBs regarding temperature profiling. It may also be noted that at Alcantara the lower frequency band shows good agreement in this regard. All these agreements happen to be good during the afternoon, and the situation is worst during midnight (UTC). It is also observed that, as we move towards higher latitudes, the possibility of getting good agreement with the RAOB upper-air data lies in favour of using the higher-frequency channels.

VIII. FUTURE SCOPE

In future we are keen to use other retrieval methods, such as the Backus-Gilbert synthetic averaging inversion method and the neural network method, for the retrieval of vertical profiles of atmospheric temperature over the aforesaid three places of choice, and to find the most suitable and simplified inversion method for retrieving the atmospheric temperature profile, both in terms of accuracy and resolution, with minimal operating and instrumental limitations. Also, using the potential offered by ground-based multichannel microwave radiometry, we intend to apply the Modified Optimal Estimation Method for continuous vertical profiling of humidity as well as water vapour in order to validate the model.

REFERENCES

[1].Solheim, F., Godwin, J. R., Westwater, E.R., Han,Y., Keihm, S. J., Marsh, K., Ware, R., (1998)

“Radiometric profiling of temperature, water vapor and cloud liquid water using various inversion methods”,

Radio Science, Vol. 33, No. 2, pp 393-404.

[2].Westwater, E. R., Crewell, S., Matzler, C., (2004) “A Review of Surface-based Microwave and Millimeter

wave Radiometric Remote Sensing of the Troposphere”, URSI Radio Science Bulletin, No. 310, pp 59-80.

[3].Rocken,C., Johnson, J. M., Neilan, R. E., Cerezo, M., Jordan , J. R., Falls, M. J., Nelson , L. D., Ware, R. H.,

Hayes, M., (1991) “The Measurement of Atmospheric Water Vapor: Radiometer Comparison and Spatial

Variations”, IEEE Transactions on Geoscience and Remote Sensing, Vol. 29, Issue 1, pp 3-8.

[4].Westwater, E.R., (1993) Ground-based Microwave Remote Sensing of Meteorological Variables, in:

Michael A. Janssen (Eds.), Atmospheric Remote Sensing by Microwave Radiometry. J. Wiley & Sons, Inc.

(New York), pp.145-213.


[5]. Westwater, E. R., Han, Y., Solheim, F., (2000) Resolution and accuracy of a multi-frequency scanning

radiometer for temperature profiling, in: P. Pampaloni and S. Paloscia (Eds.), Microwave Radiometry and

Remote Sensing of the Earth’s Surface and Atmosphere. VSP 2000 (Netherland), pp. 129-135.

[6].Ware, R., Solheim , F., Carpenter, R., Gueldner, J., Liljegren, J., Nehrkorn, T., Vandenberghe, F., (2003)

“A multichannel radiometric profiler of temperature, humidity and cloud liquid”, Radio Science, Vol. 38, No. 4,

8079, pp 4401-4413.

[7].Karmakar, P. K., Maiti, M., Sett, S., Angelis, C. F., Machado, L.A.T., (2011a) “Radiometric Estimation of

Water Vapor Content over Brazil”, Advances in Space Research, Vol. 48, No. 9, pp 1506-1514.

[8]. Karmakar, P. K., Maiti, M., Calheiros, A. P. J., Angelis, C. F., Machado, L. A. T., Da Costa, S. S., (2011b)

“Ground based single frequency microwave radiometric measurement of water vapour”, International Journal

of Remote Sensing, Vol. 32, No. 23, pp 8629-8639.

[9]. Mondal, S., Pradhan, A. K., Karmakar, P. K., (2014) "Water vapour profiling at southern latitudes by deploying microwave radiometer", International Journal of Advances in Engineering & Technology, Vol. 6,

Issue 6, pp 2646-2656.

[10].Karmakar, P.K., (2013) Ground-Based microwave radiometry and remote sensing, CRC press, Boca Raton,

FL (USA), pp 224.

[11].Askne, J. I. H., Westwater, E. R., (1986) “A Review of Ground-Based Remote Sensing of Temperature and

Moisture by Passive Microwave Radiometers”, IEEE Transactions on Geoscience and Remote Sensing, GE-

24(3), pp 340-352.

[12].Canavero, F. G., Einaudi, F., Westwater , E. R., Falls, M. J., Schroeder, J. A., Bedard Jr, A. J., (1990)

“Interpretation of ground-based radiometric observations in terms of a gravity wave model”, Journal of

Geophysical Research, Vol. 95, No. D6, pp 7637-7652.

[13].Goody, R. M., Yung, Y. L., (1995). Atmospheric Radiation: Theoretical Basis, 2nd Edition, Oxford

University Press (USA).

[14].Gasiewski, A. J., (1993) Microwave Radiative Transfer in Hydrometeors, in: Michael A. Janssen (Eds.),

Atmospheric Remote Sensing by Microwave Radiometry. J. Wiley & Sons, Inc (New York), pp.1-36.

[15]. Westwater, E. R., Crewell, S., Matzler, C., Cimini, D., (2005) "Principles of Surface-based Microwave

and Millimeter wave Radiometric Remote Sensing of the Troposphere”, Quaderni Delle Societa Italiana Di

Elettromagnetismo, Vol. 1, No. 3, pp 50-90.

[16].Ulaby, F. T., Moore, R. K., Fung, A. K., (1986) Microwave Remote Sensing-Active and Passive, vol. 3,

Artech House, Inc. (Norwood).

[17]. Cimini, D., Shaw, J. A., Han, Y., Westwater, E. R., Irisov, V., Leuski, V., Churnside, J. H., (2003) "Air Temperature Profile and Air-Sea Temperature Difference Measurements by Infrared and Microwave Scanning

Radiometers”, Radio Science, Vol. 38, No. 3, 8045, pp 1001-1019.

[18].Westwater, E. R., Snider, J. B., Carlson, A. C., (1975) “Experimental Determination of Temperature

Profiles by Ground-Based Microwave Radiometry”, Journal of Applied Meteorology, Vol. 14, No. 4, pp 524-

539.

[19].Rodgers, C. D., (1976) “Retrieval of Atmospheric Temperature and Composition from Remote

Measurement of Thermal Radiation”, Reviews of Geophysics, Vol. 14, No. 4, pp 609-624.

[20].Rodgers, C. D., (2000) Inverse Methods for Atmospheric Sounding: Theory and Practice, i-xvi. World

Scientific Publishing (Singapore).

[21].Hewison, T., (2007) “1D-VAR retrievals of temperature and humidity profiles from a ground-based

microwave radiometer”, IEEE Transactions on Geoscience and Remote Sensing, Vol. 45, No. 7, pp 2163–2168.

[22].Cimini, D., Hewison, T. J., Martin, L., Güldner, J., Gaffard, C., Marzano, F. S., (2006) “Temperature and

humidity profile retrievals from ground-based microwave radiometers during TUC", Meteorologische Zeitschrift, Vol. 15,

No. 5, pp 45–56.

[23].Cimini, D., Westwater, Ed R., Gasiewski, A. J., (2010) “Temperature and Humidity Profiling in the Arctic

Using Ground-Based Millimeter-Wave Radiometry and 1DVAR”, IEEE Transactions on Geoscience and

Remote Sensing, Vol. 48, No. 3, pp 1381-1388.

[24].Ide, K., Courtier, P., Ghil, M., Lorenc, A. C., (1997) “Unified notation for data assimilation: Operational,

sequential, and variational”, Journal of Meteorological Society of Japan, Vol. 75, No. 1B, pp 181–189.

[25].Ackerman, T. P., Stokes, G. M., (2003) “The atmospheric radiation measurement program”, Physics

Today, Vol. 56, No. 1, pp 38–44.

[26].Deblonde, G., English, S., (2003) “One-dimensional variational retrievals for SSMIS simulated

observations”, Journal of Applied Meteorology, Vol. 42, No. 10, pp 1406–1420.

[27].Schroeder, J. A., Westwater, E. R., (1991) User’s guide to WPL microwave radiative transfer software,

Nat. Ocean. Atmos. Admin., Boulder, CO, NOAA Technical Memorandum, ERL WPL-213.

[28].Cimini, D., Nasir, F., Westwater, E. R., Payne, V. H., Turner, D. D., Mlawer, E. J., Exner, M. L., Cadeddu,

M. P., (2009) “Comparison of ground based millimeter-wave observations and simulations in the Arctic

winter”, IEEE Transactions on Geoscience and Remote Sensing, Vol. 47, No. 9, pp 3098–3106.


AUTHORS

Ayan Kanti Pradhan (b. 1980) received his M.Sc. from Vidyasagar University, West Bengal, India, and is currently pursuing his research work under the supervision of Dr. Pranab Kumar Karmakar in the Department of Radiophysics & Electronics, University of Calcutta. His field of research includes the vertical profiling and estimation of atmospheric constituents, especially ambient temperature, deploying a ground-based microwave radiometer as well as employing radiosonde data.

Since 2010 he has been an Assistant Professor in the Department of Electronics, Acharya Prafulla Chandra College, West Bengal, India, where he is involved in teaching both post-graduate and under-graduate classes in areas such as microwave devices, communication, control systems and instrumentation.

Subrata Mondal (b. 1984) received the B.Tech. degree in Electronics & Communication Engineering from Murshidabad College of Engineering & Technology, W.B.U.T., in 2007 and the M.Tech. degree in Radiophysics & Electronics from the Institute of Radiophysics & Electronics, C.U., in 2009. He is currently working as an Assistant Professor in the Department of Electronics & Communication Engineering at Dumkal Institute of Engineering and Technology, W.B.U.T., and is pursuing research work leading to a PhD degree under the supervision of Dr. P. K. Karmakar at the Institute of Radiophysics & Electronics, C.U., India.

He has taught courses on electromagnetics, remote sensing and wave propagation theory. His technical interests include passive and active remote sensing, radiative transfer and microwave engineering.

Pranab Kumar Karmakar is currently pursuing his research work basically in the area of modelling of integrated water vapour and liquid water in the ambient atmosphere. Ground-based microwave radiometric remote sensing, including the vertical profiling of thermodynamic variables, is his special area of interest. He is presently involved in research and in teaching post-graduate classes at the Institute of Radiophysics and Electronics, University of Calcutta.

Since joining the University of Calcutta, India, in 1988, Dr. Karmakar has published noteworthy research outcomes over tropical locations in different international and national journals of repute. This work culminated in a book entitled Microwave Propagation and Remote Sensing: Atmospheric Influences with Models and Applications, published by CRC Press in 2012.

He was awarded the International Young Scientist award of URSI in 1990 and the South-South Fellowship of TWAS in 1997. He has acted as a visiting scientist at the Remote Sensing Laboratory, University of Kansas, USA; the Centre for Space Sciences, China; and the National Institute for Space Sciences, Brazil.


DEVELOPMENT AND EVALUATION OF TROLLEY-CUM-BATCH

DRYER FOR PADDY

Mohammed Shafiq Alam and V K Sehgal

Department of Processing and Food Engg.,

Punjab Agricultural University, Ludhiana, Punjab, India

ABSTRACT

The tractor trolley available with the farmer was converted into a batch dryer by making suitable modifications. The trolley was modified in such a way that it can hold one tonne of paddy per batch. A set of three heaters (8 kW each) was provided to heat the air for drying, and a control system was provided to regulate the air temperature. The performance of the dryer was evaluated by drying paddy, and it was found that the system was capable of drying one tonne of paddy in 100 minutes while consuming 56 kW-hr of energy. The operating cost for reducing the moisture content of one tonne of paddy per batch by 5 per cent, i.e., from 18% to 13% (w.b.), was approximately Rs. 250. Among the nine drying models fitted to the experimental data, the Verma model was found to be the best for representing the drying behaviour of paddy in the trolley-cum-batch dryer.

KEYWORDS: Dryer, Energy, Moisture content, Paddy, Trolley.

I. INTRODUCTION

The drying of grains followed by scientific storage practice is universally accepted to be the safest and

most economical practice of preserving quality and quantity of grains. The high moisture content crop at

the time of harvest is liable to be infested with moulds, fungus and insect attack during a short period of

storage. The safe storage moisture content for most of the crops is between 8-10% in the case of grains and 5-6% in the case of fruits and vegetables. For paddy milling, the appropriate moisture content is 14 per cent. Since most of the crops that need drying are harvested in October, the farmer is under great pressure to sow the next crop in November. Under such circumstances, a suitable drying system is an integral part of farm mechanization. In order to store the produce, a farmer has to opt for safe drying methods. It is

neither desirable nor practically feasible for the farmer to dry his crop in the open sun which is a

highly time consuming, sunshine dependent and inefficient method of drying. A mechanical dryer helps

to reduce the losses and drying time as well as to improve and stabilize the quality of the produce. But in

the absence of a suitable and cheap mechanical dryer, he has no option but to sell his crop and be deprived of the advantage of the higher price of dried/dehydrated products during the off-season.

Efforts have been made consistently by several researchers to develop batch drying systems that have low capital and operating costs, but these were of low capacity and did not fulfil the requirements of the farmers [1][2][3][4][5]. Keeping in view the problem faced by the farmers regarding high-moisture paddy, especially if rains persist, an attempt has been made by the authors to modify a tractor trolley into a multi-crop batch dryer which runs on electricity; it was evaluated for its performance in drying paddy in bulk, and the quality of the paddy was assessed in terms of head rice yield.


II. MATERIALS AND METHODS

A tractor trolley available with the farmer was converted into a dryer. The various components of the existing trolley dryer, such as the plenum chamber, holding bin, heat exchanger and hot-air supply system, were modified/improved to make the trolley dryer a multi-crop dryer with a temperature control system. The

schematic view of the developed trolley-cum-batch dryer is presented in Figure 1. The trolley was

equipped with a hydraulic jack for easy unloading. The air, heat and energy requirements of the dryer were calculated as per the procedures suggested in [6][7]. The trolley was constructed in such a manner that it can still be used for conventional transport purposes when it is not being used as a dryer. The

dryer consists of a common trolley modified into a drying chamber by providing a perforated surface

above the trolley floor. The total size of the screen on which the material is to be placed for drying is

4267.2 x 2133.6 mm, whereas the perforated screen has dimensions of 4054 x 2042 mm. An axial-flow blower powered by a 3.73 kW electric motor is attached at the centre of the trolley floor along with the heating system (electric heaters), in such a way that it draws in the heated air and distributes it uniformly through the plenum chamber; the air velocity through the plenum chamber was kept in the range of 1 ± 0.2 m/s. The commodity that needs drying is placed on top of the perforated surface of the trolley. Electric heaters (a set of 3 heaters, each of 8 kW) heat up the air, and the heated air is forced

through the drying bed. The total energy consumed was worked out by noting the input power of the driving fan and the thermal energy consumed, based on the time required for the drying operation. An energy meter was used for recording the energy consumption during the drying of paddy, and the economics was calculated based on the total energy consumed and the labour engaged for the operation. Three replications of each measurement were taken and the average values were used for the analysis.

Figure 1. Schematic view of the developed trolley-cum-batch dryer. Legend: 1. Perforated sheet; 2. Blower (axial flow); 3. Electric heater; 4. Control panel; 5. Trolley with hydraulic jack for unloading; 6. Air heating unit; 7. Drying chamber; 8. Test bins (positions A-E). All dimensions are in mm.


2.1 Evaluation of Trolley-Cum-Batch Dryer

The modified/improved multi-crop trolley dryer was tested for its performance and evaluated for paddy. The moisture content and temperature variations were noted by using specially designed test bins which were placed at five different locations, i.e., one at the centre (E) and one at each corner (A, B, C, D) of the perforated platform of the trolley dryer (Figure 1). The observations were recorded from the top and bottom layers of the grain bed. The temperature of the air in the plenum chamber was also observed at each of the five locations at 15-minute intervals with a laser thermometer having a least count of 0.1°C. The moisture content of paddy was measured in the field by a Wile-35 moisture meter and was measured again for accuracy by the oven method [8]. In order to avoid non-uniform drying, the paddy was stirred at regular intervals of 30 minutes. The dryer was evaluated for drying capacity, moisture reduction, total drying time required, specific energy consumption and economics [5][9].

The performance of the dryer was evaluated on the basis of EHE, HUF and COP, as suggested in [6]:

Effective heat efficiency (EHE) = (t1 - t2) / (t1 - tw1)   (eq 1)

Heat utilization factor (HUF) = (t1 - t2) / (t1 - t0)   (eq 2)

Coefficient of performance (COP) = (t2 - t0) / (t1 - t0)   (eq 3)

where t1 = drying air temperature, °C; t2 = exhaust air temperature, °C; t0 = ambient air temperature, °C; and tw1 = wet-bulb temperature of the drying air, °C.
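A minimal Python sketch of equations 1-3 (the temperature values in the usage line are illustrative only, not measured data from this study):

```python
def dryer_indices(t1, t2, t0, tw1):
    """Efficiency factors of equations 1-3 (all temperatures in deg C):
    t1 = drying air, t2 = exhaust air, t0 = ambient air,
    tw1 = wet-bulb temperature of the drying air."""
    ehe = (t1 - t2) / (t1 - tw1)   # effective heat efficiency, eq 1
    huf = (t1 - t2) / (t1 - t0)    # heat utilization factor, eq 2
    cop = (t2 - t0) / (t1 - t0)    # coefficient of performance, eq 3
    return ehe, huf, cop

# Illustrative values only:
print(dryer_indices(t1=52.0, t2=40.0, t0=28.0, tw1=30.0))
```

Note that HUF and COP are complementary by construction, so HUF + COP = 1 for any consistent set of temperatures.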

2.2 Modeling of Drying Curve of Paddy in Trolley Dryer

The moisture contents of paddy during thin-layer drying were expressed in dimensionless form as the moisture ratio, MR = (M - Me)/(M0 - Me), where M is the moisture content at any time t, M0 is the initial value, and Me is the equilibrium moisture content. However, (M - Me)/(M0 - Me) can be simplified to M/M0 [10][11], as the equilibrium moisture content is small in comparison to M0.

For mathematical modelling, nine thin-layer drying models were tested to select the best model for describing the drying curve of paddy during the drying process in the trolley dryer (Table 1); a fitting sketch is given after the table. The non-linear regression analysis was performed using SPSS (Statistical Package for the Social Sciences, Version 11.0). The coefficient of determination, r², was one of the primary criteria for selecting the best equation to account for the variation in the drying curves of the dried samples [12]. In addition to the coefficient of determination, the goodness of fit was determined by various statistical parameters such as the reduced chi-square, χ², the root mean square error, RMSE, and the percent mean deviation modulus, P, calculated as suggested in [13]. For a good-quality fit, the r² value should be higher and the error terms χ², RMSE and P should be lower [12].

Table 1. Mathematical models used for modelling of drying curve of paddy

Name of the Model Model equation References

Newton MR = Exp (-kt) [14]

Page MR = Exp (-kt^n) [15]

Logarithmic MR = a Exp (-kt) + c [16]

Two term MR = a Exp (-k1t) + b Exp (-k2t) [17]

Midilli MR = a Exp (-kt^n) + ct [18]

Two-term exponential MR = a Exp (-kt)+ (1-a) Exp(-kat) [19]

Wang and Singh MR = 1 + at + bt^2 [20]


Verma MR = a Exp (-kt)+(1-a) Exp (-ct) [21]

Thomson t = a ln(MR) + b (ln MR)^2 [22]
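As referenced above, a sketch of the fitting step for two of the Table 1 models using non-linear least squares (the paper used SPSS; scipy is a stand-in here, and the time and moisture-ratio arrays are illustrative placeholders, not the measured trolley-dryer data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative placeholder data: drying time (min) and moisture ratio.
t = np.array([0, 15, 30, 45, 60, 75, 90, 100], dtype=float)
mr = np.array([1.0, 0.90, 0.78, 0.66, 0.55, 0.46, 0.39, 0.35])

page = lambda t, k, n: np.exp(-k * t**n)                         # Page model
verma = lambda t, a, k, c: a * np.exp(-k * t) + (1 - a) * np.exp(-c * t)

for name, f, p0 in [("Page", page, (0.01, 1.0)),
                    ("Verma", verma, (0.5, 0.02, 0.005))]:
    popt, _ = curve_fit(f, t, mr, p0=p0, maxfev=10000)
    resid = mr - f(t, *popt)
    r2 = 1 - np.sum(resid**2) / np.sum((mr - mr.mean())**2)      # coefficient of determination
    rmse = np.sqrt(np.mean(resid**2))                            # root mean square error
    print(f"{name}: R^2 = {r2:.4f}, RMSE = {rmse:.4f}")
```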

2.3 Milling Quality

III. RESULTS AND DISCUSSION

3.1 Drying Of Paddy

The trolley-cum-batch dryer was tested and evaluated at the farm of Punjab Agricultural University,

Ludhiana. The average initial moisture content of paddy was 18% (w.b.), which was reduced to 13%

(w.b.) in approximately 100 minutes. The dryer was evaluated for 18.5 mm bed thickness of paddy

resulting in one tonne/batch holding capacity for reducing the moisture content to safe moisture level for

milling of paddy. The dryer was tested at full load, and it was observed that the airflow rate through the plenum chamber was 496.8 m³/min; to raise the temperature of the drying air from 27°C (ambient air) to 50°C the dryer took approximately 30 minutes. During testing, the maximum temperature attained by the dryer was 54°C, and the drying air temperature varied from 50 to 54°C throughout the drying process.

During the experimentation, the ambient air temperature and humidity varied between 27°C and 29°C and between 35% and 40%, respectively. The reduction in moisture content with time was measured

by taking samples at regular time intervals from five different locations of the trolley loading platform for

assessing the uniformity in drying (Table 2). The samples were taken from the top as well as bottom layer

to see the variation in temperature and moisture content with time (Figure 2 & Figure 3). It is very clear from the curves that in the initial stage of drying the moisture reduction is fast in the bottom layer while there is little decrease in moisture content in the top layer. The difference in the moisture content of the paddy between the bottom and the top layers is caused by the difference in the drying temperature of the air flowing through the paddy bed, the drying temperature being higher at the bottom layer than at the top layer. The relative humidity is another factor that affects the drying rate of paddy. The drying rates of both the top and the bottom layers stabilized when the dryer temperature reached approximately 50°C. The total time

taken to reduce moisture content up to 13% (w.b.) was found to be 100 minutes while the total electrical

power consumed in reducing moisture content by 5 per cent i.e. from 18% to 13% (w.b.) was

approximately 56 kW-hr. One labour was required to operate the dryer.

3.2 Mathematical Models For Fitting Drying Curves

The average moisture content data (means of moisture content data from five test kits placed at different locations of the trolley dryer) obtained at various time intervals for paddy dried in the trolley dryer were converted to the more useful moisture ratio, and curve-fitting computations against drying time were then carried out using the thin-layer drying models (Newton, Page, Logarithmic, Two term, Midilli, Two-term exponential, Wang and Singh, Verma, Thomson) shown in Table 1. The results of the statistical analyses and the model constants are given in Table 3. All the models showed a high R2 (>0.90) except the Newton model. Examination of the statistical terms showed that the Page, Midilli, Two term and Verma models had the highest R2 (>0.98) and the minimum error values for paddy dried in the trolley dryer, making them the best candidates for representing the experimental data, with mean relative deviation modulus values (P) below 1.2.

Among these four best-fitting models, the Verma model showed the minimum error terms (RMSE = 0.009569; χ2 = 0.00016; P = 1.0) and the maximum R2 (0.9861), supporting its superiority in fitting the experimental data. Thus, the Verma model was judged to be the best model for representing the convective drying kinetics of paddy in a trolley dryer.


Figure 2. Temperature variation of drying air and paddy in trolley dryer (x-axis: drying time, 0 to 120 minutes; y-axis: temperature, 0 to 60°C; series: ambient air, plenum chamber, temperature of bottom layer of paddy, temperature of top layer of paddy)

S.D varied between: 0.24 to 0.78, based on N=3 replications for temperature recorded at each drying time

Figure 3. Variation of moisture content at various depths of grain in trolley dryer (x-axis: drying time, 0 to 150 minutes; y-axis: moisture content, 10 to 20% w.b.; series: top layer, bottom layer)



3.3 Performance Evaluation Of Trolley Dryer

The modified trolley dryer was evaluated for its efficiency factors, i.e. effective heat efficiency (EHE), heat utilization factor (HUF) and co-efficient of performance (COP). The EHE, HUF and COP of the trolley dryer were evaluated as 40.4, 52 and 51.5 per cent, respectively. The operating cost of drying one tonne of paddy was about Rs. 250, taking into account the cost of electricity (Rs 4/unit) and the labour cost. Thus paddy can be dried safely in a trolley dryer with acceptable quantity and quality.
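Assuming the standard definition of eq. 3, COP = (t2 – t0)/(t1 – t0), the drying air temperature of 50°C and the ambient temperature of 27°C reported in Section 3.1 together with the measured COP of 51.5 per cent imply an exhaust air temperature of roughly t2 = 27 + 0.515 × (50 – 27) ≈ 38.8°C.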

Table 2. Moisture content of paddy sample during drying at various testing positions of trolley dryer

Moisture content (% w.b.)*, value (dry basis) followed by standard deviation
Testing bin position  15 min             30 min             45 min             60 min             75 min             100 min
A                     16.4 (19.6) 0.13   15.0 (17.06) 0.34  14.6 (17.0) 0.16   14.3 (16.6) 0.26   13.9 (16.3) 0.52   13.0 (15.0) 0.17
B                     16.24 (19.4) 0.18  15.2 (17.9) 0.40   14.9 (17.6) 0.63   14.7 (17.2) 0.58   14.5 (16.9) 0.24   13.1 (15.1) 0.14
C                     16.2 (19.3) 0.14   14.7 (17.2) 0.38   14.4 (16.8) 0.44   14.1 (16.4) 0.22   13.8 (15.9) 0.28   12.7 (14.6) 0.17
D                     16.8 (20.2) 0.28   15.5 (18.3) 0.73   15.1 (17.8) 0.45   14.7 (17.2) 0.35   14.4 (16.8) 0.09   13.1 (15.1) 0.12
E                     16.6 (19.8) 0.22   15.5 (18.4) 0.27   15.0 (17.7) 0.30   14.7 (17.2) 0.15   14.4 (16.8) 0.20   13.2 (15.2) 0.10
Average               16.5 (19.7)        15.2 (17.9)        14.8 (17.4)        14.5 (16.9)        14.2 (16.6)        13.0 (15.0)

Initial moisture content of paddy: 18% wet basis (21.95% dry basis); the data in parentheses are moisture content (dry basis).
* Mean of N=3 replications; the value after each parenthesis is the standard deviation based on N=3 replications.

Table 3. Statistical results obtained from different thin layer models

Model                  k        n        k1        k2   a         b         c         R2      RMSE      χ2        P
Newton                 0.00352  -        -         -    -         -         -         0.8498  0.031529  0.00116   3.15
Page                   0.02220  0.56398  -         -    -         -         -         0.982   0.0109    0.000166  1.14
Logarithmic            0.02291  -        -         -    0.72416   -         -         0.9716  0.013671  0.000262  1.43
Two term               -        -        0.002312  5    0.92513   0.074872  -         0.9752  0.012769  0.00038   1.17
Midilli                0.98209  0.52252  -         -    1.00058   -         -0.00023  0.9821  0.010867  0.000276  1.14
Two-term exponential   0.05099  -        -         -    0.05422   -         -         0.9348  0.020725  0.000601  2.02
Wang & Singh           -        -        -         -    -0.00502  0.000025  -         0.9454  0.019007  0.000506  1.96
Verma*                 0.07447  -        -         -    0.10598   -         0.00187   0.9861  0.009569  0.00016   1.00
Thomson model          -        -        -         -    -98.279   753.247   -         0.9654  6.002119  50.43561  9.08

* Drying model with minimum error terms and maximum R2


3.4 Milling Quality Characteristics

The test results indicated that trolley-dried samples showed a comparatively higher head rice recovery (62.04%) than tray-dried samples (60.07%). The total yield obtained from the trolley- and tray-dried samples was 73.43 and 72.47 per cent, respectively. In comparison to tray-dried samples, the trolley-dried samples showed a comparatively lower broken percentage (15.50%). This slight improvement in head yield in trolley-dried samples may be due to proper churning during the drying operation, resulting in uniform drying of the sample. The paddy samples dried in the trolley dryer and the tray dryer showed a non-significant difference in milling quality at the 5% level of significance (Table 4).

IV. CONCLUSIONS

The developed trolley-cum-batch dryer was evaluated at a holding capacity of one tonne/batch of paddy, and it was found that the dryer successfully reduced the moisture content by 5 per cent, i.e. from 18 to 13% (w.b.), in 100 minutes. The operating cost and energy consumption of the trolley dryer in drying one tonne of paddy were Rs. 250 and 56 kW-hr, respectively. The Verma model [21] showed good agreement with the experimental data. The milling quality of samples dried in the trolley dryer was on par with that of samples dried in a mechanical tray dryer. Thus, high-moisture paddy can be dried in a trolley dryer at farm level/grain market to reduce losses during various post-harvest operations.

V. FUTURE RESEARCH DIRECTION

In future, the developed trolley-cum-batch dryer should be tested for drying of red chillies, turmeric fingers, etc. The trolley-cum-batch dryer should also be modified to utilize solar energy for heating the drying air.

REFERENCES

[1]. Chancellor, W.J. (1968). A simple grain dryer using conducted heat. Trans of ASAE, 11, 857-62.

[2]. Schuler, R.T., Hirwing, H.J., Hofman, V.L. & Lundstrom, D.R. (1978). Harvesting, Handling and Storage of

Seeds. Inc. Publishers, Madison, Wisconsin, USA. Pp. 45-167.

[3]. Bose, A.S.C., Ojha, T.P. & Maheshwari, R.C. (1980). Drying of paddy in a batch dryer with solar-cum-husk

fired furnace. Proc. Solar Energy Convention, held at Annamalai University. Pp 14-20.

[4]. Sahay, K.M., Saxena, R.P. & Singh, B.P.N. (1981). Development and performance evaluation of husk fired

furnace with continuous dryer for small rice mill. Paper presented at the International Conference on Agric

Engng and Agro industries in Asia at AIT, Bangkok.

[5]. Arora, Sadhna; Sehgal, V.K. & Singh, R. (2000). Evaluation of a farm grain dryer. Intern. J Trop Agric 18(2),

159-163.

[6]. Chakraverty, A. (2000). Post Harvest Technology of Cereals, Pulses and Oilseeds. Oxford and IBH Publishing

Co.

Table 4. Milling quality of paddy

Milling parameters        Trolley dryer   Tray dryer
Husk content (%)          21.91           21.67
Total yield (%)           73.43           72.47
Bran (%)                  4.66            5.86
Degree of polish (%)      5.97            7.48
Head yield (%)            62.04           60.07
Brokens (%)               15.50           17.11
CD at 5%                  NS
t stat                    -0.4056
t critical (two-tail)     2.5705


[7]. Xing, ZuoQun., Yin, XiaoHui., Gao, GuangZhi., Xiu, DeLong., Yin, SiWan. & Sun, PeiDong. (2009). Calculation and discussion of condition correction coefficient for drying performance of grain dryer.

Transactions of the Chinese Society of Agricultural Engineering, 25 (2), 91-95.

[8]. AOAC (1965). Official Methods of Analysis. Association of Official Agricultural Chemists, Washington, USA.

[9]. Singh, R.P. (1978). Energy accounting in food process operations. Fd Technol, 31-40.

[10]. Maskan, A., Kaya, S., Maskan, M. (2002). Hot air and sun drying of grape leather (pestil). J Food Engng ,54,

81-88.

[11]. Goyal, R.K., Kingsly, A.R.P., Manikanthan, M.R. & Ilyas, S.M. (2007). Mathematical modeling of thin layer

drying kinetics of plum in a tunnel dryer. J Food Engng, 79, 176-180.

[12]. Togrul, I.T. & Pehlivan, D. (2002). Mathematical modeling of solar drying of apricots in thin layers. J. Food

Engng ,55, 209-216.

[13]. Gomez, K.A. & Gomez, A. A. (1983). Statistical Procedure for Agricultural Research. John Wiley and Sons,

New York.

[14]. Brooker, D.B., Bakker-Arkema, F.W. & Hall, C.W. (1997). Theory and simulation of grain drying. In: Drying and storage of grains and oilseeds. CBS Publishers and Distributors, New Delhi, India.

[15]. Doymaz, I. & Pala, M. (2002). Hot-air drying characteristics of red pepper. J. Food Engng, 55, 31-335

[16]. Yaldiz, O., Ertekin, C. & Uzun, H.I. (2001). Mathematical modeling of thin layer solar drying of sultana

grapes. Energy Oxford. 26, 457-465.

[17]. Rahman, M.S., Perera, C.O. & Theband, C. (1998). Desorption isotherm and heat pump drying kinetics of

peas. Food Res Int ,30, 485-491.

[18]. Midilli, A., Kucuk, H. & Yapar, Z. (2002). A new model for single layer drying. Drying Technol., 20, 1503-

1513.

[19]. Sharaf-Eldeen, J.L., Blaisdel & Hamdy. M.Y. (1980). A model for ear corn drying. Trans ASAE. 28: 1261-

1265.

[20]. Wang, C.Y. & Singh, R.P. (1978). A single drying equation for rough rice. Trans ASAE ,11,668-672.

[21]. Verma, L.R., Bucklin, R.A., Endan, J.B. & Wraten, F.T. (1985). Effects of drying air parameters on rice drying

models. Trans ASAE, 28, 296-301.

[22]. Thompson, T.L., Peart, R.M., Foster, G.H. (1968). Mathematical simulation of corn drying-A new model.

Trans ASAE 11, (4), 582-586.

[23]. Bal, S. (1974). Measurement of milling quality of paddy. REPC, Indian Institute of Technology, Kharagpur,

Publication No.72.

AUTHORS

Mohammed Shafiq Alam born in the year 1973, graduated from JNKVV, Jabalpur in the

year 1994 and completed M.Tech in the year 1998 from Punjab Agricultural University,

Ludhiana, awarded Ph.D. (Processing and Food Engineering) from Punjab Agricultural

University, Ludhiana in the year 2008 as in-service candidate. Presently, he is working as

Processing Engineer, Department of Processing and Food Engineering, Punjab Agricultural

University, Ludhiana. His area of specialization is Food Engineering and Post Harvest Technology. He has developed much post-harvest machinery and many process technologies for honey, food grains, fruits, vegetables, etc. He has published more than 100 papers in national and international

journals, magazines etc. He has been awarded with many prestigious awards and recognitions of national and

international repute.

V K Sehgal born in the year 1952 graduated from College of Agricultural Engineering, PAU,

Ludhiana and post graduated from Asian Institute of Technology, Bangkok, Thailand and is a

Fellow of Indian Society of Agricultural Engineers and Institution of Engineers. He has

undergone three months Advance Training in Post Harvest Technology and Agricultural Waste

Management at Kansas State University, USA, one month Advance Training in Biomass Fuel

Conversion Technology at Fuel and Combustion Laboratory, Jyvaskyla, Finland and two

months Advance Training in Biomass Fuel Conversion Technology at University of California,

Davis, USA. He has developed much post-harvest machinery and many process technologies, such as the PAU moisture meter, the PAU briquetting machine, the trolley-cum-batch dryer, a suspension-type husk-fired furnace, honey processing equipment, a fruit & vegetable washing machine and a turmeric washing & polishing machine. He has published more than 250 papers in national and international journals, magazines, etc. He has been awarded many


prestigious awards and recognitions of national and international repute. He has held several positions, such as Head, Department of Processing and Agricultural Structures; Coordinator of Research (Engineering); additional charge of Dean, College of Agricultural Engineering; additional charge of Estate Officer-cum-Chief Engineer; and Director, School of Energy Studies in Agriculture. He also held additional charge of Head, Department of Mechanical Engineering, till his retirement on 31st Aug 2012.


JOINT CHANGE DETECTION AND IMAGE REGISTRATION

METHOD FOR MULTITEMPORAL SAR IMAGES

Lijinu M Thankachan and Jeny Jose
Department of Electronics and Communication, College of Applied Science, Konni, India

ABSTRACT

This paper presents a novel method for joint change detection and image registration over multitemporal Synthetic Aperture Radar (SAR) images. Image registration is performed based on the histograms of the unregistered images. The changes that occur between the multitemporal images are detected based on the evolution of the local statistics of the images. The local statistics are estimated using the Radon transform, which suppresses the speckle noise occurring in SAR images. Histogram curve fitting is used to approximate the Radon probability distribution function (pdf) by a Gaussian distribution, based on the assumption that the Radon pdf obeys the central limit theorem. In order to measure the variation between the two pairs of projection pdfs, the algorithm uses a local-statistic similarity measure called the Jeffrey divergence. Experimental results demonstrate that the proposed method can perform change detection rapidly and automatically over unregistered remote sensing images.

KEYWORDS— Synthetic Aperture Radar (SAR) images, Change detection, Image registration, Radon

transform, Jeffrey divergence.

I. INTRODUCTION

Detecting temporal changes in the state of remotely sensed natural surfaces by observing them at

different times is one of the most important applications of Earth orbiting satellite sensors, because

they can provide multidate digital imagery with consistent image quality, at short intervals, on a

global scale, and during complete seasonal cycles. A lot of experience has already been accumulated in exploring change detection techniques for visible and near-infrared data collected by satellites. The changes that occur on the earth's surface are identified by analyzing the multitemporal images acquired at different times.

Automatic change detection [1] [2] [3] in images of a given scene acquired at different times is one of

the most interesting topics in image processing. Since the Synthetic Aperture Radar (SAR) sensors are

capable of acquiring data in all weather conditions and are not affected by cloud cover or different

sunlight conditions, registered SAR images [4] are mainly used for change detection. In recent years,

SAR change detection applications have been widely extended to environmental Earth observation,

national security, and other fields.

The great prospect of change detection has led to a rapid development of techniques, such as image

differencing/ratioing [5], vegetation index differencing, principal component analysis [6], and change

vector analysis [7]. However, less attention has been paid to change detection with SAR images. But

the change detection procedure using SAR images become difficult due to the presence of speckle

noise [8][9], which is a type of multiplicative noise that often severely degrades the visibility in an

image. Thus, the simple pixel-by-pixel comparison used in optical remote sensing images is

ineffective; instead, alternative approaches have been proposed, based on local-statistic similarity

measure.

Change detection can be performed by various methods such as pixel intensity comparison, feature point matching, probability density function (pdf) comparison, etc. Unlike the classical detector, which is based on the ratio of local means, more information can be extracted from the comparison of the local probability density functions (pdfs).

Higher-order statistics carry more information about the pdf. For example, the third central moment indicates the skewness (lopsidedness) of the distribution, and the fourth central moment measures whether the distribution is tall or flat. Since higher-order statistical information has proven helpful, the local pdfs are compared here. Once the pdfs are estimated, their comparison can be performed using different criteria.

The paper is organized as follows. The overview of the proposed method is briefly presented in

section II-A. Section II-B presents the image registration process. Radon transform is discussed in

Section II-C. Histogram curve fitting method is described in section II-D and Jeffrey divergence is

analyzed in section II-E and then experimental results are presented and analyzed in section III.

Finally, conclusions and future works are drawn.

II. METHODOLOGY

A. Overview of the Proposed Method

Consider two co-registered SAR images IX and IY acquired over the same geographical area at two different dates. Our goal is to generate a “change/no change” map identifying the changes that occurred between the two dates. The problem can be decomposed into two steps: the generation of a change indicator and the thresholding of the change image. This paper focuses only on the first step of the procedure. The process is applied at each possible pixel position within the image area. Then, for each pair of SAR images, the following processing stages are performed.

1) Pre-processing stage: The image registration procedure is performed if unregistered images are selected from the dataset.

2) Radon transform stage: The Radon transform is applied to each of the co-registered SAR images to generate a pair of projections, namely the horizontal projection and the vertical projection.

3) Jeffrey divergence stage: The Jeffrey divergence is calculated as the distance measure between the two pairs of pdfs. The probability distribution functions (pdfs) used in the Jeffrey divergence are approximated by Gaussian pdfs using the histogram curve fitting method.

B. Image registration

Any SAR image processing task rests on an initial image matching and registration procedure. Intuitively, registration correctly aligns all or parts of two or more images containing the same scene. The two matched images are used to extract temporal changes in a scene, to relate differences in the appearance of a scene under differing imaging conditions, to detect parallaxes, to mosaic the images, or to create a multidimensional data set for automated analysis. The many important tasks that depend on precise registration make this a very significant problem in image processing and in the analysis of multiple SLR (Single Look Radar) images. The registration process described here accounts for differences in translation and rotation between the two images.

The most common image matching method is window correlation, in which image windows that resemble one another are located in the two images to be matched. If the images are not matched, image registration is performed.
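A minimal sketch of this window-based matching, assuming normalised cross-correlation as the similarity score (the exact correlation measure is not specified here), is given below in Python.

import numpy as np

def ncc(window_a, window_b):
    # Normalised cross-correlation between two equal-size image windows;
    # scores close to 1 indicate that the windows resemble one another.
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

A candidate window in the second image would be accepted as the match for a reference window in the first image when its score is the maximum over the search area.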

C. Radon Transform

The Radon transform is the projection of the image intensity along a radial line oriented at a specific angle [10]. The resulting projection is the line integral of the pixel intensities in each direction. It is a mapping from the Cartesian rectangular coordinates (x, y) to a distance and an angle (ρ, θ), also known as polar coordinates.

For a given pixel, a new random variable is generated by averaging along a line oriented at an angle θ with respect to the x axis; it is named the “projection variable” and is denoted by ρ. For any ρ and θ, the value of R(ρ, θ) is the amount of density that falls along the θ-line that passes within a distance ρ of the origin. For image processing, the 0° and 90° cases are chosen, and these projections are called the v-projection (vertical projection) and the h-projection (horizontal projection), respectively.


The projection variable of an image f(x, y) oriented at an angle θ can be written mathematically as

ρ = x cosθ + y sinθ … (1)

and the Radon transform of that image can be written as

R(ρ, θ) = ∬ f(x, y) δ(ρ − x cosθ − y sinθ) dx dy, the integral being taken over −∞ < x, y < ∞ … (2)

where δ(·) is the Dirac delta function.

Fig. 1 shows the geometry of the Radon transform of the image f(x, y). The axes x′ and y′ are obtained by rotating the image axes around the centre of the image by an angle of θ degrees. Rθ(x′) is the amount of distribution that falls along the direction at angle θ that passes within a distance x′ of the origin.

Fig. 2 shows a single projection at a specified rotation angle: a parallel-beam projection at rotation angle θ. For the Radon transform computation, the source and sensor are rotated about the centre of the image having coordinates x and y. For each angle θ, the distribution of the image is obtained as the rays from the source pass through the image and are accumulated at the sensor. This is repeated for a given set of angles, usually θ ∈ [0°, 180°); the angle 180° is not included since the result would be identical to that at 0°.

Fig. 1 Geometry of Radon transform.

In a way, by averaging the pixels, the Radon transform weakens the speckle noise and shortens the tail of the distribution. The shapes of the horizontal and vertical Radon pdfs are thus much closer to a Gaussian distribution, and the histogram curve fitting approximation method fits them better.

Fig.2 Parallel beam projection at angle θ.
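For the two angles used here, the projections reduce to sums of pixel intensities along the image rows and columns; the minimal Python sketch below illustrates only these 0° and 90° cases, not R(ρ, θ) for arbitrary angles.

import numpy as np

def radon_projections(img):
    # v-projection (theta = 0): integrate along y, leaving a function of x.
    # h-projection (theta = 90): integrate along x, leaving a function of y.
    v_proj = img.sum(axis=0)
    h_proj = img.sum(axis=1)
    return h_proj, v_proj

img = np.arange(25, dtype=float).reshape(5, 5)   # hypothetical 5x5 image patch
h_proj, v_proj = radon_projections(img)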

D. Histogram curve fitting


The method called histogram curve fitting is used to approximate the Radon pdf R(ρ, θ). Histogram curve fitting succeeds on the assumption that the density to be approximated is not too far from a Gaussian pdf, an assumption satisfied after the Radon transform [12]. A histogram is a way to graphically represent the distribution of data in a data set: each data point is placed into a bin based on its value, a bin being a pixel intensity range.

Histogram curve fitting plots a histogram of the values in the data using a number of bins equal to the square root of the number of elements in the data, which gives a fitted normal distribution. In order to obtain the normal pdf, the mean µ and standard deviation σ are estimated from the histogram data, and these values are substituted into the equation of the normal pdf.

The reason for using the curve fitting method is the central limit theorem (CLT): the mean of a sufficiently large number of independent random variables, each with finite mean and variance, is approximately normally distributed. Therefore, the pdf of the Radon projection should not be too far from a normal distribution.
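A minimal Python sketch of this approximation, following the square-root bin rule described above (the helper name fit_gaussian_pdf is hypothetical), is shown below.

import numpy as np

def fit_gaussian_pdf(values):
    # Bin count equal to the square root of the number of elements in the data
    nbins = max(1, int(np.sqrt(values.size)))
    counts, edges = np.histogram(values, bins=nbins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    # Estimate mu and sigma from the data and substitute them into the normal pdf
    mu, sigma = values.mean(), values.std()
    pdf = np.exp(-0.5 * ((centres - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return centres, counts, pdf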

E. Jeffrey Divergence

Let fX(x) and fY(x) be the probability distribution functions of the random variables X and Y. The Jeffrey divergence between the densities fX(x) and fY(x) is given by

J(X, Y) = ∫ [ fX(x) log( fX(x) / m(x) ) + fY(x) log( fY(x) / m(x) ) ] dx … (3)

where m(x) = ( fX(x) + fY(x) ) / 2.

It is an information similarity measure that quantifies the divergence of one pdf from another [13]. This measure is symmetric and nonnegative; therefore, it can be used as a statistical similarity distance [14]. The Jeffrey divergence is numerically stable and more robust with respect to noise and bin size than the KL divergence. Since it is a modification of the KL divergence [15], it can be written as

J(X, Y) = K(X|Z) + K(Y|Z) … (4)

where Z denotes the averaged density m(x) and K(·|·) is the KL divergence. In order to estimate the Jeffrey distance, it is thus decomposed into two KL divergences.
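A discretised Python sketch of eqs. (3) and (4), assuming both pdfs are sampled on the same bins, is shown below; the small eps term is an implementation detail added here to guard the logarithms against empty bins.

import numpy as np

def jeffrey_divergence(fx, fy, eps=1e-12):
    fx = fx / fx.sum()                 # normalise both densities
    fy = fy / fy.sum()
    m = 0.5 * (fx + fy)                # averaged density m(x)
    # Decomposition into two KL divergences against m, as in eq. (4)
    kl_x = np.sum(fx * np.log((fx + eps) / (m + eps)))
    kl_y = np.sum(fy * np.log((fy + eps) / (m + eps)))
    return kl_x + kl_y                 # zero only when the two pdfs coincide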

III. RESULTS AND DISCUSSIONS

A pair of SAR images acquired by a spaceborne imaging radar sensor over Manaus, Brazil, in April 1994 and October 1995, respectively, is used in the experiment. In order to better understand the behavior of projection-based detection, simulations have been performed.

Fig 3 SAR images (a) Before change (SAR image 1)


(b) After change (SAR image 2)

Fig 3(a) and (b) are the registered SAR images. After pre-processing, the Radon transform is applied to both images. The resulting projection is the sum of the intensities of the pixels in each direction. The Radon pdf obtained is irregular; it is approximated by a Gaussian pdf for statistical analysis using the histogram curve fitting method. Fig 4(a) and Fig 4(b) show the projection pdfs of SAR image 1.

Fig 4(a) Vertical projection pdf of SAR image 1

Fig 4(b) Horizontal projection pdf of SAR image 1

Fig 5(a) and Fig 5(b) show the vertical and horizontal projection pdfs of SAR image 2. Gaussian pdfs are obtained by approximating the projection pdfs using the histogram curve fitting method. In order to detect the changes between the SAR images, a comparison of the local probability density functions (pdfs) of the homologous pixels of the pair of images is performed. Fig 6(a) and (b) show the variation between the vertical Gaussian pdfs and the horizontal Gaussian pdfs of the SAR images.

Fig 5 (a) Vertical projection pdf of SAR image 2

Fig 5 (b) Horizontal projection pdf of SAR image 2

Fig 6 (a) Comparison of horizontal pdfs

As the divergence value is not equal to zero, changes have occurred; if it were equal to zero, there would be no change between the images. The percentage deviations between the vertical pdfs and the horizontal pdfs of the images under consideration are 13.68% and 21.69%, respectively. From the results obtained it is clear that as the value of the Jeffrey divergence increases, the percentage of change between the images also increases.


(b) Comparison of vertical pdfs.

IV. CONCLUSION AND FUTURE WORKS

In this paper, a new similarity measure between images in the context of multitemporal SAR image change detection is discussed. The measure is based on the Radon transform combined with the Jeffrey divergence, and it suppresses the speckle noise generated by random fluctuations in the SAR return signals.

A divergence value between the pdfs is calculated using the Jeffrey divergence, which serves as the similarity measure for change detection. From the results obtained it is clear that as the value of the Jeffrey divergence increases, the percentage of change between the images also increases.

Some questions remain open about the use of projections. As stated in Section II, in theory the method can detect rotation changes of the texture; this will be tested when appropriate data are available.

Some improvements could be made to obtain a more reliable approach by using more information than only the pdf of the image; i.e., a change map has to be created from the divergence value to visually analyze the changes between the images. All these aspects will be studied as future developments of this work.

REFERENCES

[1] W. Luo, H.Li and G.Liu, “Joint change detection and image registration method for remote sensing

images,” IEEE Trans. Geosci. Remote Sens., vol. 6, pp. 634–640, Mar. 2013.

[2] M.V. Wyawahare and P. M.Patil ,”Image registration techniques and change detection :An overview”, Int.

J. Signal Proce., vol. 2, no. 3, Sep 2013.

[3] Jin Zheng and Hongjian You, “A New Model-Independent Method for Change Detection in

Multitemporal SAR Images Based on Radon Transform and Jeffrey Divergence”, IEEE Trans.Geosci.

Remote Sens., vol. 4, no. 2, pp. 278–282, Jan.2012

[4] V. A. Krylov1 and G. Moser, “Change detection with synthetic aperture radar images by wilcoxon statistic

likelihood ratio test”, IEEE conf. image processing, Aug. 2011.

[5] N. Milisavljevic and D. Closson, “Detecting human induced scene changes using coherent change

detection in SAR images”, ISPRS. Remote sens, Vol.8, July 2010.

[6] C. Oliver and S. Quegan, “Understanding Synthetic Aperture Radar Images”, West Perth, WA, Australia:

SciTech, 2004.

[7] G. Mercier, G. Moser, and S. B. Serpico, “Conditional copulas for change detection in heterogeneous

remote sensing images,” IEEE Trans. Geosci.Remote Sens., vol. 46, no. 5, pp. 1428–1441, May 2008.

[8] J.Inglada and G. Mercier, “A new statistical similarity measure for change detection in multitemporal SAR

images and its extension to multiscale change analysis,” IEEE Trans. Geosci. Remote Sens., vol. 45, no. 5,

pp. 1432–1445, May 2007.

[9] Maryna Rymasheuskaya,” Land cover change detection in northern Belarus using image differencing

technique”, in Proc. ScanGIS, May 2007.

[10] S. Baronti, R. Carla, S. Sigismondi “Principal Component Analysis for change detection on polarimetric

multitemporal SAR Data,” IEEE Trans. Geosci. Remote Sens., vol. 46, no. 7, Nov 1994.

[11] J.Chen, P.Gong, Chunyang He, R. Pu, “Land-Use/Land-Cover Change Detection Using Improved

Change-Vector Analysis,” IEEE Trans. Photogrammetric Engg & Remote Sens., Vol. 69, No. 4, pp.

369–379, April 2003.


[12] E. J. M. Rignot and J. J. van Zyl, “Change Detection Techniques for ERS-1 SAR Data,” IEEE Trans.

Geosci. Remote Sensing, vol. 31, no. 4,pp. 896–906, July 1993. V. Frost, J. Stiles, K. Shanmugan, and J.

Holtzman, “A model for radar images and its application to adaptive digital filtering of multiplicative

noise,” IEEE Trans. Pattern Anal. Machine Intell., vol. 4, no. 2, pp. 157–165, Mar. 1982.

[13] V.G.G. Moser and S. Serpico, “Unsupervised change detection from multichannel SAR images,” IEEE

Trans. Geosci. Remote Sens., vol. 4, no. 2, pp. 278–282, Apr. 2007.

[14] Y. Bazi, L. Bruzzone, and F. Melgani, “An Unsupervised Approach based on the Generalized Gaussian

Model to Automatic Change Detection in Multitemporal SAR Images,” IEEE Trans. Geosci.

RemoteSensing, vol. 43, no. 4, pp. 874–887, Apr. 2005.

[15] J.-S. Lee, “Digital Image Enhancement and Noise Filtering by Use of Local Statistics,” IEEE Trans.

Pattern Anal. Machine Intell, vol. 2, no. 1,pp. 165–168, 1980.

[16] C. L. Nikias and A. P. Petropulu, Higher-Order Spectra Analysis: a nonlinear signal processing

framework. Englewoods Cliff, NJ, PTR Prentice Hall, 1993.

[17] J. Inglada, “Change detection on SAR images by using a parametric estimation of the Kullback–Leibler

divergence,” in Proc. IEEE IGARSS, Jul. 2003, vol. 6, pp. 4104–4106.

AUTHORS

Lijinu. M. Thankachan received the Electronics and Communication engineer degree in 2011

from Mahatma Gandhi University and the post-graduation degree in signal processing in 2013

from CUSAT University. She has since been working at IHRD College of Applied Science, Konni, as Asst. Professor in the Department of Electronics and Communication. Her research

interests include the development of image processing algorithms for the operational

exploitation of Earth Observation images, mainly in the fields of image registration, change

detection and object recognition.

Jeny Jose received the B.Tech degree in Electronics and Communication engineering and the

Master’s (engineer) degree in VLSI and Embedded System engineering from Mahatma Gandhi

University, in 2011 and 2013, respectively. She is currently a Lecturer in the IHRD College of

Applied Science, MG University. Her research interests include VLSI design and

implementation for discrete wavelet transform and image coding.


LOAD - SETTLEMENT BEHAVIOUR OF GRANULAR PILE IN

BLACK COTTON SOIL

Siddharth Arora1, Rakesh Kumar2 and P.K. Jain3
1M. Tech. Student, M.A.N.I.T Bhopal, (M.P.) India
2Assistant Professor, Department of Civil Engineering, M.A.N.I.T Bhopal, (M.P.) India
3Professor, Department of Civil Engineering, M.A.N.I.T Bhopal, (M.P.) India

ABSTRACT

The paper discusses the results of a study conducted on floating granular piles constructed in soft black cotton soil. The soil beds were prepared in a model tank of diameter 173 mm and height 605 mm. A granular pile of diameter 55 mm was constructed at the centre of the soil bed using crushed stone chips as the pile material, and a load test was conducted on the granular pile. The length to diameter ratio (L/d ratio) of the pile was varied from 1 to 11 at an interval of two, and the effect of pile length on load carrying capacity was studied. Further, to observe the effect of encasement of the pile material, the pile was wrapped with geogrid and the load test was conducted again. The test results show that the ultimate load carrying capacity, Qult, of the granular pile increases as the L/d ratio increases in both cases, i.e. without and with geogrid encasement. The increasing trend of Qult with L/d is observed to continue up to the maximum L/d ratio tested, i.e. 11. No well-defined critical length of granular pile is observed in the present investigation of floating granular piles in soft black
cotton soil.

KEY WORDS: Granular Pile, crushed stone chips, Geogrid, load carrying capacity, critical length.

I. INTRODUCTION

Granular pile, sometimes referred to as the stone column technique, is one of the most commonly used soil improvement techniques for soft soils. The technique has been utilized worldwide to increase the

load carrying capacity of poor soils and to reduce the settlement of superstructures constructed on

them. The technique is particularly suitable for flexible structures such as road embankments, oil

storage tanks, etc constructed on soft marine clays (Murugesan and Rajagopal, 2006). Different

aspects of soft soil improvement using granular piles have been reported in the literature. The

construction methods are discussed by Datye and Nagraju, 1981; Ranjan and Rao, 1983 and

Greenwood and Krisch, 1983. The mechanism of load transfer is described by Greenwood, 1970;

Hughes and Withers, 1974; Hughes et al., 1976; Madhav and Vitkar, 1978; Aboshi et al., 1979;

Barksdale and Bachus, 1983 and Black et al., 2007. Granular pile is reported to be most effective in

clayey soils with undrained shear strength ranging from 7-50 kPa (Barksdale and Bachus 1983, Juran

and Guermazi 1988, IS 15284(Part-I): 2003).

Black cotton soils also behave like soft soils on wetting. Structures, particularly light ones, founded on such soils experience large settlement, heave and undulation due to alternate wetting and

drying of the soil. A large number of methods that are commonly suggested to improve the behavior

of such soils include the soil replacement, mechanical and chemical stabilization, thermal treatment

etc. A brief review of the common methods and their shortcomings has been discussed by Kumar and

Jain (2012). Recently, Kumar and Jain (2013) and Kumar (2014) showed that the technique of improving soft soils by granular piles/stone columns can also be used in soft expansive black cotton soil. Kumar (2014) performed laboratory model tests on granular piles of sand constructed in soft expansive soil of different UCS values. End-bearing granular piles were cast in the soil and load tests were performed. The test results showed that the load carrying capacity of a footing, Qult, defined as the load corresponding to a settlement of 10% of the diameter of the footing (or pile), resting on a granular pile is significantly higher than the corresponding value for a footing placed directly on the soft soil bed. The increase in Qult was observed for all soils, i.e. with UCS of 30, 40 and 50 kPa. Further, by mixing nylon fibers and/or encasing the pile with geogrid, the load carrying capacity of the granular pile was found to increase. As all the tests reported by Kumar (2014) were performed on end-bearing granular piles, the effect of the ratio of length, L, to diameter, d, of the pile (L/d ratio) on the load carrying capacity could not be established. To fill this gap, a test program was planned in which floating piles of different L/d ratios were cast in a prepared bed of expansive soft soil and load tests were performed. The details are discussed in the subsequent paragraphs, which cover the experimental investigation, results, discussion and conclusions.

II. EXPERIMENTAL INVESTIGATION

The experiments were carried out on a 55 mm diameter granular pile surrounded by black cotton soil (i.e. soft clay) in a cylindrical mould of 173 mm diameter and 605 mm height. The soil bed was prepared at a dry density of 14.32 kN/m3 and 32% water content, at which the cohesion of the soil was 36 kN/m2. Care was taken to ensure that no significant air voids were left in the soil bed. The granular pile was constructed at the centre of the soil bed by making a hole with an auger, removing the soil and filling the pile material in layers. The dry density of the granular material was 17.5 kN/m3. Three series of laboratory model tests were performed. The first series of tests was performed on the soft soil bed without any granular pile by placing a mild steel plate of 55 mm diameter on the soil top. The second series of tests was performed on ordinary granular piles (O.G.P) of length, L, equal to 55 mm, 165 mm, 275 mm, 385 mm, 495 mm and 605 mm; thus the L/d ratio was varied as 1, 3, 5, 7, 9 and 11. The third series of tests was performed on encased granular piles (E.G.P), in which nylon geogrid was wrapped around the granular pile for the different L/d ratios mentioned in the O.G.P test series.

III. TEST SETUP

A typical test arrangement for a single pile test is shown in Fig. 1. The load was applied on a circular

footing (mild steel plate) placed on the granular pile through a proving ring at a constant displacement

rate of 1.25 mm/min. The diameter of the footing was the same as that of the granular pile. The load corresponding to equal intervals of settlement of the footing was noted.

Figure 1 – Typical test arrangement for studying load-settlement behavior of floating granular pile in soft soil


IV. MATERIAL PROPERTIES

Three basic materials used for this study are black cotton soil representing the soft soil to be

improved, crushed stone chips as granular pile forming material and the geogrid for encasing the

granular pile. The properties of each of these are as follows:

(i) Black cotton soil: The soil used in the investigation is black cotton soil taken from the MANIT Bhopal campus; its properties are given in table ‘1’.

Table 1 - Properties of black cotton soil

Properties Values

Liquid limit (L.L.), % 53.6

Plastic limit (P.L.), % 28.95

Plasticity index (P.I.), % 24.65

MDD (kN/m3) 15.36

OMC (%) 23.5

DFS (%) 40

Specific gravity (G) 2.64

Clay and silt content (%) 95

Classification (IS:1498-1972) CH

Dry unit weight (γd ), ( kN/m3) 14.32

Degree of saturation (Sr), (%) 100

Cohesion (c) or undrained shear strength at 32% water content (kN/m2) 36

(ii) Crushed stone chips: The properties of the crushed stone chips having size less than 6.35 mm, used in the granular pile, are listed in table ‘2’ below.

(iii) Geogrid: The properties of the geogrid (net) used are presented in table ‘3’. The geogrid is stitched to form a tube for encasing the granular pile.

Table 3 - Properties of geogrid used for encasement of the granular pile

Aperture size (mm)    0.25
Stiffness (kN/m)      116
Weight (gm/m2)        255.85

V. RESULTS AND DISCUSSIONS

The load-settlement graphs for the O.G.P and E.G.P series of tests for different L/d ratios are presented in figures 2 to 7. For the purpose of comparison, the load-settlement curve for the footing placed on top of the soft black cotton soil alone is also shown in these figures. The ultimate load, Q, for the footing placed on the soil alone is found to be 0.42 kN, corresponding to a settlement equal to 10% of the size of the footing.

Table 2 - Properties of crushed stone chips

Properties                                      Values
D10 (mm)                                        0.95
D20 (mm)                                        1.40
D30 (mm)                                        1.75
D50 (mm)                                        2.70
D60 (mm)                                        3.25
Cu                                              3.42
Cc                                              0.992
Minimum dry unit weight (γd min.), (kN/m3)      16.4
Maximum dry unit weight (γd max.), (kN/m3)      17.5
Φ at γd = 17.21 kN/m3 (degrees)                 44
Specific gravity (G)                            2.65
Suitability number                              2.42


From figures 2 to 7 it is noted that, for all L/d ratios, the Qult of the footing on the granular pile (O.G.P series) is significantly higher than that of the footing resting on the soil alone. Further, an appreciable increment in Qult is seen for the E.G.P cases too. In order to show the effect of L/d ratio on the load carrying capacity of the granular pile, the ratio of the Qult of the pile to the Q of the soil is calculated for both test series and is given in Table 4. The graphical representation of Table 4 is shown in Figure 8.

Table 4 – Effect of L/d ratio on pile load carrying capacity

L/d ratio Qult.pile/Q (O.G.P test series) Qult.pile/Q (E.G.P test series)

1 1.93 3.15

3 4.95 6.24

5 6.24 7.54

7 6.99 8.69

9 7.83 9.99

11 8.77 11.49
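For instance, combining the O.G.P entry at L/d = 11 with the footing-on-soil value Q = 0.42 kN reported above gives the corresponding ultimate pile load: Qult.pile = 8.77 × Q = 8.77 × 0.42 kN ≈ 3.68 kN.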

From Table 4 and Figure 8 it is noted that the critical length of granular pile suggested by several researchers for soft soils other than black cotton, namely 4-6 times the pile diameter (Hughes and Withers, 1974; Mitra and Chattopadhyay, 1999; IS 15284 (Part-I): 2003; McKelvey et al., 2004; Black et al., 2007; Samadhiya et al., 2008 and Najjar et al., 2010), is not observed in the present case of the soft expansive black cotton soil.

Figure 2: Load-settlement curve for L/d ratio ‘1’ (x-axis: load, kN; y-axis: settlement, mm; series: untreated clay, O.G.P, E.G.P)


Figure 3: Load-settlement curve for L/d ratio ‘3’ (x-axis: load, kN; y-axis: settlement, mm; series: untreated clay, O.G.P, E.G.P)

Figure 4: Load-settlement curve for L/d ratio ‘5’ (x-axis: load, kN; y-axis: settlement, mm; series: untreated clay, O.G.P, E.G.P)


Figure 5: Load-settlement curve for L/d ratio ‘7’ (x-axis: load, kN; y-axis: settlement, mm; series: untreated clay, O.G.P, E.G.P)

Figure 6: Load-settlement curve for L/d ratio ‘9’ (x-axis: load, kN; y-axis: settlement, mm; series: untreated clay, O.G.P, E.G.P)


Figure 7: Load-settlement curve for L/d ratio ‘11’ (x-axis: load, kN; y-axis: settlement, mm; series: untreated clay, O.G.P, E.G.P)

Figure 8 - Graph between L/d ratio and load at settlement equal to 10% of pile diameter (x-axis: L/d ratio; y-axis: Qult.pile/Q; series: O.G.P, E.G.P)

VI. CONCLUSIONS

From the above, the following conclusions are drawn:

The load carrying capacity of a footing on a granular pile is found to be higher than that of a footing resting on the soil alone for all L/d ratios of the pile.

In the case of a geogrid-encased granular pile of a particular L/d ratio, the increase in load carrying capacity was observed to be higher than the corresponding value for the ordinary granular pile.

No critical length of the granular pile is observed in the present investigation; as the L/d ratio of the pile increases, the ratio Qult.pile/Q is found to keep increasing.

VII. FUTURE WORK

Similar work can be conducted by changing the shear strength of the black cotton soil and by using different types of granular materials as the pile forming material.



The swelling and shrinkage behavior of black cotton soil reinforced with granular piles also needs to be studied.

A single granular pile was used in the present study; the behavior of groups of granular piles in black cotton soil may also be studied.

In the present investigation, the load was applied only on the granular pile, whereas it may also be applied on the entire area of soil and pile so as to study the behavior of the composite material.

REFERENCES

[1]Aboshi, H., Ichimoto, E., Harada, K., and Emoki, M. (1979). “The composer-A method to improve the

characteristics of soft clays by inclusion of large diameter sand columns.” Proc., Int. Conf. on Soil

Reinforcement., E. N. P. C., 1, Paris, 211-216

[2]Barksdale, R.D., and Bachus, R.C. (1983). “Design and construction of stone columns.” Federal

Highway Administration, RD-83/026

[3]Black, J.A., Sivakumar, V., Madhav M.R. and Hamill, G.A. (2007). “Reinforced stone columns in weak

deposits: laboratory model study.” Proc., J. Geotech. Geoenviron. Eng., ASCE 133(9), 1154-1161.

[4]Datye, K. R., and Nagaraju, S. S. (1981). “Design approach and field control for stone columns.” Proc.,

10th Int. conf. on Soil Mech. and Found. Eng., Stockholm, Vol. 3, 637-640.

[5]Greenwood, D. A. (1970). “Mechanical improvement of soils below ground surfaces.” Ground Eng.

Conf., Institution of Civil Engineers, London, 11-22.

[6]Greenwood, D. A. and Kirsch, K. (1983). “Specialist ground treatment by vibratory and dynamics

methods.” Proc., Int. Conf. on Piling and Ground Treatment, Thomas Telford, London, 17-45.

[7]Hughes, J. M. O., and Withers, N. J. (1974). “Reinforcing of soft cohesive soils with stone columns.”

Ground Eng., 7(3), 42-49.

[8]Hughes, J. M. O., Withers, N. J., and Greenwood, D.A. (1976). “A field trial of reinforcing effect of

stone column in soil.” Geotechnique, Vol. 25, No. 1, 32- 44.

[9]Indian Standards (IS). (2003). “Indian standard code of practice for design and construction for ground

improvement-guidelines. Part 1: Stone columns.” IS 15284 (Part 1), New Delhi, India

[10]Juran I. and Guermazi A. (1988). “Settlement response of soft soils reinforced by compacted sand

columns.” J. Geotech. Geoenviron. Eng., ASCE 114(8), 903-943.

[11]Kumar, R. and Jain P. K. (2012). “Prospect of using granular piles for improvement of expansive

soil.” Int. J. of Advanced Eng. Tech., Vol.III/ Issue III/July-Sept, 2012, 79-84.

[12]Kumar, R. and Jain P. K. (2013). “Expansive Soft Soil Improvement by Geogrid Encased Granular

Pile.” Int. J. on Emerging Technologies, 4(1): 55-61(2013).

[13]Kumar, R. (2014). “A Study on Soft Ground Improvement Using Fiber- Reinforced Granular Piles”.

Ph. D. thesis submitted to MANIT, Bhopal, India

[14]Madhav, M. R., and Vitkar, P. P. (1978). “Strip footing on weak clay stabilized with a granular trench

or pile.” Can. Geotech. J., 15, 605-609.

[15]McKelvey, D., Sivakumar, V., Bell, A., Graham, J. (2004). “Modeling vibrated stone columns in soft

clay.” Proc., Institute of Civil Engineers Geotechnical Eng., Vol. 157, Issue GE3, 137-149.

[16]Mitra, S., and Chattopadhyay, B.C. (1999). “Stone columns and design limitations.” Proc., Indian

Geotech. Conf., Calcutta, India, 201-205.

[17]Murugesan, S. and Rajagopal, K. (2006). “Geosynthetic-encased stone columns: Numerical

evaluation.” Geotextiles and Geomembranes, J. Vol. 24, 349-358.

[18]Najjar, S. S., Sadek, S., and Maakaroun, T. (2010). “Effect of sand columns on the undrained load

response of soft clays.” Proc., J. Geotech. Geoenviron. Eng, ASCE 136(9):1263-1277.

[19]Ranjan, G. and Rao, B. G. (1983). “Skirted granular piles for ground improvement.” Proc., VIII European Conf. on Soil Mech. and Found. Eng., Helsinki.

[20]Samadhiya, N. K., Maheshwari, P., Basu, P., and Kumar, M. B. (2008). “Load settlement

characteristics of granular pile with randomly mixed fibers.” Indian Geotech. J., 38(3), 345-354.

AUTHORS

Siddharth Arora was born in Haridwar (U.K.), India, in 1991. He received the Bachelor degree in civil engineering from Govind Ballabh Pant Engineering College, Pauri-Garhwal, in 2012, and he is currently pursuing the Master degree in geotechnical engineering at Maulana Azad National Institute of Technology, Bhopal, to be completed in July 2014. His research interests include geotechnical engineering.


Rakesh Kumar was born in Gajrola (U.P.), India, in 1977. He received the Bachelor degree in civil engineering from MMMEC, Gorakhpur, in 1999 and the Master degree in geotechnical engineering from IIT Roorkee in 2001. He completed his PhD at Maulana Azad National Institute of Technology, Bhopal, in 2014. His research interests include geotechnical engineering.

Pradeep Kumar Jain was born in Jhansi (U.P.), India, in 1964. He received the Bachelor degree in civil engineering from MITS, Gwalior, in 1986 and the Master degree in construction technology and management from MITS, Gwalior, in 1988. He completed his PhD at IIT Roorkee in 1996. His research interests include geotechnical engineering.


HARMONIC STUDY OF VFDS AND FILTER DESIGN: A CASE

STUDY FOR SUGAR INDUSTRY WITH COGENERATION

V. P. Gosavi1 and S. M. Shinde2
1P. G. Student, Govt. College of Engineering, Aurangabad, India
2Asst. Professor, Electrical Department, Govt. College of Engineering, Aurangabad, India

ABSTRACT

In the majority of sugar industries, VFDs are extensively used to achieve better power efficiency. Initially D.C. drives were used, but these are now slowly being phased out and replaced by A.C. drives. In this case study of harmonic analysis at SAMARTH CO-OPERATIVE SUGAR FACTORY, ANKUSH NAGAR, MAHAKALA, JALNA, it is noted that a common perception holds that latest-technology A.C. drives have in-built harmonic filters and hence do not produce harmonics, or produce harmonics within the prescribed tolerance limit. Because of variation in load pattern, however, these drives still produce harmonics, which must be suppressed by using an appropriate harmonic filter; otherwise they may cause malfunction of sensitive devices. Suppressing these harmonics is all the more essential for SAMARTH CO-OPERATIVE SUGAR FACTORY, ANKUSH NAGAR, MAHAKALA, JALNA, as this sugar factory has a grid-connected cogeneration plant. The suggested method of filter design gives good results, reducing the 5th harmonic level from 154 A to almost 10 A. The values of load kW, P.F., voltage, current, load inductance, capacitance and resistance are derived from averages of readings actually taken at site.

KEYWORDS: Passive Harmonic Filter, Tuned Frequency, Cable Impedance.

I. INTRODUCTION

In co-generation sugar mills, a reduction in specific power consumption increases power revenue by about Rs. 3 per ton of cane [1]. In a sugar mill, one of the methods of increasing power efficiency is to replace the smaller, low-efficiency (25-30%) mill turbines with better-efficiency drives such as A.C. motors. Multistage steam turbines can operate at an efficiency of 65-70%; hence the equivalent quantity of steam saved by the installation of A.C. motors, D.C. motors or hydraulic drives can be passed through power turbines to generate additional power [2].

Modern drive manufacturers provide only a line reactor with the A.C. drives, but these line reactors produce a high voltage drop and do not provide the expected filtration of harmonics. It is a common perception among industries that these line reactors act as harmonic filters and hence that the drives do not produce harmonics, or produce harmonics within the tolerance limit. Because of variation in load pattern these drives still produce harmonics, which must be suppressed by harmonic filters; otherwise they may cause malfunction of sensitive equipment. It is all the more essential to suppress these harmonics when the sugar industry has a grid-interconnected cogeneration power plant.

This paper discusses the case study of harmonic analysis of SAMARTH CO-OPERATIVE SUGAR FACTORY, ANKUSH NAGAR, MAHAKALA, JALNA, which confirms the presence of a very high level of harmonic current and the necessity of a proper harmonic filter.

The organization of this paper is as follows:

In section II – The methodology of the case study is discussed
In section III – The actual readings taken during the case study are presented
In section IV – The filter design as per the recommended method is explained


In section V – The results of the installation of the filter, simulated with the help of a MATLAB program, are discussed
In section VI – The interpretation of the case study and the derived results is discussed as the conclusion
In section VII – The future scope in the same research area of harmonic filter design for sugar-industry-based cogeneration plants is presented

II. METHODOLOGY OF CASE STUDY

For the case study of harmonic analysis of SAMARTH CO-OPERATIVE SUGAR FACTORY, ANKUSH NAGAR, MAHAKALA, JALNA, the methodology adopted is as follows.

Initially a walk-in audit of the entire plant was done. The probable sources of harmonic generation were identified; the VFDs emerged as the most probable sources. To confirm the harmonic current content generated by the VFDs, readings were taken for two VFDs, for a time cycle of ten minutes each, with the help of a power quality analyser [3]. Of the two, the one generating more harmonic current was taken up for the case study, which is presented below.

III. CASE STUDY (ACTUAL READING)

AC drive of a 300 kW, 3-Φ induction mill drive motor at SAMARTH CO-OPERATIVE SUGAR FACTORY, ANKUSH NAGAR, MAHAKALA, JALNA.
The actual readings are: power consumed = 261 kW
Avg. line current = 406 A
P.F. = 0.88

Current Harmonics

I (h1) – 368.88 A I (h2) – 2.06 A

I (h3) – 28.363 A I (h4) - 0 A

I (h5) – 154.03 A I (h6) – 0 A

I (h7) - 59.543 A I (h8) – 0 A

I (h9) – 4.483 A I (h10) – 0 A

I (h11) – 26.33 A
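These readings can be cross-checked directly: the total harmonic distortion is the rms sum of the harmonic currents above the fundamental, divided by the fundamental. A minimal Python sketch, with the values hard-coded from the list above:

```python
import math

# Measured harmonic currents (A) for the 300 kW mill drive, h = 1..11
I_h = {1: 368.88, 2: 2.06, 3: 28.363, 5: 154.03, 7: 59.543, 9: 4.483, 11: 26.33}

# THD = rms of all harmonics above the fundamental / fundamental current
harmonic_rms = math.sqrt(sum(i ** 2 for h, i in I_h.items() if h > 1))
print(f"THD = {harmonic_rms / I_h[1]:.1%}")
```

The result, about 46 %, is consistent with the 40-55 % THDF values logged by the analyser in Tables 1-3 below.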

Table 1 : R Current Harmonics

TIME I1(01) I1(02) I1(03) I1(04) I1(05) I1(07) I1(09) I1(11) THDF_I1(%)

15:54:40 351.00 0.00 19.00 0.00 148.00 47.00 4.00 30.00 45.60

15:55:40 329.00 0.00 21.00 0.00 137.00 51.00 0.00 24.00 46.00

15:56:40 282.00 0.00 17.00 0.00 136.00 61.00 0.00 25.00 54.30

15:57:40 480.00 5.00 20.00 0.00 183.00 49.00 3.00 31.00 40.80

15:58:40 338.00 5.00 19.00 0.00 149.00 57.00 0.00 27.00 48.60

15:59:40 397.00 4.00 17.00 0.00 165.00 57.00 0.00 29.00 45.10

16:00:40 393.00 5.00 22.00 0.00 173.00 67.00 0.00 33.00 48.50

16:01:40 373.00 0.00 21.00 0.00 161.00 62.00 4.00 33.00 48.10

16:02:40 340.00 0.00 17.00 0.00 153.00 62.00 0.00 24.00 49.80

16:03:40 202.00 0.00 23.00 0.00 117.00 63.00 5.00 19.00 68.20

16:04:40 330.00 4.00 21.00 0.00 152.00 62.00 4.00 26.00 51.10


Table 2 : Y Current Harmonics

TIME I2(01) I2(02) I2(03) I2(04) I2(05) I2(07) I2(09) I2(11) THDF_I2(%)

15:54:40 402.00 0.00 31.00 0.00 144.00 51.00 4.00 26.00 39.80

15:55:40 378.00 0.00 27.00 0.00 137.00 55.00 4.00 20.00 40.70

15:56:40 325.00 0.00 24.00 0.00 134.00 63.00 4.00 21.00 47.10

15:57:40 532.00 5.00 30.00 0.00 177.00 52.00 6.00 26.00 36.00

15:58:40 386.00 4.00 28.00 0.00 148.00 62.00 6.00 24.00 42.80

15:59:40 442.00 6.00 25.00 0.00 162.00 59.00 5.00 25.00 40.20

16:00:40 441.00 4.00 26.00 0.00 168.00 70.00 5.00 28.00 42.50

16:01:40 416.00 4.00 23.00 0.00 160.00 64.00 4.00 30.00 42.80

16:02:40 384.00 0.00 26.00 0.00 152.00 64.00 4.00 21.00 44.20

16:03:40 244.00 4.00 19.00 0.00 121.00 68.00 0.00 18.00 59.00

16:04:40 378.00 4.00 26.00 0.00 151.00 65.00 4.00 23.00 44.70

Table 3 : B Current Harmonics

TIME I3(01) I3(02) I3(03) I3(04) I3(05) I3(07) I3(09) I3(11) THDF_I3(%)

15:54:40 377.00 0.00 41.00 0.00 155.00 49.00 8.00 30.00 45.80

15:55:40 349.00 0.00 42.00 0.00 146.00 54.00 7.00 25.00 47.30

15:56:40 301.00 0.00 35.00 0.00 144.00 63.00 7.00 26.00 54.60

15:57:40 503.00 0.00 44.00 0.00 190.00 49.00 9.00 32.00 41.00

15:58:40 361.00 0.00 40.00 0.00 157.00 59.00 8.00 28.00 48.80

15:59:40 416.00 4.00 38.00 0.00 172.00 57.00 8.00 29.00 45.40

16:00:40 412.00 0.00 40.00 0.00 178.00 68.00 7.00 32.00 48.20

16:01:40 387.00 5.00 38.00 0.00 168.00 63.00 7.00 33.00 48.60

16:02:40 361.00 0.00 39.00 0.00 160.00 63.00 7.00 24.00 49.70

16:03:40 213.00 0.00 36.00 0.00 126.00 66.00 6.00 21.00 70.50

16:04:40 350.00 5.00 41.00 0.00 159.00 63.00 8.00 26.00 51.30

Table 4: Impedance from generator to mill motor

Sr. No | Item                    | R + jX per km (Ω)   | Length (m) | Impedance for length (Ω) | No. of runs | Total (Ω)
1      | 120 sq mm cable (L.T.)  | 0.3050 + j 0.0744   | 25         | 0.00762 + j 0.00185      | 4           | 0.0019 + j 0.00046
2      | 300 sq mm cable (L.T.)  | 0.1220 + j 0.0732   | 15         | 0.00183 + j 0.00109      | 10          | 0.00018 + j 0.00019
3      | 400 sq mm cable (H.C.)  | 0.10 + j 0.099      | 25         | 0.0025 + j 0.00247       | 1           | 0.0025 + j 0.00247
4      | Transformer             | –                   | –          | 0 + j 0.00338            | –           | 0 + j 0.00338
5      | Turbine                 | –                   | –          | 0 + j 0.5377             | –           | 0 + j 0.5377
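Since these elements lie in series between the generator and the mill motor, the source impedance seen at the drive bus is the complex sum of the Total column; a quick sketch:

```python
# Per-item totals from Table 4, as complex impedances R + jX (ohm)
totals = [
    0.0019 + 0.00046j,   # 120 sq mm L.T. cable, 4 runs
    0.00018 + 0.00019j,  # 300 sq mm L.T. cable, 10 runs
    0.0025 + 0.00247j,   # 400 sq mm H.C. cable, 1 run
    0.0 + 0.00338j,      # transformer
    0.0 + 0.5377j,       # turbine
]
z_total = sum(totals)
print(f"Z_total = {z_total.real:.5f} + j{z_total.imag:.5f} ohm")
```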


Figure – 1 Current Harmonic Spectrum

IV. FILTER DESIGN AS PER RECOMMENDED METHOD

The load is 261 kW at 0.88 lagging P.F. The fifth harmonic current produced by the load is the largest, at 41.75 % of the fundamental current. The system voltage is 422 V. The fifth harmonic background voltage distortion on the utility side of the transformer is 0.5 %.

Step 1: Select a tuned frequency for the filter.
The filter will be tuned slightly below the 5th harmonic frequency to allow for tolerances in the filter components and variations in system impedance. The filter is designed to be tuned to the 4.7th harmonic.

Step 2: Compute capacitor bank size and resonant frequency.
Here it is assumed that the P.F. is to be improved from 0.88 lagging to 0.96 lagging.
Reactive power demand at 0.88 lagging P.F. = 296.59 × sin[cos⁻¹(0.88)] = 140.87 kVAr ….. (1)
Reactive power demand at 0.96 lagging P.F. = 296.59 × sin[cos⁻¹(0.96)] = 83.04 kVAr ….. (2)
Required compensation from the filter = 140.87 − 83.04 = 57.83 kVAr ≈ 60 kVAr
For a nominal 422 V system, the net wye-equivalent filter reactance (capacitive) X_Filt is determined by
X_Filt = (kV)² × 1000 / kVAr = (0.422)² × 1000 / 60 = 2.968 Ω ….. (3)
X_Filt = X_Cap − X_L ….. (4)
For tuning at the 4.7th harmonic: X_Cap = h² × X_L = (4.7)² × X_L ….. (5)
Thus, the desired capacitive reactance can be determined by
X_Cap = X_Filt × h² / (h² − 1) = 2.968 × (4.7)² / [(4.7)² − 1] = 3.1087 Ω ….. (6)
To achieve this reactance at a 422 V rating, the capacitor would have to be rated
kVAr_Cap = (kV)² × 1000 / X_Cap = (0.422)² × 1000 / 3.1087 = 57.28 kVAr ≈ 60 kVAr ….. (7)
Now the filter will be designed using a 422 V capacitor rated at 60 kVAr. For this capacitor rating, X_Cap = 2.968 Ω.

Step 3: Compute filter reactor size.
The filter reactor size is computed from the wye-equivalent capacitive reactance as follows:



X_L(Fund) = X_Cap(wye) / h² = 2.968 / (4.7)² = 0.134 Ω ….. (8)
L = X_L(Fund) / (2π × f) = 0.134 / (2π × 50) = 0.427 mH ….. (9)

Step 4: Computation of fundamental duty requirements
a) The apparent reactance of the combined capacitor and reactor at fundamental frequency is
X_Fund = X_Cap(wye) − X_L = 2.968 − 0.134 = 2.834 Ω ….. (10)
b) The fundamental frequency filter current is
I_Fund = (kV_actual / √3) / X_Fund = (422 / √3) / 2.834 = 85.97 A ….. (11)
c) The fundamental frequency operating voltage across the capacitor bank is
V_Cap(L-L, Fund) = √3 × I_Fund × X_Cap(wye) = √3 × 85.97 × 2.968 = 441.95 V ….. (12)
This is the nominal fundamental voltage across the capacitor. It should be adjusted for any contingency conditions and it should be less than 110 percent of the capacitor rated voltage.
d) The actual reactive power produced is
kVAr_Fund = √3 × I_Fund × kV_actual = √3 × 85.97 × 0.422 = 62.83 kVAr ….. (13)

Step 5: Computation of harmonic duty requirements
a) The 5th harmonic current produced by the load is I_h = 154.03 A.
b) The fundamental frequency impedance of the service transformer is
X_T(Fund) = Z_T(%) × (kV_actual)² / MVA(Xmer) = 0.06 × (0.422)² / 2 = 0.00534 Ω ….. (14)
X_T(harm) = h × X_T(Fund) = 5 × 0.00534 = 0.0267 Ω ….. (15)
The harmonic impedance of the capacitor bank is
X_Cap(wye), harm = X_Cap(wye) / h = 2.968 / 5 = 0.5936 Ω
The harmonic impedance of the reactor is
X_L(harm) = h × X_L(Fund) = 5 × 0.134 = 0.67 Ω
The fifth harmonic current contributed to the filter from the source side would be
I_h(utility) = V_h(utility)(pu) × kV_actual / {√3 × [X_T(harm) − X_Cap(wye), harm + X_L(harm)]} ….. (16)
= 0.005 × 422 / [√3 × (0.0267 − 0.5936 + 0.67)] = 11.81 A
c) The maximum total harmonic current is I_h(total) = 154.03 + 11.81 = 165.84 A.
d) The harmonic voltage across the capacitor is
V_Cap(L-L rms, harm) = √3 × I_h(total) × X_Cap(wye) / h = √3 × 165.84 × 2.968 / 5 = 170.50 V ….. (17)

Step 6: Evaluate total rms current and peak voltage requirements
a) I_rms(total) = √[(85.97)² + (170.50)²] = 190.94 A ….. (18)
b) The maximum peak voltage across the capacitor is
V_Cap(L-L, max peak) = V_Cap(L-L, Fund) + V_Cap(L-L, harm) ….. (19)
= 442 + 170.5 = 612.5 V
c) The rms voltage across the capacitor is √[(442)² + (170.5)²] = 473.74 V ….. (20)
d) The total kVAr seen by the capacitor is
kVAr_Cap(wye, total) = √3 × I_rms(total) × kV_rms = √3 × 190.94 × 0.47374 ….. (21)
= 156.67 kVAr


At its rated voltage of 600 V, the kVAr rating of the capacitor is 60 × (600)² / (422)² = 121.29 kVAr.

Table 5: Comparison table for evaluating filter duty limits

Duty         | Definition                        | Limit % | Actual values   | Actual %
Peak voltage | V_Cap(L-L, max peak) / KV rated   | 120     | 612.5 / 600     | 102
RMS voltage  | V_Cap(L-L, rms total) / KV rated  | 110     | 473.74 / 600    | 80
RMS current  | I_rms(total) / I_cap rated        | 180     | 190.94 / 116.71 | 164
kVAr         | kVAr_Cap(wye, total) / kVAr rated | 135     | 156.67 / 121.29 | 129
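Steps 1-6 reduce to a short chain of arithmetic, so the whole recommended design can be checked programmatically. A minimal Python sketch is given below; the variable names are ours, the transformer data (2 MVA, 6 %) are those implied by equation (14), and the computation follows the equations exactly, including equation (18) as printed:

```python
import math

P_kw, pf_old, pf_new = 261.0, 0.88, 0.96     # measured load and target power factor
kv, f, h = 0.422, 50.0, 4.7                  # system kV, frequency (Hz), tuned harmonic

# Step 2: required compensation and capacitor bank size, eqs (1)-(7)
s_kva = P_kw / pf_old                                  # 296.59 kVA
q_need = s_kva * (math.sin(math.acos(pf_old))
                  - math.sin(math.acos(pf_new)))       # 57.83 kVAr
q_filt = round(q_need / 10) * 10                       # nearest standard size: 60 kVAr
x_cap = kv ** 2 * 1000 / q_filt                        # 2.968 ohm for the 60 kVAr unit

# Step 3: reactor tuned to the 4.7th harmonic, eqs (8)-(9)
x_l = x_cap / h ** 2                                   # 0.134 ohm
L_mH = x_l / (2 * math.pi * f) * 1000                  # 0.427 mH

# Step 4: fundamental duty, eqs (10)-(13)
x_fund = x_cap - x_l                                   # 2.834 ohm
i_fund = kv * 1000 / math.sqrt(3) / x_fund             # 85.97 A
v_cap_fund = math.sqrt(3) * i_fund * x_cap             # 441.95 V
kvar_fund = math.sqrt(3) * i_fund * kv                 # 62.83 kVAr

# Step 5: harmonic duty, eqs (14)-(17)
h5 = 5
x_t = 0.06 * kv ** 2 / 2                               # 0.00534 ohm (2 MVA, 6 %)
i_h_load = 154.03                                      # measured 5th harmonic current (A)
i_h_util = 0.005 * kv * 1000 / (math.sqrt(3)
           * (h5 * x_t - x_cap / h5 + h5 * x_l))       # 11.81 A from the utility side
i_h_total = i_h_load + i_h_util                        # 165.84 A
v_h = math.sqrt(3) * i_h_total * x_cap / h5            # 170.50 V

# Step 6: total duty, eqs (18)-(21)
i_rms = math.hypot(i_fund, v_h)                        # 190.94, per eq (18) as printed
v_peak = v_cap_fund + v_h                              # ~612.5 V
v_rms = math.hypot(v_cap_fund, v_h)                    # 473.74 V
kvar_total = math.sqrt(3) * i_rms * v_rms / 1000       # 156.67 kVAr
print(i_fund, L_mH, i_h_util, i_rms, v_peak, v_rms, kvar_total)
```

Running it reproduces the 0.427 mH reactor, the 11.81 A utility-side contribution and the duty figures of Table 5.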

V. RESULT

Figure 2: Waveform of 5th harmonic current (w/o filter)
Figure 3: Waveform of 5th harmonic current (with recommended filter)
Figure 4: Initial distorted waveform (w/o filter)
Figure 5: Distorted waveform (with recommended filter)

Table 6: Results of the performance of the filter designed by the recommended method

Parameter Without Filter With Filter

Power Factor 0.88 0.968

2nd Harmonics 2.06 A 2.14 A

3rd Harmonics 28.363 A 14.237 A

4th Harmonics 0 0

5th Harmonics 154.03 A 10.3128 A


6th Harmonics 0 0

7th Harmonics 59.543 A 26.33 A

8th Harmonics 0 0

9th Harmonics 4.483 A 3.375 A

10th Harmonics 0 0

11th Harmonics 26.333 A 27.239 A

Findings of the results
1) Figures 2 and 4 represent the fifth harmonic current waveform and the total rms current waveform generated by the VFD before installation of the proposed harmonic filter.
2) Figures 3 and 5 represent the fifth harmonic current waveform and the total rms current waveform generated by the VFD after installation of the proposed harmonic filter, simulated with the help of a MATLAB program. It is seen that the originally distorted waveforms observed in Figures 2 and 4 are smoothened in Figures 3 and 5 respectively, which clearly indicates the effective performance of the proposed passive harmonic filter.
3) Table 6 reveals that the power factor of the load is improved and that the third, fifth, seventh and ninth harmonic currents generated by the load are minimized with the incorporation of the proposed passive harmonic filter. It is seen that the fifth harmonic current, for which the filter is designed since it is the major harmonic contributor, is drastically reduced from 154.03 A to 10.3128 A.
4) It is also seen from Table 6 that the eleventh harmonic current increases marginally due to the incorporation of the proposed filter.
5) The even harmonic currents, that is the second, fourth, sixth, eighth and tenth harmonics, are negligible in both cases, with and without the filter.

VI. CONCLUSIONS

This paper explains the design of a harmonic filter by a method which is generally not followed by industry. The recommended method designs a filter which caters to all the duty requirements of the filter, as shown in Table 5.
i) The recommended filter suppresses the 5th harmonic current from 154.03 A to 10.3128 A.
ii) A filter tuned to a frequency of 235 Hz, designed for a target power factor of 0.96, keeps all duty requirements within acceptable limits as per IEEE Standard 18-1992, IEEE Standard for Shunt Power Capacitors.

VII. FUTURE WORK

By this method, filters tuned to different frequencies can be designed, but for every frequency the cost of the filter as well as the reduction in current %THD changes. Filters can likewise be designed for different power factors, with the same trade-off between cost and the reduction in current %THD. An optimization approach is therefore necessary for the optimum design of the filter, giving the best cost effectiveness subject to a better current %THD and power factor of the system. Research is necessary in this area; a program can be designed to obtain this optimum result, and a starting point is sketched below.
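As a starting point for such a program, a minimal sketch of the suggested search is given below. It reuses the sizing arithmetic of Section IV; the candidate tuning harmonics, the target power factors and the placeholder scoring are our assumptions, and a real implementation would score each candidate by simulated %THD and component cost:

```python
import math

def size_filter(p_kw=261.0, pf_old=0.88, pf_new=0.96, kv=0.422, h=4.7, f=50.0):
    """Return (required kVAr, reactor mH) for one candidate design point."""
    s_kva = p_kw / pf_old
    q = s_kva * (math.sin(math.acos(pf_old)) - math.sin(math.acos(pf_new)))
    x_cap = kv ** 2 * 1000 / q
    l_mh = x_cap / h ** 2 / (2 * math.pi * f) * 1000
    return q, l_mh

# Brute-force sweep over tuning points near the 5th and 7th harmonics
for h in (4.2, 4.7, 6.7):
    for pf_new in (0.92, 0.95, 0.96, 0.98):
        q, l_mh = size_filter(pf_new=pf_new, h=h)
        print(f"h = {h:3.1f}, target PF = {pf_new:.2f}: "
              f"{q:6.2f} kVAr, L = {l_mh:.3f} mH")
```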

ACKNOWLEDGEMENTS

The authors are thankful to General Manager Shri S. N. Surwase, Manager (Co-generation) Shri Vijaykumar Hulgiri and all supporting staff of Samarth Co-operative Sugar Factory, Ankushnagar, Mahakala, Jalna; Shri V. P. Sonawane (A.E.) and Shri G. H. Chavan (A.E.), M.S.E.T.C.L.; Shri R. V. Kulkarni (Govt. College of Engg., Aurangabad); and Shri Padhye of Ambik Harmonic Filters Pvt. Ltd., Pune, for their support and help during the measurement of data.

REFERENCES

[1] M. Premalata, S. Shanmuga Priya and V. Sivaramakrishnan, "Efficient Cogeneration Scheme for Sugar Industry," Journal of Scientific and Industrial Research, Vol. 67, March 2008, pp. 239-242.
[2] "Energy Conservation in Sugar Mills," All India Seminar on Ethanol & Cogeneration, 2003, Nainital, Uttaranchal.
[3] IEEE Std 1159, IEEE Recommended Practice for Monitoring Electric Power Quality.
[4] R. C. Dugan, M. F. McGranaghan, S. Santoso and H. W. Beaty, Electrical Power Systems Quality, Tata McGraw-Hill, Second Edition, pp. 264-270.
[5] S. M. Shinde and W. Z. Gandhare, "Harmonic Minimization in Sugar Industry Based Cogeneration Plant," IEEMA Journal, December 2013, pp. 122-125.
[6] E. Twinning and I. Cochrane, "Modeling Variable AC Drives and Harmonic Distortion," Department of Electrical and Electronics Engineering, The University of Melbourne, Australia.
[7] IEEE Std 18-1992, IEEE Standard for Shunt Power Capacitors.

BIOGRAPHY

Vivek P. Gosavi was born in Aurangabad, Maharashtra, India in 1972. He received the Bachelor of Electrical Engineering degree from Marathwada (Dr. B.A.M.U.) University, Aurangabad, in 1993. He is currently pursuing the Master of Engineering degree with the Department of Electrical Engineering, Govt. College of Engineering, Aurangabad. His research interests include power quality issues and the design of harmonic filters.

Sanjay M. Shinde was born in Manwat, Maharashtra, India in 1966. He received the Bachelor of Electrical Engineering degree from Marathwada (Dr. B.A.M.U.) University, Aurangabad, in 1990 and the Master's degree from Dr. B.A.M.U., Aurangabad, in 1998, both in Electrical Engineering. He is currently pursuing the Ph.D. degree with the Department of Electrical Engineering, Govt. College of Engineering, Aurangabad. His research interests include power quality in co-generation systems and renewable energy sources.


PRECIPITATION AND KINETICS OF FERROUS CARBONATE

IN SIMULATED BRINE SOLUTION AND ITS IMPACT ON CO2

CORROSION OF STEEL

G. S. Das
Material Science and Metallurgical Engineering,

National Institute of Foundry and Forge Technology, Hatia, Ranchi, India

ABSTRACT
The aim of the present study was to investigate the stability of the iron carbonate films formed on the surface of low alloy carbon steels used in pipeline applications in brine solution. All the tests were carried out in a stirred autoclave at various temperatures and low partial pressures of CO2. Usually, American Petroleum Institute (API) specification steels are used for carrying crude oil and gas from offshore to refining platforms and ultimately to the customers through pipeline routes. Developed and modified alloy steels of API grade, such as API X-52, API X-56, API X-60 and L-80, which are increasingly used in the petroleum industry to transport oil and gas, are used in this study. The first three grades of steel are used for transportation, while the L-80 grade is used for tubing and for drilling wells for the recovery of oil and gas offshore. A comparative study of their susceptibility to corrosion under severely corrosive conditions has been made by measuring the corrosion rate of the steels using the weight loss method in a simulated 3.5% NaCl test solution. The fluid velocity of the simulated brine solution was maintained at 1.5 m/s for each test. The exposed samples were characterized using scanning electron microscopy and X-ray diffraction techniques.

KEY WORDS: API grade steel, Partial pressure of carbon dioxide, Autoclave, Brine solution

I. INTRODUCTION

CO2 corrosion of carbon steel has been one of the most important problems in the petrochemical industries since the 1940s, because of the localized attack and severe corrosion it causes [1-3]. These steels have wide application in just about every sphere of the oil and gas industry, and this requires critical assessment of corrosion severity to ensure their safe utilization. An increased number of

offshore developments are based on transporting the unprocessed or partially processed multiphase

well streams from wells to onshore processing platforms and from there to ultimate consumers. In

such circumstances carbon dioxide is often present in the fluids and is known for its high corrosion potential when dissolved in water. These aspects make it necessary to adequately assess the integrity of the steels used in the oil and gas industry for production and transport equipment. Many parameters

are involved in the process and even small differences in chemical composition between one steel and

another can strongly influence scale formation during CO2 corrosion [4]. Scale formation and salt

accumulation are another analogous rigorous problem for the operation of multiphase pipelines [4-5].

Usually, well fluids and other products are transported through pipelines in the oil and gas industry

and these fluids contain aqueous phases which are considered to be inherently corrosive. CO2 is a gas

that reacts with water to form carbonic acid which lowers the pH of the solution and thus responsible

for increased corrosion rate. Despite, of its many studies frequent questions are still raised regarding

the mechanism of CO2 corrosion responsible for its occurrence. Many developed corrosion resistant

alloys such as 13% Cr steel and duplex stainless steel are used for the downhole drilling and also for

short flow lines [7-9]. But for long distance and large diameter pipelines, carbon steels are the only

economically feasible alternatives [9]. However, these steels are susceptible to corrosion in either


CO2 or H2S environments, or a combination of the two. Their susceptibility to corrosion depends on various

parameters such as temperature, CO2 partial pressure, H2S concentration, pH, oxygen solubility,

chloride concentration, flow rate and characteristics of the materials [1-3]. The presence of acidic gas

like CO2 or H2S, together with free water, can cause severe corrosion problems in the oil and gas industries [5]. A small change in one of these parameters can change the corrosion rate drastically, owing to changes in the properties of the layers of corrosion products that accumulate on the steel surface. The selection of

materials that transport oil and gas is not always made with sufficient emphasis on corrosion

resistance, but rather on good mechanical properties, ease of fabrication and low cost. Due to the

material loss rates resulting from internal corrosion, it becomes necessary to thoroughly characterize

the behavior of these high strength steels used for oil and gas pipelines. Many authors have emphasized the importance of evaluating the specific nature of the film formed under a given set of

conditions, as these conditions can affect the type of film formed on the steel [10]. In general, the

precipitation of an iron carbonate film on the steel surface significantly affects the rate of corrosion.

The corrosion rate may increase or decrease depending on the nature of the film that forms on the metal surface. Usually the carbonate film reduces the corrosion rate; however, in certain cases it can contribute to localized corrosion due to its non-uniform nature [11]. Also, the corrosion rate is closely dependent on a series of parameters such as temperature, pressure, pH and steel composition. Nesic et al. proposed a theoretical model of carbon dioxide corrosion in which the main focus was on the factors which influence FeCO3 film formation [6].

closely associated simply with the thickness of the protective film, but rather with its degree of

coverage and homogeneity on the metal surface. These films can partially or completely cover the

metal surface and consequently block only a portion of the exposed metal or, in some cases, it can

homogeneously cover the entire surface of steel. By covering entire surface it can prevent the further

corrosion due to restrict in further dissolution. The voids and holes may form at the underneath of

corrosion films that interface with the metal. The rate of void formation is important for determining

the type of film that formed on the metal surface. The protectiveness of film depends on many factors

such as film density, degree of protectiveness, porosity, and thickness. With the aid of their model,

they also confirmed that high levels of pH, temperature, partial pressure of CO2 and Fe2+

concentration associated with a low formation rate of the above mentioned voids can favour the

formation of a protective iron carbonate film [10-12].

II. MECHANISM AND KINETICS OF CO2 CORROSION

Iron carbonate is the main corrosion product of carbon dioxide corrosion, which has also been known as sweet corrosion since the 1940s, and it is considered one of the most important problems in the oil and gas industry due to the presence of CO2 in well fluids [10-11]. The iron carbonate scale formed on the metal surface may be protective or non-protective depending on the conditions under which it is formed, and it is influenced by many factors such as the concentration of ions present in the solution, pH, temperature, pressure and velocity of the solution. The formation of a protective film of iron carbonate, called siderite, is possible when the concentration of bicarbonate ions increases relative to carbonic acid.

Carbon dioxide dissolves in water to form carbonic acid, which is mildly acidic in nature and corrosive. Among the most popular mechanisms postulated for this type of corrosion are those proposed by De Waard and Milliams, Schmitt and Rothmann, and George and Nesic et al., all of which involve carbonic acid and bicarbonate ion formation during the dissolution of CO2 in water [1-6]. Both chemical and electrochemical reactions are important factors in CO2 corrosion, and the most common chemical reactions that occur in the system are as follows:

CO2 (g) = CO2 (aq) (1)
CO2 (aq) + H2O = H2CO3 (2)

This is followed by the carbonic acid dissociation:

H2CO3 = H+ + HCO3– (3)
HCO3– = H+ + CO32– (4)


On the surface of the steel, the electrochemical reactions which occur include an anodic reaction and three cathodic reactions. The anodic reaction is:

Fe = Fe2+ + 2e– (5)

The cathodic reactions that take place are as follows:

2H+ + 2e– = H2 (6)
2H2CO3 + 2e– = H2 + 2HCO3– (7)
2HCO3– + 2e– = H2 + 2CO32– (8)

In CO2 corrosion of carbon steel, when the concentrations of Fe2+ and CO32– ions exceed the solubility

limit, they can precipitate to form a solid iron carbonate film according to the reaction below:

Fe2+ + CO32– = FeCO3 (s) (9)

The properties of these films depend on the precipitated iron carbonate (FeCO3) film, called siderite, on the metal surface. It may act either in a protective or non-protective manner, depending on the specific

conditions under which it forms. At pH < 7, the carbonate ion (CO32–) is a minority species, and the

direct reduction of bicarbonate ions (HCO3-) is an important factor in the formation of the FeCO3

films on the steel surface. The FeCO3 forms by the following equation:

Fe2+ + HCO3– = FeCO3 + H+ (10)

The overall reaction can be expressed as:

Fe + CO2 + H2O = FeCO3 + H2 (11)

Thus dissolved CO2 has the tendency to form carbonic acid (H2CO3) in the solution, which increases the cathodic reaction kinetics by dissociating into bicarbonate and hydrogen ions. Under stagnant conditions, dissolved ferrous ions combine with the H2CO3 to form ferrous carbonate (FeCO3). Under flow conditions, however, some part of the iron carbonate dissolves and the reaction rate becomes faster, due to the removal of the corrosion scale from the metal surface. De Waard and Milliams developed a semi-empirical correlation between corrosion rate and CO2 partial pressure. This relationship was later simplified in the form of a nomogram and is given below:

Log C.R. (mm/y) = 5.8 − 1710/T + 0.67 Log (pCO2) (12)

The above equation provides a conservative estimate of the corrosion rate under flowing conditions because it does not account for the non-ideality of the gas phase or for scale formation. In equation (4), the hydrogen ion is produced through the bicarbonate ions near the surface rather than directly from H2CO3. In this sense, the carbonic acid acts as a catalyst for the evolution of hydrogen. A high concentration of H2CO3, or of its precursor CO2, relative to H+ ions in the bulk solution increases the rate of overall mass transfer and depolarizes the cathodic reaction. The ability of carbonic acid to sustain a high cathodic reduction rate at relatively high pH indirectly aids the anodic reaction as per equation (5), which is faster at higher pH. Also, the value of pH has the tendency to either increase or decrease the CO2-


based corrosion. The pH is determined as a function of the partial pressure of the mildly acidic gas, the concentration of bicarbonate ions and the temperature. Practically, the contribution of the mild carbonic acid (H2CO3) and temperature to the determination of pH is another way of representing the effective level of the partial pressure of carbon dioxide (CO2) in the solution. As the solution pH decreases, localized attack of the steel surface becomes involved. The temperature of the solution has a significant influence on CO2 corrosion. An Arrhenius type of relationship has been reported as below:

Log (Icorr) = 7.58 − 10700/(2.303 RT) (13)

The temperature effect on film formation was summarized by Burke in 1984, who reported that below 60 °C the corrosion product formed on the metal surface mainly consists of siderite and magnetite, which is non-protective in nature. However, in the higher temperature range (80-120 °C) this scale becomes dense, in association with Fe2O3 and Fe3O4, and protects the metal from further degradation.
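Both correlations are straightforward to evaluate numerically. A small sketch follows, assuming T in kelvin, pCO2 in bar (3.4 bar ≈ the 50 psi used in the experiments of the next section) and R in cal/(mol K), since the units are not stated explicitly above:

```python
import math

def dwm_corrosion_rate(t_kelvin, p_co2_bar):
    """De Waard-Milliams correlation of eq (12); returns mm/y.
    T in kelvin and pCO2 in bar are our assumed units."""
    return 10 ** (5.8 - 1710.0 / t_kelvin + 0.67 * math.log10(p_co2_bar))

def arrhenius_icorr(t_kelvin, r_cal=1.987):
    """Arrhenius-type relation of eq (13); R in cal/(mol K) is an assumption."""
    return 10 ** (7.58 - 10700.0 / (2.303 * r_cal * t_kelvin))

p_co2 = 3.4  # bar, roughly the 50 psi used in the experiments below
for t_c in (30, 60, 90, 120):
    t_k = t_c + 273.15
    print(f"{t_c:3d} C: DWM {dwm_corrosion_rate(t_k, p_co2):6.2f} mm/y, "
          f"Icorr index {arrhenius_icorr(t_k):8.4f}")
```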

III. EXPERIMENTAL

The materials used in the current study were obtained in pipe form. The as-received materials were cut into rectangular specimens of dimensions 20 mm × 12 mm × 2.5 mm, with a centre hole of diameter 1.5 mm at the top edge of each specimen to facilitate suspension inside an autoclave of capacity 2.2 litre. The faces of the samples were initially coarse ground on a SiC belt grinder and then machine polished with successive grades of emery paper (220, 400, 600, 800 and 1000). The initial weight and area of each sample were measured, the weight using a digital weighing machine with an accuracy of four digits. Four different experiments were carried out, at different temperatures (30 °C, 60 °C, 90 °C and 120 °C) for each measured sample at a constant partial pressure of 50 psi. The velocity of the fluid was maintained at a constant 1.5 m/s for 96 hours. The oxygen level of the system was maintained below 40 ppb by purging the system with Ar gas. At the beginning of each test the temperature was set to 30 °C, 60 °C, 90 °C or 120 °C as required, and the desired partial pressure of carbon dioxide was then created by releasing CO2 gas inside the autoclave. The exposed samples were taken out of the system, washed in distilled water, rinsed in acetone and dried in air. After cleaning, the coupons were weighed again and finally the corrosion rates of the samples were calculated in mils per year using the formula:

Corrosion rate (mpy) = Weight loss (g) × Constant (K) / [Metal density (g/cm³) × Metal area (A) × Exposure time (hr)]

where K is 5.34 × 10⁵. The exposed samples were also characterized using techniques such as environmental scanning electron microscopy (ESEM) and X-ray diffraction (XRD).
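This conversion is a one-liner in code. A sketch with the K = 5.34 × 10⁵ given above is shown below; the exposed area is taken in square inches, the unit for which this K value is standard (our assumption, since the unit of A is not stated), and the sample values in the call are hypothetical:

```python
K = 5.34e5  # for weight loss in g, density in g/cm^3, area in in^2, time in h

def corrosion_rate_mpy(weight_loss_g, density_g_cm3, area_in2, hours):
    """Weight-loss corrosion rate in mils per year (mpy)."""
    return weight_loss_g * K / (density_g_cm3 * area_in2 * hours)

# Hypothetical example: a carbon steel coupon (7.86 g/cm^3) of about 1.2 in^2
# exposed area losing 15 mg over the 96 h test
print(f"{corrosion_rate_mpy(0.015, 7.86, 1.2, 96):.1f} mpy")  # ~8.8 mpy
```

A result of this order is consistent with the range of rates plotted in Fig. 1.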

Table 1: Chemical composition of the as-received materials (in wt %)

Element | Sample 1 (L-80) | Sample 2 (API X-52) | Sample 3 (API X-56) | Sample 4 (API X-60)
C       | 0.19            | 0.17                | 0.14                | 0.12
Mn      | 1.22            | 1.23                | 1.27                | 1.25
Si      | 0.36            | 0.37                | 0.035               | 0.037
S       | 0.003           | 0.003               | 0.0031              | 0.003
P       | 0.004           | 0.004               | 0.0039              | 0.0038
Cr      | 0.07            | 0.080               | 0.010               | 0.015
Mo      | –               | 0.0001              | 0.00025             | 0.00034
Cu      | 0.16            | 0.09                | 0.22                | 0.23


Fig. 1: Corrosion rate (mpy) versus temperature (°C) of the exposed samples (API X-60, API X-56, API X-52, L-80) at 50 psi CO2

Fig. 2: Weight-loss corrosion rate (mpy) versus temperature (°C) of the exposed samples (API X-52, API X-56, L-80, API X-60) at 50 psi CO2


Fig. 3: XRD patterns of all four samples exposed at 90 °C in CO2 environment

Fig. 4: ESEM micrographs of exposed samples (a) API X-52 (b) API X-56 (c) API X-60 and (d) L-80 at 90oC

and in 50 psi CO2



IV. RESULTS AND DISCUSSION

The corrosion rate patterns of the exposed samples in the CO2 environment are shown in Fig. 1. The results indicate that all the steels exhibit corrosion under mildly acidic conditions; however, the severity of corrosion differs with steel composition. All exposed samples showed the highest corrosion rate at 90 °C, due to the formation of a porous layer of siderite and its spallation from the metal surface. The results also indicate that the corrosion rate decreases at higher temperatures, at about 100-120 °C, due to the formation of an adherent and dense layer of protective iron carbonate film that protects the metal from further corrosion. The formation of iron carbonate scales on the metal surface was confirmed by X-ray diffraction analysis, as shown in Fig. 3. The other phases observed, Fe3O4, Fe2O3 and Fe3C, are also indicated in Fig. 3. The micrographs of the corroded samples were captured in an environmental scanning electron microscope and are presented in Fig. 4.

V. CONCLUSIONS

The corrosion rate of all the steels increases with increasing temperature up to 90 °C, due to the formation of a porous layer on the surface of the metal. The weight loss data as a function of temperature at a constant partial pressure of carbon dioxide are shown in Fig. 2. Beyond this temperature the corrosion rate falls again, due to the formation of a dense, adherent layer of siderite with magnetite films on the metal surface, which acts as a protective barrier. At aggressive temperatures the layer formed on the surface dissolves continuously, the reaction rate becomes faster and a higher corrosion rate is observed, while at higher temperatures the iron carbonate films formed are very dense and compact on the surface, resulting in a slower corrosion rate.

REFERENCES

[1] C. De Waard and D.E. Milliams, Prediction of carbonic acid corrosion in natural gas pipelines, First International Conference on the Internal and External Protection of Pipes, paper F-1, University of Durham, September 1975.

[2] C. De Waard, U. Lotz and D.E. Milliams, Predictive model for CO2 corrosion engineering in wet natural

gas pipelines. Corrosion 47 (1991), pp. 976–985.

[3] C. DeWaard and U. Lotz Prediction of CO2 corrosion of carbon steel in the Oil and Gas Industry, Institute

of Materials Publisher, UK (1994), pp. 30–49.

[4] C.A. Palacios and J.R. Shadley , Characteristics of corrosion scales on steel in a CO2-saturated NaCl brine.

Corrosion 47 (1991), pp. 122–127.

[5] C. De Waard and D.E. Milliams, Carbonic acid corrosion of steel. Corrosion 31 (1975), pp. 177–181.

[6] S. Nesic, N. Thevenot, J.L. Crolet, D.M. Drazic, Electrochemical properties of iron dissolution in the

presence of CO2 Corrosion'96 NACE, USA, paper 3, 1996.

[7] K.D. Efird, E.J. Wright, J.A. Boros, T.G. Hailey, Wall shear stress and flow accelerated corrosion of carbon

steel in sweet production Proceedings of the 12th International Corrosion Congress, Houston 1993, pp.

2662–2679.

[8] G.I. Ogundele and W.E. White, Some observations on corrosion of carbon steel in aqueous environments

containing carbon dioxide. Corrosion 42 (1986), pp. 71–78.

[9] K. Videm and A. Dugstad, Corrosion of carbon steel in an aqueous carbon dioxide environment. Part 2.

Film formation. Mats. Perf. 28 (1989), pp. 46–50.

[10] A. Dugstad, The importance of FeCO3 supersaturation of carbon steel Corrosion'92, paper no. 14, USA,

1992.

[11] M.L. Johnson and M.B. Tomson, Ferrous carbonate precipitation kinetics and its impact on CO2 corrosion, Corrosion'91, NACE, USA, paper 268, 1991.

[12] Videm K. and Kvarekval J., “Corrosion of Carbon Steel in CO2 Saturated Aqueous Solutions Containing

Small Amounts of H2S,” NACE CORROSION/94, paper No.12, 1994

[13] Srinivasan S. and Kane R.D., “Prediction of Corrosion Rates of Steel in CO2/H2S Production

Environments,” Prevention of Pipeline Corrosion Conference, Houston, TX, 1995.

[14] Ikeda A., Ueda M. and Mukai S., “Influence of Environmental Factors on Corrosion in CO2 Source Well,”

Advances in CO2 Corrosion, Vol. 2, 1985

[15] Kvarekval J., “The Influence of Small Amounts of H2S on CO2 Corrosion of Iron and Carbon Steel,”

EUROCORR ’97, Trondheim, Norway.


[16] Valdes, A., Case, R, Ramirez, M., and Ruiz, A., “The Effect of Small Amounts of H2S on CO2 Corrosion

of a Carbon Steel,” Paper No. 22, CORROSION/98.

AUTHOR

Ghanshyam Das is an Asst. Professor in the Materials and Metallurgical Engineering Department at the National Institute of Foundry and Forge Technology, Hatia, Ranchi-834003. He graduated in Metallurgical Engineering from B.I.T. Sindri in 1996 and received his Doctor of Philosophy in 2007 from the Materials Science and Metallurgical Engineering Department, IIT Bombay, Powai, Mumbai. His research interest is the degradation of ferrous materials, and he has published more than 20 papers in international and national journals and conference proceedings. He has attended many international and national conferences in India and abroad. He received a best technical paper award in Nov. 2012 at the international conference (M3) in Singapore and in Jan. 2014 at Raigarh, Chhattisgarh, India.


PERFORMANCE COMPARISON OF POWER SYSTEM

STABILIZER WITH AND WITHOUT FACTS DEVICE

Amit Kumar Vidyarthi1, Subrahmanyam Tanala2, Ashish Dhar Diwan1 1M.Tech Scholar, 2Asst. Prof.

Dept. of Electrical Engg., Lovely Professional University, Phagwara, Punjab, India

ABSTRACT
Power system damping through the coordinated design of a power system stabilizer (PSS) and different types of FACTS devices (SVC, SSSC and UPFC) is presented in this paper. Under a single phase fault the power system stabilizer is enough to damp the oscillations of the two area system, although the settling time of the system parameters increases; in the case of a three phase fault, however, the system parameters become unbalanced and the machine loses synchronism even in the presence of the power system stabilizer. In that case the PSS is not enough to suppress the fault, so another compensator must be put into action for better stability of the two area system. The PSS together with FACTS devices reduces the settling time and enhances the response of the system for single phase faults, and damps the oscillations for a three phase fault, making the system stable. This paper contains a coordinated Simulink model of the PSS and different types of FACTS devices for a two area system, checked for single phase and three phase fault conditions with and without FACTS devices.

I. INTRODUCTION

For better operation of the power system without sacrificing system security and power quality, especially under contingency conditions such as loss of transmission lines or generating units, advanced and modern control strategies need to be implemented. FACTS devices are power-electronics-based controllers which improve the power transfer capability of the transmission network. Since these controllers operate very fast, they extend the safe operating limits of the transmission system without risking stability. When a large power system is interconnected by weak tie lines, low frequency oscillations are observed. Low frequency oscillations may persist and grow, causing system separation, if adequate damping is not available. These days the conventional power system stabilizer is used by power system utilities all over the world. The

the conventional power system stabilizer is used by the power system utilities all over the world. The

FACTS devices play an important role in the operation and control of power system such as

scheduling power flow; reducing net losses; providing voltage support; limiting short circuit currents;

mitigating sub synchronous resonance; damping power system oscillation and improving transient

stability. A power system is essentially the business of generation, transmission and distribution of electrical energy. A power system in normal operating condition works as a synchronous system, i.e. all the machines in the power system are synchronized with each other and run at a common frequency. During a short circuit a very heavy current flows in the circuit and the voltages at different parts of the system fall to very low values; this condition is known as an abnormal operating condition. A fault therefore creates a disturbance in power system operation, and whether the power system will remain in synchronism through these disturbances is a major problem for the power system. Due to the fault there is an imbalance between the electrical and mechanical power; this affects the rotor speed variations and may lead to loss of step. The power system stabilizer improves the damping of power system oscillations during electromechanical transients [1].


Sometimes the power system stabilizer alone is not enough to damp out the oscillations, so we use the coordinated model of PSS and FACTS devices, which has a shorter settling time and is more efficient.

This paper is organized as follows: In section 2 single machine infinite bus systems is introduced. In

section 3 power system stabilizer is introduced. In section 4 SVC, SSSC & UPFC are introduced. In

section 5 simulation result is shown & in the last section conclusion and prospects for future work is

given.

II. SINGLE MACHINE INFINITE BUS SYSTEM

Figure 1 shows the basic building block of the single machine infinite bus system with exciter and AVR [2]; this is the basic block for a better understanding of the machine and the infinite bus connected to the complex power system. The complete power system state-space model is given in the following form:

d/dt [Δωr ]   [a11 a12 a13  0 ] [Δωr ]   [b1]
     [Δδ  ] = [a21  0   0   0 ] [Δδ  ] + [0 ] ΔTm
     [Δψfd]   [ 0  a32 a33 a34] [Δψfd]   [0 ]
     [ΔV1 ]   [ 0  a42 a43 a44] [ΔV1 ]   [0 ]

The Heffron-Phillips model of the single machine infinite bus system is given below [9].

Fig: 1 Single machine infinite bus system
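One way to see what this linearized model implies is to put numbers into the state matrix and examine its eigenvalues, whose complex pairs give the frequency and damping ratio of the rotor modes. A sketch follows; the a_ij constants are illustrative placeholders, not values from this paper:

```python
import numpy as np

# Illustrative constants for the 4x4 linearized model; the a_ij values below
# are hypothetical placeholders, not taken from this paper.
a11, a12, a13 = -0.10, -0.11, -0.12
a21 = 2 * np.pi * 50                 # rad/s per pu slip (50 Hz base)
a32, a33, a34 = -0.19, -0.42, -27.3
a42, a43, a44 = -7.3, -20.8, -50.0

A = np.array([[a11, a12, a13, 0.0],
              [a21, 0.0, 0.0, 0.0],
              [0.0, a32, a33, a34],
              [0.0, a42, a43, a44]])

eigs = np.linalg.eigvals(A)
print("eigenvalues:", np.round(eigs, 3))
for lam in eigs:
    if lam.imag > 1e-9:              # each oscillatory mode is a conjugate pair
        wn = abs(lam)
        print(f"mode at {wn / (2 * np.pi):.2f} Hz, damping ratio {-lam.real / wn:.3f}")
```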

III. POWER SYSTEM STABILIZER (PSS)

A power system stabilizer generally adds extra damping to the rotor oscillations of the synchronous machine by controlling its excitation. The block diagram is given below.

Fig: 2 Power system stabilizers

The above block model contains a low-pass filter, a gain, a washout high-pass filter, a phase compensator and a limiter. The input to the PSS may be the speed deviation, frequency deviation or accelerating power, and its output is a stabilizing voltage. The working of the PSS is simple and straightforward. When a fault occurs in the power system it generally introduces electromechanical oscillations in the synchronous generator, and to stabilize the system these oscillations must be damped out. The PSS takes any of the three signals listed above and supplies an additional stabilizing voltage signal to the excitation system so that the overall system may remain in a stable state. The block diagram of the PSS with the single machine infinite bus system is given below [3].


Figure-3 Single machine infinite bus system with PSS
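The PSS path just described (a washout high-pass filter in series with a lead-lag phase compensator and a gain; the low-pass filter and limiter are omitted here) can be written as a single transfer function. A sketch with illustrative time constants, assuming scipy is available:

```python
import numpy as np
from scipy import signal

# Illustrative PSS parameters (gain, washout and lead-lag time constants);
# these are assumptions, not values from the paper.
K, Tw, T1, T2 = 20.0, 10.0, 0.05, 0.02

# G(s) = K * (s*Tw / (1 + s*Tw)) * ((1 + s*T1) / (1 + s*T2))
num = np.polymul([K * Tw, 0.0], [T1, 1.0])   # washout zero at s = 0, lead zero at -1/T1
den = np.polymul([Tw, 1.0], [T2, 1.0])
pss = signal.TransferFunction(num, den)

# Gain and phase lead supplied around a typical 1 Hz rotor oscillation mode
w = [2 * np.pi * 1.0]
_, mag_db, phase_deg = signal.bode(pss, w=w)
print(f"at 1 Hz: gain = {mag_db[0]:.1f} dB, phase = {phase_deg[0]:.1f} deg")
```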

IV. FACTS DEVICE

FACTS are defined by the IEEE as “a power electronic based system & other static equipment that

provide control of one or more AC transmission system parameters to enhance controllability &

increase power transfer capability.”[4]

4.1 Static var compensator (SVC)
A static var compensator is mainly a shunt FACTS device used to control power flow and improve the transient stability of the system. It injects or absorbs reactive power at the bus when the voltage of the bus is low or high respectively [5]. It is mainly connected at the mid-point of the transmission line, because at the mid-point the static var compensator gives its maximum efficiency. Thus the static var compensator is used for mid-point voltage regulation of line segmentation, end-of-line voltage support to prevent voltage instability, improvement of transient stability, power oscillation damping, etc. Figure 4, given below, is the conventional diagram of a static var compensator; it mainly contains a thyristor-controlled reactor, a thyristor-switched capacitor, a fixed capacitor and a switched resistor. The inductor is used in this circuit especially to prevent inrush current.

Figure-4 Static var compensator

4.2 Static synchronous series compensator (SSSC)
A static synchronous series compensator is a series FACTS device which is used to control power flow and improve the power oscillation damping of the system [6]. The schematic of the static synchronous series compensator is given in the figure below.


Figure-5 Static synchronous series compensator with line

4.3 Unified power flow controller (UPFC)

The unified power flow controller is nothing but a combination of series and shunt FACTS devices, and it does the same work as is done by the series and shunt FACTS devices alone. It is the most powerful FACTS device [7]. The UPFC is mainly a combination of an SSSC and a STATCOM, used to improve the transient stability of the power system [8]. The schematic figure of the unified power flow controller is given below.

Figure-6 Unified power flow controller

V. SIMULATION

We have taken two generating units, of 1000 MW and 5000 MW respectively, connected by a 500 km long transmission line, as given below.

Figure.7 Simulation of Two area system with power system stabilizer only

The initial power outputs of SM1 and SM2 are 0.95 and 0.8 respectively. In the above model a three phase fault occurs at the sending-end bus at time t = 0.2 s.
The action of the power system stabilizer on a single phase fault is given below.
The positive sequence voltage and power waveforms are shown below.


Figure.8 Positive sequence voltage of power system

Figure.9 Waveform of Power

Now the machines rotor angle, speed and voltage waveform for single phase faults is given below.

Figure.10 Rotor angle difference between two machines

Figure.11 Speed of Machine (PU)


Figure.12 Terminal voltages waveform of two machines

Action of Power system stabilizer on three phase faults is given below.

Figure.13 Positive sequence voltages waveform

Figure.14 Line power of a system

Figure15.Rotor angle difference between two machines

Figure16.Machine speeds (PU)


Figure17.Terminal voltages of two machines

The simulink model of two area power system with power system stabilizer and static var

compensator is given below.

Figure18.Two area power system with PSS and SVC

For a single phase faults

Figure19.Positive sequence voltages and line power waveform

Figure20.Rotor angle, Machine speed, Terminal voltages of machine

For three phase faults


Figure21.positive sequence voltages and Line power

Figure22. Rotor angle, Speed and Terminal voltage of Machine

The simulink model result of two areas with power system stabilizer and static synchronous series

compensator is given below.

For three phase faults

Figure23.Sequence voltage, Line power, Reactive power waveform

The simulink model result of power system stabilizer with unified power flow controller is shown

below.

For three phase faults

Figure24.Sequence voltages, Line power, Reactive power waveform

VI. CONCLUSION AND PROSPECTS FOR FUTURE WORK

The objective of this study was to analyze the performance of the power system stabilizer and of the coordinated model of PSS and FACTS devices under several perturbations. Different types of simulations were carried out in the Simulink environment. The FACTS devices are simulated for transient stability on a two area power system. The system is simulated by initiating a single phase fault and a three phase fault near the first machine in the absence of FACTS devices. In this case the difference between the rotor


angles of the two machines increases enormously and the system eventually loses synchronism. But when the same faults are simulated in the presence of FACTS devices, the system becomes stable. From the Simulink results it is shown that the UPFC is the best at suppressing the three phase fault, so the best combination for suppressing a three phase fault in the two area power system is the power system stabilizer with the unified power flow controller, tuned in an appropriate manner. In future work we will do the same with the help of different artificial intelligence techniques, such as fuzzy logic and artificial neural networks, and compare the results to find the optimal one.

REFERENCES

[1] S. Amara and H. Abdallah, "Power system stability improvement by FACTS devices: a comparison between STATCOM, SSSC and UPFC," First International Conference on Renewable Energies and Vehicular Technology, 2012.
[2] K. C. Rout and P. C. Panda, "Performance Analysis of Power System Dynamic Stability Using Fuzzy Logic Based PSS for Positive and Negative Value of K5 Constant," International Conference on Computer, Communication and Electrical Technology (ICCCET 2011), 18th and 19th March 2011.
[3] P. Kundur, Power System Stability and Control, McGraw-Hill, U.S.A., 1994.
[4] "Proposed terms and definitions for flexible AC transmission system (FACTS)," IEEE Transactions on Power Delivery, Vol. 12, Issue 4, October 1997, pp. 1848-1853.
[5] P. Gopi, I. Prabhakar Reddy and P. Sri Hari, "Shunt FACTS Devices for First-Swing Stability Enhancement in Inter-area Power Systems," Third International Conference on Sustainable Energy and Intelligent Systems (SEISCON 2012).
[6] N. G. Hingorani and L. Gyugyi, Understanding FACTS: Concepts and Technology of Flexible AC Transmission Systems, IEEE Press, 2000.
[7] M. Ahmadi Kamarposhti and M. Alinezhad, "Effects of STATCOM, TCSC, SSSC and UPFC on voltage stability," International Review of Electrical Engineering, Vol. 4, No. 6, pp. 1376-1382, 2009.
[8] E. Gholipour and S. Saadate, "Improving of Transient Stability of Power Systems Using UPFC," IEEE Transactions on Power Delivery, Vol. 20, No. 2, pp. 1677-1682, 2005.
[9] D. K. Sambariya and R. Prasad, "Robust Power System Stabilizer Design for Single Machine Infinite Bus System with Different Membership Functions for Fuzzy Logic Controller," 7th International Conference on Intelligent Systems and Control (ISCO 2013).

BIOGRAPHIES

Amit Kumar Vidyarthi was born in Dhanbad, India on 8th Dec 1986. He is pursuing his

M. Tech from Lovely Professional University, Punjab. His research area interests are in the

areas of power system operation and control and Flexible AC Transmission systems

(FACTS).

Subrahmanyam Tanala is an Assistant Professor in the Department of Electrical

Engineering at Lovely Professional University, Phagwara, Punjab. His area of research and

interest includes Power electronics, power systems, FACTS devices and renewable energy.

Ashish Dhar Diwan is pursuing his M.Tech in Electrical Engineering (power systems) at Lovely Professional University. His areas of interest include power systems, FACTS, transmission systems, etc., and he has published papers on these subjects in various journals.


HYDROLOGICAL STUDY OF MAN (CHANDRABHAGA)

RIVER

Shirgire Anil Vasant1, Talegaokar S.D.2 1Student M.Tech, 2Professor,

Department of Civil Engineering, Bharati Vidyapeeth Deemed University

College of Engineering, Pune, India

ABSTRACT
A hydrological study of the Man (Chandrabhaga) river basin was carried out from the available rainfall and stream flow records of the years 1996-2010. Data was taken from the Hydrology Project Division, Nashik, for the case study. The mean annual rainfall of the Sidhewadi station is about 627.64 mm. A cumulative mass inflow curve was drawn to estimate the maximum reservoir capacity of a small dam. In this study, a flood frequency analysis of the Man River basin is carried out by the Log-Pearson Type III probability distribution method and Gumbel's distribution. From the flood frequency analysis the maximum flood discharge is calculated for recurrence intervals of 10, 100, 200 and 1000 years. From the study, the duration of floods was calculated. A small dam of storage capacity 4.60 Mm³ can be constructed. The study is helpful for increasing the area under irrigation and for the design and construction of hydraulic structures on the Man river.

I. INTRODUCTION

Planning, design, and management of water resources systems often require knowledge of flood

characteristics, such as peak, volume, and duration. Of fundamental importance in many design problems is the determination of the probability distribution of maximum annual discharge (7). Based

on an assumed probability distribution, one can compute statistics of flows of various magnitudes,

which can then be used for planning, design, and management of water resources projects. (11)

The river system in any region depends upon the slope of the land surface and the velocity of the streams. Rivers of the Indian plateau are shorter in length than those of the Himalayan region. These rivers have an ample amount of water in the rainy season, and in summer most of them are dry (8). These rivers have great importance in the western Maharashtra region. One of the eastward-flowing rivers of western Maharashtra is the Bhima, which has the Man River as a tributary.

The present study was undertaken to develop the mass curve and flow duration curve and to carry out flood frequency analysis from the available records of rainfall and stream flow. This study is a prerequisite in decision-making policies for the design of any hydraulic structure.
The data is taken from the Hydrology Project Division, Nashik, for the study.

Study Area

River Gauge Station: The Sidhewadi river gauge station is selected for the hydrological study. This station is 65 km away from Solapur. The hydrological data of this station is taken from the Hydrology Project Division, Nashik (Government of Maharashtra). Sidhewadi station is on the Mangalvedha-Pandharpur road, 17 km away from Pandharpur.

Objectives of Research

Keeping the above perspectives in view, the proposed study is undertaken with the following

objectives:


1. To calculate the storage capacity of a small dam, if constructed, by use of the mass curve, so as to increase the area under irrigation, provide drinking water, etc.
2. To develop the flow duration curve, from which assessments of hydropower design schemes, water supply, and the planning and design of irrigation systems are possible.
3. To carry out a comparative flood frequency analysis between the Gumbel and Log Pearson Type III distributions from the available hydrological data for calculation of the maximum annual discharge; a minimal computational sketch of this comparison follows.
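For objective 3, both methods amount to fitting a distribution to the annual maximum discharge series and reading off quantiles at the chosen return periods. A minimal sketch using scipy is given below, with a made-up peak discharge series standing in for the Sidhewadi record:

```python
import numpy as np
from scipy import stats

# Hypothetical annual peak discharges (m^3/s); replace with the Sidhewadi series
q = np.array([212., 340., 505., 128., 460., 390., 275., 610., 180., 330.,
              415., 290., 520., 245., 365.])
T = np.array([10, 100, 200, 1000])       # return periods used in the study (years)
p = 1.0 - 1.0 / T                        # non-exceedance probabilities

# Gumbel (extreme value type I) fit by maximum likelihood
loc, scale = stats.gumbel_r.fit(q)
q_gumbel = stats.gumbel_r.ppf(p, loc, scale)

# Log-Pearson type III: fit Pearson III to log10 of the flows
logq = np.log10(q)
skew, loc3, scale3 = stats.pearson3.fit(logq)
q_lp3 = 10 ** stats.pearson3.ppf(p, skew, loc3, scale3)

for t, g, l in zip(T, q_gumbel, q_lp3):
    print(f"T = {t:4d} yr: Gumbel {g:7.1f} m^3/s, LP-III {l:7.1f} m^3/s")
```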

Fig 1. Diagram showing Man River basin

II. REVIEW OF LITERATURE

L.R. Singo et al., 2011: Annual maximum flow data from 8 stations with 50 years of hydrological data

were used to analyze flood frequencies in the catchment. To derive the probability of occurrence of

flood events, the frequency distributions which could best describe the past characteristics and

magnitudes of such floods were tested. This involved the determination of the best flood frequency

models, which could be fitted to the available historical recorded data. The distribution models used

included the Generalized Extreme Value, Gumbel or Extreme Value type 1, Log-Normal and the Log

Pearson type III distributions. The extreme value analysis showed that the Gumbel and Log Pearson

type III distributions provided the best fit.

T. A. Ewemoje et al., 2011; The paper discusses how Normal, Lognormal, and log-Pearson type 3

distributions were investigated as distributions for annual maximum flood flows using the Hazen,

Weibull, and California plotting positions at Ogun-Oshun river basin in Nigeria. All the probability

distributions when matched with Weibull plotting position gave similar values near the center of the

distribution but varied considerably in the tails. The Weibull plotting position when matched with

Normal, Log-normal and Log Pearson Type III probability distributions gave the highest Coefficient

of determinations of 0.967, 0.987, and 0.986 respectively. Hazen plotting position gave minimal

errors with the RMSE of 6.988, 6.390, and 6.011 for Normal, Log-normal, and Log-Pearson Type III

probability distributions respectively. This implies that, predicting statistically using Hazen plotting

position, the central tendency of predicted values to deviate from observed flows will be minimal for

the period under consideration. Minimum absolute differences of 2.3516 and 0.5763 at 25- and 50-

year return periods were obtained under the Log-Pearson Type III distribution when matched with

Weibull plotting position, while an absolute difference of 0.2338 at 100-year return period was

obtained under the Log-Pearson Type III distribution when matched with California plotting position.

Comparing the probability distributions, Log-Pearson Type III distribution with the least absolute

differences for all the plotting positions is the best distribution among the three for Ona River under

Ogun-osun river basin study location.


Tefaruk Haktanir et al., 1993; A statistical model comprising nine different probability distributions

used especially for flood frequency analysis was applied to annual flood peak series with at least 30

observations for 11 unregulated streams in the Rhine Basin in Germany and two streams in Scotland.

The parameters of most of those distributions were estimated by the methods of maximum likelihood

and probability-weighted moments. The distributions were first compared by classical goodness-of-fit

tests on the observed series. Next, the goodness of predictions of the extreme right-tail events by all the models was evaluated through detailed analyses of long synthetically generated series. The

general extreme value and 3-parameter lognormal distributions were found to predict the rare floods

of return periods of 100 years or more better than the other distributions used. The general extreme

value type 2 and log-Pearson type 3 (when skewness is positive) would usually yield slightly

conservative peaks. The Wakeby distribution also gave peaks mostly on the conservative side. The

log-logistic distribution with the method of maximum likelihood was found to overestimate greatly

high return period floods.

III. MATERIALS AND METHODS

To carry out hydrological studies of the Man river basin, the rainfall and discharge data for the years 1995-2010 were collected from Sidhewadi station. The stream flow data were recorded at the Sidhewadi stream flow gauging station, and the data obtained from the Hydrology Project Division, Nasik, were utilized in these studies.

3.1 Mean Annual Rainfall

Daily rainfall data are utilized for calculating the mean rainfall of Sidhewadi station. Average annual rainfall is calculated from the daily data of the station.

Table 1: Annual rainfall data of Sidhewadi station

Sr. No Year Rainfall (mm)

1 1995 527.4

2 1996 1071

3 1997 684

4 1998 1161

5 1999 552

6 2000 440

7 2001 454

8 2002 732

9 2003 423

10 2004 726

11 2005 671

12 2006 560

13 2007 532

14 2008 484

15 2009 522

16 2010 505

Avg Rainfall 627.64

3.2 Point Rainfall

Point rainfall, also known as station rainfall, refers to the rainfall data of a station. Depending

upon the need, data can be listed as daily, weekly, monthly, seasonal or annual values for various

periods. Graphically these data are represented as plots of magnitude versus chronological time in the

form of a bar diagram. Such a plot however is not convenient for discerning a trend in the rainfall as

there will be considerable variations in the rainfall values leading to rapid changes in the plot. The

trend is often discerned by the method of moving averages, also known as moving means.

3.3 Moving Average

Moving average is a technique for smoothening out the high frequency fluctuations of a time series

and to enable the trend, if any, to be noticed. The basic principle is that a window of time range m


years is selected. Starting from the first set of m years of data, the average of the data for m years is

calculated and placed in the middle year of the range m. The window is next moved sequentially one

time unit (year) at a time and the mean of the m terms in the window is determined at each window

location. The value of m can be 3 or more years, usually an odd value. Generally, the larger the size of the range m, the greater is the smoothening. There are many ways of averaging (and consequently of choosing the plotting position of the mean); the method described above is called the central simple moving average.
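For illustration, this central simple moving average can be sketched in Python using the annual rainfall values of Table 1 (the function name is ours, not part of the study):

# A minimal sketch of the m-year central moving mean (here m = 3), applied to
# the annual rainfall series of Table 1 (values in mm).
rainfall = {1995: 527.4, 1996: 1071, 1997: 684, 1998: 1161, 1999: 552,
            2000: 440, 2001: 454, 2002: 732, 2003: 423, 2004: 726,
            2005: 671, 2006: 560, 2007: 532, 2008: 484, 2009: 522, 2010: 505}

def central_moving_mean(series, m=3):
    # Return {year: mean of the m-year window centred on that year}.
    years = sorted(series)
    half = m // 2
    return {years[i]: sum(series[years[i + k]] for k in range(-half, half + 1)) / m
            for i in range(half, len(years) - half)}

print(central_moving_mean(rainfall))   # e.g. 1996 -> 760.8, 1997 -> 972.0

This reproduces column 4 of Table 03 below (e.g. 760.8 for 1996).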

3.4 Mass curve analysis

The flow–mass curve is a plot of the cumulative discharge volume against time plotted in

chronological order. The ordinate of the mass curve, V at any time t is thus

V = ∫(t0 to t) Q dt (1.1)

where t0 is the time at the beginning of the curve and Q is the discharge rate. Since the hydrograph is a plot of Q vs t, it is easy to see that the flow-mass curve is an integral curve (summation curve) of the hydrograph. The maximum vertical ordinate was measured, which gives the maximum storage

capacity of the reservoir. Fundamentally, a reservoir serves to store water and the size of the reservoir

is governed by the volume of the water that must be stored, which in turn is affected by the variability

of the inflow available for the reservoir.
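As an illustration only, the storage reading taken from the mass curve can be sketched in Python: the mass curve is the running cumulative sum of the inflows, and for an assumed uniform demand the required storage is the largest vertical shortfall of the mass curve below the demand line (the demand rate here is a placeholder, not the paper's figure):

import numpy as np

# A minimal sketch of the flow-mass curve idea; the inflows are the first five
# yearly values of Table 04 and the demand is assumed uniform at the mean.
inflow = np.array([3364.19, 764.76, 837.52, 63.97, 14975.0])
demand = np.full_like(inflow, inflow.mean())     # assumed uniform demand

mass_curve = np.cumsum(inflow)                   # ordinate V of Eq. (1.1)
demand_line = np.cumsum(demand)
storage = max(float(np.max(demand_line - mass_curve)), 0.0)  # largest deficit
print(mass_curve, storage)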

3.5 Flood frequency study

The procedure for estimating the frequency of occurrence (return period) of a hydrological event such

as flood is known as (flood) frequency analysis. Two methods were used for analysis.

3.5.1 Log-Pearson Type III Distribution
The Log-Pearson Type III distribution tells you the likely values of discharges to expect in the river at various recurrence intervals based on the available historical record. This is helpful when designing structures in or near the river that may be affected by floods. The Log-Pearson Type III distribution is calculated using the general equation:

If X is the variate of a random hydrologic series, then the series of Z variates where

Z = Log x (1.2)

are first obtained. For this Z series, the value of the variate for any recurrence interval T is given by
ZT = Z̄ + Kz σz (1.3)

where Kz is a frequency factor, which is a function of the recurrence interval T and the coefficient of skew Cs,

σz = standard deviation of the Z variate sample = √[ Σ(Z - Z̄)² / (N - 1) ] (1.4)

Next, the skewness coefficient Cs of the Z variate can be calculated as
Cs = N Σ(Z - Z̄)³ / [ (N - 1)(N - 2) (σz)³ ] (1.5)
where Z̄ = mean of the Z values and N = sample size = number of years of record. Finally,

XT = antilog (ZT) (1.6)
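A minimal Python sketch of these steps, under the assumption that the frequency factor Kz is read manually from Table 08 for the computed skew, is:

import math

# A minimal sketch of the Log-Pearson Type III steps (Eqs. 1.2-1.6), using the
# annual maximum one day floods of Table 06.
def lp3_stats(flows):
    z = [math.log10(x) for x in flows]                             # Eq. (1.2)
    n = len(z)
    zbar = sum(z) / n
    sigma = math.sqrt(sum((v - zbar) ** 2 for v in z) / (n - 1))   # Eq. (1.4)
    cs = (n * sum((v - zbar) ** 3 for v in z)
          / ((n - 1) * (n - 2) * sigma ** 3))                      # Eq. (1.5)
    return zbar, sigma, cs

def lp3_flood(zbar, sigma, kz):
    return 10 ** (zbar + kz * sigma)                               # Eqs. (1.3), (1.6)

floods = [209.5074, 63.97, 897.0745, 460.8193, 263.6401, 252.2801, 30.49228,
          75.92681, 16.92904, 698.1013, 219.781, 2416.143, 236.4403]
zbar, sigma, cs = lp3_stats(floods)          # ~2.3151, ~0.5977, ~-0.186
print(lp3_flood(zbar, sigma, 2.190))         # Kz for Cs ~ -0.186 at T = 100: ~4208 m3/s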

3.5.2 Gumbel’s Equation for Practical Use

The general frequency equation, giving the value of the variate X for a recurrence interval T, is used in the form
xT = x̄ + K σn-1 (1.7)

where σn-1 = standard deviation of the sample of size N, and K = frequency factor expressed as
K = (yT - yn) / Sn (1.8)
in which yT = reduced variate, a function of T, given by
yT = -ln [ ln ( T / (T - 1) ) ] (1.9)
or
yT = -[ 0.834 + 2.303 log log ( T / (T - 1) ) ] (1.10)

yn = reduced mean, a function of sample size N.


Sn = reduced standard deviation, a function of sample size N.

Table 02: Reduced mean (yn) and reduced standard deviation (Sn)

N 10 15 20 25 30 40 50
yn 0.4952 0.5128 0.5236 0.5309 0.5362 0.5436 0.5485
Sn 0.9496 1.0206 1.0628 1.0915 1.1124 1.1413 1.1607
N 60 70 80 90 100 200 500
yn 0.5521 0.5548 0.5569 0.5586 0.5600 0.5672 0.5724
Sn 1.1747 1.1854 1.1938 1.2007 1.2065 1.2360 1.2588
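These relations can be sketched in Python as below; the yn and Sn values for N = 13 are the standard ones for that sample size (Table 02, as printed, does not list N = 13, so they are assumed here):

import math

# A minimal sketch of Gumbel's method (Eqs. 1.7-1.9).
def gumbel_xt(T, mean, std_n1, yn, sn):
    yT = -math.log(math.log(T / (T - 1.0)))   # reduced variate, Eq. (1.9)
    K = (yT - yn) / sn                        # frequency factor, Eq. (1.8)
    return mean + K * std_n1                  # flood estimate, Eq. (1.7)

# Man river record: N = 13, mean = 449.3 m3/s, sigma(n-1) = 646.14 m3/s;
# yn = 0.5070 and Sn = 0.9971 are assumed standard values for N = 13.
print(gumbel_xt(100, 449.3, 646.14, 0.5070, 0.9971))   # ~3100 m3/s, cf. Table 10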

IV. RESULTS AND DISCUSSIONS

4.1 Mean Annual Rainfall

Graph 1: Annual Rainfall Vs time in years

4.2 Moving Average Method

Annual rainfall data are used for the moving average method. The addition of three consecutive years is done as shown in the following table.

Table 03: Computation of three-year moving mean

Col 1: Year | Col 2: Annual rainfall (mm) | Col 3: Three consecutive year total for moving mean (Pi-1 + Pi + Pi+1) | Col 4: 3-year moving mean (Col 3 ÷ 3)

1995 527.4

1996 1071 527.4+1071+684=2282.4 760.8

1997 684 1071+684+1161=2916 972

1998 1161 684+1161+552=2397 799

1999 552 1161+552+440=2153 717.7

2000 440 552+440+454=1446 482

2001 454 440+454+732=1626 542

2002 732 454+732+423=1609 536.33

2003 423 732+423+726=1881 627

2004 726 423+726+671=1820 606.67

2005 671 726+671+560=1957 652.33

2006 560 671+560+532=1763 587.7

2007 532 560+532+484=1576 525.3


2008 484 532+484+522=1538 512.7

2009 522 484+522+505=1511 503.7

2010 505

Average annual rainfall = 627.64 mm

Graph 2: Annual rainfall Vs year for Three year moving mean

The moving mean calculations are shown in Table 03. The three-year moving mean curve is plotted in Graph 2, with the moving mean value as the ordinate and the time in chronological order as the abscissa. Note that the curve starts in 1996 and ends in 2010. No apparent trend is indicated in this plot.

4.3 Mass inflow curve

The cumulative stream flow data were used to derive the mass inflow curve for calculating the reservoir capacity corresponding to a specific yield. The mass curve is shown in Graph 3. The average annual inflow is obtained as 2592.22 m3/s and the total cumulative inflow into the reservoir for the period of 13 years is 38884.242 m3/s. From the mass curve, it was estimated that the total quantity of water available for utilization is about 8000 m3/s.
Water available for storage is 8000 m3/s over five months; the mean daily discharge available is therefore 8000/150 = 53.33 m3/s, i.e. a daily volume of 53.33 × 8.64 × 10^4 ≈ 4.60 Mm3.

Table 04: Inflow cumulative discharge

Year Inflow (m3/s) Cumulative Inflow (m3/s)

1991 3364.19 3364.19

1993 764.76 4128.95

1996 837.52 4966.47

1997 63.97 5030.44

1998 14975 20005.44

1999 1387.319 21392.75

2000 559.489 21952.239

2001 1223.816 23176.055

2002 130.63 23306.685

2004 302.27 23608.955

2005 236.037 23844.992

2007 2320.97 26165.962

2008 532.39 26698.352

2009 9630 36328.352

2010 2555.89 38884.242

Total 38884.242

Avg 2592.22


Graph 3: Cumulative volume vs time in years, for storage capacity of reservoir from mass curve

4.4 Flow duration curve

The annual discharge data were used for drawing the flow duration curve. From the data it is found that the maximum and minimum discharges of 14975 m3/s and 63.97 m3/s were recorded in the years 1998 and 1997 respectively. The average discharge for the period of 13 years was estimated as 2592 m3/s; for only 2 years is the river flow above this average, and in the remaining 11 years the river flow is below the average discharge rate. A plot of annual discharge Q versus the plotting position (Pp) is shown in Graph 4; from this graph the flood discharge for different probabilities of time can be predicted.
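The ranking and plotting positions behind Table 05 can be sketched in Python (the tabulated values imply Pp = M/(N+1) × 100):

# A minimal sketch of the flow-duration computation of Table 05: discharges
# sorted in descending order, plotting position Pp = M/(N+1) x 100.
flows = [837.52, 63.97, 14975, 1387.319, 559.489, 1223.816, 130.63,
         302.27, 236.037, 2320.97, 532.39, 9630, 2555.89]

ranked = sorted(flows, reverse=True)
n = len(ranked)
for m, q in enumerate(ranked, start=1):
    print(m, q, round(100.0 * m / (n + 1), 2))   # M, Q (m3/s), Pp (%)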

Table 05: Annual discharge in descending order
Year | Inflow (m3/s) | Discharge in descending order (m3/s) | Order M | Plotting position Pp (%)

1996 837.52 14975 1 7.14

1997 63.97 9630 2 14.28

1998 14975 2555 3 21.42

1999 1387.319 2320.97 4 28.57

2000 559.489 1387.319 5 35.71

2001 1223.816 1223.816 6 42.85

2002 130.63 837.52 7 50

2004 302.27 559.489 8 57.14

2005 236.037 532.39 9 64.28

2007 2320.97 302.27 10 71.42

2008 532.39 236.037 11 78.57

2009 9630 130.63 12 85.71

2010 2555 63.97 13 92.85

32.497 %


Graph 4: Accumulated volume vs percent time, for flow duration curve of Man basin gauged at Sidhewadi station

4.5 Flood frequency study

Comparative flood frequency study was done by following two methods

4.5.1 Log Pearson Type III distribution

The Log Pearson Type III distribution was employed for flood frequency studies. The annual maximum one day flood data were taken for a period of 13 years. From the data it was found that the highest maximum one day flood of 2416.143 m3/s was recorded in the year 2009, whereas a minimum flood of 16.92904 m3/s was recorded in 2005. The average maximum one day flood for the Man river basin over the period of 13 years was estimated as 449.31 m3/s, with a standard deviation of the log-transformed series of 0.5977. The maximum one day floods for recurrence intervals of 100, 200 and 1000 years were determined as 4207.87 m3/s, 5611.77 m3/s and 10071.6 m3/s respectively.

Table 06: Flood frequency studies of Man River basin by Log Pearson Type III distribution
Year | Inflow (m3/s) | Max flood discharge X (m3/s) | Z = log X | (Z - Z̄) | (Z - Z̄)² | (Z - Z̄)³

1996 837.52 209.5074 2.321 0.006 0.000036 0.000000216

1997 63.97 63.97 1.805 -0.5101 0.2602 -0.1327

1998 14975 897.0745 2.952 0.6369 0.4056 0.2583

1999 1387.319 460.8193 2.6635 0.3479 0.1210 0.04210

2000 559.489 263.6401 2.4210 0.1059 0.01121 0.0011

2001 1223.816 252.2801 2.4018 0.0867 0.0075 0.00065

2002 130.63 30.49228 1.4841 -0.831 0.6905 -0.5738

2004 302.27 75.92681 1.8803 -0.4348 0.1890 -0.0821

2005 236.037 16.92904 1.2286 -1.0865 1.1804 -1.2825

2007 2320.97 698.1013 2.843 0.5279 0.2786 0.1471

2008 532.39 219.781 2.341 0.0259 0.0006 0.000017

2009 9630 2416.143 3.383 1.0679 1.1404 1.2178

2010 2555.89 236.4403 2.373 0.0579 0.0033 0.00019

Σ(Z - Z̄)² = 4.28834; Σ(Z - Z̄)³ = -0.40384


Table 07: Result table of Log Pearson Type III distribution (σz = 0.5977, Z̄ = 2.3151, Cs = -0.186)
T (years) | Kz (for Cs = -0.186) | Kz σz | ZT = Z̄ + Kz σz | XT = antilog ZT (m3/s)
100 2.190 1.308 3.6231 4207.87
200 2.400 1.434 3.7491 5611.77
1000 2.825 1.688 4.0031 10071.6

Table 08: Frequency factors K for Gamma and Log-Pearson Type III distributions (Haan, 1977)
Skew coefficient Cs | K for recurrence intervals (years): 1.0101, 2, 5, 10, 25, 50, 100, 200 (exceedance probability %: 99, 50, 20, 10, 4, 2, 1, 0.5)

3 -0.667 -0.396 0.420 1.180 2.278 3.152 4.051 4.970

2.9 -0.690 -0.390 0.440 1.195 2.277 3.134 4.013 4.904

2.8 -0.714 -0.384 0.460 1.210 2.275 3.114 3.973 4.847

2.7 -0.740 -0.376 0.479 1.224 2.272 3.093 3.932 4.783

2.6 -0.769 -0.368 0.499 1.238 2.267 3.071 3.889 4.718

2.5 -0.799 -0.360 0.518 1.250 2.262 3.048 3.845 4.652

2.4 -0.832 -0.351 0.537 1.262 2.256 3.023 3.800 4.584

2.3 -0.867 -0.341 0.555 1.274 2.248 2.997 3.753 4.515

2.2 -0.905 -0.330 0.574 1.284 2.240 2.970 3.705 4.444

2.1 -0.946 -0.319 0.592 1.294 2.230 2.942 3.656 4.372

2 -0.990 -0.307 0.609 1.302 2.219 2.912 3.605 4.298

1.9 -1.037 -0.294 0.627 1.310 2.207 2.881 3.553 4.223

1.8 -1.087 -0.282 0.643 1.318 2.193 2.848 3.499 4.147

1.7 -1.140 -0.268 0.660 1.324 2.179 2.815 3.444 4.069

1.6 -1.197 -0.254 0.675 1.329 2.163 2.780 3.388 3.990

1.5 -1.256 -0.240 0.690 1.333 2.146 2.743 3.330 3.910

1.4 -1.318 -0.225 0.705 1.337 2.128 2.706 3.271 3.828

1.3 -1.383 -0.210 0.719 1.339 2.108 2.666 3.211 3.745

1.2 -1.449 -0.195 0.732 1.340 2.087 2.626 3.149 3.661

1.1 -1.518 -0.180 0.745 1.341 2.066 2.585 3.087 3.575

1 -1.588 -0.164 0.758 1.340 2.043 2.542 3.022 3.489

0.9 -1.660 -0.148 0.769 1.339 2.018 2.498 2.957 3.401

0.8 -1.733 -0.132 0.780 1.336 1.993 2.453 2.891 3.312

0.7 -1.806 -0.116 0.790 1.333 1.967 2.407 2.824 3.223

0.6 -1.880 -0.099 0.800 1.328 1.939 2.359 2.755 3.132

0.5 -1.955 -0.083 0.808 1.323 1.910 2.311 2.686 3.041

0.4 -2.029 -0.066 0.816 1.317 1.880 2.261 2.615 2.949

0.3 -2.104 -0.050 0.824 1.309 1.849 2.211 2.544 2.856

0.2 -2.178 -0.033 0.830 1.301 1.818 2.159 2.472 2.763

0.1 -2.252 -0.017 0.836 1.292 1.785 2.107 2.400 2.67

0 -2.326 0.000 0.842 1.282 1.751 2.054 2.326 2.576

-0.1 -2.4 0.017 0.846 1.27 1.716 2.000 2.252 2.482

-0.2 -2.472 0.033 0.850 1.258 1.680 1.945 2.178 2.388

-0.3 -2.544 0.050 0.853 1.245 1.643 1.890 2.104 2.294

-0.4 -2.615 0.066 0.855 1.231 1.606 1.834 2.029 2.201

-0.5 -2.686 0.083 0.856 1.216 1.567 1.777 1.955 2.108

-0.6 -2.755 0.099 0.857 1.200 1.528 1.720 1.880 2.016

-0.7 -2.824 0.116 0.857 1.183 1.488 1.663 1.806 1.926

-0.8 -2.891 0.132 0.856 1.166 1.448 1.606 1.733 1.837

-0.9 -2.957 0.148 0.854 1.147 1.407 1.549 1.660 1.749

-1 -3.022 0.164 0.852 1.128 1.366 1.492 1.588 1.664

-1.1 -3.087 0.180 0.848 1.107 1.324 1.435 1.518 1.581

-1.2 -3.149 0.195 0.844 1.086 1.282 1.379 1.449 1.501

-1.3 -3.211 0.210 0.838 1.064 1.240 1.324 1.383 1.424

-1.4 -3.271 0.225 0.832 1.041 1.198 1.270 1.318 1.351


-1.5 -3.33 0.240 0.825 1.018 1.157 1.217 1.256 1.282

-1.6 -3.388 0.254 0.817 0.994 1.116 1.166 1.197 1.216

-1.7 -3.444 0.268 0.808 0.970 1.075 1.116 1.140 1.155

-1.8 -3.499 0.282 0.799 0.945 1.035 1.069 1.087 1.097

-1.9 -3.553 0.294 0.788 0.920 0.996 1.023 1.037 1.044

-2 -3.605 0.307 0.777 0.895 0.959 0.980 0.990 0.995

-2.1 -3.656 0.319 0.765 0.869 0.923 0.939 0.946 0.949

-2.2 -3.705 0.330 0.752 0.844 0.888 0.900 0.905 0.907

-2.3 -3.753 0.341 0.739 0.819 0.855 0.864 0.867 0.869

-2.4 -3.800 0.351 0.725 0.795 0.823 0.830 0.832 0.833

-2.5 -3.845 0.360 0.711 0.771 0.793 0.798 0.799 0.800

-2.6 -3.899 0.368 0.696 0.747 0.764 0.768 0.769 0.769

-2.7 -3.932 0.376 0.681 0.724 0.738 0.740 0.740 0.741

-2.8 -3.973 0.384 0.666 0.702 0.712 0.714 0.714 0.714

-2.9 -4.013 0.390 0.651 0.681 0.683 0.689 0.690 0.690

-3.0 -4.051 0.396 0.636 0.660 0.666 0.666 0.667 0.667

4.5.2 Gumbel Distribution
The annual maximum one day flood data were arranged in descending order, and the recurrence interval (T) and percent probability P were calculated. From the data it was found that the highest maximum one day flood of 2416.143 m3/s was recorded in the year 2009, whereas a minimum flood of 16.92904 m3/s was recorded in 2005. The average maximum one day flood for the Man river basin over the period of 13 years was estimated as 449.31 m3/s with a standard deviation of 646.14 m3/s. The maximum one day floods for recurrence intervals of 10, 100 and 200 years were determined as 1538.17 m3/s, 3101.05 m3/s and 3553.96 m3/s respectively.

Table 09: Gumbel's distribution
Sr No | Year | Max flood discharge (m3/s) | Max flood discharge X in descending order (m3/s) | (x - x̄) | (x - x̄)² | Order M | Tp = (N+1)/M

1 1996 209.5074 2416.143 1966.84 3868471.3 1 14

2 1997 63.97 897.0745 447.77 200501.5 2 7

3 1998 897.0745 698.1013 248.801 61901.93 3 4.67
4 1999 460.8193 460.8193 11.51 132.68 4 3.5

5 2000 263.6401 263.6401 -185.6 34469.6 5 2.8

6 2001 252.2801 252.2801 -197.02 38816.8 6 2.33

7 2002 30.49228 236.4403 -212.86 45309.3 7 2

8 2004 75.92681 219.781 -229.519 52678.9 8 1.75

9 2005 16.92904 209.5074 -239.79 57500.68 9 1.55

10 2007 698.1013 75.92681 -373.37 139408.14 10 1.4

11 2008 219.781 63.97 -385.37 148513.11 11 1.27

12 2009 2416.143 30.49228 -418.80 175400.14 12 1.16

13 2010 236.4403 16.92904 -432.37 186944.68 13 1.07

Mean x̄ = 449.31; Σ(x - x̄)² = 5010048.76

Table 10: Gumbel's distribution results (σn-1 = 646.14 m3/s, x̄ = 449.3 m3/s)
T (years) | K | K σn-1 | xT = x̄ + K σn-1
10 1.6852 1088.87 1538.17 m3/s
100 4.104 2651.75 3101.05 m3/s
200 4.804 3104.05 3553.96 m3/s
1000 6.4186 4147.35 4596.62 m3/s


V. CONCLUSIONS

- From the mass inflow curve, it was estimated that a quantity of 4.60 Mm3 of water is available for storage if a small dam is constructed.
- From the hydrological studies it was estimated that the average annual rainfall for Sidhewadi station is about 627.64 mm.
- From the available records of annual discharges, the flow duration curve was established.
- The Log Pearson Type III distribution and Gumbel's method were used for the flood frequency study.
- It was found that by the Log Pearson Type III distribution the maximum one day floods for recurrence intervals of 100, 200 and 1000 years were 4207.87 m3/s, 5611.77 m3/s and 10071.6 m3/s respectively.
- It was also found that by Gumbel's distribution, for return periods of 10, 100, 200 and 1000 years, the maximum one day flood discharges were 1538.17 m3/s, 3101.05 m3/s, 3553.96 m3/s and 4596.62 m3/s respectively.
- These are prerequisite studies for the construction of any hydraulic structure across the Man river basin.

REFERENCES

[1] Arora, K.R. (2007). Irrigation, Water Power and Water Resources Engineering. Standard Publishers Distributors, New Delhi.
[2] Asawa, G.L. (2005). Irrigation and Water Resources Engineering. New Age International Ltd. Publishers, New Delhi.
[3] Chow, V.T. (1964). Handbook of Applied Hydrology. McGraw-Hill, New York.
[4] Gumbel, E.J. (1958). Statistics of Extremes. Columbia University, New York.
[5] Gumbel, E.J. (1954). "Statistical theory of droughts". Proc. American Society of Civil Engineers, Vol. 80, No. 439, pp. 1-19.
[6] Haan, C.T. (1994). Statistical Methods in Hydrology. Iowa State University Press, Ames.
[7] Ibrahim, M.H. and Isiguzo, E.A. (2009). "Flood frequency analysis of Guara river catchment at Jere, Kaduna State, Nigeria". Scientific Research and Essays, Vol. 4 (6), pp. 636-646.
[8] Manjunatha, M.V. and Somnatha, G.S. (2001). "Hydrological studies of Netravathy river basin". Indian Journal of Power and River Valley Development, Vol. 1, Jan-Feb 2001, pp. 22-28.
[9] Sathe, B.K. (2012). "Flood frequency analysis of Upper Krishna river basin catchment area using Log Pearson Type III distribution". IOSR Journal of Engineering (IOSRJEN), ISSN 2250-3021, Vol. 2, Issue 8, pp. 68-77.
[10] Subramanya, K. (1984). Engineering Hydrology. Tata McGraw-Hill Publishing Company Ltd, New Delhi.
[11] Weibull, W. (1939). "A statistical theory of the strength of materials". Ing. Vetenskaps Akad. Handl., Vol. 151, p. 15.
[12] Todkari, G.U. (2012). "Impact of irrigation on agriculture productivity in Solapur district of Maharashtra state".

AUTHORS BIOGRAPHY

Shirgire Anil Vasant is a student of the M.Tech (Civil Engineering) course at Bharti Vidyapeeth College of Engineering, Pune. He has a CGPA of 8.34 up to semester III of the M.Tech Civil course.

S. D. Talegaonkar is working as an Assistant Professor at Bharti Vidyapeeth College of Engineering, Pune. He has a total of 10 years of teaching experience and has guided many UG-level student projects.


CROP DETECTION BY MACHINE VISION FOR WEED

MANAGEMENT

Ashitosh K Shinde and Mrudang Y Shukla
Dept. of Electronics and Telecommunication, Symbiosis Institute of Technology, Pune, Maharashtra, India

ABSTRACT
Weed management is one of the costliest inputs to agriculture and one of its least mechanised areas. To bring mechanisation to this area, the most important step is the detection of weeds in the agricultural field. Weeds can be detected using machine vision, which uses special image processing techniques. Weeds in an agricultural field can be detected by properties such as size, shape, spectral reflectance and texture features. In this paper we demonstrate weed detection by size features. After image acquisition, an excessive green algorithm is applied to remove soil and other unnecessary objects from the image. Image enhancement techniques are used to remove noise from the images. Using a labelling algorithm, each component in the image is extracted; then size-based features such as area, perimeter, longest chord and longest perpendicular chord are calculated for each label, and by selecting an appropriate threshold value, weed and crop segmentation is done. The results of all features are compared to obtain the best result.

KEYWORDS: Machine vision, Camera, Area, Perimeter, Longest chord, longest perpendicular chord.

I. INTRODUCTION

Weeds have existed on earth since men started cultivating. Every plant present in the agricultural field that is unwanted is called a weed. Weeds compete with the crop for sunlight, space, water and nutrients in the soil. Weeds are the most underestimated crop pests in tropical agriculture, although they cause greater reduction or loss in crop yields than other pests and diseases. Of the total annual loss of agricultural produce from various pests in India, weeds roughly account for 37% [15]. They decrease the quantity and quality of crop yield and cause health hazards for humans and animals. Thus weed management is most important in every crop production system. Weeds are one of the major constraints in agricultural production. As per the available estimates, weeds cause up to one-third of the total losses in yield, besides impairing produce quality and causing various kinds of health and environmental hazards. Despite the development and adoption of weed management technologies [14], weed problems are virtually increasing. This is due to several factors: shifts in weed flora caused by intercropping, mulching and crop rotations; the adoption of fixed cropping systems and management practices including herbicides; the development of herbicide resistance in weeds, e.g. Phalaris minor in the 1990s; the growing menace of wild rice in many states and Orobanche in mustard growing areas; invasion by alien weeds like Parthenium, Lantana, Ageratum, Chromolaena, Mikania and Mimosa [14] in many parts of the country; impending climate change favouring more aggressive growth of weed species; and herbicide residue problems. This suggests that weed problems are dynamic in nature, requiring continuous monitoring and refinement of management strategies to minimize their effects on agricultural productivity and environmental health. A number of factors affect the quality and quantity of yield, such as the competitiveness of the crop and weed present, the density of the crop and weed present, the time of emergence of the weed relative to the crop, and the duration of weed presence. The paper is organised as follows: in Section 1 the literature survey and field survey are discussed; in Section 2 image acquisition,


the excessive green algorithm, image enhancement, size-based feature extraction and crop masking are given; conclusion and future work are discussed in Section 3.

1.1: Related Work

In biological morphology based techniques, shape and size recognition can be conducted. Most machine vision research on plant species identification has been done at the leaf geometry level, with some at the whole plant level. Biological morphology can be defined as the shape and structure of an organism or any of its parts. A wide range of machine vision shape features for leaf or plant detection is used, such as area, length, width and perimeter, generally achieving high recognition rates under ideal conditions.

The weed detection processes of Hong Y. Jeon and Lei F. Tian included normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation and an Artificial Neural Network (ANN), with an accuracy of 72.6% [2]. For crop and weed detection, of the seven shape features extracted from the images, four were selected by discriminant analysis, which was able to classify the two groups with 98.9% accuracy; this method, by S. Kiani and A. Jafari, was developed to detect only the corn crop in the field [3][15][16].

1.2: Field Survey

To get familiar with Indian agricultural field conditions and to gather information about farmers' expectations from this project, a field survey was carried out in Loni Kalbhor, Pune, Maharashtra, India. According to the farmers, onion and sugar cane are the main crops taken by most farmers in that area. One of the costliest inputs to agriculture is de-weeding, which accounts for 21.15%, 31.81% and 21.87% of total expenses for onion, sugar cane and corn respectively, as shown in Table 1. Currently the farmers use expensive herbicides to kill the weeds, or the weeds are removed manually, which is a labour-dependent task.

Table 1 De Weeding Expenses for 1 Acre

Crop Onion Sugar Cane Corn

Time Period 4 Month 1 Year 2 Month

Total Yield 15,000KG 60,000KG 3000KG

Market Price Rs 10/KG Rs 2/KG Rs 13/KG

Total Earning Rs 150000 Rs 120000 Rs 39000

Expenses

Labor Charges Rs 10000 Rs 3000 Rs 3000

Fertilizers Rs 3000 Rs 3000 Rs 3000

Transport Rs 6000 NA Rs 5000

Pesticides Rs 1500 Rs 1500 Rs 1500

Total Rs 20500 Rs 7500 Rs 12500

De-Weeding

After 10 Days NA NA Rs 2000

After 25 Days Rs 2000 Rs 2000 Rs 1500

After 2 Month Rs 2000 Rs 1500 NA

After 3 Month Rs 1500 NA NA

Total Rs 5500 Rs 3500 Rs 3500

II. MATERIALS AND METHODS

2.1 Image Acquisition

The digital images were captured under perspective projection and stored as 24-bit color images with

resolutions of 5MP saved in RGB (red, green and blue) color space in the JPG format. The images

were processed with MATLAB R2010 under Windows 7 and Intel Core i3-2370M CPU, 2.4 GHz, 2

GB RAM. The images were taken in month of March, April 2013. With different angles. The result of

a project of this type relies heavily on the quality of the photo material that is used as input. Ideally we


want an image acquisition system that is robust in different lighting and weather conditions. But it is

also important to keep the photo acquisition process uncomplicated since the system should be easy to

use. All photographs used in this project were taken under natural lighting conditions. During all

image acquisition the camera was pointing directly towards the ground.

2.2 Excessive Green

After image acquisition it is necessary to remove unwanted information from the image by segmenting image pixels into vegetation and non-vegetation. For this, an excessive green colour extraction algorithm is developed.

outimage(x,y,z) = inimage(x,y,z) if
{
inimage(x,y,r) < inimage(x,y,g)
and inimage(x,y,b) < inimage(x,y,g)
}
outimage(x,y,z) = 0 otherwise

where outimage(x,y,z) is the output image after excessive green segmentation saved in JPG format, inimage(x,y,z) is the image acquired by the camera, x is the pixel index in each row, y is the pixel index in each column, and z is the primary colour plane: for red z is 1, for green z is 2 and for blue z is 3. The input image is shown in figure 1 and the output image is shown in figure 2 [5-8].

Fig 1. To be processed Image

Fig 2. Excessive Green


2.3 Image Enhancement

The aim of image enhancement is to improve the interpretability or perception of information in

images to provide better input for automated image processing techniques. In the proposed system

spatial domain image enhancement techniques are used.

2.3.1 RGB to gray image conversion

When converting an RGB image to grayscale, we have to take the RGB values for each pixel and

produce as output a single value reflecting the brightness of that pixel. One such approach is to take the average of the contributions from each channel: (R+G+B)/3. However, since the perceived brightness is often dominated by the green component, a different, more "human-oriented" method is to take a weighted average, e.g. 0.3R + 0.59G + 0.11B.

2.3.2 Median Filtering

Median filtering is a nonlinear method used to remove noise from images. It is widely used as it is

very effective at removing noise while preserving edges. It is particularly effective at removing ‘salt

and pepper’ type noise. The median filter works by moving through the image pixel by pixel,

replacing each value with the median value of the neighboring pixels. The pattern of neighbors is called the "window", which slides, pixel by pixel, over the entire image. The

median is calculated by first sorting all the pixel values from the window into numerical order, and

then replacing the pixel being considered with the middle (median) pixel value.

2.3.3 Intensity Adjustment

Intensity adjustment is an image enhancement technique that maps an image's intensity values to a

new range.
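A minimal Python/OpenCV sketch of this enhancement chain (weighted grayscale conversion, median filtering and intensity adjustment), assuming the excessive green output from above as input:

import cv2

# A minimal sketch of the section 2.3 chain; file names are placeholders.
img = cv2.imread("excess_green.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)         # ~0.299R + 0.587G + 0.114B
smooth = cv2.medianBlur(gray, 3)                     # remove salt-and-pepper noise
adjusted = cv2.normalize(smooth, None, 0, 255, cv2.NORM_MINMAX)  # stretch range
cv2.imwrite("enhanced.jpg", adjusted)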

2.4 Labeling Algorithm

Connected component labeling works by scanning an image, pixel-by-pixel (from top to bottom and

left to right) in order to identify connected pixel regions, i.e. regions of adjacent pixels which share

the same set of intensity values V. Connected component labeling works on binary or gray level

images and different measures of connectivity are possible. However, for the following we assume

binary input images and 8-connectivity. The connected components labeling operator scans the image

by moving along a row until it comes to a point p (where p denotes the pixel to be labeled at any stage

in the scanning process) for which V={1}. When this is true, it examines the four neighbors of p

which have already been encountered in the scan (i.e. the neighbors (i) to the left of p, (ii) above it,

and (iii and iv) the two upper diagonal terms). Based on this information, the labeling of p occurs as

follows:

If all four neighbors are 0, assign a new label to p, else

if only one neighbor has V={1}, assign its label to p, else

if more than one of the neighbors have V={1}, assign one of the labels to p and make a note

of the equivalences.

After completing the scan, the equivalent label pairs are sorted into equivalence classes and a unique

label is assigned to each class. As a final step, a second scan is made through the image, during which

each label is replaced by the label assigned to its equivalence classes. For display, the labels might be

different gray levels or colors.
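A minimal Python sketch of this two-pass, 8-connectivity labelling, with the equivalence classes resolved by a simple union-find, could look as follows (our own illustration, not the authors' implementation):

import numpy as np

# A minimal two-pass connected component labelling with 8-connectivity.
def label_components(binary):
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}                                   # equivalence table

    def find(a):                                  # union-find root lookup
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for i in range(h):
        for j in range(w):
            if not binary[i, j]:
                continue
            # previously scanned neighbours: left, upper-left, up, upper-right
            neigh = [labels[x, y] for x, y in
                     ((i, j - 1), (i - 1, j - 1), (i - 1, j), (i - 1, j + 1))
                     if 0 <= x and 0 <= y < w and labels[x, y]]
            if not neigh:
                parent[next_label] = next_label   # assign a new label
                labels[i, j] = next_label
                next_label += 1
            else:
                m = min(find(n) for n in neigh)   # take one existing label
                labels[i, j] = m
                for n in neigh:                   # note the equivalences
                    parent[find(n)] = m
    # second pass: replace each label by its equivalence-class representative
    for i in range(h):
        for j in range(w):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    return labels

print(label_components(np.array([[1, 0, 1],
                                 [0, 1, 0],
                                 [1, 0, 0]], dtype=bool)))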

2.5 Size based feature Extraction

Size based features can be extracted by using mathematical morphology. Morphology is an approach to image analysis based on the assumption that an image consists of structures which may be handled by set theory, unlike most other techniques. As can be seen in Figure 3, there is a significant difference between the sizes of the corn leaves and the leaves of the weeds. [4]


Fig. 3 Corn crop with Weed

2.5.1 Area

The area of any object in an image can be defined as the number of pixels in that region. It is possible to differentiate weed from crop by analysing the area feature of each object detected after performing the labelling algorithm. From figure 4 it is clearly seen that the area covered by the corn crop is higher than that covered by the weeds. By selecting an appropriate area threshold for an object, weed and crop can be easily identified. In Figure 4 the corn crop is identified. [4]

Fig.4 Area Based Crop detection

2.5.2: Perimeter

The perimeter of any object in an image can be defined as the number of pixels on the boundary of that region. It is possible to differentiate weed from crop by analysing the perimeter feature of each object detected after performing the labelling algorithm. From figure 5 it is clearly seen that the perimeter of the corn crop is higher than that of the weeds. By selecting an appropriate perimeter threshold for an object, weed and crop can be easily identified. In Figure 5 the corn crop is identified. [4]


Fig.5 Perimeter Based Crop detection

2.5.3: Longest Chord

A line connecting two pixels on the object boundary is called a chord. For many shapes, the length and orientation of the longest possible chord give an indication of the size and orientation of the object. If the chord runs between the boundary pixels (x1, y1) and (x2, y2), then its length lc is given by equation (2). It is possible to differentiate weed from crop by analysing the longest chord feature of each object detected after performing the labelling algorithm. From figure 6 a significant difference between the longest chord of the corn crop and that of the weeds is clearly seen. By selecting an appropriate longest chord length threshold, weed and crop can be easily identified. In Figure 6 the corn crop is identified.

𝑙𝑐 = √(𝑥2 − 𝑥1)2 + (𝑦2 − 𝑦1)2 (2)

Fig.6 Longest Chord Based Crop detection

2.5.4: Longest perpendicular chord

The longest perpendicular chord of any object in an image can be defined as the maximum length over all chords that are perpendicular to the longest chord, and it gives additional shape information. It is possible to differentiate weed from crop by analysing the longest perpendicular chord feature of each object detected after performing the labelling algorithm. From figure 7 it is clearly seen that the longest perpendicular chord of the corn crop is higher than that of the weeds. By selecting an appropriate longest perpendicular chord length threshold, weed and crop can be easily identified. In Figure 7 the corn crop is identified. [4]
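For illustration, the four size features of this section can be computed per object with OpenCV contours as sketched below; the toy mask and the projection-based estimate of the longest perpendicular chord are our assumptions:

import cv2
import numpy as np
from itertools import combinations

# A minimal sketch of the size features of section 2.5 for each object.
def size_features(mask):
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    feats = []
    for c in contours:
        area = cv2.contourArea(c)                  # pixels inside the region
        perimeter = cv2.arcLength(c, True)         # boundary length
        pts = c.reshape(-1, 2).astype(float)
        # longest chord: the most distant pair of boundary pixels (Eq. 2)
        p1, p2 = max(combinations(pts, 2),
                     key=lambda ab: np.hypot(*(ab[0] - ab[1])))
        lc = np.hypot(*(p1 - p2))
        # longest perpendicular chord: widest extent normal to that chord
        d = (p2 - p1) / lc
        proj = pts @ np.array([-d[1], d[0]])
        feats.append((area, perimeter, lc, proj.max() - proj.min()))
    return feats

mask = np.zeros((60, 60), np.uint8)
cv2.rectangle(mask, (10, 20), (50, 35), 255, -1)   # one toy "leaf"
print(size_features(mask))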


Fig. 7 Minor Axis

2.6 Crop Masking

After successful detection of the crop by the different techniques, it is necessary to find the weed in the image; for this, the detected crop is masked with black colour by finding the origin, length and width of each bounding box. The result is shown in figure 8.

Fig. 8 Crop Masking

2.7 Weed Detection

After the crop masking, the weed is detected by applying the excessive green algorithm again. The bounding box algorithm is performed on the image to map the weeds. Once the weeds are mapped by bounding boxes, an algorithm to find the co-ordinates of each bounding box is developed and performed, and the co-ordinates of each detected weed are printed on the image, as shown in figure 9.
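A minimal sketch of this mapping step (the weed mask is assumed to be the binary image left after crop masking and excessive green):

import cv2

# A minimal sketch: bound each remaining (weed) blob and print its
# co-ordinates on the image, as done for figure 9.
def mark_weeds(image, weed_mask):
    contours, _ = cv2.findContours(weed_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)          # origin, width, height
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(image, "(%d,%d)" % (x, y), (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 0, 255), 1)
    return image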


Fig. 9 Weed Detection

III. CONCLUSION & FUTURE WORK

An image processing algorithm for detection of weeds in the Indian agricultural field, for the management of weeds, has been successfully developed and tested. For each stage of de-weeding, the thresholding parameter for each feature is calculated for the sugar cane and corn crops. The detected weed co-ordinates can be used for the calculation of actuation parameters; through serial communication the calculated co-ordinates can be given to the robot controller. The developed algorithm is simple and faster than algorithms which use artificial intelligence techniques, as the number of calculations in this algorithm is much smaller than that of AI algorithms and there is no requirement for training algorithms and a huge database. The accuracy of the algorithm can be increased by using more features and localized image processing techniques. In the future, accuracy can be increased by using spectral reflectance based and texture based weed detection. It is possible to develop a robotic machine which will run through the agricultural field and, by using the weed co-ordinates, spray herbicides on particular weed plants precisely or up-root the weeds with a mechanical tool.

REFERENCES
[1] D.C. Slaughter, D.K. Giles, D. Downey, "Autonomous robotic weed control systems: A review", Computers and Electronics in Agriculture 61 (2008) 63-78.
[2] Hong Y. Jeon, Lei F. Tian and Heping Zhu, "Robust Crop and Weed Segmentation under Uncontrolled Outdoor Illumination", Sensors 2011, 11, 6270-6283; doi:10.3390/s110606270.
[3] S. Kiani and A. Jafari, "Crop Detection and Positioning in the Field Using Discriminant Analysis and Neural Networks Based on Shape Features", J. Agr. Sci. Tech. (2012) Vol. 14: 755-765.
[4] Kamal N. Agrawal, Karan Singh, Ganesh C. Bora and Dongqing Lin, "Weed Recognition Using Image-Processing Technique Based on Leaf Parameters", Journal of Agricultural Science and Technology B 2 (2012) 899-908.
[5] Xavier P. Burgos-Artizzu, Angela Ribeiro, Alberto Tellaeche, Gonzalo Pajares, Cesar Fernández-Quintanilla, "Analysis of natural images processing for the extraction of agricultural elements", Image and Vision Computing 28 (2010) 138-149.
[6] J. Romeo, G. Pajares, M. Montalvo, J.M. Guerrero, M. Guijarro and A. Ribeiro, "Crop Row Detection in Maize Fields Inspired on the Human Visual Perception", The Scientific World Journal, Volume 2012, Article ID 484390.
[7] Muhammad Asif, Samreen Amir, Amber Israr and Muhammad Faraz, "A Vision System for Autonomous Weed Detection Robot", International Journal of Computer and Electrical Engineering, Vol. 2, No. 3, June 2010, 1793-8163.


[8] Xavier P. Burgos-Artizzu, Angela Ribeiro, Alberto Tellaeche, Gonzalo Pajares, Cesar Fernández-Quintanilla, "Improving weed pressure assessment using digital images from an experience-based reasoning approach", Computers and Electronics in Agriculture 65 (2009) 176-185.
[9] J. Bossu, Ch. Gée, G. Jones, F. Truchetet, "Wavelet transform to discriminate between crop and weed in perspective agronomic images", Computers and Electronics in Agriculture 65 (2009) 133-143.
[10] Sajad Kiani, "Crop-Weed Discrimination Via Wavelet-Based Texture Analysis", International Journal of Natural and Engineering Sciences 6 (2): 7-11, 2012.
[11] Alberto Tellaeche, Gonzalo Pajares, "A computer vision approach for weeds identification through Support Vector Machines", Applied Soft Computing 11 (2011) 908-915.
[12] Lanlan Wu, Youxian Wen, Xiaoyan Deng and Hui Peng, "Identification of weed/corn using BP network based on wavelet features and fractal dimension", Scientific Research and Essays Vol. 4 (11), pp. 1194-1200, November 2009.
[13] Anup Vibhute, S.K. Bodhe, "Applications of Image Processing in Agriculture: A Survey", International Journal of Computer Applications (0975-8887).
[14] Imran Ahmed, Awais Adnan, Muhammad Islam, Salim Gul, "Edge based Real-Time Weed Recognition System for Selective Herbicides", IMECS 2008, 19-21 March 2008, Hong Kong.
[15] T.K. Das, "Weeds and their Control Methods", Division of Agronomy, Indian Agricultural Research Institute, New Delhi - 110 012.
[16] Dr. A.R. Sharma, "Vision 2050", Directorate of Weed Science Research, Jabalpur - 482 004 (M.P.), India.

AUTHOR’S BIOGRAPHY

Ashitosh Shinde is pursuing the M.Tech degree in Electronics & Telecommunication Engineering from Symbiosis International University, Pune. He has a Bachelor of Engineering degree from the University of Pune, India, and a Diploma from Cusrow Wadia Institute of Technology. His research interests include technology in agriculture, image processing, robotics and embedded systems.

Mrudang Shukla is an assistant professor at Symbiosis Institute of Technology in the Electronics and Telecommunication department. His research interest is in image processing and defining vision and path for automated vehicles in agricultural fields. He has an M.Tech in Electronics and Communication Systems from DDU (Dharmsinh Desai University), Nadiad, Gujarat, and a BE in Electronics and Telecommunication from D N Patel COE, Shahada, North Maharashtra University (NMU), Jalgaon.


DETECTION & CONTROL OF DOWNEY MILDEW DISEASE IN

GRAPE FIELD

Vikramsinh Kadam1, Mrudang Shukla2
1M.Tech (E&TC), 2Assistant Professor
Symbiosis Institute of Technology, Pune, Maharashtra, India

ABSTRACT
Grape fields are greatly affected by Downey mildew disease. The disease develops rapidly: within six hours its spread doubles. Once a plant is affected, the disease diminishes the quantity and quality of the grapes and reduces the photosynthesis process. For this reason, disease detection is an important part; by using a detection technique we can prevent the disease. Detection and control strategies are carried out in order to control Downey mildew disease, using an electromechanical system. In this system a Raspberry Pi module, which has image processing capabilities, is used for disease detection. Traditionally the farmer checks for the disease visually and, if it is present, applies pesticide spray manually. In our system the disease is detected by the Raspberry Pi module, which is installed on a robo car. As soon as the disease is detected, the detection signal is transmitted to another electromechanical module, and pesticide is then sprayed automatically on the infected area by the electromechanical system. The farmer need not go into the farm to check each leaf. Prevention is always better than cure: instead of waiting for disease development, we can prevent this disease in the grape field. If the disease is prevented, export quality grapes can be produced and the farmer can have more profit from grape production.

KEYWORDS: Image Processing, Robotics, Control of Downey mildew disease, Raspberry pi module

I. INTRODUCTION

In the grape field, Downey mildew disease is the biggest threat to the plant. Downey mildew disease is native to North America and is caused by the fungus Plasmopara viticola. Vinifera cultivars are the most susceptible to this disease; wild species are more resistant. Downey mildew comes naturally in the rainy season, when the humidity of the environment is high. In the first 40 to 65 days after the cutting of plants for grape production, the leaves of the grapes are delicate and immature, and at that time this disease comes. It can reduce profitability by 50%. The correct identification should be done in time; a little delay in identification can harm the plant.
There are two favourable conditions for Downey mildew disease development: a temperature of 10 to 23 degrees, or a temperature of 23 to 27 degrees with relative humidity greater than 80%; then destruction of the grape starts. Downey mildew causes deformed shoots, cluster growth reduction and premature defoliation, which causes delayed ripening of fruit; young berries turn light brown, become soft and then fall off the cluster easily. Downey mildew disease appears as fungus growth on the back side of the leaf; that is why its name is Downey mildew. Before proceeding towards detection we should know how the disease comes. If the temperature remains between 10 and 23 degrees, the Plasmopara viticola pathogen grows rapidly; another suitable condition is 23 to 27 degrees with greater than 80% relative humidity.
Once the fungus grows on the back side of a leaf, it finds stomata to enter the leaf tissue. If it does not find a sufficient number of stomata, it breaks the three layers of the leaf and enters the leaf tissue; these three layers are cutin, pectin and cellulose. Once it enters the leaf tissue, its effect appears in the form of yellowish leaves in the first 40 to 65 days. As mentioned earlier, if the pathogen does not get


enough stomata’s to enter in to leaf, it will enter by breaking 3 layers. Because of all these leaves are

delicate in these first 40 to 65 days, it is easy to break three layers. Once the leaves are getting older,

then there is no threat of Downey mildew disease as its immune system increased to resist that

disease. The fungus creates plasmopara viticola pathogen, which creates oogonium, oospores,

sporangia, zoospores & it creates infection to leaves. In another cycle it creates sporangia, zoospores

& it creates infection to leaves.

a) Need of detection & control:
Two cycles of disease development:
First cycle: oogonium -> oospores -> sporangia -> zoospores
Second cycle: sporangia -> zoospores
1) The disease is identified visually by farmers, but by the time they identify it, the disease is well developed. To avoid this we can use an image processing technique to identify the disease at an early stage.
2) The duration of the first cycle is 2 days and the duration of the second cycle is 6 hours. If we fail to detect this disease in the first 2 days, the disease pathogen production gets multiplied every 6 hours. It will create huge destruction in the grape field and ultimately reduce profitability. So detection of the disease is most important.

b) Statistics about the grape field of a farmer at location Phaltan:
We visited the farmer named Sampatro Kadam at Post: Girvi, Taluka: Phaltan, District: Satara, State: Maharashtra. He has been a grape producer since 1999 and has used the grape variety named Tashi Ganesh. The first cutting of the grape field is done in the month of February. Once the cutting is done, the stake is grown and nourished; the duration of this process is 150 days, during which the grape stake gets developed. After 150 days, cutting is done once again at the top side. This is the start of the grape production development process, which is carried out for 105 days. In these days there is a great threat to the grape field from Downey mildew disease, and the total pesticide cost goes up to 60000 to 70000 rupees.
In one acre, the production of the grape field is up to 8 to 10 tonnes. The harvesting is done in October, when the grapes are exported. The farmer gets 80 to 100 rupees per kilogram of grapes; thus from one acre of grape field the farmer gets 8 to 10 lakh rupees. Total expenditure is 1.5 to 2 lakh, so the farmer has a profit of 7 to 9 lakh rupees from one acre of farm.
When Plasmopara viticola gets a suitable atmosphere (10 to 23 degrees Celsius temperature, or greater than 80% relative humidity), it starts affecting grape leaves, berries and twigs. It forms a whitish growth on the back side of the leaf; these spots kill the plant tissue, and thus the photosynthesis process gets stopped there. Let us see how the disease gets created. [41]

1) There are organs (antheridia) [41] producing male gametes, and an immature ovarian egg (oogonium) within which the embryo develops. The fusion (karyogamy) of these two nuclei is done, producing a fertilized female zygote (oospore). Germination of the oospore leads to a sporangium, which is nothing but a container in which sporangia are stored; one sporangium contains about 40000 to 50000 sporangia. Germination of sporangia leads to zoospores. Zoospores again infect leaves, debris and twigs. This process takes 3 to 6 days.
2) The dormant twigs [41] get affected due to the vegetative part of the fungus (mycelium); it again forms sporangia and then zoospores are created. This process takes 6 to 8 hours.
An infected leaf gives sporangia, which again form zoospores; this process takes 6 to 8 hours. Therefore detection is the required first step.

II. DEVELOPMENT OF DOWNEY MILDEW DISEASE

Leaf colour changes due to the disease:
a) Stage 1: Creation of white spots on the back side of the leaf
Plasmopara viticola is the pathogen responsible for Downey mildew disease. It creates white spots on the back side of leaves, as shown in figure no. 1.


Figure 1: White spot of Downey mildew disease on backside of leaf

b) Stage 2: Creation of yellow spots on the upper side of the leaf
As seen in the first stage, white spots are present on the back side of the leaf. Exactly on the opposite (upper) side of each white spot, a yellow spot comes. The primary stage of yellow spot development is shown in figure no. 2.

Figure 2: Yellow spot of Downey mildew disease on upper side of leaf

The full development of yellow spots on the leaf (on the upper side) is shown in figure no. 3.

Figure 3: Dark Yellow spot created

c) Stage 3: Leaf gets damaged in areas where white spots come
This disease kills the leaf tissues, as shown in figure no. 4. Their colour turns brownish, which stops the photosynthesis process and then affects the production of the grape.


Figure 4: Yellow spot kills the tissue of leaf

III. ACTUAL DETECTION FOR DOWNEY MILDEW DISEASE

a) White spot detection
Detection of white spots is very important, because this is the primary stage of the disease; with the help of the image processing toolbox of MATLAB we can detect it. First we converted the image to a double precision intensity image. Then green colour thresholding is done, and that image is converted to gray. To reduce salt & pepper noise we used median filtering, after which we complement the image. In the morphological operation, we removed connected components smaller than 400; then we find components with perimeter less than 800 and locate those points. We applied the excessive white condition on the infected leaf. Fig. no. 5 shows the excessive white condition, fig. no. 6 shows thresholding and fig. no. 7 shows white spot detection.


Flow chart no.1: White spot detection

Fig. no.5: Gray image Fig. no.6: Morphological operation Fig. no.7: White spot detected

b) Yellow spot detection:

First we converted the image to a double precision intensity image. Then green colour thresholding is done, and that image is converted to a gray image. To reduce salt & pepper noise we used median filtering, after which we complement the image. In the morphological operation, connected components smaller than 400 are removed; then we find components with perimeter less than 800 and locate those points. The gray image is shown in fig. no. 8, the binary image in fig. no. 9, and the yellow spot detection in fig. no. 10.


Flow chart no.2: Yellow spot detection

Fig. no.8: Gray image Fig. no.9: Morphological operation Fig. no.10: Yellow spot detected

IV. CONTROL OF DOWNEY MILDEW DISEASE

a) Raspberry pi module

The Raspberry Pi module, robo car and pesticide sprinkling system are used to control the disease.


Fig.no.11: Raspberry pi module

The Raspberry Pi consists of the following parts:

1) ARM 11 with 700 MHz frequency

2) Ethernet port

3) Two USB port

4) SD card slot

5) Micro USB power

6) HDMI port for communication

7) GPIO ports

The Raspberry Pi is a credit-card sized computer that plugs into your TV and a keyboard, as shown in fig. no. 11. It is a capable little computer which can be used in electronics projects, and for many of the things that your desktop PC does, like spreadsheets, word-processing and games. It also plays high-definition video. The Raspberry Pi measures 85.60 mm x 56 mm x 21 mm (or roughly 3.37" x 2.21" x 0.83"), with a little overlap for the SD card and connectors which project over the edges, and it weighs 45 g. The SoC is a Broadcom BCM2835, which contains an ARM1176JZF-S with floating point, running at 700 MHz, and a VideoCore 4 GPU. The GPU is capable of Blu-ray quality playback, using H.264 at 40 Mbit/s. The camera module is an OmniVision 5647, comparable to the cameras used in mobile phones.

b) Detection System


Fig.no.12: Block diagram of Detection system Fig.no.13: Hardware of detection system

At first we downloaded the Raspbian operating system. The file size is 577 MB; after extraction it becomes 2.75 GB. This operating system is written to a 4 GB memory card, which is then inserted into the Raspberry Pi module with the help of an SD card reader. The Raspberry Pi is programmed in Python, and OpenCV is the library from which we fetch the image-processing functions in the Python program. Fig. no. 11 also shows the Pibot. The Pibot is nothing but a robocar with four wheels. We used two DC motors; these motors require 12 V for operation, and we selected a rating of 30 rpm. A 12 V battery is used to run the Pibot. The Raspberry Pi module is installed on the Pibot, and the camera is connected to the Raspberry Pi module. A Zigbee module is used for wireless transmission; both the Raspberry Pi module and the Zigbee module are mounted on the Pibot. The Pibot runs in the grape field with the camera facing upward. As soon as disease is detected by the Raspberry Pi module, the detection signal is transmitted through the Zigbee module, whose range is 100 meters. The grape field consists of a number of rows, and our robot moves between these rows; it can run between them because there are no ups & downs in this area.
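The paper does not reproduce its transmission code; the following is a minimal sketch of the detection-side logic, assuming the Zigbee module is attached to the Raspberry Pi as a serial device accessed through pyserial (the port name, baud rate and message format are illustrative assumptions, not taken from the paper).

import serial   # pyserial; the Zigbee radio is assumed to enumerate as a serial port

zigbee = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)

def report_disease(row_number):
    # Send a one-line detection message to the control system over Zigbee.
    zigbee.write(("DISEASE,ROW=%d\n" % row_number).encode("ascii"))

# Example: downy mildew detected while the Pibot traverses row 1.
report_disease(1)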

c) Control System


Fig. no. 14: Block diagram of control system
Fig. no. 15: Microcontroller-based hardware

The system shown in Fig. no. 15 controls the downy mildew disease; the block diagram and hardware design are shown in Fig. no. 14 and Fig. no. 15. The detected signal is received by the Zigbee module; as said earlier, its range is 100 meters. This Zigbee module is connected to the microcontroller AT89C51 through a MAX232 IC. A 12 V relay can be used, but this also depends on which kind of pesticide pump


is being used: if it is three-phase, then a contactor can be used for switching the pesticide pump on & off. These pesticide pumps are connected to the pesticide sprinklers. A number of pesticide sprinklers can be used across the entire grape field, distributed in all rows of the grape field through pesticide pipes.

Suppose our robocar is going through the first row and detects the disease there. The detection system will then signal through the Zigbee module that disease has been detected, and the control system will switch on the particular sprinklers in the first row. For this we have used valves: a solenoid valve is normally closed, but when we energize the solenoid coil the valve opens. With the help of solenoid valves, we can switch on a particular set of sprinklers. If all the sprinklers were switched on at once, the pressure at the sprinkler outputs would be low. Thus, by using a number of solenoid valves, we can spray pesticide over the whole grape field.
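The actual receiver in the paper is an AT89C51 programmed separately; purely to illustrate the row-to-valve mapping just described, here is a hypothetical sketch in Python that uses the same illustrative message format as the detection-side sketch above (the valve-switching function is a placeholder).

def energize_valve(row):
    # Placeholder: in the real system this would drive the relay/contactor
    # that opens the solenoid valve feeding the sprinklers of that row.
    print("Solenoid valve for row %d: OPEN (sprinklers on)" % row)

def handle_message(message):
    # Open only the valves of the row where disease was detected, so the
    # pump pressure is not divided across every sprinkler in the field.
    if message.startswith("DISEASE,ROW="):
        energize_valve(int(message.split("=")[1]))

handle_message("DISEASE,ROW=1")   # detection signal received from row 1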

V. FUTURE WORK

In our system we have detected & controlled downy mildew disease. A GSM-based system could be used to operate the robot: if a farmer is far away from the grape field and wants to check for disease, he can operate the robot from a long distance. The system used in this paper targets only one disease of the grape field. All grape-field diseases could be integrated, which would give a complete solution for every disease in the grape field. Apart from grape fields, this technology can be used for identifying diseases of crops that have a canopy-like structure.

VI. CONCLUSION

As Plasmopara viticola is a dangerous pathogen causing downy mildew disease, it reduces the profitability of farmers. Cultural practices can reduce the disease only to some extent; hence this novel detection-based approach can be used to reduce it. Prevention is better than controlling the disease: detection gives a precautionary message, which helps prevent the disease. The detection & control technology is reliable and accurate. It will reduce labor cost, spending on pesticides, and destruction due to downy mildew disease, and finally it will increase production & profit.

REFERENCES

[1] Sindhuja Sankaran, Ashish Mishra, Reza Ehsani, Cristina Davis, 2010, a review of advanced techniques for detecting plant diseases, Computers and Electronics in Agriculture 72.
[2] Cesare Gessler, Ilaria Pertot & Michele Perazzolli, 2011, Plasmopara viticola: a review of knowledge on downy mildew of grapevine & effective disease management.
[3] Veronica Saiz-Rubio, Francisco Rovira-Mas, 2013, proximal sensing mapping method to generate field maps in vineyards.
[4] Jaime Lloret, Ignacio Bosch, Sandra Sendra, Arturo Serrano, 2011, wireless sensor network for vineyard monitoring that uses image processing, ISSN 1424-8220.
[5] Anushka Srivastava, 2010, Robo Kisan: a helping hand to the farmer.
[6] Anushka Srivastava & Swapnil Kumar Sharma, 2010, development of a robotic navigator to assist the farmer in field.
[7] Federico Hahn, 2009, actual pathogen detection: sensors & algorithms, ISSN 1999-4893.
[8] R.C. Seem, P.A. Magarey, P.I. McCloud & M.F. Wachtel, 1985, a sampling procedure to detect grapevine downy mildew.
[9] N. Lalancette, L.V. Madden and M.A. Ellis, 1988, a quantitative model for describing the sporulation of Plasmopara viticola on grape leaves.
[10] G. Staudt and H.H. Kassemeyer, 1995, evaluation of downy mildew resistance in various accessions of wild Vitis species.
[11] Stuart P. Falk, Roger C. Pearson, David M. Gadoury, Robert C. Seem & Abraham Sztejnberg, 1996, Fusarium proliferatum as a biocontrol agent against grape downy mildew.
[12] A. Kortekamp, 1997, Epicoccum nigrum Link: a biological control agent of Plasmopara viticola.
[13] Maurus V. Brown, James N. Moore, Patrick Fenn and Ronald W. McNew, 1999, comparison of leaf disk, greenhouse and field screening procedures for evaluation of grape seedlings for downy mildew resistance.


[14] R. Beresford, H. Paki, G. Brown, G. Follas and G. Hagerty, 1999, strategies to avoid resistance development to strobilurin and related fungicides in New Zealand.
[15] S.M. Liu, S.R. Sykes and P.R. Clingeleffer, 2003, a method using leafed single-node cuttings to evaluate downy mildew resistance in grapevine.
[16] A. Calonnec, P. Cartolaro, C. Poupot, D. Dubourdieu and P. Darriet, 2004, effects of Uncinula necator on the yield and quality of grapes.
[17] Thomas M. Perring, 2004, epidemiological analysis of glassy-winged sharpshooter and Pierce's disease data.
[18] Ken Shackel, John Labavitch, 2004, magnetic resonance imaging: a non-destructive approach for detection of xylem blockages in Xylella fastidiosa-infected grapevines.
[19] Mark A. Matthews, Thomas L. Rost, 2004, mechanisms of Pierce's disease in grapevine: the xylem pathways of Xylella fastidiosa, a progress report: comparison with symptoms of water deficit and the impact of water stress.
[20] D. Gobbin, M. Jermini, B. Loskill, I. Pertot, M. Raynal and C. Gessler, 2005, importance of secondary inoculum of Plasmopara viticola to epidemics of grapevine downy mildew.
[21] S. Boso, M.C. Martinez, S. Anger and H.H. Kassemeyer, 2006, evaluation of foliar resistance to downy mildew in different cv. Albariño clones.
[22] Wei-Sen Chen, Francois Delmotte, Sylvie Richard-Cervera, Lissette Douence, Charles Greif & Marie-France Corio-Costet, 2007, at least two origins of fungicide resistance in grapevine downy mildew populations.
[23] Sotolář R., 2007, comparison of grape seedling populations against downy mildew by using different provocation methods.
[24] Franco Mannini, 2007, hot water treatment and field coverage of mother plant vineyards to prevent propagation material from phytoplasma infections.
[25] Y. Cohen, E. Rubin, T. Hadad, D. Gotlieb, U. Gisi, 2007, sensitivity of Phytophthora infestans to mandipropamid & the effect of enforced selection pressure in the field.
[26] Santella Burruano, Antonio Alfonzo, 2008, interaction between Acremonium byssoides & Plasmopara viticola in Vitis vinifera.
[27] Lance Cadle-Davidson, 2008, variation within and between Vitis spp. for foliar resistance to the downy mildew pathogen Plasmopara viticola.
[28] M. Jermini, P. Blaise, C. Gessler, 2010, quantitative effect of leaf damage caused by downy mildew on growth & yield quality of grapevine 'Merlot'.
[29] W.S. Lee, V. Alchanatis, C. Yang, M. Hirafuji, D. Moshou, 2010, sensing technologies for precision specialty crop production.
[30] Jayamala K. Patil, Raj Kumar, 2011, advances in image processing for detection of plant diseases.
[31] Jan-Cor Brink, 2012, optimization of fungicide spray coverage on grapevine & the incidence of Botrytis cinerea (book).
[32] Kanji Fatema Aleya, Debabrata Samanta, 2013, automated damaged flower detection using image processing.
[33] Paolo Tirelli, Massimo Marchi, Aldo Calcante, Sara Vitalini, Marcello Iriti, N. Alberto Borghese, Roberto Oberti, 2013, multispectral image analysis for grapevine diseases automatic detection in field conditions.
[34] Dan Egel, downy mildew of pumpkin.
[35] T.J. Wicks, B.H. Hall & A. Somers, first report of metalaxyl resistance of grapevine downy mildew in Australia.
[36] Andrew Taylor, Farmnote: how to bag test for downy mildew of grapes.
[37] Joost H.M. Stassen, identification & functional analysis of downy mildew effectors in lettuce & Arabidopsis.
[38] Ron Becker, Sally Miller, fact sheet: managing downy mildew in organic & conventional vine crops.
[39] Uncorking the grape genome.
[40] Jenna Burrell, Tim Brooke & Richard Beckwith, vineyard computing: sensor networks in agricultural production.
[41] Plant Pathology, book by George Agrios.
[42] Diseases of Fruit Crops, book by R.S. Singh.


AUTHORS BIOGRAPHY

Vikramsinh Kadam is pursuing the M.Tech degree in Electronics & Telecommunication Engineering from Symbiosis International University, Pune. His research interests include technology in agriculture, image processing, robotics and embedded systems.

Mrudang Shukla is an assistant professor at Symbiosis Institute of Technology in the Electronics and Telecommunication department. His research interest is in image processing and defining vision and path for automated vehicles in agricultural fields.


GROUND WATER STATUS- A CASE STUDY OF ALLAHABAD,

UP, INDIA

Ayush Mittal and Munesh Kumar
PG Students, Department of Civil Engineering, MNNIT Allahabad, Uttar Pradesh, India

ABSTRACT

The study of ground water is essential for assessing ground water quality. Such studies are of vital importance from the point of view of the development of water-logging. A vast area of the state of Uttar Pradesh is already water-logged, and any addition to such area is not in the interest of the state. It therefore becomes imperative to keep a careful watch over the behaviour of the ground water table, since a rise of ground water can be permitted only to a certain extent. More than 90% of the rural population uses ground water for domestic purposes. It is therefore extremely important to have detailed information and knowledge about the quantity and quality of ground water. In the present research work we discuss the groundwater scenario, which includes data on groundwater availability in different blocks and the groundwater quality of Allahabad district.

KEYWORDS: Alluvial, Chaka, Ganga, Groundwater, Vindhyan.

I. INTRODUCTION

Ground water is an important component of the water system for domestic, industrial and agricultural purposes. It is a commonly used source of drinking water for the urban and rural sectors in India. Ground water is a renewable natural resource with a relatively short and shallow circulation and a close dependence on precipitation and surface water. Ground water was once supposed to be hygienic, secure and safe for human consumption, but it is now being gradually polluted by human beings because of intense industrial activities. The quality of ground water depends upon the characteristics and type of the subsurface soil and the nature of the recharge water (Srinivasa CH, 2000).

Today the accelerated pace of development, rapid industrialization and population density have increased the demand on water resources. Ground water, a gift of nature, amounts to about 210 billion m3, including recharge through infiltration, seepage and evaporation. Out of this nearly one third is extracted for

irrigation, industrial and domestic use, while most of the water is regenerated into rivers. Over 98% of

the fresh water on the earth lies below its surface. The remaining 2% is what we see in lakes, rivers,

streams and reservoirs. Of the fresh water below the surface, about 90% satisfies the description of

ground water, that is, water which occurs in saturated materials below the water table. About 2%

water occurs as soil moisture in the unsaturated zone above the water table and is essential for plant

growth. The crucial role that the ground water plays as a source of drinking water for millions of rural

and urban families cannot be neglected. According to some estimates it accounts for nearly 80 percent

of the rural domestic water needs and 50 percent of urban domestic water needs(‘Kumar M. Dinesh,

2004’)India receives annual precipitation of about 4000 km3, including snowfall rain. India is gifted

with a river system comprising of more than 20 major rivers with several tributaries. Many of these

rivers are perennial and some of them are seasonal. India occupies 3.29 million km2geographical area,

which forms 2.4 percent of the world’s land area and having 4 percent of world’s fresh water

resources. Monsoon rain is the main source of fresh water, with 76 percent of the rain fall occurring

between June to September. The Precipitation in volumetric terms is 4000 Billion Cubic meters

(B.C.M.).The average annual flow out of this is 1869 B.C.M. The rest of water is lost in infiltration

and evaporation. Due to topographical and other constraints only 600 B.C.M., can be utilized.
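Restated as a simple balance (a worked restatement of the figures above, not an additional estimate):

\[ 4000 - 1869 = 2131\ \text{B.C.M. lost to infiltration and evaporation}, \qquad \frac{600}{4000} \times 100 = 15\%\ \text{utilizable} \]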


Ground water is generally less susceptible to contamination and pollution when compared to surface

water bodies. Also the natural impurities in rain water which replenishes ground water system get

removed while infiltrating through soil strata. But in India, where ground water is used intensively for

irrigation and industrial purposes, a variety of land and water based human activities are causing

pollution of this precious resource.

In many parts of the country the available water is rendered non-potable because of the presence of toxic material in excess. The situation worsens during the summer due to water scarcity and reduced rain water discharge. Contamination of water resources available for household and drinking purposes with toxic elements, metal ions and harmful microorganisms is one of the most serious health problems. As a result a huge amount of money is spent in India on chemical treatment of contaminated water to make it potable.

Ground water contamination includes spillage or disposal of pesticides, fertilizers, petroleum and hydrocarbon industrial chemicals, and waste products. Contamination can also result from changes in land-use pattern. A contaminated ground water system can pose a threat to other connected ecosystems.

The contamination can be described as coming from either point sources or diffuse sources. Point source contamination may come from sites such as landfills, whereas diffuse source contamination includes the spreading of fertilizer on agricultural land, urban runoff, and the fallout from industrial smoke stacks. When ground water becomes polluted it is difficult or impossible to clean up completely. The slow rate of ground water flow and low microbial activity limit any self-purification: processes which take days or weeks in surface water systems may take decades to occur in ground water. In addition, the costs of remediating ground water systems are very high. It is therefore better to prevent or reduce the risk of ground water contamination than to deal with its consequences. This large use of and dependency upon ground water dictate that these resources are valuable and must be protected for both present-day and future use (Donald K. Keech, 1979). The ground water of the region has also been classified and characterized on the basis of hydrochemical facies and their quality for agricultural use [Jain C.K. et al., 2000].

II. DESCRIPTION OF STUDY AREA

The district of Allahabad lies at tail end of Allahabad division to the south between latitude 24º 47’

and 25º 47’ N and longitude 81º 19’ and 82º 29’ E. On the North it is bounded by the districts of

Pratapgarh and Jaunpur, the former being separated by the river Ganga for about one third of its

boundary. On the East is the district of Varanasi and the district of Mirzapur on the Southeast. The

southern boundary is formed by the State of Madhya Pradesh, and the district of Banda and Fatehpur

bound it on the south-west and West. The length of the district from east to west is about 117 km and

the breadth from north to south is about 101 km, while the total area is 7261 sq. km. The district headquarters is located at Allahabad, which is also known as Prayag, situated at the confluence of the great rivers the Ganga, the Yamuna and the mythical Saraswati. Allahabad is one of the most important towns situated along the river Ganga. This great city is famous for the annual Magh Mela and for the Maha Kumbh, held every twelve years, the biggest mela in the world. The main town is bounded by the river Ganga on the northern and eastern sides, while the river Yamuna and the Doab plain form its southern and western boundaries respectively. The Kanpur-Varanasi Road, for most of its length, runs on the ridge line dividing the town into two parts. The area on the north of this road slopes towards the Ganga, whereas the area on the south side slopes towards the river Yamuna. The general information about Allahabad is given below:

2.1 Information Data

Table-1: Information

S.No Parameter Value

1. Population 5,959,798 As Per 2011 Census

2. Area 63.07 Sq.Kms

3. Altitude 98 Meters Above Sea Level

4. Temperature Summer 26.6 To 41.0 °C

5. Temperature Winter 9.1 To 29.0 °C

6. Rainfall 102.8 Cms

7. Language Hindi, Urdu & English


III. PHYSIOGRAPHY

The district is drained by the river Ganga and its right-bank tributaries the Yamuna and the Tons, and broadly represents the following geomorphic units:

1. Ganga Alluvial plain

2. Yamuna Alluvial plain

3. Vindhyan Plateau

Topographically Allahabad can be divided into three parts: the trans-Ganga tract or the Gangapar plain, the doab and the trans-Yamuna tract. These are formed by the two main rivers, the Ganga and the Yamuna. The trans-Ganga part consists of the Soraon, Phulpur and Handia tehsils. It is a plain area, but there are long belts of Khadar land. The high banks of the Ganga are covered with poor sandy soil. Belts of loam and usar lands also exist in this part. The doab tract comprises the Chail, Manjhanpur and Sirathu tehsils and lies between the Ganga on the north and the Yamuna on the south. It is rich

and fertile. The land is plain and consists of alluvial and light loam soils. In the south-west the soil is dark and resembles that of the adjoining parts of Madhya Pradesh. The trans-Yamuna tract lies to the south of the Yamuna and comprises the Karchhana and Meja tehsils. It forms a part of the Bundelkhand region. The ridge formed by the Ganga and the Yamuna, which lies in the north of Karchhana, is crowned with light sandy soil. The Kachhar land lies near the confluence of the Ganga and the Tons. The central parts of Karchhana tehsil and some parts of old Meja tehsil consist of upland. The ranges of the Vindhyan series of the Deccan plateau also lie in this tract. The Panna range runs for about 16 km along the southern boundary of the district.

Figure 1. Allahabad district

IV. CLIMATE & RAINFALL

The climate of Allahabad district is continental, with a moderate winter and a severe, extended summer. Allahabad experiences both a very dry hot summer and a very cold winter every year. During winter the temperature ranges from 9.5ºC to 26.2ºC, whereas in summer it ranges from 29.5ºC to 48.0ºC. The average normal maximum temperature has been observed as 47.8ºC during June and the minimum as 3.5ºC during January. The district receives rainfall from the south-west monsoon from June to September, the average rainfall being 973.8 mm. The total evaporation of Allahabad district is 1476.9 mm; the maximum is observed during May and June, which are the peak summer months.


V. HYDROGEOLOGY

Ground water in the district occurs both in the alluvium and in the weathered and jointed sandstones in areas which are underlain by hard rocks. Two broad hydrogeological units, namely unconsolidated (alluvium) and consolidated (hard rock), are the major components. The alluvial formations occur in the Trans-Ganga and Doab regions. Localized patches of the Trans-Yamuna region are also covered by unconsolidated formations. Occurrence of consolidated formations is restricted primarily to the Trans-Yamuna tract.

5.1 Alluvium Area

Field observations by government agencies indicate that the depth of the water table is less than 15 m during pre-monsoon in the Trans-Ganga region, whereas in the Doab it lies in the range between 5.5 and 20 m. During the post-monsoon period, the depth of the water table in the Trans-Ganga region ranges between 0.65 and 12.0 mbgl, while the Doab region shows depths ranging between 4.2 and 10.0 mbgl.

5.2 Hard Rock Area

The ground water in the widely covered Vindhyan Plateau region is primarily under unconfined conditions. Exploratory data indicate that the Kaimur sandstones found at depth have enough potential at favourable locations. These sandstones, after leaching of the cementing materials, get disintegrated and reduced to silica sands, which are loose and act as a promising repository of ground water. The lithological characteristics of bore holes have clearly indicated the presence of loose silica sands.

Figure 2. Hydrogeological map of Allahabad, UP, India


VI. WATER EXTRACTION SOURCE

Table-2: Overall Status of Water Supply (Source: U.P. Jal Nigam)

The total length of distribution pipeline is 1055 Kms. There are three zonal service reservoirs which

are located at Daraganj, Bhardwaj Ashram and Mayo Hall having capacity of 1.8 ML, 1.35 ML and

2.7 ML respectively. In addition to these three service reservoirs, there are 34 Over Head Tanks

(OHTs) which store the water before supply. Unaccounted flow of water is about 30% of total water

supply.

VII. GROUND WATER RESOURCES

To facilitate ground water development, the ground water resources of the district have been worked out and are as follows.

Table-3: Block-wise ground water resource of Allahabad district

Sl. No.  Assessment unit (Blocks)  Ground Water Availability (Ham)  Ground Water Draft (Ham)  Level of Development (%)  Category as on 05/2013  Balance Ground Water (Ham)

1. Bahadurpur 6269.09 552.88 82.64 Safe 5212.89

2. Baharia 8923.83 2509.56 81.44 Safe 4778.45

3. Chaka 2545.76 403.23 75.68 Safe 2123.22

4. Dhanupur 4743.23 854.79 87.12 Safe 3387.86

5. Handia 4834.02 1025.95 71.22 Safe 3232.22

6. Holagarh 7023.11 2544.43 69.82 Safe 2389.99

7. Jasara 5904.15 1634.21 57.00 Safe 2756.66

8. Kaundhiara 6952.68 1822.22 84.21 Safe 3154.32

9. Karchhana 6300.47 1263.39 89.79 Safe 3989.25

10. Kaurihar 5933.43 1309.67 75.12 Safe 3909.24

11. Koraon 8512.02 2605.88 34.54 Safe 5124.78

12. Manda 3633.87 764.60 69.89 Safe 2467.55

13. Mauiama 5754.46 1588.20 82.13 Safe 2971.35

14. Meja 4533.22 987.44 53.33 Safe 3103.38

15. Phulpur 5998.23 1102.93 63.33 Safe 4455.33

16. Pratapur 5331.22 1122.74 75.08 Safe 3487.90

17. Saidabad 5334.67 1205.76 73.05 Safe 3212.21

18. Soraon 4654.23 1190.98 67.19 Safe 2690.09

19. Shankargarh 3278.34 671.56 33.03 Safe 2012.08

20. Urva 5278.55 744.22 69.17 Safe 4232.33

Total 111738.19 24775.88 68682.32

(Source: U.P Jal Nigam)

VIII. GROUND WATER QUALITY

8.1 Quality of shallow ground water

The chemical analysis of shallow ground water, covering pH, E.C., Na, K, Ca, Mg, HCO3, Cl, SO4, NO3, F and TH as CaCO3, reflects that there is no contamination of the shallow ground water in the


district and all the constituents are well within range. The chemical data of shallow aquifers reveal that the ground water quality is more deteriorated in the canal command area. The maps of E.C. and chloride show that in most of the area E.C. varies from 180 to 1780 μS/cm at 25°C. It is interesting to find that the different radicals in the shallow ground water have not changed over the years in spite of expanding canal irrigation and use of fertilizers.

8.2 Quality of Deeper Aquifers

Data on water samples from deeper aquifers are few, but their analysis reveals that the water is safe and potable. It is observed that E.C. and other salts are in higher concentration in the alluvial area than in the hard rock area. The quality in the hard rock area is inferior near the streams compared with areas away from them.

IX. RECOMMENDATIONS

1. Variation in water quality was observed during both periods of the study, i.e. the pre-monsoon and post-monsoon periods.

2. Ground water quality varies from place to place and with the depth of the water table, which is reflected in the values obtained at the same locations from different sources.

3. Water source should be thoroughly investigated before recommending it for use, whether it is

private or government boring.

4. Periodical investigations should be conducted every two to three years, on a quarterly basis, to evaluate the level of ground water contamination.

5. Ground water withdrawal should be minimized in Bahadurpur and Chaka blocks

immediately.

6. Alternative drinking water source may be provided along the river bank because people

residing nearby are using hand pump water for drinking and other domestic purposes.

7. Public awareness should be created among the masses particularly for the people residing

along the bank of the river Yamuna for consumption of safe drinking water.

8. It is suggested that some low cost and easy to implement technique may be provided to the

consumers for removing hardness, total dissolved solids and chloride in water where the

values exceed the permissible limit.

X. CONCLUSIONS

The stage of groundwater development in the district is 69.73%. Maximum groundwater development is in Karchhana block (89.79%) and the minimum is in Shankargarh (33.03%). In five blocks, viz. Bahadurpur, Baharia, Dhanupur, Kaundhiara and Mauiama, the stage of groundwater development is 80% to 90%. In five blocks, viz. Chaka, Handia, Kaurihar, Pratappur and Saidabad, the stage of groundwater development is 70% to 80%. All the blocks fall under the "SAFE" category. Construction of canals or strengthening of the existing canal system should be emphasized in four blocks, viz. Bahadurpur, Chaka, Kaundhiara and Meja. In the rest of the blocks, emphasis may be given to irrigation through groundwater development, either by medium to shallow or deep tubewells. No block in the district is identified as a polluted area, but in localized areas like Chand Khamria (Meja), Naini industrial area and Shankargarh (part), E.C., NO3 and Fe have exceeded the permissible limit. Ground water quality in general is fresh and potable except in a few pockets. Data from deeper aquifers also reveal that there is no contamination or pollution of groundwater.

REFERENCES

[1] Kumar M. Dinesh and Shah Tushaar,(2004)“Ground Water pollution and contamination In India- The

emerging challenge”. pp 1-6.

[2] Kumar. Rakesh, Singh R.D., and Sharma K.P,(2005)“Water resources of India current science”, Vol. 89,

No.5, pp 794-811.

[3] Central Pollution Control Board (Ministry Of Environment And Forests),(2008)“Status Of Groundwater

Quality In India -Part-II”.

[4] Uttar Pradesh Development Report Vol-2.


[5] cpcb.nic.in/upload/NewItems/NewItem_50_notification.pdf.

[6] Uttar Pradesh Jal Nigam, Lucknow, India.

[7] Donald, K., Keech Inc. N.Y., (1979), "Ground Water Assessment of Surface and Subsurface", Ground Water Hydrology Journal, vol. 25.
[8] Jain CK, Bhatia KKS, Kumar Vijay (Natl Inst Hydro, Roorkee 247667). Ground water quality in Sagar district, Madhya Pradesh. Indian J Environ Health, 42(4) (2000), 151-158 [13 Ref].

[9] Jain CK, Sharma MK (Natl Inst Hydro, Roorkee, 247667, UP). Regression analysis of ground water quality

data of Sagar district, Madhya Pradesh. Indian J Environ Health, 42(4) (2000), 159-168 [8 Ref].

[10] Jain CK, Sharma MK, Bhatia KKS, Seth SM (Natl Inst Hydro, Roorkee 247667, UP). Ground water

pollution – endemic of fluorosis. Polln Res, 19(4) (2000), 505-509 [2 Ref].

[11] Srinivasa CH, Piska Ravi Shankar, Venkateshwar C, Satyanarayana Rao MS, Ravinder Reddy R (Dept

Zoo, PG Coll Sci, Saifabad, Osmania Univ, Hyderabad 500004). Studies on ground water quality of Hyderabad.

Polln Res, 19(2) (2000), 285-289 [15 Ref]

AUTHORS BIOGRAPHIES

Ayush Mittal is presently a postgraduate student in the Department of Civil Engineering, MNNIT Allahabad, India. His area of research is environmental geotechnology. He obtained his Bachelor of Technology from United College of Engineering and Research, Naini, Allahabad, India. His interests are Geotechnical Engineering, Groundwater Management, Rock Mechanics and Foundation Engineering.

Munesh Kumar is presently a postgraduate student in the Department of Civil Engineering, MNNIT Allahabad, India. His area of research is Geotechnical Engineering. He obtained his Bachelor of Technology from GLA University, Mathura, India. His interests are Geoenvironmental Engineering, Water Conservation and Remediation, Soil Dynamics and Earthquake Engineering.


CLUSTERING AND NOISE DETECTION FOR GEOGRAPHIC

KNOWLEDGE DISCOVERY

Sneha N S1 and Pushpa2
1PG Student, Department of Computer Science and Engineering, Adichunchanagiri Institute of Technology, Chikmagalur, Karnataka, India
2Associate Professor & Head, Department of Computer Science and Engineering, Adichunchanagiri Institute of Technology, Chikmagalur, Karnataka, India

ABSTRACT

An ample amount of geographic data has been collected with modern data acquisition techniques such as the Global Positioning System, high-resolution remote sensing and internet-based volunteered geographic information. Spatial datasets are large in size, multidimensional and have high complexity measures. To address these challenges, Spatial Data Mining (SDM) for Geographic Knowledge Discovery (GKD) is an emerging field for the extraction of useful information and knowledge from data for many applications. This paper addresses a clustering and noise detection technique for spatial data. We considered multidimensional spatial data to provide a feasible environment for placing sensitive devices in a laboratory, using the data collected from sensors. Various sensors were used to collect the spatial and temporal data. The GDBSCAN algorithm is used for clustering; it relies on a density-based notion of clustering and is designed to discover clusters of arbitrary shape and distinguish noise. The proposed work reduces the computation cost and increases performance.

KEYWORDS: Spatial Data, Temporal Data, Spatial Clustering

I. INTRODUCTION

Due to the development of information technology, a vast volume of data is accumulated in many fields. Since automated methods for filtering/analyzing the data and explaining the results are required, a variety of data mining techniques that find new knowledge by discovering hidden rules from vast amounts of data have been developed. In the field of geography, owing to technology for remote sensing, monitoring, geographical information systems and global positioning systems, a vast volume of spatial data is accumulated. An automated discovery of spatial knowledge is required because of the fast expansion of spatial data and the extensive use of spatial databases. Nowadays, spatial data mining has turned out to be more eminent and stimulating because abundant spatial data have been stored in spatial databases. The mining of meaningful patterns from spatial datasets is more difficult than mining analogous patterns from conventional numeric and categorical data, due to the complexity of spatial data types, spatial relationships and spatial autocorrelation. In various applications, spatial patterns are in great demand. Since spatial data has its own characteristics, different from non-spatial data, directly using general data mining techniques incurs many difficulties, so there have been many studies of spatial data mining techniques considering the characteristics of spatial data [1].

Spatial data are the data related to objects that occupy space. A spatial database stores spatial objects

represented by spatial data types and spatial relationships among such objects. Spatial data carries

topological and/or distance information and it is often organized by spatial indexing structures and

accessed by spatial access methods. These distinct features of a spatial database pose challenges and

bring opportunities for mining information from spatial data. Spatial Data mining or knowledge

discovery in spatial database refers to the extraction of implicit knowledge, spatial relations, or other


patterns not explicitly stored in spatial databases or data sets.

It is expected to have wide applications in geographic information systems, geo-marketing, remote

sensing, image database exploration, medical imaging, navigation, traffic control, environmental

studies, and many other application areas where spatial data are used. A crucial challenge to spatial

data mining is the exploration of efficient spatial data mining techniques due to the huge amount of

spatial data, and the complexity of spatial data types and spatial access methods. The extra features

that distinguish spatial data from other forms of data are spatial coordinates, topological distance and direction information. With the inclusion of so many features, the query language becomes complicated. In contrast to mining in relational databases, spatial data mining algorithms need to consider objects that are nearby in order to extract useful knowledge, as one object influences its neighboring objects.

In data analysis, cluster analysis is very frequently used; it organizes a set of data items into groups (or clusters) so that items in the same group are similar to each other and different from those in other groups. Clustering methods can be broadly classified into five groups: partitioning algorithms, density-based clustering, hierarchical algorithms, grid-based methods and model-based clustering methods. Example algorithms of this classification are K-Means, K-Medoids, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Generalized Density-Based Spatial Clustering of Applications with Noise (GDBSCAN) and Chameleon. To incorporate spatial information in clustering, three types of clustering analysis exist: spatial clustering, regionalization, and point pattern analysis. In this work only density-based clustering methods are considered.

In Section 2, we discuss the related literature with respect to various clustering algorithms for geographic knowledge discovery using spatial data mining. Section 3 discusses data collection, the GDBSCAN algorithm, and the spatial clustering and noise detection methods. Section 4 presents the results of the clustering and noise detection system.

II. LITERATURE SURVEY

N. Santhosh Kumar, V. Sitha Ramulu, K. Sudheer Reddy, Suresh Kotha and Mohan Kumar [2] presented how spatial data mining is achieved using clustering. Spatial data is a highly demanding field because huge amounts of spatial data have been collected in various applications, ranging from remote sensing to geographical information systems (GIS), computer cartography, environmental assessment and planning. Spatial data mining tasks include spatial classification, spatial association rule mining, spatial clustering, characteristic rules, discriminant rules and trend detection. Cluster analysis groups objects (observations, events) based on the information found in the data describing the objects or their relationships.

All the members of a cluster have similar features, while members belonging to different clusters have dissimilar features. Several clustering methods for spatial data mining include Partitioning Around Medoids (PAM), Clustering LARge Applications (CLARA), Clustering LARge Applications based upon RANdomized Search (CLARANS), the Spatial Dominant approach SD(CLARANS) and the Non-Spatial Dominant approach NSD(CLARANS).

Ester M., Kriegel H.-P., Sander J. and Xu X.[3] in their paper provided a Density-Based Algorithm

for Discovering Clusters in Large Spatial Databases with Noise. They presented the clustering

algorithm DBSCAN which relies on a density-based notion of clusters. It requires only one input

parameter and supports the user in determining an appropriate value for it. They also performed a

performance evaluation on synthetic data and on real data of the SEQUOIA 2000 benchmark. The

results of these experiments demonstrated that DBSCAN is significantly more effective in discovering

clusters of arbitrary shape than the well-known algorithm CLARANS. Furthermore, the experiments

have shown that DBSCAN outperforms CLARANS by a factor of at least 100 in terms of efficiency.

Ng R.T. and Han J. [4] developed efficient and effective clustering methods for spatial data mining. They developed a new clustering method called CLARANS, which is based on randomized search, and also developed two spatial data mining algorithms that use CLARANS. Their analysis and experiments show that with the assistance of CLARANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms.


Furthermore, experiments conducted to compare the performance of CLARANS with that of existing clustering methods show that CLARANS is the most efficient.

III. METHODOLOGY

3.1. Geographic Knowledge Discovery System

Geographic knowledge discovery (GKD) is the process of extracting information and knowledge from

massive geo-referenced databases. Spatial objects by definition are embedded in a continuous space

that serves as a measurement framework for all other attributes. This framework generates a wide

spectrum of implicit distance, directional, and topological relationships, particularly if the objects are

greater than one dimension. Figure 1 gives the system architecture of the Spatial Data Mining system for Geographic Knowledge Discovery (GKD). The architecture is divided into three parts: data collection from various databases; the processing stage, which consists of the spatial clustering method and noise detection; and the analysis phase, where the discovered patterns are analysed for equipment feasibility.

Figure 1: System Architecture of GKD

3.2 Data Collection

Spatial data has positional and topological components that do not exist in general data, and its structure differs according to the kind of spatial data. Temporal data are data which explicitly refer to time. The spatial data, consisting of temperature, light, humidity, voltage and location information, the temporal data, consisting of the date, and the non-spatial data, consisting of the sensor ID, are collected. The spatial dataset used in this work consists of multidimensional data of size 145.133 MB and includes 2,303,450 records. Table 1 gives ten sample records of the spatial data used in the proposed work. These data are stored in the database in a well-defined tabular format. The database consists of a set of similar tables to store data pertaining to the location of the sensors and the spatial data specifications of the equipment to be placed.

The attributes used for storing spatial data, as shown in Table 1, are Date, Sens ID, Temp, Humid, Light and Volt. Date gives the date on which the data are entered; Sens ID gives the ID of the sensor for which the data are entered; Temp contains the temperature of the particular sensor defined by the



sensor ID. Similarly, Humid, Light and Volt give the humidity, light and voltage information of the selected sensors. All these data are collected and stored.

Table 1: Spatial data Table of SDM for GKD

Date Sens ID Temp Humid Light Volt

18/04/14 S1 10.25 20.12 50.15 1.256

18/04/14 S18 24.26 14.26 78.24 6.456

18/04/14 S19 41.25 31.29 136.1 5.698

18/04/14 S2 8.25 16.25 36.24 2.020

18/04/14 S20 30.24 10.24 99.12 5.240

18/04/14 S21 45.78 35.65 187.2 4.250

18/04/14 S22 49.25 39.45 146.2 3.450

18/04/14 S25 14.36 26.24 151.4 2.456

18/04/14 S28 47.54 36.24 100.5 4.500

18/04/14 S30 34.25 61.24 200.1 1.458
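As a sketch of how such records might be loaded and sanity-checked before clustering, assuming the table above has been exported to a CSV file with the same column names (the file name and the filtering thresholds are illustrative assumptions):

import pandas as pd

# Column names follow Table 1; the file name is an assumption.
records = pd.read_csv("sensor_readings.csv", parse_dates=["Date"], dayfirst=True)

# Keep readings inside plausible physical ranges before clustering
# (humidity is 0-100% per Section 4.1; the temperature bounds are illustrative).
valid = records[records["Temp"].between(0, 60) & records["Humid"].between(0, 100)]
print(valid.groupby("Sens ID")[["Temp", "Humid", "Light", "Volt"]].mean())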

3.3. Spatial Clustering and Noise Detection

Spatial Clustering is interpreted as the task of collecting the objects of a spatial database into

meaningful detectable subclasses (i.e. clusters) so that the members of a cluster are as similar as

possible whereas the members of different clusters differ as much as possible from each other.

3.3.1 Density Based Spatial Clustering

Density based algorithms typically regard clusters as dense regions of objects in the data space that

are separated by regions of low density. The main idea of density-based approach is to find regions of

high density and low density, with high-density regions being separated from low-density regions.

These approaches make it easy to discover clusters of arbitrary shape. A common way is to divide the

high-dimensional space into density-based grid units. Units containing relatively high densities are the

cluster centers and the boundaries between clusters fall in the regions of low-density units. In density-

based clustering, clusters are defined as areas of higher density than the remainder of the data set.

Objects in these sparse areas - that are required to separate clusters - are usually considered to be

noise and border points.

The most popular density-based clustering method is DBSCAN. In contrast to many newer methods, it features a well-defined cluster model called "density-reachability". Similar to linkage-based clustering, it is based on connecting points within certain distance thresholds. However, it only

connects points that satisfy a density criterion, in the original variant defined as a minimum number of

other objects within this radius. A cluster consists of all density-connected objects (which can form a

cluster of an arbitrary shape, in contrast to many other methods) plus all objects that are within these

objects' range. Another interesting property of DBSCAN is that its complexity is fairly low - it

requires a linear number of range queries on the database - and that it will discover essentially the

same results (it is deterministic for core and noise points, but not for border points) in each run, so there is no need to run it multiple times. The key idea of density-based clustering is that for each object of a cluster the neighborhood of a given radius Eps has to contain at least a minimum number of objects, i.e. the cardinality of the neighborhood has to exceed a given threshold.

3.3.2 Generalized Density-Based Spatial Clustering of Applications with Noise (GDBSCAN)

The clustering algorithm DBSCAN relies on a density-based notion of clusters and is designed to discover clusters of arbitrary shape as well as to distinguish noise. In this work a generalized version of this algorithm is used. The generalized algorithm, called GDBSCAN, can cluster point objects as well as spatially extended objects according to both their spatial and their non-spatial attributes. The GDBSCAN algorithm is based on the center-based approach, in which density is estimated for a particular point in the dataset by counting the number of points within a specified radius, Eps, of that point, including the point itself. The center-based approach to density allows us to classify a point as a core point, a noise point or a border point. A point is a core point if the number of points within Eps, a user-specified parameter, exceeds a certain threshold, MinPts, which is also a user-specified parameter, taken as 3 in this work. Any two core points that are close enough, within a


distance Eps of one another, are put in the same cluster. Likewise, any border point that is close enough to a core point is put in the same cluster as the core point. Noise points are discarded.
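Purely as an illustration of this core/noise behaviour (not the paper's implementation), the sketch below uses scikit-learn's DBSCAN on made-up sensor coordinates; min_samples plays the role of MinPts and eps the role of Eps, and points labelled -1 are the detected noise.

import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative coordinates in meters (not the paper's data).
points = np.array([[0, 0], [1, 1], [2, 0],
                   [50, 50], [51, 50], [52, 51],
                   [200, 200]])

model = DBSCAN(eps=5.0, min_samples=3).fit(points)

# Labels >= 0 identify clusters; -1 marks noise points.
print(model.labels_)   # [ 0  0  0  1  1  1 -1]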

To find a density-connected set, GDBSCAN starts with an arbitrary object p and retrieves all objects

density-reachable from p with respect to Eps and MinPts. If p is a core object, this procedure yields a

density-connected set with respect to Eps and MinPts. If p is not a core object, no objects are density-

reachable from p and p is assigned to NOISE. This procedure is iteratively applied to each object p

which has not yet been classified. Thus, clusters are formed and the noises are detected.

The algorithm of the function is [5]:

SetofPoints is either the whole database or a discovered cluster from a previous run. Eps and MinPts are the global density parameters, whose values in this work are taken as 115.0 and 3 respectively. ClusterIds are from an ordered and countable datatype where UNCLASSIFIED < NOISE < "other Ids", and each object Sens is marked with a clusterId Sens.ClId. The function nextId(clusterId) returns the successor of clusterId in the ordering of the datatype. The function SetofPoints.get(i) returns the i-th element of SetofPoints.

A call of SetofPoints.Region(Sens, Eps) returns the Eps-neighborhood of Sens in SetofPoints as a list of objects. Obviously the efficiency of the algorithm depends on the efficiency of the neighborhood query, because such a query is performed exactly once for each object in SetofPoints which satisfies the selection condition.

The clusterId of some objects p which are marked as NOISE, because the cardinality of their Eps-neighborhood is less than MinPts, may be changed later if they are density-reachable from some other object of the database. This may happen only for border objects of a cluster. Those objects are then not added to the seeds list, because we already know that an object with a ClusterId of NOISE is not a core object, i.e., no other objects are density-reachable from it.

In the algorithm, the function ExpandCluster, constructing a density-connected set for a core object, is presented in more detail next [5]:

GDBSCAN (SetofPoints, Eps, MinPts)
   // SetofPoints is UNCLASSIFIED
   ClusterId := 1;
   FOR i FROM 1 TO SetofPoints.size DO
      Sens := SetofPoints.get(i);
      IF Sens.ClId = UNCLASSIFIED THEN
         IF ExpandCluster(SetofPoints, Sens, ClusterId, Eps, MinPts) THEN
            ClusterId := nextId(ClusterId)
         END IF
      END IF
   END FOR
END; // GDBSCAN

ExpandCluster (SetofPoints, Sens, ClId, Eps, MinPts) : Boolean;
   seeds := SetofPoints.Region(Sens, Eps);
   IF Count(seeds) < MinPts THEN // no core point
      SetofPoints.changeClId(Sens, NOISE);
      RETURN False;
   END IF
   // still here? Sens is a core object
   SetofPoints.changeClIds(seeds, ClId);
   seeds.Remove(Sens);
   WHILE seeds <> Empty DO
      currentSens := seeds.first();
      result := SetofPoints.Region(currentSens, Eps);
      IF Count(result) >= MinPts THEN
         FOR i FROM 1 TO result.size DO
            P := result.get(i);
            IF P.ClId IN {UNCLASSIFIED, NOISE} THEN
               IF P.ClId = UNCLASSIFIED THEN
                  seeds.Add(P);
               END IF;
               SetofPoints.changeClId(P, ClId);
            END IF; // UNCLASSIFIED or NOISE
         END FOR;
      END IF; // MinPts
      seeds.Remove(currentSens);
   END WHILE; // seeds empty
   RETURN True;
END; // ExpandCluster


IV. RESULTS AND DISCUSSION

4.1 Experimental setup

For the proposed work, the data is collected from various data sources such as sensors which are

deployed in a research laboratory. These sensors are required to collect time stamped topology

information, along with humidity, temperature, light and voltage values. The x and y coordinates of

the sensors are measured in meters. The data collected includes more than 2.3 million records. Sensor IDs range from 1 to 54; temperature is in degrees Celsius; humidity is temperature-corrected relative humidity ranging from 0 to 100%; light is in lux; and voltage is expressed in volts.

4.2. Data Collection Form

The data collection is the main part of data mining. This section deals with the various data collection

forms used in our project.

4.2.1 Data Entry Forms

Figure 2 shows the form used for obtaining the geographic details pertaining to the sensors, such as coordinate location. The data entered are displayed in grid format, and the density-based clustering is done based on these location details. The Sensor Id field takes the ID of each sensor, and the Location X and Location Y fields take the x and y coordinates of the sensor's location. On clicking the Add button these data are added to the database, and the sensors entered are displayed in grid format. The sensor details can be edited or deleted, and the Reset button is used to clear all the textbox values.



Figure 2. Sensor Location Entry

Figure 3 shows the form used to obtain the spatial and temporal details of the sensors, such as the date of entry, temperature, humidity, light and voltage. The data entered are represented in grid form and can be updated if necessary. The data must be entered within a specified threshold. The Sensor Id field lists the IDs of the sensors entered on the previous page, and the sensor for which data are to be entered should be selected, along with the date of entry; by default the current date is used. The spatial data are entered by the administrator. On clicking the Add button, the entered data are stored in the database if they are within the specified threshold, and are then displayed in grid format on the page. These data can be edited or deleted.


Figure 3. Sensor Spatial Data Entry

Figure 4 shows the equipment data form. This form is used to obtain the spatial data of the equipment, such as the temperature, humidity, light and voltage specifications of the equipment to be placed. These data are added to the database only if they do not exceed a specified threshold. They are represented in grid format and can be updated if required.

Figure 4. Equipment Data Entry

4.3. Clustering and Noise Detection Forms

The clustering and noise detection are done through the density-based spatial clustering algorithm with noise, GDBSCAN. Figure 5 gives a grid representation of the various clusters formed,


along with the sensor IDs of the sensors that fall under each cluster. The sensors which constitute noise are also detected and are displayed in the noise grid. The clustering is done based only on the location information of the sensors entered through the sensor location entry page.

Figure 5. Cluster and Noise Grid

Figure 6 shows the graphical representation of the sensors and their respective clusters. The noise points are those sensors which are not included in any cluster. The purple dots represent the sensors; the red lines group the sensors of a particular cluster together. On clicking the Show button at the top of the screen we get the sensors which form the clusters and those which constitute noise.

Figure 6. Cluster Formation and Noise Detection Representation


V. CONCLUSION

We considered 54 sensors to capture the spatial data, out of which 6 sensors do not belong to any of the clusters and are therefore treated as outliers; hence about 11% noise is detected. The method is found to be feasible for Spatial Data Mining (SDM) in Geographic Knowledge Discovery (GKD). The density-based GDBSCAN algorithm is used for clustering; it estimates the density for a particular point in the dataset by counting the number of points within a specified radius. Six clusters, named C1 to C6, are formed from the 54 sensors. After applying the GDBSCAN algorithm, C1 consists of 17 sensors, C2 of 6 sensors, C3 of 5 sensors, C4 of 5 sensors, C5 of 8 sensors and C6 of 7 sensors. By this analysis cluster C1 has the largest number of sensors, senses the most spatial data and has the highest location coverage, so more sensitive devices can be placed in C1 compared to the other clusters.

VI. FUTURE WORK

The clustering and noise detection method for GKD will help in placing laboratory devices in a feasible location, so that damage to the devices due to high temperature, high humidity and high voltage can be avoided. By using this technique the maintenance cost can be reduced and the lifetime of the devices increased. Further, this work can be enhanced for pattern analysis, so that computed scoring of the temperature, light, humidity and voltage values can increase the speed of the analysis process.

REFERENCES

[1]. Duck-Ho Bae, Ji-Haeng Baek, Hyun-Kyo Oh, Ju-Won Song, Sang-Wook Kim, "SD-Miner: A Spatial Data Mining System", Proceedings of IC-NIDC 2009.
[2]. N. Santhosh Kumar, V. Sitha Ramulu, K. Sudheer Reddy, Suresh Kotha, Mohan Kumar, "Spatial Data Mining using Cluster Analysis", International Journal of Computer Science & Information Technology (IJCSIT), Vol. 4, No. 4, August 2012.
[3]. Ester M., Kriegel H.-P., Sander J. and Xu X., 1996, "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise", Proc. 2nd Int. Conf. on Knowledge Discovery and Data Mining, Portland, OR, 226-231.
[4]. Ng R. T. and Han J., 1994, "Efficient and Effective Clustering Methods for Spatial Data Mining", Proc. 20th Int. Conf. on Very Large Data Bases, Santiago, Chile, 144-155.
[5]. Jörg Sander, Martin Ester, Hans-Peter Kriegel, Xiaowei Xu, "Density-Based Clustering in Spatial Databases: The Algorithm GDBSCAN and its Applications", Data Mining and Knowledge Discovery, 2, 169-194 (1998).
[6]. Ertöz L., Steinbach M., Kumar V., "Finding Clusters of Different Sizes, Shapes, and Densities in Noisy, High Dimensional Data", SIAM International Conference on Data Mining (2003).
[7]. Sayal M., Scheuermann P., "A Distributed Clustering Algorithm for Web-Based Access Patterns", Proceedings of the 2nd ACM-SIGMOD Workshop on Distributed and Parallel Knowledge Discovery, Boston, August 2000.
[8]. Fayyad U., Piatetsky-Shapiro G. and Smyth P., 1996, "Knowledge Discovery and Data Mining: Towards a Unifying Framework", Proc. 2nd Int. Conf. on Knowledge Discovery and Data Mining, Portland, OR, 82-88.
[9]. Ester M., Kriegel H.-P. and Xu X., 1995, "A Database Interface for Clustering in Large Spatial Databases", Proc. 1st Int. Conf. on Knowledge Discovery and Data Mining, Montreal, Canada, AAAI Press, 1995.
[10]. Jain A. K., Dubes R. C., "Algorithms for Clustering Data", Prentice-Hall.
[11]. Mohammed Otair, "Approximate K-Nearest Neighbour Based Spatial Clustering Using K-D Tree", IJDMS, Vol. 5, No. 1, February 2013.
[12]. Lovely Sharma, K. Ramya, "A Review on Density based Clustering Algorithms for Very Large Datasets", IJETEA, Volume 3, Issue 12, December 2013.
[13]. Richa Sharma, Bhawna Malik, Anant Ram, "Local Density Differ Spatial Clustering in Data Mining", IJARCSSE, Volume 3, Issue 3, March 2013.


AUTHORS

Sneha N S is currently pursuing the 4th semester of the Master of Technology in Computer Science and Engineering at AIT, Chickmagalur. She completed her Bachelor of Engineering at Srinivas Institute of Technology, Mangalore, and has published a paper. Her areas of interest include data mining and information security.

Pushpa is an Associate Professor in the Department of Computer Science and Engineering, AIT, Chickmagalur, and a Research Scholar at R.V. College of Engineering, Bangalore. She has 12 years of teaching experience. She completed her Master of Technology at Visvesvaraya Technological University and has published research papers in international journals. Her research interests are in the areas of data mining and computer networks.


PROXY DRIVEN FP GROWTH BASED PREFETCHING

Devender Banga and Sunitha Cheepurisetti

Department of Computer Science Engineering,
SGT Institute of Engineering and Technology, Gurgaon, India

ABSTRACT

Nowadays, large numbers of people connect to the web to get results for their various queries from servers, and servers are heavily loaded responding to them. When millions of users are connected to the web, results for their queries should arrive as fast as possible. There are a number of schemes for reducing the response time of a query to a server, such as caching and prefetching. This paper proposes a framework for prefetching at the proxy server. Prefetching is important for reducing web latency and improving user satisfaction. The proposed framework uses the FP Growth algorithm to determine users' frequent sets of patterns. User data is collected from the web log history. Depending on the threshold and the patterns generated by FP Growth, a list of patterns to be prefetched is generated and passed to the predictor module, which prefetches the web objects by creating a session with the main server. Using the FP Growth algorithm of association rule mining, frequent patterns can be determined without candidate key generation. The working of FP Growth is shown using Data-Applied. The proposed framework improves the efficiency of the existing network.

KEYWORDS: Prefetching, Proxy server, cache, prediction.

I. INTRODUCTION

One's dependency on the network is increasing day by day, especially in developing countries, where almost every person is attached to the web, since we can get a result for a query on any subject we have heard of even once. The moment we start searching with a statement or a keyword, that keyword is stored by the browser cache to provide fast results later. In today's networks, where speed is a major concern, we need schemes and algorithms to reduce the response time from server to client; using only a cache is not sufficient to reduce the latency, as today's networks are full of congestion and getting a faster response is a major issue of concern. Many researchers have worked in this area, and there are schemes and frameworks available that reduce the response time. One of the available schemes is prefetching with web caching. Web mining is used to mine the data received from the proxy server in a three-tier web architecture, and the results are fed back to the proxy cache, which then responds to user queries on demand. One example of such mining: if we visit a shop to buy bread, then the milk, butter and cheese placed near the bread can be associated with it by association rule mining. In this example there is only one query, bread, and the other items such as milk and cheese are not requested, but they are the expected results of the query for people who looked for milk and cheese along with bread at some earlier point in time. So, by applying mining rules, we can serve the expected queries from the cache rather than asking the server again and again for the results of previously asked queries. This greatly reduces the latency of a network. There are three types of caches: browser cache, proxy cache and server cache. Applying a cache at the browser gives faster results, but in a three-tier architecture, where the proxy server is the intermediary, much of the burden is borne by the proxy server instead of the main server. So applying web caching rules at the proxy server instead of the main server helps in providing results much faster. Proxy servers are used not only to reduce web latency but also to achieve various tasks such as NAT (Network Address Translation), firewalling and the security of web servers, so that unauthorized access can be stopped [1]. Proxy servers also provide a way to grant


access to any link on the web while hiding the actual details of the users, such as the IP (Internet Protocol) address.

The paper is divided into sections for ease of understanding. Section II reviews prior work in this field. Section III describes the proposed framework, and Section IV presents the experimental work carried out to validate it. The conclusion is provided at the end.

II. LITERATURE REVIEW

Pallis et al. [2] addressed the short-term prefetching problem in a web cache environment using an algorithm for clustering inter-site web pages. The proposed scheme efficiently integrates web caching and prefetching: each time a user requests an object, the proxy fetches all the objects that are in the same cluster as the requested object.

Sharma and Dubey proposed a framework for web traffic reduction [3]. The approach first extracts data from the proxy server web log, and the extracted data is then preprocessed. The preprocessed data is mined using clustering, and sequence analysis is performed to identify the patterns to be prefetched.

Divya and Kumar presented a survey of three association rule mining algorithms, AIS, Apriori and FP-tree, and their drawbacks, which is helpful for finding new solutions to the problems found in these algorithms [4].

Kosala et al. surveyed the research in the area of web mining [5]. The paper explores the connection between the web mining categories and the related agent paradigm, using representation issues, the process, the learning algorithm and the application of recent works as the criteria.

Chang et al. [6] describe information extraction (IE) from semi-structured web documents as a critical issue for information integration systems on the Internet. The discovery of repeated patterns is realized through a data structure called a PAT tree. The paper also notes that incomplete patterns are further revised by pattern alignment to cover all pattern instances.

Sharma and Dubey provided a literature survey of the area of web mining [7]. The paper focuses on the methodologies, techniques and tools of web mining, with particular emphasis on the three categories of web mining and the different techniques incorporated in it.

Dominic and Abdullah dealt with variations of the FP-growth algorithm [8]. The paper considers the classic frequent-itemset problem, the mining of all frequent itemsets that exist in market-basket-like data with respect to support thresholds. The execution time and the memory usage were recorded to see which algorithm is best. In terms of time consumption, the AFOPT (A Frequent Pattern Tree) algorithm had the advantage on most of the data sets, even though it suffers from a segmentation fault at low support values on the connect4 data set.

Venketesh et al. [9] presented a prediction model that builds a Precedence Graph by considering the characteristics of current websites in order to predict future user requests. The algorithm differentiates the relationship between primary objects (HTML) and secondary objects (e.g., images) when creating the prediction model.

Kasthuri et al. [10] show that the deduction of future references in predictive prefetching can be implemented on the basis of past references. The prediction engine can reside on either the client or the server side; in their context, it resides on the client side. It uses the set of past references to find correlations and initiates prefetching, anticipating the user's future requests for web documents based on previous requests.

Sharma and Dubey proposed a framework for web traffic reduction [11]. The paper presents a framework for prefetching and prediction in the web. According to the framework, previous web requests of the user are extracted from the proxy web log, strong rules are generated from this log using the FP Growth algorithm, and these rules are used to prefetch the upcoming requests of the current user.

Singh et al. [12] proposed a framework for predicting the web requests of users and, accordingly, prefetching the content from the server. The proposed framework improved the performance of a web proxy server using web usage mining and a prefetching scheme. They clustered the users according to their access patterns and usage behaviour with the help of the K-Means algorithm, and then


the Apriori algorithm was applied to generate rules for prefetching pages. This cluster-based approach was applied to proxy server web log data, and the results were tested using the LRU and LFU prefetching schemes.

Ford Lumban Gaol proposed web log sequential pattern mining using the Apriori-All algorithm. The experiment is conducted based on the idea of the Apriori-All algorithm, which first stores the original web access sequence database for storing non-sequential data. An important application of sequential mining techniques is web usage mining, for mining web log accesses, where the sequences of web page accesses made by different web users over a period of time, through a server, are recorded [13].

III. THE PROPOSED FRAMEWORK

Prefetching and caching schemes are used to improve web performance even when a large number of users are connected and demanding access to content over the web. In our proposed framework, a browser sends a request to the listener (a forwarding function) to set up a connection between the client and the proxy server. The listener then forwards the connection-establishment request to the proxy server via the TCP handshaking protocol. Once the connection is established, the proxy server sends a hint list to the predictor. The predictor maintains a web log history, which records the pages and content demanded, the number of hits of each web page, and the IP address of the user. The hints from the predictor are then fed to the FP (Frequent Pattern) Growth algorithm, which outputs compressed frequent item sets using the association rule mining approach. The proposed framework is shown in Figure 1.

Figure 1: Proposed Framework for web prefetching

3.1. Predictor

The predictor collects the hint list from the proxy server with the help of keywords from the user query. The hint list is essential for tracing the type of data the user is demanding; it is forwarded to the web log history and to the block where the FP Growth algorithm is applied.

3.2. Web Log History

The web log history is a very important aspect of prefetching. Web logs direct the prefetch engine to collect and cache particular types of data according to user demand. The web log maintains the user's session, the user's IP address, how long the user was connected, the type of information the user demanded, the most frequently visited pages, etc.
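As a rough illustration (not the authors' implementation), the Python sketch below groups a proxy access log into per-client request sequences, the kind of transactions that can later be mined with FP Growth; the Common Log Format and the file name are assumptions:

# Minimal sketch: turn a proxy access log into per-user transactions.
# Assumes Common Log Format lines such as:
#   192.168.0.7 - - [10/Jul/2014:10:01:02 +0000] "GET /index.html HTTP/1.1" 200 512
import re
from collections import defaultdict

LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

def transactions_from_log(path):
    # Returns {client_ip: [requested URLs in order]} from the log file.
    sessions = defaultdict(list)
    with open(path) as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match:
                ip, url = match.groups()
                sessions[ip].append(url)
    return sessions

# Hypothetical usage: transactions = transactions_from_log("proxy_access.log")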

3.3. FP Growth

The FP Growth approach produces frequent item sets based on a divide-and-conquer method; it is mainly used for mining frequent item sets without candidate generation. The major step in FP-growth is to compress the database of frequent items into an FP-tree. Building the FP-


tree requires two passes over the dataset. The FP-tree is then divided into a set of conditional databases, and association rule mining is applied to them to mine the frequent item sets from the FP-tree [8]. The FP Growth pseudo code is shown in Figure 2 [14].

Figure 2: FP-growth algorithm pseudo code [14]

FP_GROWTH
Input: FP-tree
Method: call FP-growth(FP-tree, null).
procedure FP-growth(Tree, α)
{
1) if Tree contains a single path P then
2) for each combination β of the nodes in P, generate pattern β ∪ α with support = minimum support of the nodes in β;
3) else, for each item ai in the header table of Tree, do {
4) generate pattern β = ai ∪ α with support = ai.support;
5) construct β's conditional pattern base and then β's conditional FP-tree Tree_β;
6) if Tree_β ≠ null
7) then call FP-growth(Tree_β, β); }
}
Output: the complete set of frequent patterns
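To experiment with FP Growth on such transactions, a minimal sketch using the open-source mlxtend library is shown below; the library choice and the page-visit transactions are assumptions for illustration, not part of the original work:

# Minimal sketch of mining frequent page sets with FP Growth.
# Uses the third-party mlxtend library (pip install mlxtend pandas).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Hypothetical per-user page-visit transactions from a web log.
transactions = [
    ["/index", "/news", "/sports"],
    ["/index", "/news"],
    ["/index", "/sports"],
    ["/news", "/sports"],
]

# One-hot encode the transactions, then run FP Growth directly
# (no candidate generation, unlike Apriori).
encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit(transactions).transform(transactions),
                      columns=encoder.columns_)
print(fpgrowth(onehot, min_support=0.5, use_colnames=True))

Item sets whose support exceeds the chosen threshold would then be candidates for the prefetch list passed to the predictor module.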

3.4. Proxy Cache

A proxy cache is used with the proxy server to store certain types of web data; the data obtained after prefetching and mining is fed to the proxy cache. A proxy cache works much like a cache in an operating system. When a user or client demands information through a web browser and the request passes through the proxy server, the proxy server looks for the response to that query in its cache memory. If the result the user is asking for is found in the proxy server's cache memory, it is called a hit; otherwise it is a miss. If no result is found in the cache, the query is forwarded to the main server to obtain the result.
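The hit/miss logic can be pictured with a minimal Python sketch; the class, the fetch_from_origin callback and the eviction policy are hypothetical placeholders, not the paper's implementation:

# Minimal sketch of a proxy cache with hit/miss accounting.
class ProxyCache:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.store = {}                  # url -> cached response body
        self.hits = self.misses = 0

    def get(self, url, fetch_from_origin):
        # Return the cached response, or fetch and cache it on a miss.
        if url in self.store:
            self.hits += 1               # hit: served from the proxy
            return self.store[url]
        self.misses += 1                 # miss: go to the main server
        body = fetch_from_origin(url)
        if len(self.store) >= self.capacity:
            self.store.pop(next(iter(self.store)))   # naive eviction
        self.store[url] = body
        return body

cache = ProxyCache()
page = cache.get("/index", lambda url: "<html>...</html>")   # miss
page = cache.get("/index", lambda url: "<html>...</html>")   # hit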

3.5. Handshaking Process

TCP is a connection-oriented protocol, but it does not establish a connection on its own; it uses a handshaking protocol for connection establishment. Connection establishment is an essential part of TCP, so that the subsequent session of queries and responses can be maintained. Handshaking protocols are of two types:

3.5.1. Two-way handshaking

In this type of handshaking, two kinds of messages are passed between client and server for connection establishment: a request message from host to server, and an accept/reject message from server to host.

3.5.2. Three-way handshaking

In this type of handshaking, three kinds of messages are passed between client and server for connection establishment: a SYN message from host to server, a SYN-ACK from server to host, and an ACK from host to server.
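For illustration, the operating system performs this SYN / SYN-ACK / ACK exchange transparently whenever a TCP client connects; in the minimal Python sketch below, the proxy host and port are hypothetical placeholders:

# Minimal sketch: connect() triggers the TCP three-way handshake.
import socket

PROXY_HOST, PROXY_PORT = "127.0.0.1", 8080    # hypothetical proxy address

with socket.create_connection((PROXY_HOST, PROXY_PORT), timeout=5) as sock:
    # Once create_connection() returns, the three-way handshake has
    # completed and the query/response session with the proxy can begin.
    sock.sendall(b"GET /index HTTP/1.1\r\nHost: example.org\r\n\r\n")
    reply = sock.recv(4096)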

IV. EXPERIMENTAL WORK

The result analysis is done using the online tool Data-Applied. Demo data from the tool is used, and FP Growth is applied to that pre-processed data set. The results show that pattern generation is fast and more accurate compared to existing methodologies. Web log data is huge, and



existing approaches such as Apriori are slow because they generate candidate key sets, whereas FP Growth does not use this intermediate step; this results in a fast pattern generation process. Figure 3 shows the online web data pre-processed using the FP Growth algorithm on the data set named Web Clickstream. It shows a log of data with various fields such as date, city, visits, page views, visit duration, origin, etc. Figure 4 shows the data set when association rules are applied to the Clickstream data; it shows the associations between various data elements. Figure 5 shows descriptive information about the Web Clickstream data in text format. It also shows the association rules, how the data is gathered, and the percentage of data that is a hit.

Figure 3: Online web data pre-processed using the FP Growth algorithm

Figure 4: Data set after applying association rules


Figure 5: Descriptive data of the Web Clickstream dataset after applying association rules

V. CONCLUSION

By using the FP Growth algorithm in association rule mining, we reduce the response time of most user queries, thereby increasing the proxy server hit ratio and improving the overall performance of the three-tier web architecture. The user query is forwarded to the proxy server through the listener; the predictor maintains the web log history and passes the hints to the FP Growth block, where association rule mining is applied to the loaded data set. The proposed framework helps in reducing network congestion, and it also helps in securing the main server: by increasing the hit ratio at the proxy server, we keep users away from the main server to some degree, which can also reduce DOS (Denial of Service) attacks to some extent. The framework also reduces the time for which a network channel is allocated to a user in TCP; as soon as a user gets the results of its queries, the allocated channel becomes free and network traffic is reduced. The merits of this framework include an increase in the proxy server cache hit ratio. The simulation of the framework was performed using the Data-Applied tool on the Web Clickstream data set collected from data-applied.com. An actual realization of this framework would result in an improved network architecture.

VI. FUTURE WORK

In this framework, the hit ratio changes as the proxy server cache size and threshold value are varied. We are now focusing on the realization of this framework under various circumstances and varying network conditions, particularly when network channels are heavily loaded. We also plan to investigate the performance of the framework in other networks, such as mobile networks, where mobility can affect performance.


REFERENCES

[1]. R. Manjusha, R. Ramachandran, (2011) "Web Mining Framework for Security in E-commerce", IEEE International Conference on Recent Trends in Information Technology (ICRTIT 2011), MIT, Anna University, Chennai, June 3-5, 2011.
[2]. Pallis G., A. Vakali and J. Pokorny, (2008) "A clustering-based prefetching scheme on a Web cache environment", Computers and Electrical Engineering 34, Elsevier, pp. 309-323.
[3]. N. Sharma and S. K. Dubey, (2013) "Fuzzy C-means clustering based prefetching to reduce web traffic", International Journal of Advances in Engineering & Technology, vol. 6, issue 1, pp. 426-435, ISSN: 2231-1963.
[4]. R. Divya and V. S. Kumar, (2012) "Survey on AIS, Apriori and FP-tree algorithms", International Journal of Computer Science and Management Research, vol. 1, issue 2, pp. 194-200, ISSN 2278-733X.
[5]. Kosala R. and Blockeel H., (2000) "Web Mining Research: A Survey", ACM SIGKDD Explorations Newsletter, June 2000, Volume 2, Issue 1.
[6]. Chang C., Lui S., Wu Y., (2001) "Applying Pattern Mining to Web Information Extraction", Advances in Knowledge Discovery and Data Mining, Springer.
[7]. N. Sharma and S. K. Dubey, (2012) "A Hand to Hand Taxonomical Survey on Web Mining", International Journal of Computer Applications, vol. 60, issue 3, ISSN 0975-8887.
[8]. A. M. Said, P. D. D. Dominic and A. B. Abdullah, (2009) "A Comparative Study of FP-growth Variations", International Journal of Computer Science and Network Security (IJCSNS), vol. 9, issue 5, pp. 266-272.
[9]. P. Venketesh and R. Venkatesan, (2011) "Graph based Prediction Model to Improve Web Prefetching", International Journal of Computer Applications (0975-8887), Volume 36, No. 10.
[10]. I. Kasthuri, M. A. Ranjit Kumar, K. Sudheer Babu, and S. S. S. Reddy, (2012) "An Advance Testimony for Weblog Prefetching Data Mining", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 2, Issue 4.
[11]. N. Sharma and S. K. Dubey, (2013) "FP tree use in prefetching", Proc. of Int. Conf. on Advances in Computer Science, AETACS, Elsevier, 2nd December 2013, pp. 555-561.
[12]. N. Singh, A. Panwar and R. S. Raw, (2013) "Enhancing the Performance of Web Proxy Server through Cluster Based Prefetching Techniques", International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2013, ISBN 978-1-4799-2432-5.
[13]. Ford Lumban Gaol, (2010) "Exploring The Pattern of Habits of Users Using Web Log Sequential Pattern", Second International Conference on Advances in Computing, Control, and Telecommunication Technologies, IEEE.
[14]. Kuldeep Malik, Neeraj Raheja and Puneet Garg, (2011) "Enhanced FP-Growth Algorithm", IJCEM, Vol. 12, April 2011.

AUTHORS

Devender Banga is pursuing an M.Tech. at SGT Institute of Engineering and Technology, Gurgaon, India. His current research area is web usage mining.

Sunitha Cheepurisetti is an Assistant Professor in the Department of Computer Science Engineering, SGT Institute of Engineering and Technology, Gurgaon, India. Her research areas are operating systems, networks and security. She has more than 6 publications in various national and international journals.


SEARCH NETWORK FUTURE GENERATION NETWORK FOR

INFORMATION INTERCHANGE

G. S. Satisha
IMPACT College of Engineering & Applied Sciences, Bangalore, India

ABSTRACT

With the turn of the century, communication technologies started to evolve in ways that forever changed how users connect with the world. Broadband quickly replaced slower dial-up connections, and wireless technology spread. Ten years later, users, from consumers to enterprises, see broadband and wireless technologies as integral to what they do every day. Consumers and workers expect an "always on" connection to their world, whether that is the ability to instantly share images across the country or to analyze supply chain issues in near real-time from virtually anywhere around the world. Businesses and governments realize the value of networks in providing solutions to many of their greatest challenges, especially integrating systems. Networks are fundamental to innovation in communications. The past decade saw significant advancements; innovation in the next decade promises to be just as impactful.

I. INTRODUCTION

Over the next decade, network providers will pay close attention to consumer communication and network technology trends, as these shape the workplace. Users will expect the same experience wherever they are, with whatever they are doing. This expectation will drive the need for new solutions, creating new opportunities for organizations that respond. Broadband, wireless, and global IP technologies will be at the heart of global economic growth. To meet future needs, today's networks are already undergoing major changes. Advancements in the way technology components relate, including the move toward a more service-oriented architecture, provide increased bandwidth flexibility and more rapid provisioning of network services, and put much more control in the hands of the enterprise. These network changes, along with user demand, produce trends that reduce today's challenges and introduce new solutions for tomorrow.

To enable major new communication capabilities, intelligence will be embedded in networks. Embedded intelligence will make network control more automated, dynamic, and flexible. In the coming decade, communication between network components will not be tightly tied to specific hardware, but will instead reside in an interoperable, standards-based control layer, so that newly added network components, like switches and multiplexers, can signal themselves to the other network components, allowing dynamic configuration. Networks are also moving toward being more effectively application aware, meaning they will provide different service levels depending on the application. Application-aware search networks enable organizations to achieve prioritized levels of performance for private IP network applications such as VoIP, enterprise resource planning (ERP), and video. Proper network assessment, reporting, dynamic bandwidth, and packet-marking tools let organizations closely monitor performance, make adjustments, and achieve cost efficiencies. This paper discusses future network expectations and how the search tree network can accommodate future traffic and future needs.

The paper is organized into sections on pervasive bandwidth, pervasive IP connectivity, purpose-built solutions, and future work.


II. PERVASIVE BANDWIDTH

In the years ahead, multimedia applications will continue to increase. The surge includes 3-D video, video sharing, biometric analysis, conferencing, and streaming, all in higher definition than is possible today. As an example of the trend, a 2010 volcanic eruption in Iceland caused a sharp rise in video conferencing. Globally, mobile data traffic will double every year through 2014 and will grow to 3.6 exabytes per month. The increased volume will cause data traffic to skyrocket and require capacity not currently available. In the past, bandwidth available to a user at a given site came only in coarse increments, selected at the time of provisioning. Enterprises upgrading to higher capacity faced costly projects that could take months to plan and implement. Increasingly, network technologies such as bandwidth deployment over fiber optic cables and via Ethernet access networks will allow rapid upgrades, often without the need for long projects, costly site visits, and physical construction. On-demand bandwidth capability is available today, though it is not yet widely used; over the next decade, its use will increase dramatically. Organizations will still need to be conscious of bandwidth needs when designing networks, but will find far greater flexibility after the initial deployment. Greater flexibility allows easy adjustments that more readily meet changing business needs.

The growth of bandwidth-intensive applications will make dynamic and on-demand bandwidth capabilities routine. In many cases, businesses will be able to upgrade bandwidth almost instantly, without human intervention, triggered by consumption patterns and parameters set in advance. Organizations may pay for underlying capacity and then for peak bandwidth and data transfer utilization, in a model similar to today's commercial electricity suppliers. Wireless will realize the most impactful bandwidth milestone. Fourth-generation (4G) search networks will bring broadband capacity to mobile devices at rates approaching, and potentially surpassing, 10 times the current capacity. The first commercial 4G wireless network was deployed in Scandinavia in late 2009 using Long Term Evolution (LTE) technology, and large-scale deployments began in the U.S. in 2010. LTE not only provides customers with true broadband speeds, it will also embed wireless connections in cars, buildings, machines, and appliances, enabling what some people call the "Internet of things." Verizon Wireless was among the first carriers in the U.S. to launch LTE [9]. Trials in Boston and Seattle demonstrate that the network is capable of peak download speeds of 40 to 50 megabits per second (Mbps) and peak upload speeds of 20 to 25 Mbps, though average real-world rates are expected to be 5 to 12 Mbps on the downlink and 2 to 5 Mbps on the uplink. Based on internal estimates, aggregate growth in wireless data carried by Verizon will skyrocket after LTE is deployed, with an increase of more than 2,000 percent between now and the end of 2017. The search network allows users to transmit and receive information via successive generations such as 3G, 4G and 5G.

III. PERVASIVE IP CONNECTIVITY

As technology platforms continue to evolve, the barriers between wireless and wired networks and

devices around the world will eventually disappear. Consumers and businesses will expect their

applications to move seamlessly between platforms, no matter which network they’re connected to at

the moment. They will also demand access to all of their content, regardless of where it is stored, anytime, anywhere, and on any device. This trend is already underway with fixed mobile convergence (FMC), where mobile phones transparently switch a voice call in progress between the cellular network and VoIP. The goal of the search network is to provide a seamless transition of voice, data, and

even video communications between different types of networks, no matter the location or what

device is used, providing the user with an optimized, always available experience. Soon it will be

commonplace to continually watch a television show or video presentation while moving between

devices.

The move toward 4G technology is pushing networks closer toward FMC. In the near term, LTE will

enable billions, perhaps trillions, of devices to connect. Wireless sensors will then be integrated into everyday items, such as household appliances and medical monitoring equipment, and businesses will widely deploy Machine-to-Machine (M2M) wireless communications solutions to track assets, remotely monitor inventory, and ensure that distant equipment is operating properly. These types of sensors


will provide data that suggests the need for proactive maintenance, or instantly report service interruptions. LTE will also enable a new generation of broadband wireless applications.

IV. PURPOSE-BUILT SOLUTIONS

A trend in the next decade will be purpose-built networks that solve particular business requirements.

By separating the network functions or services from the technology, businesses can specify a custom

network to suit particular business needs. In the past, a physical network would have been built to

accomplish this. Virtualization of these services will make it possible to create a logical network

without building a physical network.

The industry is moving away from the slow and awkward methods of adding point-to-point links that

result in a tangle of lines and excess equipment. Many are choosing VPNs that run over the public

Internet or private clouds based on Multi-Protocol Label Switching (MPLS) network technologies.

VPNs and private clouds let organizations customize their network solution by specifying the type

and level of security required to meet the business and regulatory needs, the bandwidth required, and

the data storage features.

Through 2022, the VPN and private cloud trends will continue. While lingering concerns about

availability and security will encourage many enterprises to participate in private cloud-based

interconnections for key relationships, the flexibility of Internet-based connections has enormous and growing appeal. However, even in industries like financial services, where the largest players

tend to be late adopters of many technologies, a marked shift away from physical private networks

and toward MPLS-based private cloud networks exists.

It is not too difficult to imagine different types of virtual industry markets and exchanges developing as a result of the expected changes [9]. On the energy horizon is the smart grid. By providing two-way communication between the user and the utility company, and between utility companies, an energy market is taking shape that allows more fluid pricing, encourages high-value conservation at times of peak demand, and will let users sell electricity to other users. In the healthcare field, health information exchanges are forming as a way to securely share patient information across different medical facilities. These exchanges help doctors avoid repeat patient tests, reduce missing patient data, and increase the quality of care. Early in 2009, one exchange network elected to move to an open connectivity model for its Nordic markets by replacing its proprietary participant network with the Verizon Financial Network (VFN). The VFN is a dedicated and purpose-built business infrastructure specifically designed to share market data and execute timely trades. VFN offers financial services customers a fully managed and supported, highly scalable, low-latency, end-to-end interface to the financial services ecosystem.

V. THE FUTURE WORK

By the year 2022, communications networks will be an even more integral part of everyday life than they are today, both at home and in the workplace. Network-driven technology will be a key enabler of daily activities, yet it will become more transparent to the user. No longer will the user care about how it works, just that it works. The network of tomorrow [8] will produce a hyper-connected environment. Intelligence will be built into the fabric of everything imaginable, and some things not yet imagined, all enabled by pervasive communications technologies. While many of the advancements that will be commonplace in 2022 are already taking shape, a few seem to be straight out of a science fiction novel. These innovations will impact everyone and everything. For example:

Apparel.

Various wearable devices such as glasses or visors with built-in cameras and video displays will both

record and transmit information. Inconspicuous displays will send streaming information to the user,

such as a restaurant menu as a diner walks by a restaurant or tech support through a virtual reality

demonstration. Gaming vests will provide force feedback as part of an augmented reality

experience.

Home.


The fiber-enabled smart home will be a platform for managing every function of the digital

ecosystem, from home security and energy management to medical monitoring, telework and distance

learning. Refrigerators will have a touch surface that displays grocery lists and coupons, and the

ability to track contents for real-time "inventory" control. Even carpets will be smarter, tracking the

health of the elderly by sensing erratic movements that may predict a fall.

Energy.

Household appliances can already be remotely controlled to run at off-peak times. San Diego Gas and

Electric found that if 80 percent of its customers used their washers and dryers at off-peak times, it

could eliminate two power plants. Energy pricing will become dynamic, changing in real-time,

motivating users to be more energy efficient.

Healthcare.

A patient’s wireless device will receive reminders about medication and therapy, in-home devices

will provide daily monitoring of vital statistics for preventative care, and patients will consult with

out-of-town doctors and specialists over high-definition 3-D video connections. Many people will

have body sensors, tiny wireless devices intended to track their vital signs.

Government.

City-centric applications will report traffic and parking conditions using GPS-enabled sensors that provide real-time notifications for public transit, and will even monitor the city's air and water quality. Crime detection will be aided by context-aware video surveillance that reports unusual activity.

Civil services will be tailored to the individual, supported by the full integration of government

systems.

Enterprise.

Using the cloud model, traditional businesses will sell their internal capabilities as services that are

separate and distinct from their regular business offering, just as Amazon does today with their Web-

store infrastructure. RFID tags will become multifunctional sensors that not only provide item

location but also item health, which is useful for tracking food shipments. The percentage of teleworkers with no fixed regular work location will grow significantly, with the ability to work and video conference on one device that can connect anywhere. The combination of increasingly powerful and intelligent networks and innovative applications and devices will create a whole new way to run a home, an enterprise, a community, or an economy.

VI. CONCLUSION

Users already have high expectations of their communication technology. This will continue to the point of dependence. Users will come to expect always-on access to the Internet that supports their lifestyles in every way. None of this would be possible without the foundation of solid and advanced communication networks. In the future, network providers will continue to drive open, IP-based technical standards that allow new technologies to work together. To drive the solutions and services of 2022, network providers will form alliances with other providers and partnerships with application developers and device makers. Unprecedented access to real-time data, combined with communication platforms that are available anywhere and anytime, will not only increase the rate of change for existing models, but will substantially increase the pressures of global competition. Having identified potential opportunities, the team should then conduct forward-looking pilot projects. While some opportunities may not prove fruitful, the ones that do will create a new competitive advantage. Broadband, wireless, and global IP technologies will be at the heart of coming economic growth. The evolution of the network over the next decade will not only enable new products and services; it is also important because of the vital role broadband must play in advancing key societal goals in areas like education, health care, energy, public safety, democracy, and small-business opportunity. The future search tree network should be able to handle large traffic volumes, transmit and receive image data from any mobile network, and be compatible with high-speed traffic. It should handle traffic from different generations and pass data across different network layers.


REFERENCES

[1]. N. R. French and J. C. Steinberg, "Factors Governing the Intelligibility of Speech Sounds", The Journal of the Acoustical Society of America, 19(1) (1947): 90.
[2]. A. H. Inglis, "Transmission Features of the New Telephone Sets", Bell System Technical Journal 17 (1938): 358-380.
[3]. Anthony Rix and Mike Hollier, "Perceptual speech quality assessment from narrowband telephony to wideband audio", AES 107th Convention, New York, 24-27 September 1999.
[4]. Forge, Simon, Colin Blackman and Erik Bohlin (2005) "The Demand for Future Mobile Communications Markets and Services in Europe", Technical Report EUR 21673 EN, April.
[5]. Hazlett, Thomas W. (2006) "An economic evaluation of spectrum allocation policy", in Richards, Foster, and Kiedrowski, pp. 249-258.
[6]. "Evolution of Mobile Wireless Communication Networks: 1G to 5G as well as Future Prospective of Next Generation Communication Network", IJCSMC, Vol. 2, Issue 8, August 2013, pp. 47-53.
[7]. Naik, G., Aigal, V., Sehgal, P. and Poojary, J. (2012) "Challenges in the implementation of Fourth Generation Wireless Systems", International Journal of Engineering Research and Applications (IJERA) 2(2): 1353-1355.
[8]. Singh, S. and Singh, P. (2012) "Key Concepts and Network Architecture for 5G Mobile Technology", International Journal of Scientific Research Engineering & Technology (IJSRET) 1(5): 165-170.
[9]. Hrikhande, S. and Mulky, E. (2012) "Fourth Generation Wireless Technology", Proceedings of the National Conference "NCNTE-2012" at Fr. C.R.I.T., Vashi, Navi Mumbai: 24-25.
[10]. UMTS World (2009) "UMTS/3G History and Future Milestones". [Online] Available: http://www.umtsworld.com/umts/history.htm

AUTHORS BIOGRAPHY

The author is currently working in the Electronics and Communication Engineering Department as HOD and Professor.


A BRIEF SURVEY ON BIO INSPIRED OPTIMIZATION

ALGORITHMS FOR MOLECULAR DOCKING

Mayukh Mukhopadhyay
Tata Consultancy Services, Kolkata-700091, India

ABSTRACT

The idea in molecular docking is to design pharmaceuticals computationally by identifying potential drug

candidates targeted against proteins. The candidates can be found using a docking algorithm that tries to

identify the bound conformation of a small molecule ligand to a macromolecular target at its active site, which

is the region of an enzyme where natural substrate binds. Mathematically, molecular docking can be formulated

as an optimization problem in which the objective is to minimize the intermolecular bound conformational

energy of two interacting molecules. In this brief survey, after a concise introduction on nature inspired

computing, the application of bio inspired optimization algorithms in molecular docking has been studied.

KEYWORDS: Bio-Inspired Algorithm, Differential Evolution, Protein-Ligand Docking Problem

I. NATURE INSPIRED COMPUTING

1.1 Introduction

Nature Inspired Computing (NIC) aims to develop new computing techniques based on ideas obtained by observing how nature behaves in various situations to solve complex problems. A typical NIC system is based on self-organization and complex systems. It is a computing system operated by a population of autonomous entities surrounded by an environment.

An autonomous entity in a NIC system consists of two kinds of devices: detectors and effectors [25]. There may be one or more detectors that receive information about the entity's neighbours and its environment. The information obviously depends on the system to be modeled or the problem to be solved. There may be multiple effectors that make changes to the internal state, exhibit certain behaviours, and make changes to the environment. Basically, the effectors facilitate the sharing of information among autonomous entities. A NIC system has a repository of local behaviour rules. The rules of behaviour are crucial to an autonomous entity: they are used to decide how the entity must act or react to the information collected by the detectors from the environment and neighbours. An autonomous entity should be capable of learning; it must respond to local changing conditions by modifying its rules of behaviour over time.
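A minimal sketch of such an autonomous entity, with detectors, effectors and a modifiable rule repository, is given below; the class name and the rule format are invented for illustration and are not taken from [25]:

# Minimal sketch of a NIC autonomous entity (names are illustrative).
class AutonomousEntity:
    def __init__(self, rules):
        self.state = {}
        self.rules = rules               # repository of local behaviour rules

    def detect(self, environment, neighbours):
        # Detector: gather local information from environment and neighbours.
        return {"env": environment, "peers": neighbours}

    def act(self, observation):
        # Effectors: update the internal state according to matching rules.
        for condition, action in self.rules:
            if condition(observation):
                action(self.state, observation)

    def learn(self, keep_rule):
        # Adapt: modify the rules of behaviour over time (trivial stub).
        self.rules = [rule for rule in self.rules if keep_rule(rule)]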

Autonomous entities have ADEAS (Autonomous, Distributed, Emergent, Adaptive, Self-organized)

characteristics [25]. Autonomous entities are independent and rational. They use formal computing

methods to describe how entities acquire and improve reactive behavior. They have decision making

capabilities and are distributed in the environment. They follow predefined protocols and interact with

each other to exchange their state information. New complex behaviors emerge when the entities act

collectively. The entities respond to changes in the environment by changing their behavior. By

interacting with each other, the entities self-organize to fine-tune their behaviour.

The environment may be static or dynamic. In a static environment, autonomous entities are free to roam. A dynamic environment acts as a notice board, allowing autonomous entities to post and read local information. A central clock helps synchronize the actions of the autonomous entities.

By far the majority of nature-inspired algorithms are based on some successful characteristics of

biological systems [22]. Therefore, the largest fraction of nature-inspired algorithms is biology-

inspired, or bio-inspired for short.


Among bio-inspired algorithms, a special class of algorithms has been developed by drawing

inspiration from swarm intelligence. Therefore, some of the bio-inspired algorithms can be called

swarm-intelligence based. In fact, algorithms based on swarm intelligence are among the most

popular. Good examples are ant colony optimization, particle swarm optimization, cuckoo search, bat

algorithm, and firefly algorithm.

Obviously, not all algorithms are based on biological systems; many have been developed by drawing inspiration from physical and chemical systems, and some may even be based on music. In the next section, we briefly divide all algorithms into different categories. We do not claim that this categorization is unique, but it is a good attempt to provide sufficiently detailed references.

1.2 Classification

We can divide all existing nature inspired algorithms into four major categories: swarm intelligence

(SI) based, bio-inspired (but not SI-based), physics/chemistry-based, and others. We will summarize

them briefly in the rest of this section. However, we will focus here on the relatively new algorithms.

Well established algorithms such as genetic algorithms are so well known that there is no need to

introduce them in this brief paper. It is worth pointing out the classifications here are not unique as

some algorithms can be classified into different categories at the same time.

Loosely speaking, classifications depend largely on what the focus or emphasis and the perspective

may be. For example, if the focus and perspective are about the trajectory of the search path,

algorithms can be classified as trajectory based and population-based. Simulated annealing is a good

example of trajectory-based algorithms, while particle swarm optimization and firefly algorithms are

population-based algorithms.

If our emphasis is placed on the interaction of the multiple agents, algorithms can be classified as

attraction-based or non-attraction-based. Firefly algorithm (FA) is a good example of attraction based

algorithms because FA uses the attraction of light and attractiveness of fireflies, while genetic

algorithms are non-attraction-based since there is no explicit attraction used.

On the other hand, if the emphasis is placed on the updating equations, algorithms can be divided into

rule-based and equation-based. For example, particle swarm optimization and cuckoo search are

equation based algorithms because both use explicit updating equations, while genetic algorithms do

not have explicit equations for crossover and mutation. However, in this case, the classifications are

not unique. For example, firefly algorithm uses three explicit rules and these three rules can be

converted explicitly into a single updating equation which is nonlinear.

This clearly shows that classifications depend on the actual perspective and motivations. Therefore,

the classifications here are just one possible attempt, though the emphasis is placed on the sources of

inspiration.

1.2.1 Swarm intelligence based

Swarm intelligence (SI) concerns the collective, emerging behaviour of multiple, interacting agents

who follow some simple rules [22]. While each agent may be considered as unintelligent, the whole

system of multiple agents may show some self-organization behaviour and thus can behave like some

sort of collective intelligence. Many algorithms have been developed by drawing inspiration from

swarm-intelligence systems in nature.

All SI-based algorithms use multi-agents, inspired by the collective behaviour of social insects, like

ants, termites, bees, and wasps, as well as from other animal societies like flocks of birds or fish. A

list of swarm intelligence algorithms is presented in Table 1.

The classical particle swarm optimization (PSO) uses the swarming behaviour of fish and birds, while

firefly algorithm (FA) uses the flashing behaviour of swarming fireflies. Cuckoo search (CS) is based

on the brooding parasitism of some cuckoo species, while bat algorithm uses the echolocation of

foraging bats. Ant colony optimization uses the interaction of social insects (e.g., ants), while the class

of bee algorithms are all based on the foraging behaviour of honey bees.

SI-based algorithms are among the most popular and widely used. There are many reasons for such popularity; one is that SI-based algorithms usually share information among multiple agents, so that self-organization, co-evolution and learning during iterations may help to provide the high efficiency of most SI-based algorithms. Another is that multiple agents can be parallelized easily, so that large-scale optimization becomes more practical from the implementation point of view.


1.2.2 Bio-inspired but not SI-based

Obviously, SI-based algorithms belong to a wider class of algorithms, called bio-inspired algorithms.

In fact, bio-inspired algorithms form a majority of all nature-inspired algorithms. From the set theory

point of view, SI-based algorithms are a subset of bio-inspired algorithms, while bio-inspired

algorithms are a subset of nature-inspired algorithms [22]. That is

SI-based ⊂ bio-inspired ⊂ nature-inspired (1)

Conversely, not all nature-inspired algorithms are bio-inspired, and some are purely physics and

chemistry based algorithms as we will see below. Many bio-inspired algorithms do not use directly

the swarming behaviour. Therefore, it is better to call them bio-inspired, but not SI-based. For

example, genetic algorithms are bio-inspired, but not SI-based. However, it is not easy to classify

certain algorithms such as differential evolution (DE). Strictly speaking, DE is not bio-inspired

because there is no direct link to any biological behaviour. However, as it has some similarity to

genetic algorithms and also has the key word 'evolution', we tentatively put it in the category of bio-inspired algorithms. These relevant algorithms are listed in Table 1.

For example, the flower algorithm or flower pollination algorithm developed by Xin-She Yang in

2012 is a bio-inspired algorithm, but it is not a SI-based algorithm because flower algorithm tries to

mimic the pollination characteristics of flowering plants and the associated flower consistency of

some pollinating insects.

1.2.3 Physics and Chemistry based

Not all metaheuristic algorithms are bio-inspired, because their sources of inspiration often come from

physics and chemistry. For the algorithms that are not bio-inspired, most have been developed by

mimicking certain physical and/or chemical laws, including electrical charges, gravity, river systems,

etc. As different natural systems are relevant to this category, we could even subdivide these into many subcategories, though this is not necessary. A list of these algorithms is given in Table 1.

Schematically, we can represent the relationship of the physics and chemistry based algorithms as follows [22]:

physics/chemistry-based ⊂ nature-inspired, but physics/chemistry-based ⊄ bio-inspired (2)

Though physics and chemistry are two different subjects, it is not useful to subdivide this subcategory further into physics-based and chemistry-based. After all, many fundamental laws are the same, so we simply group them as physics and chemistry based algorithms.

1.2.4 Other algorithms

When researchers develop new algorithms, some may look for inspiration away from nature.

Consequently, some algorithms are not bio-inspired or physics/chemistry-based, and it is sometimes difficult to place them in the above three categories, because these algorithms have been developed using various characteristics from different sources, such as social or emotional behaviour. In such cases, it is better to put them in the other category, listed in Table 1 [22].

1.2.5 Some Remarks

Though the sources of inspiration are very diverse, the algorithms designed from them may be equally diverse. However, care should be taken, as true novelty is a rare thing. For example, there are about 28,000 living species of fish; this cannot mean that researchers should develop 28,000 different algorithms based on fish. Therefore, one cannot call their algorithms the trout algorithm, the squid algorithm or the shark algorithm [22].

It is worth pointing out that studies show some algorithms to be better than others, though it is still not quite understood why. However, if one looks closely at the intrinsic part of algorithm design, some algorithms are badly designed and lack certain basic capabilities such as mixing and diversity among the solutions. In contrast, good algorithms have both mixing and diversity control, so that the algorithm can explore the vast search space efficiently while converging relatively quickly when necessary. Good algorithms such as particle swarm optimization, differential evolution, cuckoo search and the firefly algorithm all have both global search and intensive local search capabilities, which may be partly why they are so efficient.


Table 1. A list of algorithms

Swarm intelligence based algorithms (Algorithm: Author):
Accelerated PSO: Yang et al.; Ant colony optimization: Dorigo; Artificial bee colony: Karaboga and Basturk; Bacterial foraging: Passino; Bacterial-GA Foraging: Chen et al.; Bat algorithm: Yang; Bee colony optimization: Teodorović and Dell'Orco; Bee system: Lucic and Teodorovic; BeeHive: Wedde et al.; Bees algorithms: Pham et al.; Bees swarm optimization: Drias et al.; Bumblebees: Comellas and Martinez; Cat swarm: Chu et al.; Consultant-guided search: Iordache; Cuckoo search: Yang and Deb; Eagle strategy: Yang and Deb; Fast bacterial swarming algorithm: Chu et al.; Firefly algorithm: Yang; Fish swarm/school: Li et al.; Glowworm swarm optimization: Krishnanand and Ghose; Good lattice swarm optimization: Su et al.; Hierarchical swarm model: Chen et al.; Krill Herd: Gandomi and Alavi; Monkey search: Mucherino and Seref; Particle swarm algorithm: Kennedy and Eberhart; Virtual ant algorithm: Yang; Virtual bees: Yang; Weightless Swarm Algorithm: Ting et al.; Wolf search: Tang et al.

Bio-inspired (not SI-based) algorithms (Algorithm: Author):
Atmosphere clouds model: Yan and Hao; Biogeography-based optimization: Simon; Brain Storm Optimization: Shi; Differential evolution: Storn and Price; Dolphin echolocation: Kaveh and Farhoudi; Eco-inspired evolutionary algorithm: Parpinelli and Lopes; Egyptian Vulture: Sur et al.; Fish-school Search: Lima et al.; Flower pollination algorithm: Yang; Gene expression: Ferreira; Great salmon run: Mozaffari; Group search optimizer: He et al.; Human-Inspired Algorithm: Zhang et al.; Invasive weed optimization: Mehrabian and Lucas; Japanese tree frogs calling: Hernández and Blum; Marriage in honey bees: Abbass; OptBees: Maia et al.; Paddy Field Algorithm: Premaratne et al.; Queen-bee evolution: Jung; Roach infestation algorithm: Havens; Shuffled frog leaping algorithm: Eusuff and Lansey; Termite colony optimization: Hedayatzadeh et al.

Physics and chemistry based algorithms (Algorithm: Author):
Big bang-big crunch: Zandi et al.; Black hole: Hatamlou; Central force optimization: Formato; Charged system search: Kaveh and Talatahari; Electro-magnetism optimization: Cuevas et al.; Galaxy-based search algorithm: Shah-Hosseini; Gravitational search: Rashedi et al.; Harmony search: Geem et al.; Intelligent water drop: Shah-Hosseini; River formation dynamics: Rabanal et al.; Self-propelled particles: Vicsek; Simulated annealing: Kirkpatrick et al.; Spiral optimization: Tamura and Yasuda; Stochastic diffusion search: Bishop; Water cycle algorithm: Eskandar et al.

Other algorithms (Algorithm: Author):
Anarchic society optimization: Shayeghi and Dadashpour; Artificial cooperative search: Civicioglu; Backtracking optimization search: Civicioglu; Differential search algorithm: Civicioglu; Grammatical evolution: Ryan et al.; Imperialist competitive algorithm: Atashpaz-Gargari and Lucas; League championship algorithm: Kashan; Social emotional optimization: Xu et al.

1.3 Applications

Nature-inspired computing techniques are so flexible that they can be applied to a wide range of problems, so adaptable that they can deal with unseen data and are capable of learning, and so robust that they can handle incomplete data. They also have decentralized control of computational activities.

Biologically inspired computing is a subset of nature-inspired computing. There are three key differences between traditional computing systems and biological information processing systems: components of biological systems respond slowly but implement much higher-level operations; the ability of biological systems to assemble and grow on their own enables much higher interconnection densities; and the implementation of biological systems is not a planned one.

One who expects solutions from nature for complex problems has first to observe nature's behaviour carefully. The next step is to model and list all the behaviours observed so far. These steps should be repeated until a near-perfect working model is obtained; as a by-product, some unknown mechanisms may be found. Based on the observations from nature, a problem-solving strategy is formulated. Two main computational applications of NIC are clustering and optimization.

1.3.1 Clustering

Clustering is the unsupervised classification of patterns (observations, data items or feature vectors)

into groups (clusters). A loose definition of clustering could be the process of organizing objects into

groups whose members are similar in some way.

A cluster is therefore a collection of objects which are "similar" to one another and "dissimilar" to the objects belonging to other clusters.

The goal of clustering is to determine the intrinsic grouping in a set of unlabeled data. It can be shown that there is no absolute "best" criterion that is independent of the final aim of the clustering. Consequently, it is the user who must supply this criterion, in such a way that the result of the clustering suits their needs. For instance, we could be interested in finding representatives for homogeneous groups (data reduction), finding "natural clusters" and describing their unknown properties ("natural" data types), finding useful and suitable groupings ("useful" data classes), or finding unusual data objects (outlier detection).
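For instance, the well-known k-means procedure makes such a similarity criterion explicit as distance to the nearest cluster centre. The minimal sketch below (Python; the one-dimensional data and the choice k = 2 are arbitrary illustrations, not from any cited system) shows the assign-then-update loop that most partitional clustering algorithms share:

import random

def kmeans(points, k=2, iters=20):
    """Group 1-D points into k clusters by nearest centroid."""
    centroids = random.sample(points, k)          # initial guesses
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                          # assignment step
            nearest = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]   # update step
                     for i, c in enumerate(clusters)]
    return clusters

data = [1.0, 1.2, 0.8, 5.1, 4.9, 5.3]
print(kmeans(data))    # two "natural" groups, around 1.0 and around 5.1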

1.3.2 Optimization

An optimization problem is the problem of finding the best solution from all feasible solutions.

Optimization problems can be divided into two categories depending on whether the variables are

continuous or discrete.

Classification of optimization algorithms can be carried out in many ways. A simple way is to look at the nature of the algorithm, which divides algorithms into two categories: deterministic algorithms and stochastic algorithms. Deterministic algorithms follow a rigorous procedure, and their paths and the values of both the design variables and the objective function are repeatable. Stochastic algorithms, on the other hand, always involve some randomness, and an individual path towards a feasible solution is not exactly repeatable.
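The distinction is easy to see in code. In the sketch below (Python; the objective function, step size and search bounds are arbitrary demonstration choices), the greedy descent retraces exactly the same path from a given start on every run, while the stochastic search samples different points each time:

import random

def f(x):                                   # objective to minimize
    return (x - 2.0) ** 2 + 1.0

def deterministic_descent(x, step=0.01, iters=5000):
    """Repeatable: the same start always yields the same path and result."""
    for _ in range(iters):
        x = min((x - step, x, x + step), key=f)
    return x

def stochastic_search(iters=5000, lo=-10.0, hi=10.0):
    """Not repeatable: each run follows a different random path."""
    best = random.uniform(lo, hi)
    for _ in range(iters):
        cand = random.uniform(lo, hi)
        if f(cand) < f(best):
            best = cand
    return best

print(deterministic_descent(8.0))   # identical on every run
print(stochastic_search())          # near 2.0, but varies from run to run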

II. MOLECULAR DOCKING

2.1 Purpose and Motivation

Proteins found in nature have evolved by natural selection to perform specific functions. The

biological function of a protein is linked to its three-dimensional structure or conformation.

Consequently, protein functionality can be altered by changing its structure.

The idea in molecular docking is to design pharmaceuticals computationally by identifying potential

drug candidates targeted against proteins. The candidates can be found using a docking algorithm that

tries to identify the bound conformation of a small molecule, also referred to as ligand, to a

macromolecular target at its active site, which is the region of an enzyme (large proteins that catalyse

chemical reactions) where the natural substrate (specific molecule an enzyme acts upon) binds.

Often, the active site is located in a cleft or pocket in the protein’s tertiary structure. The structure and

stereochemistry (spatial arrangement of the atoms) at the active site complements the shape and

physical/chemical properties of the substrate so as to catalyse a particular reaction. The purpose of

drug discovery is thus to derive drugs or ligands that bind more strongly to a given protein target than the natural substrate. By doing this, the biochemical reaction that the enzyme catalyses can be altered or prevented.

Until recently, drugs were discovered largely by chance, in a trial-and-error manner, using high-throughput screening methods that experimentally test a large number of compounds for activity against the target in question. This process is very expensive and time consuming. If a three-dimensional structure of the target exists, simulated molecular docking can be a useful tool in the drug discovery process, because it allows many possible lead candidates to be tested before committing expensive resources to wet-lab experiments (synthesis), toxicological testing, bioavailability studies and clinical trials.

The sections below focus on the complexity of simulating the docking of a ligand into the active site of a target protein, and give a brief review of the most representative bio-inspired metaheuristic algorithms for doing so.


2.2. Problem Definition

Molecular docking may be defined as an optimization problem that seeks the "best-fit" orientation of a ligand binding to a particular protein of interest. Figure 1 below illustrates the docking of a small molecule into a protein's active site, where solvent molecules are indicated by filled circles [23]:

Figure 1: Protein-ligand docking in solvent

Protein-ligand docking can also be defined as a search problem in which the task is to find the best ligand conformation (the low-energy binding mode) within the active site of the protein. Since the relative orientation and conformation of both the protein and the ligand are taken into account, it is not an easy problem. Usually, the protein is treated as fixed in a three-dimensional coordinate system while the ligand is allowed to be flexible; the flexibility of the ligand is measured in terms of its number of rotatable bonds. The following four types of docking problem are listed in order of increasing complexity:

(i) Protein-Rigid & Ligand-Rigid

(ii) Protein-Rigid & Ligand-Flexible

(iii) Protein-Flexible & Ligand-Rigid

(iv) Protein-Flexible & Ligand-Flexible

Category (iv) can be further divided in two cases:

(iv.a) Rigid protein backbone, selected side chains are flexible

(iv.b) Fully flexible protein backbone and side chains.

The problem can be thought of as a "lock-and-key" problem, in which one is interested in finding the correct relative orientation of the "key" that will open up the "lock" (where on the surface of the lock the key hole is, which direction to turn the key after it is inserted, etc.). Here the protein can be thought of as the "lock" and the ligand as the "key". However, since both the ligand and the protein are flexible, a "hand-in-glove" analogy is more appropriate than "lock-and-key". During the course of the process, the ligand and the protein adjust their conformations to achieve an overall "best fit"; this kind of conformational adjustment resulting in the overall binding is referred to as "induced fit", as illustrated in Figure 2 below:

Figure 2: Analogy for docking problem

2.2.1 The Docking Problem

Let A and B be a ligand and a protein molecule, respectively. Further, let f be a scoring function (energy function) that ranks solutions with respect to binding energy, and let C be the conformational search


space of all possible conformations (docking solutions) between A and B. Then we can define the docking problem as an optimization (minimization) problem in which the task is to find the following [23]:

c* = arg min_{c ∈ C} f(c)    (3)

A docking problem requires two ingredients [24] (a toy sketch of their interplay follows this list):

1) A scoring function to score the protein-ligand complex by measuring the binding strength of the ligand at a specific position of the protein.

2) A search algorithm to search for the protein-ligand combination that corresponds to the lowest-energy conformation.
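In the sketch below (Python), the pose representation, the quadratic stand-in scoring function and the pure random search are illustrative placeholders, not a real force field or a production search algorithm:

import random

R = 3                 # assumed number of rotatable bonds in the ligand
DIM = 6 + R           # 3 translational + 3 rotational + R torsional variables

def score(pose):
    """Placeholder scoring function; a real one would sum intermolecular
    terms (van der Waals, electrostatics, hydrogen bonds) for this pose."""
    return sum((x - 0.3) ** 2 for x in pose)

def search(evaluations=20000):
    """Simplest possible conformational search: keep the best random pose."""
    best = [random.uniform(-1.0, 1.0) for _ in range(DIM)]
    for _ in range(evaluations):
        cand = [random.uniform(-1.0, 1.0) for _ in range(DIM)]
        if score(cand) < score(best):
            best = cand
    return best, score(best)

pose, energy = search()
print(len(pose), round(energy, 4))   # 9 variables, energy near the minimum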

2.2.2 Complexity of Docking Problem

Docking problems have huge search spaces, and the number of possible conformations to consider increases dramatically when flexibility is taken into account. Unfortunately, the search space is not unimodal but highly multimodal, owing to the numerous local optima of the energy function used. Moreover, the complexity of the docking problem is influenced by the size, shape and bonding topology of the actual ligand being docked.

Despite great improvements in computational power, docking remains a very challenging problem,

generally considered to be NP-hard (although no formal proof exists) [23]. Thus, a brute-force

approach looking at all possible docking conformations is impossible for all but the simplest docking

problems. To handle docking flexibility in an efficient manner, search heuristics that sample the

search space of possible conformations are required.

2.3 Survey of Bio-Inspired Algorithms for the Docking Problem

In most of the protein-ligand docking problems in the literature that have been solved by evolutionary algorithms, the protein is kept rigid and the ligand's 3 translational, 3 rotational and r torsional degrees of freedom are optimized. Thus the total number of variables, i.e. the dimension of the optimization problem, equals 6 + r. A wide variety of optimization strategies are used to find the global minimum corresponding to a complex structure.

One of the first applications of EAs to molecular docking was introduced by Dixon [3]. He used a simple GA with a binary encoding representing a matching of ligand atoms and spheres placed inside the active site, similar to DOCK (Kuntz et al., 1982), together with incremental construction algorithms. Only limited experimental results were presented. Later, Oshiro, Kuntz and Dixon [16] introduced a similar GA using a DOCK scoring function; this time the experiments resulted in low root mean square deviation (RMSD) values.

The Genetic Algorithm for Minimization of Energy (GAME) by Xiao and Williams [18] was another early attempt at molecular docking. GAME was designed to dock one or more rigid ligands to a rigid protein. Solutions were encoded as binary strings representing three rotational angles and three translational coordinates. The algorithm was tested on one protein-ligand complex with different search strategies: first, the ligand was allowed to move with fixed rotation; second, the ligand was allowed to rotate with fixed translation; finally, all six degrees of freedom were taken into account.

Clark and Ajay [1] introduced DIVALI in 1995. It used a Gray-coded binary representation, bit-flip mutation and an elitism scheme; two-point crossover was used, as it was believed to perform better. The protein remained rigid during the docking process, whereas the ligand was flexible. Its key feature was a masking operator. DIVALI was evaluated on four protein-ligand complexes using an AMBER-like force field as the scoring function and obtained good docking results.

Genetic Optimization for Ligand Docking (GOLD) was developed by Jones et al. [9]. They used a mixed encoding combining binary strings and real-valued numbers. Binary strings were used to represent the torsion angles of the ligand and selected side-chain angles of the protein, thus giving partial flexibility to the protein. Integer strings were used to represent mappings of hydrogen-bonding sites between ligand and protein. Both the number and strength of hydrogen bonds and the van der Waals energy contributed to the fitness function. Standard one-point crossover and bit-flip mutation were used.

Levine et al. [12] developed the Stalk system, in which they applied new technologies to molecular docking prediction. They combined the concepts of parallel and distributed computing, high-speed


networking, genetic algorithms and virtual reality for the study of molecular interactions. In Stalk they used CAVE (CAVE Automatic Virtual Environment) as the virtual-reality system. Stalk considered rigid-body docking, in which it is assumed that no modification of the protein's backbone or side chains occurs. Each GA string contained six parameters: three translational and three rotational. The fitness function included non-bonded interactions only. Stalk was run on one protein-ligand complex and showed good docking results.

AutoDock is another popular docking program. It originally used simulated annealing, which was later replaced by a genetic algorithm to search for promising docking conformations. The GA was in turn replaced by the Lamarckian genetic algorithm, LGA (Morris et al. [14]), a hybrid combining a genetic algorithm with local search. LGA used a real-valued representation, two-point crossover, and Cauchy mutation with fixed parameters defining the mean and spread of the Cauchy distribution. In this representation each chromosome (solution) was composed of a string of real-valued genes: three Cartesian coordinates for the ligand translation, four variables defining a quaternion specifying the ligand orientation, and one per flexible torsion angle. Further, in each generation local search was applied to a user-defined proportion of the population. When local search is applied, the genotype of the individual is replaced with the new best solution found by evaluating its fitness, which is the sum of the intermolecular interaction energy between the ligand and the protein and the intramolecular interaction energy of the ligand. Implementations of AutoDock on various protein-ligand complexes and the corresponding numerical results have been presented (Morris et al., 1998), where it was concluded that of the three search algorithms (SA, GA, LGA) applied in AutoDock, the most efficient, reliable and successful is the Lamarckian genetic algorithm, LGA.
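The Lamarckian element, i.e. writing the locally refined solution back into the genotype, can be sketched as follows (Python; the stand-in energy function, population size and local-search move sizes are illustrative assumptions, not the actual AutoDock implementation):

import random

def energy(x):                                   # stand-in for the docking score
    return sum(v * v for v in x)

def local_search(x, step=0.1, iters=50):
    """Simple hill-climbing refinement of one individual."""
    best = list(x)
    for _ in range(iters):
        cand = [v + random.gauss(0.0, step) for v in best]
        if energy(cand) < energy(best):
            best = cand
    return best

def lamarckian_generation(population, rate=0.06):
    """Refine a fraction of individuals and, crucially, overwrite their
    genotypes with the refined solutions (Lamarckian inheritance)."""
    for i, ind in enumerate(population):
        if random.random() < rate:
            population[i] = local_search(ind)    # Lamarckian write-back
    return population

# 8 genes per individual: 3 translation + 4 quaternion + 1 torsion (assumed)
pop = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(50)]
pop = lamarckian_generation(pop)
print(round(min(energy(ind) for ind in pop), 4))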

Vieth et al. [17] compared the efficiency of molecular dynamics (MD), Monte Carlo (MC) and genetic algorithms (GA) for docking five representative ligand-receptor complexes. All three algorithms employed a CHARMM-based energy function, and the results were also compared with AutoDock. The receptor was kept rigid, while the ligand was allowed to be flexible. They concluded that MD was most efficient in the case of large search spaces, while GA outperformed the others in small search spaces.

In 1999, Wang et al. [7] applied a GA combined with random search. Steric complementarity and energetic complementarity of the ligand with its receptor were considered separately in a two-stage automated docking. Eight complexes were randomly selected; for most of the cases the root mean square (RMS) deviation of the GA solutions was smaller than 1.0. In the first stage, a rough search of a set of bound sites based on steric complementarity was performed, whereas in the second stage a detailed search of the locally associated sites based on energetic complementarity was carried out.

With the passage of time, several other algorithms have been developed and changes in variation operators have been made to improve the efficiency of docking algorithms; multi-objective and multi-population genetic algorithms came into the picture. Among recent improvements, C.-L. Li et al. [13] proposed an information-entropy-based evolution model for molecular docking. The model was based on a multi-population genetic algorithm. The proposed GA was binary coded, and in each generation there were three operators: selection, crossover and mutation. Selection was performed by an integer decimal method, crossover was two-point, and mutation was uniform. An elitism-maintaining mechanism was designed into the proposed GA.

Korb et al. [15] applied ant colony optimization to structure-based drug design. They introduced a new docking algorithm, PLANTS (Protein-Ligand ANT System), based on the ACO metaheuristic. An artificial ant colony was employed to find a minimum-energy conformation of the ligand in the protein's active site. They presented the effectiveness of PLANTS for several parameter settings as well as a direct comparison to GOLD, which is based on a GA. PLANTS treated the ligand as flexible, while almost the whole receptor, except for rotatable hydrogen-bond donors, was kept rigid. The continuous variables were discretized so that ACO, which is designed to tackle combinatorial optimization problems, could be applied directly.

Most recently, Janson et al. [8] have worked on molecular docking with multi-objective particle swarm optimization (PSO). A new algorithm, ClustMPSO, was proposed, which is based on PSO and follows a multi-objective approach for comparing the quality of solutions. Energy evaluations were done using AutoDock tools. In the second part of the paper, a new approach for predicting the docking trajectory was proposed. It was shown empirically that ClustMPSO finds solutions to the docking


problem with lower energy than the LGA and SA incorporated in AutoDock. ClustMPSO is also faster with respect to the number of solution evaluations.

Differential evolution (DE) was introduced by Storn and Price [19]. Compared to more widely known EA-based techniques (e.g. genetic algorithms, evolutionary programming and evolution strategies), DE uses a different approach to selecting and modifying candidate solutions. The main innovative idea in DE is to create offspring from a weighted difference of parent solutions. The DE algorithm has demonstrated superior performance, outperforming related heuristics such as GAs and PSO both on numerical benchmark functions and in several real-world applications.
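That idea fits in a few lines (a Python sketch of the classic DE/rand/1/bin scheme; the sphere objective and the control parameters F and CR are illustrative choices):

import random

def de_generation(pop, f_obj, F=0.5, CR=0.9):
    """One generation of DE/rand/1/bin."""
    dim = len(pop[0])
    nxt = []
    for i, target in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(dim)        # force at least one mutated gene
        trial = [a[k] + F * (b[k] - c[k])     # weighted difference of parents
                 if (random.random() < CR or k == j_rand) else target[k]
                 for k in range(dim)]
        nxt.append(trial if f_obj(trial) <= f_obj(target) else target)  # greedy
    return nxt

sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(9)] for _ in range(30)]  # dim = 6 + r
for _ in range(200):
    pop = de_generation(pop, sphere)
print(round(min(sphere(p) for p in pop), 6))   # close to 0 after 200 generations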

Based on the great success of DE, a novel application of DE to flexible ligand docking (termed DockDE) was introduced by Thomsen [20]. The comparison was performed on a suite of six commonly used docking benchmark problems, on all of which DockDE obtained the best energy compared with the DockEA and LGA algorithms, as depicted in Figure 3, extracted from [20].

In a recent study, a new docking algorithm called MolDock was introduced by Thomsen and Christensen [21]; it is based on a hybrid search algorithm that combines the DE optimization technique with a cavity-prediction algorithm.

Figure 4. Comparative Survey of MolDock docking accuracy

The docking accuracy of MolDock on 77 selected complexes is 87%, outperforming the other docking algorithms on the benchmark data set, as depicted in Figure 4, extracted from [21]. In conclusion, the results strongly suggest that the differential evolution algorithm has great potential for protein-ligand docking.


III. CONCLUSIONS

Molecular docking is a valuable tool for pharmaceutical companies because it can be used to suggest promising drug candidates at a low cost compared to real-world experiments. As a result of the need for methods that can predict the binding of two molecules when complexed, numerous algorithms have been suggested during the last three decades. In this survey, the most representative bio-inspired optimization algorithms for protein-ligand docking were reviewed. In particular, docking methods utilizing differential evolution were found to have superior performance to other well-studied approaches regarding accuracy and speed, issues that are important when docking hundreds of thousands of ligands in virtual screening experiments.

IV. FUTURE/PROPOSED WORK

Cancer is a serious and often fatal disease, widespread in the developed world. One of the most common methods of treating cancer is chemotherapy with cytotoxic drugs. These drugs, of which around 30 are used regularly in chemotherapy, can themselves often have debilitating or life-threatening effects on patients undergoing chemotherapy. Hence chemotherapy can be perceived as a complex control problem: how to balance the need for tumour treatment against the undesirable and dangerous side effects that the treatment may cause. The proposed work can be subdivided into two phases:

A) Literature Survey and Virtual screening for drug delivery

Here we survey the current scientific literature to track potential cytotoxic ligands. We then use molecular docking tools to simulate novel protein-ligand complexes with these potential candidates for a virtual drug, on the basis of the scoring function and optimized conformation.

B) Drug Administration Optimization

Here we design a system, specifically a workbench, that supports the design of novel multidrug cancer chemotherapy treatments. The aim of the system is to allow clinicians to interact with models of tumour response and to use nature-inspired optimization algorithms to search for improved treatments. In simple words, the system should take as input the patient's current status of tumour growth and provide as output an administered dose of a combination of cytotoxic drugs for optimal control.

ACKNOWLEDGEMENTS

The author would like to thank the Department of Information Technology, Jadavpur University, Salt Lake Campus, for providing the necessary access to scientific literature while conducting this survey. The author is grateful to Dr Parama Bhaumik for guidance and valuable technical inputs. Finally, the author is indebted to his better half, Mrittika Chatterjee, for constant motivation and grammatical editing.

REFERENCES

[1]. Clark, K.P. and A.N. Jain (1995) “Flexible ligand docking without parameter adjustment across four

ligand receptor complexes,” J. Comput. Chem., Vol. 16, pp. 1210-1226.

[2]. Deb, K.(2005). Optimization For Engineering Design, Prentice-Hall of India Pvt. Limited, New Delhi.

[3]. Dixon, J.S.(1993). “Flexible docking of ligands to receptor sites using genetic algorithms,” Proceedings

of the 9th European Symposium on Structure-Activity Relationships: QSAR and Molecular Modelling,

pp 412-413.

[4]. Dorigo, M. and T. Stützle (2004). Ant Colony Optimization, MIT Press, Cambridge, MA, USA.

[5]. Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, New York.

[6]. Hart, W.E., C. Rosin, R.K. Belew, and G.M. Morris (2000). “Improved evolutionary hybrids for flexible

ligand docking in AutoDock,” Optim. Comput. Chem. Mol. Biol., pp. 209-230.

[7]. Hou, T., J. Wang, L. Chen, and X. Xu (1999). "Automated docking of peptides and proteins using a genetic algorithm combined with a tabu search," Protein Eng., Vol. 12, No. 8, pp. 639-647.

[8]. Janson, S., D. Merkle, and M. Middendorf (2008) “Molecular docking with multi-objective Particle

Swarm Optimization,” App. Soft Computing Vol. 8, pp. 666-675.

[9]. Jones, G., P. Willett, and R.C. Glen (1995). “Molecular recognition of receptor sites using a genetic


algorithm with a description of desolvation.” J. Mol. Biol., Vol. 245, pp. 43-53.

[10]. Kennedy, J., R. Eberhart, Particle Swarm Optimization, IEEE International Conference on Neural

Networks (ICNN’95), Vol. 4, 1995.

[11]. Kuntz, I.D., J.M. Blaney, S.J. Oatley, R. Langridge and T.E. Ferrin (1982). “A geometric approach to

macromolecule-ligand interactions,” J. Mol. Biol., Vol. 161, pp. 269-288.

[12]. Levine, D., M. Facello, P. Hallstrom, G. Reeder, B. Walenz, and F. Stevens (1997). “Stalk: An

interactive system for virtual molecular docking,” IEEE Comput. Sci. & Engi., Vol. 4, Issue 2, pp. 55-

65.

[13]. Li, Chun-lian, Y. Sun, D. Long, and X. Wang (2005) “A genetic algorithm based method for molecular

docking,” ICNC: International conference on advances in natural computation. LNCS 3611, pp. 1159-

1163.

[14]. Morris, G.M., D.S. Goodsell, R.S. Hallyday, R. Huey, W.E. Hart, R.K. Belew, and A.J. Olson (1998).

“Automated docking using Lamarckian genetic algorithm and an empirical binding free energy

function,” J. Comput. Chem., Vol. 19, pp. 1639-1662.

[15]. Korb, O., T. Stützle, and T.E. Exner (2006). "PLANTS: Application of Ant Colony Optimization to Structure-Based Drug Design," Proceedings of the 5th International Workshop on Ant Colony Optimization and Swarm Intelligence (ANTS 2006), LNCS Vol. 4150, pp. 247-258.

[16]. Oshiro, C.M., I.D. Kuntz, and J.S. Dixon (1995). "Flexible ligand docking using a genetic algorithm," J. Comput. Aid. Mol. Des., Vol. 9, pp. 113-130.

[17]. Vieth, M., J.D. Hirst, B.N. Dominy, H. Daigler, and C.L. Brooks III (1998). “Assessing search

strategies for flexible docking,” J. Comput. Chem.,Vol. 19, pp. 1623-1631.

[18]. Xiao, Y.L. and D.E. Williams (1994), Proceedings of the 1994 ACM symposium on Applied

computing. pp. 196-200.

[19]. Storn, R. and K. Price (1997). "Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces," Journal of Global Optimization, Vol. 11, Issue 4, pp. 341-359.

[20]. Thomsen, R. (2003). "Flexible ligand docking using differential evolution," Proceedings of the 2003 Congress on Evolutionary Computation (CEC '03), Vol. 4.

[21]. Thomsen, R. and M.H. Christensen (2006). "MolDock: a new technique for high-accuracy molecular docking," J. Med. Chem., Vol. 49, No. 11, pp. 3315-3321.

[22]. Fister, I. Jr., X.-S. Yang, I. Fister, J. Brest, and D. Fister (2013). "A Brief Review of Nature-Inspired Algorithms for Optimization," Elektrotehniški vestnik, Vol. 80, No. 3.

[23]. Thomsen, R. (2007). "Protein-Ligand Docking with Evolutionary Algorithms," in Computational Intelligence in Bioinformatics, IEEE Press, pp. 169-196.

[24]. Shashi, Kusum Deep, V.K. Katiyar, and C.K. Katiyar (2009). "A state-of-the-art review on the application of nature-inspired optimization algorithms in protein-ligand docking," Indian Journal of Biomechanics: Special Issue (NCBM, 7-8 March 2009).

[25]. Liu, J. and K.C. Tsui. "Toward nature-inspired computing," Communications of the ACM, Vol. 49, No. 10, pp. 59-64.

AUTHOR

The author is presently working as a BI developer for the British Telecommunications Retail team on the DSS platform, and is pursuing the Jadavpur University-TCS collaborative Master of Engineering in Software Engineering.


HEAT TRANSFER ANALYSIS OF COLD STORAGE

Upamanyu Bangale and Samir Deshmukh
Department of Mechanical Engineering, SGBA University, Amravati, India

ABSTRACT

India is the largest producer of fruit and vegetables in the world, but the per-capita availability of fruit and vegetables is significantly low because post-harvest losses amount to about 25% to 30% of production, so the requirement for refrigeration and air conditioning has increased. A building or group of buildings with thermal insulation and a refrigeration system, in which perishable food or products can be stored for various lengths of time at the necessary temperature and humidity to slow down deterioration, is called a cold storage. In a cold storage, the refrigeration system brings down the temperature initially during start-up, while the thermal insulation maintains the temperature continuously thereafter. In this view, a simple methodology is presented to calculate the heat transfer by an analytical method, and an attempt is made to minimize the energy consumption by replacing 150 mm of expanded polystyrene (EPS) with 100 mm of polyurethane foam (PUF) insulation. The methodology is validated against actual data obtained from the Penguin cold storage situated in Pune, India.

KEYWORDS: Cold storage, Steady state analysis, insulation, EPS, PUF

I. COLD STORAGE DESCRIPTION

The cold storage has a floor-plan area of 400 m² and a sidewall length of 14 m; the overall dimensions of the cold storage are 17 m × 14 m × 12 m. The cold storage building has three floors, each floor having 4 cold chambers of 8 m × 5 m operating at different temperatures as per the requirements of the commodities. The ground and first floors, called the defrost zone, are maintained at −20 °C, while the second floor, called the cold storage zone, is maintained at 0 °C. The height of a zone wall is taken to be 4 m, leading to a total storage volume of 160 m³ per chamber. The entire cold storage building is considered for the simulation study; the space where the actual product is stored (for either long- or short-term storage periods) is referred to as the "defrost zone" or "cold storage zone". Each defrost and cold storage zone of 8 m × 5 m with a floor height of 4 m thus has an area of 40 m² and a volume of 160 m³.

The "dock", located at the immediate entrance of the cold storage building, is a separate zone that serves as the staging area for incoming and outgoing products. The office of the cold storage and the ante-room for loading and unloading of the commodities are shown in Fig. 1.1. The condensing unit, i.e. an evaporative-cooled condenser with two fans and a pump, is located outside the zone in still atmosphere.


Fig 1.1 Cold storage floor plan

Under normal operating conditions, the zone is at a lower temperature than the dock. The dock temperature is maintained at around 10 °C, while different set-point temperatures have been investigated for the zone, depending on the optimum storage temperature of the various food products considered during the course of the study.

An important concept in the operation of cold storage warehouses is the way the temperature of the products and of the zone air is lowered. Since the refrigeration equipment directly cools the air in the zone, it is the zone air temperature that reduces first. As the stored product has a higher thermal mass than the zone air, there is a time lag between when the air temperature is reduced and when the product temperature is reduced as heat is extracted over time. Conversely, when the refrigeration equipment is shut off, the air temperature shows a greater rise than the product temperature in the same time interval. Adequate precaution and care still have to be taken in the design of cold storages to make them energy efficient and to reduce the operational costs as much as possible.

1.1 Cold storage construction

In this simulation we assume that, on average, 60% of the cold storage floor area is covered with pallets of stored product. The walls of the cold storage zones are modelled as a layer of solid concrete block followed by three layers of EPS and a layer of aluminium, from outside to inside respectively. The insulation thickness is different for the zone and the dock. For the zone walls, three different values of insulation thickness were studied for the EPS insulating material, resulting in different thermal resistance values (R-values) for each wall. The 150 mm EPS insulation is then compared with 100 mm PUF insulation. The surface areas of the walls are shown in Table 1.1.

Table 1.1 Surface area of walls

Name                                          Area (m²)
Ceiling                                       40
Wall between DF and surroundings              32
Wall between DF and loading/unloading area    20
Wall between DF and dock                      32
Wall between DF and DF                        20
Floor                                         40

The wall of a cold chamber, as shown in Fig. 1.2, has a 20 cm thick layer of solid concrete block on the external side facing the surroundings, and 15 cm of expanded polystyrene with a 1 mm layer of aluminium foil facing into the conditioned space or cooler.


Fig.1.2 Wall of cold chamber

The roof of the dock and the zone is assumed to have the same construction as the zone walls. The floor construction of the warehouse is common to both the dock and the zone space.

Table 1.2 Applied insulation thickness details

Application area                      Insulation thickness (mm)   U value (W/m²K)   R value (m²K/W)
Exposed walls                         150                         0.27              3.70
Intermediate walls and inter-floors   50                          0.58              1.72
Exposed roofs                         150                         0.24              4.17
Floors                                150                         0.29              3.45

The floor has two layers, from outside to inside: 200 mm of concrete and 150 mm of EPS insulation, resulting in a floor with a total thickness of 350 mm and a U-value of 0.686 W/m²K. The commonly used insulating materials for cold storage walls, floors and roofs and their details are given in Table 1.3.

1.2 Thermal insulation details

Table 1.3 Cold storage insulation characteristics [1]

Insulation material    Density ρ (kg/m³)   k value (W/m·°C)
EPS                    16                  0.036
PUF                    32                  0.023
XPS                    30-35               0.025
Phenolic foam          50                  0.026
Mineral wool           48                  0.033
Bonded fibre glass     32                  0.033

II. HEAT TRANSFER CALCULATION

2.1 Average temperature calculation

The average outside temperature is taken as 31 °C, based on the temperature readings for 12 May 2014 given in Table 2.1.

Table 2.1 Average temperature [2]

Time       Temp.
23 to 5    27 °C
5 to 11    24 °C
11 to 17   37 °C
17 to 23   36 °C
Total      124 °C


So the average temperature on 12 May 2014 is 124/4 = 31 °C.

Cold storage capacity: 1400 MT. Plant size: 17 m (L) × 14 m (W) × 12 m (H). Average outside temperature: 31 °C. Operating temperature: −20 °C in the defrost zone and 0 °C in the cold storage zone.

Heat transfer is calculated using Fourier's law:

Q = A (T1 − T3) / (L1/k1)

where Q is the heat transfer in W, A is the area in m², (T1 − T3) is the temperature difference in °C (or K), L1 is the layer thickness in m, and k1 is the thermal conductivity in W/mK. For example, for the 32 m² wall between the defrost zone and the surroundings:

Q = A (T1 − T3) / (L1/k1) = 32 × (31 − (−20)) / (0.20/1.14) = 9302.4 W

Similar calculations are carried out for the remaining surfaces using the above formula [3].
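The same single-layer calculation can be scripted for any surface (a Python sketch; it uses the EPS and PUF conductivities from Table 1.3 but considers only the insulation layer's resistance, which is an assumption, so its results differ from the tabulated totals that also account for the concrete layer):

def conduction(area_m2, delta_t, thickness_m, k):
    """Fourier's law for one homogeneous layer: Q = A * dT / (L / k), in W."""
    return area_m2 * delta_t / (thickness_m / k)

# Exposed defrost-zone wall: 32 m2, 31 degC outside, -20 degC inside.
area, delta_t = 32.0, 31.0 - (-20.0)
q_eps = conduction(area, delta_t, 0.15, 0.036)   # 150 mm EPS  -> ~391.7 W
q_puf = conduction(area, delta_t, 0.10, 0.023)   # 100 mm PUF  -> ~375.4 W
print(round(q_eps, 1), round(q_puf, 1))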

2.2 Existing model with EPS insulation

Table 2.2 Heat transfer through the existing model



2.3 Recommended model

Table 2.3 Heat transfer through the recommended model


The total heat transfer from the existing model with 150 mm EPS insulation is QEPS = 14927.9 W, while the total heat transfer from the recommended model with 100 mm PUF insulation is QPUF = 12644.4 W.

If we use PUF as the insulation, then 14927.9 − 12644.4 ≈ 2283 W of heat transfer is restricted (i.e. 2.283 kW).

If the plant runs for 24 hours, the units saved per day are 2.283 × 24 = 54.792 units. If the cost of a unit of electricity is Rs 9.00, the saving per day is 54.792 × 9 = Rs 493.128, the saving per month is 493.128 × 30 = Rs 14,793.84, and the saving per year is 14,793.84 × 12 = Rs 1,77,526.08.

Table 2.4 Cost for EPS and PUF

(Cost values are given in Ref.4)

Payback period = 9,76,320 / 1,77,526.08 ≈ 5.5 years.
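The savings arithmetic is easy to parameterize (a Python sketch; the 2.283 kW restricted heat gain, the Rs 9 tariff, the Rs 9,76,320 extra investment and the 30-day month are the figures and conventions quoted above):

def annual_saving_rs(kw_saved, tariff_rs_per_unit=9.0):
    """Yearly saving in rupees for a continuously avoided heat gain."""
    units_per_day = kw_saved * 24                 # kWh saved per day
    return units_per_day * tariff_rs_per_unit * 30 * 12

saving = annual_saving_rs(2.283)                  # -> 177526.08
payback_years = 976320 / saving                   # -> ~5.5
print(saving, round(payback_years, 2))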


Thus, to recover the additional cost invested in PUF insulation, the plant has to operate for about 5.5 years without any extra profit; thereafter, the replacement of EPS by PUF insulation saves the cold storage plant Rs 1,77,526.08 per year.

III. CONCLUSION

In the existing cold storage system, the insulation on the external walls is 150 mm, the insulation between two zones and between a zone and the dock is 50 mm, and the insulation between a zone and the loading/unloading area is 150 mm, giving a total heat transfer of 14927.9 W. With 100 mm of PUF, the heat transfer is 12644.4 W, which is 2283.5 W less than with EPS. The saving is therefore Rs 1,77,526.08 per year, and the payback period is extra investment/saving = 9,76,320/1,77,526.08 ≈ 5.5 years.

Hence it is concluded that PUF is the most suitable insulation material for a cold storage.


REFERENCES

[1]. Technical Standards and Protocol for cold chain in India,2010 pp-1-61.

[2]. www.yr.no/place/India/maharashtra/pune/long.html
[3]. Refrigeration and Air Conditioning, Third Edition, by C.P. Arora, McGraw Hill Education, pp. 1-848.

[4]. Quotation of New Star Engineers

AUTHORS BIOGRAPHY

Upamanyu Bangale, graduated in Mechanical Engineering from SGBAU in 2010 and is pursuing ME CAD/CAM from the same university.

Samir Deshmukh, doctorate from Amravati University, ME Thermal coordinator and senior lecturer at P.R.M.I.T & R, Amravati.


LOCALIZED RGB COLOR HISTOGRAM FEATURE

DESCRIPTOR FOR IMAGE RETRIEVAL

K. Prasanthi Jasmine¹, P. Rajesh Kumar²
¹Research Scholar, ²Professor & Head

Department of Electronics and Communication Engineering

Andhra University, Visakhapatnam, Andhra Pradesh, India

ABSTRACT

This paper proposes a new feature descriptor, the localized color descriptor, for content-based image retrieval (CBIR). The proposed method collects local histograms from the red (R), green (G) and blue (B) color spaces. These local histograms are collected by dividing the images into sub-blocks (regions). The performance of the proposed method is tested by conducting experiments on the Corel-1000 natural benchmark database, and is evaluated in terms of precision, recall, average retrieval precision (ARP) and average retrieval rate (ARR) as compared to global RGB histograms, global HSV histograms and other existing features for image retrieval. The performance of the proposed method is also tested with different distance measures. The investigated results show a significant improvement over the other existing methods in terms of precision, recall, ARP and ARR on the Corel-1000 database.

KEYWORDS: Color Features; Feature Extraction; Histogram; Image Retrieval

I. INTRODUCTION

A. Motivation

Recent years have seen a rapid increase in the size of digital image collections. Every day, both

military and civilian equipment generates gigabytes of images. A huge amount of information is out

there. However, we cannot access or make use of the information unless it is organized so as to allow

efficient browsing, searching, and retrieval. Image retrieval has been a very active research area since

the 1970s, with the thrust from two major research communities, database management and computer

vision. These two research communities study image retrieval from different angles, one being text-

based and the other visual-based.

The text-based image retrieval can be traced back to the late 1970s. A very popular framework of

image retrieval then was to first annotate the images by text and then use text-based database

management systems (DBMS) to perform image retrieval. Many advances, such as data modelling,

multidimensional indexing, and query evaluation, have been made along this research direction.

However, there exist two major difficulties, especially when the size of image collections is large

(tens or hundreds of thousands). One is the vast amount of labour required in manual image

annotation. The other difficulty, which is more essential, results from the rich content in the images

and the subjectivity of human perception. That is, for the same image content different people may

perceive it differently. The perception subjectivity and annotation impreciseness may cause

unrecoverable mismatches in later retrieval processes. A comprehensive and extensive literature survey on CBIR is presented in [1]–[4].

Two major approaches, spatial domain-based and transform domain-based methods, can be identified in CBIR systems. The first approach usually uses pixel (or a group of adjacent pixels) features like color and shape. Among all these features, color is the most used signature for indexing. Color histogram

[5] and its variations [6] were the first algorithms introduced in the pixel domain. Despite its


efficiency, color histogram is unable to carry local spatial information of pixels. Therefore, in such

systems retrieved images may have many inaccuracies, especially in large image databases. For these

reasons, two variations called image partitioning and regional color histogram were proposed to

improve the effectiveness of such systems [7, 8]. Color Correlogram is a different pixel domain

approach which incorporates spatial information with color [9, 10]. Color spatial schemes like color

Correlogram offer more effectiveness in comparison with color histogram methods with little

efficiency reduction. Despite the achieved advantage, they suffer from sensitivity to scale, color and

illumination changes. Edge orientation Auto-Correlogram [11] was an effort to reduce the sensitivity

of color Correlogram method to color and illumination variations. Shape-based and color shape-

based systems using snakes [12], contour images, and other boundary detection methods [11, 13],

were also proposed as pixel domain methods.

B. Related Work

The texture features (transform-domain features) currently in use are mainly derived from multi-scale approaches. Liu and Picard [14] used Wold features for image modeling and retrieval. In the SaFe project, Smith and Chang [15] used discrete wavelet transform (DWT) based features for image

retrieval. Ahmadian et al. used the wavelet transform for texture classification [16]. Do et al. proposed

the wavelet transform (DWT) based texture image retrieval using generalized Gaussian density and

Kullback-Leibler distance (GGD & KLD) [17]. Unser used wavelet frames for texture classification and segmentation [18]. Manjunath et al. [19] proposed the Gabor transform (GT) for image retrieval on the Brodatz texture database. They used the mean and standard deviation features from four

scales and six directions of Gabor transform. Kokare et al. used the rotated wavelet filters [20], dual

tree complex wavelet filters (DT-CWF), dual tree rotated complex wavelet filters (DT-RCWF) [21],

rotational invariant complex wavelet filters [22] for texture image retrieval. They have calculated the

characteristics of image in different directions using rotated complex wavelet filters. Birgale et al. [23]

and Subrahmanyam et al. [24] combined the color (color histogram) and texture (wavelet transform)

features for CBIR. Jhanwar et al. [25] have proposed the motif co-occurrence matrix (MCM) for

content based image retrieval. The MCM is derived from the motif transformed image which is

calculated by dividing the whole image into non-overlapping 2×2 pixel patterns. They also proposed

the color MCM which is calculated by applying MCM on individual red (R), green (G), and blue (B)

color planes.

The main contributions of this paper are summarized as follows. (a) The paper proposes a localized RGB histogram based feature descriptor; the features are collected by dividing the image into sub-blocks. (b) The performance of the proposed method is analyzed with different distance measures on the Corel-1000 database in terms of precision, recall, average retrieval precision (ARP) and average retrieval rate (ARR).

C. Organization of Manuscript

The organization of the paper is as follows. Section I gives a brief review of image retrieval and related work. Section II presents a concise review of color spaces. Section III presents the feature extraction and similarity measures. Experimental results and discussions are given in Section IV, and conclusions are derived in Section V.

II. COLOR SPACES

A. RGB Color Space

The predominant color representation used in image retrieval is the RGB [13] color space. In this representation, the values of the red, green and blue color channels are stored separately. They can range from 0 to 255, with 0 being not present and 255 being maximal. A fourth channel, alpha, also provides a measure of transparency for the pixel. The distance between two pixels is measured using Eq. (1):

d = ((255 − ΔRed) · (255 − ΔGreen) · (255 − ΔBlue))^(1/3) / 255    (1)


where ΔRed is the red channel difference, ΔGreen is the green channel difference, and ΔBlue is the blue channel difference between the two pixels being compared. This distance metric has an output in the range [0, 1]. When pixels are compared, a cut-off, usually between 0.6 and 0.8, is used to decide whether the two pixels are similar to each other. Because the distance measure is based on the absolute difference between pixel color values, the size of the neighbourhood of similarity for a given cut-off value is the same regardless of where in the RGB color space the reference pixel value lies.

Although this color space description does not always match people's intuition for color similarity, it does provide an easy method for pixel-value comparison. In most cases it performs agreeably well, as it can differentiate gross color differences. It is also computationally cheap compared to the other color space representations, because the red-green-blue channel decomposition is used in most image formats and is therefore readily available.

B. HSV Color Space

The alternative to the RGB color space is the Hue-Saturation-Value (HSV) color space. Instead of

looking at each value of red, green and blue individually, a metric is defined which creates a different

continuum of colors, in terms of the different hues each color possesses. The hues are then

differentiated based on the amount of saturation they have, that is, in terms of how little white they

have mixed in, as well as on the magnitude, or value, of the hue. In the value range, large numbers

denote bright colorations, and low numbers denote dim colorations.

This space is usually depicted as a cylinder, with hue denoting the location along the circumference of

the cylinder, value denoting depth within the cylinder along the central axis, and saturation denoting

the distance from the central axis to the outer shell of the cylinder. This is depicted in Fig. 1(a) on the

left. This description, however, fails to agree with the common observation that very dark colors are

hard to distinguish, whereas very bright colors are easily distinguished into multiple hues. A system

more accurate in representing this is shown in Fig. 1(b) on the right, which is the model used for the

HSV color space representation in the Image retrieval system.

The computation of hue, saturation and value can be performed as detailed below. In the text below, as in the figures, h stands for hue, v for value, and s for saturation. The RGB colors are similarly represented, with r for red, g for green, and b for blue. The remaining variables are temporary computation results. All of these computations have to be performed on a pixel-by-pixel basis.

Fig. 1: Two HSV color space representations: (a) cylindrical, (b) conical. The cylindrical representation is a good approximation, but the conical representation results in better performance.

v = max(r, g, b)    (2)

s = (v − min(r, g, b)) / v    (3)

Here, max denotes a function returning the maximum value among its arguments, and min denotes a function returning the minimum value among its arguments.


h = 5 + bpoint   if r = max(r, g, b) and g = min(r, g, b)
h = 1 − gpoint   if r = max(r, g, b) and b = min(r, g, b)
h = 1 + rpoint   if g = max(r, g, b) and b = min(r, g, b)
h = 3 − bpoint   if g = max(r, g, b) and r = min(r, g, b)
h = 3 + gpoint   if b = max(r, g, b) and r = min(r, g, b)
h = 5 − rpoint   if b = max(r, g, b) and g = min(r, g, b)    (4)

where

rpoint = (v − r) / (v − min(r, g, b))    (5)

gpoint = (v − g) / (v − min(r, g, b))    (6)

bpoint = (v − b) / (v − min(r, g, b))    (7)

The value computation, as defined in Eq. (2), maps to the range of values which r, g, and b can take.
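A direct transcription of Eqs. (2)-(7) is sketched below (Python; it assumes r, g and b are floats, and it returns the hue on the 0-6 hexagon scale used by Eq. (4) rather than in degrees; the handling of black and grey pixels, where hue is undefined, is an implementation choice):

def rgb_to_hsv(r, g, b):
    """Return (h, s, v) with h on the 0..6 scale of Eq. (4)."""
    v = max(r, g, b)                              # Eq. (2)
    mn = min(r, g, b)
    if v == 0.0:                                  # black: s and h undefined
        return 0.0, 0.0, 0.0
    s = (v - mn) / v                              # Eq. (3)
    if v == mn:                                   # grey: hue undefined
        return 0.0, s, v
    span = v - mn
    rpoint = (v - r) / span                       # Eq. (5)
    gpoint = (v - g) / span                       # Eq. (6)
    bpoint = (v - b) / span                       # Eq. (7)
    if v == r:                                    # the six sectors of Eq. (4)
        h = 5 + bpoint if mn == g else 1 - gpoint
    elif v == g:
        h = 1 + rpoint if mn == b else 3 - bpoint
    else:
        h = 3 + gpoint if mn == r else 5 - rpoint
    return h, s, v

print(rgb_to_hsv(1.0, 0.5, 0.0))   # an orange: h = 0.5, s = 1.0, v = 1.0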

III. FEATURE EXTRACTION

Each image in the database is analyzed using localized RGB histograms. First, the RGB image is divided into non-overlapping sub-blocks. Then the individual R, G and B histograms are calculated for each sub-block. Finally, the feature vector is constructed by concatenating all the histograms. The algorithm of the proposed system is given below, followed by a compact code sketch.

Algorithm:

Input: Query image / database image; Output: Feature vector

1. Load the color image.
2. Separate the R, G and B color spaces.
3. Quantize each color space.
4. Divide each color space into sub-blocks.
5. Calculate the histograms of each sub-block.
6. Form a feature vector by concatenating the histograms.
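A compact rendering of the algorithm is given below (a Python/NumPy sketch; the 3×3 block grid and the 8-bin quantization per channel are assumed parameters, since the text does not fix them at this point):

import numpy as np

def localized_rgb_histogram(img, grid=3, bins=8):
    """img: H x W x 3 uint8 array. Returns the concatenated per-block R, G, B
    histograms, a feature vector of length grid*grid*3*bins."""
    h, w, _ = img.shape
    feats = []
    for by in range(grid):
        for bx in range(grid):
            block = img[by*h//grid:(by+1)*h//grid, bx*w//grid:(bx+1)*w//grid]
            for c in range(3):   # R, G, B planes
                # histogramming into `bins` bins also performs the quantization
                hist, _ = np.histogram(block[..., c], bins=bins, range=(0, 256))
                feats.append(hist / hist.sum())   # normalize each local histogram
    return np.concatenate(feats)

img = (np.random.rand(256, 384, 3) * 255).astype(np.uint8)   # stand-in image
print(localized_rgb_histogram(img).shape)   # (216,) for a 3x3 grid with 8 bins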

A. Similarity Distance Measure

In the presented work, four types of similarity distance metrics are used, as follows.

Manhattan (L1, city-block) Distance

This distance function is computationally less expensive than the Euclidean distance because only the absolute differences in each feature are considered. It is sometimes called the city-block distance or L1 distance and is defined as

D(Q, T) = Σ_i | f_i(Q) − f_i(T) |    (8)

Euclidean or L2 Distance

The Euclidean distance is defined as:

D(Q, T) = [ Σ_i ( f_i(Q) − f_i(T) )² ]^(1/2)    (9)

The most expensive operation is the computation of the square root.

D1 Distance

D(Q, T) = Σ_{i=1}^{Lg} | f_{T,i} − f_{Q,i} | / (1 + f_{T,i} + f_{Q,i})    (10)

Canberra Distance


D(Q, T) = Σ_{i=1}^{Lg} | f_{T,i} − f_{Q,i} | / ( f_{T,i} + f_{Q,i} )    (11)

where Q is the query image, Lg is the feature-vector length, T is an image in the database, f_{T,i} is the i-th feature of database image T, and f_{Q,i} is the i-th feature of the query image Q.
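The four measures translate directly into code (a Python/NumPy sketch operating on feature vectors such as those built above; the small epsilon guarding against division by zero in the Canberra distance is an implementation assumption):

import numpy as np

def manhattan(q, t):                      # Eq. (8), L1 / city-block
    return np.abs(q - t).sum()

def euclidean(q, t):                      # Eq. (9), L2
    return np.sqrt(((q - t) ** 2).sum())

def d1(q, t):                             # Eq. (10)
    return (np.abs(t - q) / (1.0 + t + q)).sum()

def canberra(q, t, eps=1e-12):            # Eq. (11); eps avoids 0/0 bins
    return (np.abs(t - q) / (t + q + eps)).sum()

q, t = np.random.rand(216), np.random.rand(216)
print(manhattan(q, t), euclidean(q, t), d1(q, t), canberra(q, t))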

IV. EXPERIMENTAL RESULTS AND DISCUSSIONS

In this paper, retrieval tests are conducted on the Corel-1K database and the results are presented as follows. The Corel database [26] consists of a large number of images of various contents, ranging from animals and outdoor sports to natural scenes. These images have been pre-classified by domain professionals into different categories, each of size 100. Some researchers consider that the Corel database meets all the requirements for evaluating an image retrieval system, owing to its large size and heterogeneous content. We have collected 1000 images to form the Corel-1K database. These images come from 10 different domains, namely Africans, beaches, buildings, buses, dinosaurs, elephants, flowers, horses, mountains and food. Each category has NG (100) images with a resolution of either 256×384 or 384×256. Fig. 5 shows sample images of the Corel-1K database (one image from each category).

In all experiments, each image in the database is used as the query image. For each query, the system collects the n database images X = (x1, x2, …, xn) with the shortest image-matching distances, computed using the distance measures of Eqs. (8)-(11). If a retrieved image xi, i = 1, 2, …, n, belongs to the same category as the query image, we say the system has appropriately identified the expected image; otherwise, the system has failed to find the expected image.

The performance of the proposed method is measured in terms of precision, recall, average retrieval precision (ARP) and average retrieval rate (ARR) using Eqs. (12) to (17).

Precision:

P = (number of relevant images retrieved / total number of images retrieved) × 100    (12)

Group precision:

GP = (1/N₁) Σ_{i=1}^{N₁} P_i    (13)

Average retrieval precision:

ARP = (1/Γ₁) Σ_{j=1}^{Γ₁} GP_j    (14)

Recall:

R = number of relevant images retrieved / total number of relevant images    (15)

Group recall:

GR = (1/N₁) Σ_{i=1}^{N₁} R_i    (16)

Average retrieval rate:

ARR = (1/Γ₁) Σ_{j=1}^{Γ₁} GR_j    (17)

where N₁ is the number of relevant images per category and Γ₁ is the number of groups (categories).
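Eqs. (12)-(17) reduce to a few lines given the category labels of the retrieved images (a Python/NumPy sketch; the grouping of per-query scores by category label is an implementation assumption):

import numpy as np

def precision_recall(query_label, retrieved_labels, n_relevant=100):
    """Eqs. (12) and (15) for one query; n_relevant is the category size."""
    rel = sum(1 for lab in retrieved_labels if lab == query_label)
    return 100.0 * rel / len(retrieved_labels), 100.0 * rel / n_relevant

def arp_arr(per_query, group_labels):
    """Average P and R within each group (Eqs. 13, 16), then across groups
    (Eqs. 14, 17)."""
    groups = {}
    for (p, r), g in zip(per_query, group_labels):
        groups.setdefault(g, []).append((p, r))
    gp = [np.mean([p for p, _ in v]) for v in groups.values()]
    gr = [np.mean([r for _, r in v]) for v in groups.values()]
    return float(np.mean(gp)), float(np.mean(gr))

pq = [(75.0, 15.0), (65.0, 13.0), (100.0, 20.0)]
print(arp_arr(pq, ["africans", "africans", "dinosaurs"]))   # (85.0, 17.0)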


Fig. 2: Comparison of the proposed method with other existing methods in terms of: (a) ARP and (b) ARR on

Corel-1000 database.

Table I & Fig. 2 (a) and Table II & Fig. 2 (b) summarize the retrieval results of various methods in

terms of average retrieval precision and average retrieval rate respectively. Fig. 3 summarize the

performance of proposed method with different distance measures in terms of average retrieval rate.

From the Tables I to II and Fig. 6 the following points can be observed:

1. The average retrieval precision of proposed method (64.1% to 44.29%) is more as compared to

Jhanwar et al. (58.7% to 39.0%), RGB Hist. (62.14% to 41.76%) and HSV Hist. (63.7% to

42.6%).

2. The performance of the proposed method with d1 distance is more as compared to Canberra,

Euclidean and Manhattan distances in terms of average retrieval precision.

From Tables I to II, Fig. 2, Fig. 3 and above observations, it is clear that the proposed method is

outperforming the other existing techniques in terms of ARR and ARP. Fig. 4 illustrates the query

results of the proposed method on Corel-1000 database (top left image is the query image).


Fig. 3: Comparison of proposed method with various distance measures on Corel-1000 image database.

Table I: Performance of various methods in terms of Group Precision on Corel-1000 database.

Precision (%)

Category     Jhanwar et al.   RGB Hist.   HSV Hist.    PM
Africans          53.15          84.1        66.7      75.0
Beaches           43.85          52.7        32.2      49.6
Buildings         48.7           52.5        56.8      44.8
Buses             82.8           52          83.4      65.6
Dinosaurs         95             100         99.8     100.0
Elephants         34.85          69          50.65     70.1
Flowers           88.35          82.6        66.2      87.4
Horses            59.35          91.3        87.25     94.6
Mountains         30.8           33.1        24.75     48.3
Food              50.4           77          69.95     70.8
Average           58.72          62.14       63.77     64.1

Table II: Performance of various methods in terms of Group Recall on Corel-1000 database.

Recall (%)

Category     Jhanwar et al.   RGB Hist.   HSV Hist.    PM
Africans          32.21          48.33       42.94     44.53
Beaches           29.04          26.38       19.4      25.69
Buildings         27.7           24.38       34.99     23.38
Buses             48.66          34.45       63.77     43.57
Dinosaurs         81.44          98.37       90.35     98.35
Elephants         21.42          36.48       29.01     33.92
Flowers           63.53          49.46       32.86     62.72
Horses            35.84          45.29       50.82     49.45
Mountains         21.75          17.5        14.55     23.24
Food              29.02          37.01       47.82     38.09
Average           39.06          41.76       42.65     44.29


Fig. 4: Two query results of the proposed method on Corel-1000 image database.

V. CONCLUSIONS AND FUTURE WORK

A new image indexing and retrieval algorithm using localized RGB histograms is proposed in this paper. The performance of the proposed method is tested through experiments on the Corel-1000 natural image database. The results show a significant improvement in terms of precision, recall, average retrieval rate and average retrieval precision as compared to other existing techniques on the Corel-1000 image database.

In this paper, we propose a new color feature for image retrieval. The performance of the CBIR system can be further improved by integrating the proposed color feature with texture features such as local binary patterns (LBP), local ternary patterns (LTP), local derivative patterns (LDP) and local tetra patterns (LTrP).

REFERENCES

[1] Y. Rui and T. S. Huang, Image retrieval: Current techniques, promising directions and open issues, J. Vis.

Commun. Image Represent., 10 (1999) 39–62.

[2] W.M. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, Content-based image retrieval at the end

of the early years, IEEE Trans. Pattern Anal. Mach. Intell., 22 (12) 1349–1380, 2000.

[3] M. Kokare, B. N. Chatterji, P. K. Biswas, A survey on current content based image retrieval methods,

IETE J. Res., 48 (3&4) 261–271, 2002.

[4] Ying Liu, Dengsheng Zhang, Guojun Lu, Wei-Ying Ma, A survey of content-based image retrieval with

high-level semantics, Elsevier J. Pattern Recognition, 40, 262-282, 2007.

[5] M.J. Swain, D.H. Ballard, “Color indexing,” Int. J. Comput. Vis. 7, pp.11–32, 1991.

[6] M. Flickner, H. Sawhney, W. Niblack, J. Ashley, Q. Huang, B. Dom, M. Gorkani, J. Hafner, D. Lee, D.

Petkovic, D. Steele, P. Yanker, “Query by image and video content: the QBIC system,” IEEE Comput. 28

(9), pp.23–32, 1995.

[7] C. Carson, M. Thomas, S. Belongie, J.M. Hellerstein, J. Malik, “Blobworld: A system for region-based image

indexing and retrieval,” Proceedings of the SPIE, Visual Information Systems, Netherland, pp. 509–516,

1999.

[8] M. Stricker, A. Dimai, “Spectral covariance and fuzzy regions for image indexing,” Mach. Vis. Appl. 10,

pp. 66–73, 1997.


[9] J. Huang, S.R. Kumar, M. Mitra, W.J. Zhu, R. Zabih, “Image indexing using color Correlogram,”

Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Juan, pp. 762–768,

1997.

[10] J. Huang, S.R. Kumar, M. Mitra, W.J. Zhu, R. Zabih, “Spatial color indexing and applications,” Int. J.

Comput. Vis. 35 (3), pp. 245–268, 1999.

[11] F. Mahmoudi, J. Shanbehzadeh, A.M. Eftekhari-Moghadam, H. Soltanian-Zadeh, “Image retrieval based

on shape similarity by edge orientation auto-correlogram,” Pattern Recognition 36 (2), pp. 1725–1736,

2003.

[12] H.H.S. Ip, D. Shen, “An affine-invariant active contour model (AI-snake) for model-based segmentation,”

Image Vis. Comput. 16 (2), pp. 135–146, 1998.

[13] Sajjanhar, G. Lu, “A grid based shape indexing and retrieval method,” Special issue of Austral. Comput. J.

Multimedia Storage Archiving Systems 29(4), pp.131–140, 1997.

[14] Liu, F., Picard, R.W., 1996. Periodicity, directionality, and randomness: Wold features for image modeling

and retrieval. IEEE Trans. Pattern Anal. Machine Intell. 18, 722–733.

[15] J. R. Smith and S. F. Chang, Automated binary texture feature sets for image retrieval, Proc. IEEE Int.

Conf. Acoustics, Speech and Signal Processing, Columbia Univ., New York, (1996) 2239–2242.

[16] A. Ahmadian, A. Mostafa, An Efficient Texture Classification Algorithm using Gabor Wavelet, 25th Annual
International Conf. of the IEEE EMBS, Cancun, Mexico, (2003) 930-933.

[17] M. N. Do and M. Vetterli, “The contourlet transform: An efficient directional multi-resolution image

representation,” IEEE Trans. Image Process., vol. 14, no. 12, pp. 2091–2106, 2005.

[18] M. Unser, Texture classification by wavelet packet signatures, IEEE Trans. Pattern Anal. Mach. Intell., 15

(11): 1186-1191, 1993.

[19] S. Manjunath and W. Y. Ma, Texture Features for Browsing and Retrieval of Image Data, IEEE Trans.

Pattern Anal. Mach. Intell., 18 (8): 837-842, 1996.

[20] M. Kokare, P. K. Biswas, B. N. Chatterji, Texture image retrieval using rotated Wavelet Filters, Elsevier J.

Pattern Recognition Letters, 28: 1240-1249, 2007.

[21] M. Kokare, P. K. Biswas, B. N. Chatterji, Texture Image Retrieval Using New Rotated Complex Wavelet

Filters, IEEE Trans. Systems, Man, and Cybernetics, 33 (6): 1168-1178, 2005.

[22] M. Kokare, P. K. Biswas, B. N. Chatterji, Rotation-Invariant Texture Image Retrieval Using Rotated

Complex Wavelet Filters, IEEE Trans. Systems, Man, and Cybernetics, 36 (6): 1273-1282, 2006.

[23] L. Birgale, M. Kokare, D. Doye, Color and Texture Features for Content Based Image Retrieval,

International Conf. Computer Graphics, Image and Visualisation, Washington, DC, USA, (2006) 146 – 149.

[24] Subrahmanyam, A. B. Gonde and R. P. Maheshwari, Color and Texture Features for Image Indexing and

Retrieval, IEEE Int. Advance Computing Conf., Patiala, India, (2009) 1411-1416.

[25] N. Jhanwar, S. Chaudhuri, G. Seetharaman, and B. Zavidovique, Content based image retrieval using

motif co-occurrence matrix, Image and Vision Computing 22, (2004) 1211–1220.

[26] Corel–1K image database. [Online]. Available: http://wang.ist.psu.edu/docs/related.shtml.

AUTHORS

K. Prasanthi Jasmine received her B.Tech in Electronics & Communication Engineering

and M.Tech in Digital Systems from Regional Engineering College(Now NIT), Warangal,

Andhra Pradesh, and Osmania University College of Engineering, Osmania University,

Andhra Pradesh, India in the years 2000 and 2003 respectively. Currently, she is pursuing

Ph.D from Andhra University, A.P, India. Her major fields of interest are Image Retrieval,
Digital Image Processing and Pattern Recognition.

P. Rajesh Kumar received his M.Tech, Ph.D degrees from Andhra University,

Vishakhapatnam, India. He is currently working as professor, Department of Electronics &

Communication Engineering, Andhra University College of Engineering, Visakhapatnam,

Andhra Pradesh. He is also Assistant Principal of Andhra University college of

Engineering, Visakhapatnam, Andhra Pradesh. He has twenty years' experience of teaching undergraduate and postgraduate students and has guided a number of postgraduate theses. He has published twenty research papers in national and international journals and conferences.

Presently he is guiding twelve Research scholars. His research interests are digital signal and image processing,

computational intelligence, human computer interaction, radar signal processing.


WITRICITY FOR WIRELESS SENSOR NODES

M. Karthika1 and C. Venkatesh2
1PG Scholar, Erode Builder Educational Trust’s Group of Institutions, Kangayam, Tirupur, Tamilnadu, India
2Professor, ECE, Erode Builder Educational Trust’s Group of Institutions, Kangayam, Tirupur, Tamilnadu, India

ABSTRACT

A major challenge in wireless sensor networks is to develop a way of supplying power to sensor nodes in an efficient and reliable manner. This paper explores a potential solution to this challenge by transferring energy using Radio Frequency (RF) signals. The RF signal is generated using an oscillator, amplified using a power amplifier, and transmitted through an antenna. The wirelessly transmitted RF energy can be captured by the receiving antenna, transformed into DC power by a rectifying circuit, and stored in a battery to provide the required energy to the sensor node. Such a system has the advantage of eliminating battery replacement in the sensor nodes.

KEYWORDS: Wireless Power Transmission, Wireless Electricity, Wireless Sensor Networks

I. INTRODUCTION

Energy constraints are widely regarded as a fundamental limitation of wireless sensor networks. For

sensor networks, a limited lifetime due to battery constraints poses a performance bottleneck and a barrier to large-scale deployment.

Recently, wireless power transfer has emerged as a promising technology to address energy and

lifetime bottlenecks in a sensor network. As wireless sensor devices become pervasive, charging

batteries for these devices has become a critical problem. Existing battery charging technologies are

dominated by wired technology, which requires a wired power plug to be connected to an electrical

wall outlet. Wireless power transfer (WPT) achieves the same goal but without the hassle of wires.

A small, low-cost wireless sensor node is important for sensing the environment. However, the need to frequently replace its battery has always been a problem, which has limited the use of WSNs. WSNs based on energy harvesting are partly in practical use. The use of solar energy in energy-harvesting WSNs has increased in practical applications because solar panels are easily available and have a higher energy density than other energy-harvesting techniques; this high energy density allows the development of smaller sensor nodes. However, solar power strongly depends on sunlight and can therefore hardly harvest energy during the night, and the amount of harvested energy depends on the weather.

RF-based wireless power transfer is one of the key techniques used to solve this problem. This system focuses on using an ambient RF field as an energy source to power wireless sensor nodes. The use of this otherwise unutilized energy as a power source will not only reduce the battery replacement cost, but also enable long-period operation of WSNs.

1.1 Related Works

In paper [1], an energy management technique, the Improved Energy Efficient Ant Based Routing Algorithm, which improves the lifetime of a sensor network, was proposed. Using the harvested energy, the time for charging the battery powering the sensor nodes was drastically reduced, with intervals of 91.9 hrs required to recharge the battery. Paper [2] studied a power and data transmission system for strain gauge sensors. Two kinds of receiver antennas (dipole and patch) were evaluated in terms of


return loss and peak received power; the results show that the dipole antenna has better impedance matching with the conversion circuit than the patch antenna. The rectenna optimization, channel modeling and design of a transmit antenna are discussed in paper [3]. Paper [5] reviewed the history of wireless power transfer and described its recent developments; it showed how such technologies can be applied to sensor networks and address their energy constraints. Paper [7] presented the concept of transmitting power without using wires, i.e., transmitting power as microwaves from one place to another, in order to reduce the cost and the transmission and distribution losses. Paper [8] aimed at designing a prototype wireless power transmission system, and a detailed analysis of all the system components was discussed.

This paper is organized as follows. A brief description of the existing and proposed systems is presented in Section 2. Section 3 discusses the transmitter and receiver circuits and the output of each circuit. The results are discussed in Section 4. Finally, some concluding remarks are highlighted in Section 5.

II. SYSTEM OVERVIEW

2.1 Existing System

Compared with sensor-node or battery replacement approaches, wireless charging technology allows a mobile charger to transfer energy to sensor nodes wirelessly, without requiring accurate localization of the sensor nodes or strict alignment between the charger and the nodes [10].

The system consists of a moving vehicle charger (MVC), i.e. a mobile robot carrying a wireless power charger; a network of sensor nodes equipped with wireless power receivers; and an energy station that monitors the energy status of the network and directs the MVC to charge sensor nodes.

Figure 1. Energy transfer through a moving vehicle charger

A network of sensor nodes equipped with wireless power receivers: the sensor nodes perform application tasks such as environment monitoring, generate sensory data, and periodically report the data to the sink. In addition, they monitor the voltage readings of their own batteries, estimate their energy consumption rates, derive their own lifetimes from these, and report this information to the sink periodically.

An energy station: it is responsible for monitoring the energy status of the sensor nodes and deciding the power-charging sequences to be executed by the mobile charger (as shown in Figure 1).

Problems in the existing system: the existing system has the overhead of controlling and maintaining the MVC. The MVC spends more than 30% of its energy on travelling between the sensor nodes and charging, and the distance between the MVC and a sensor node must be kept very small (5-40cm).

2.2 Proposed System


To overcome the energy constraint problem in WSNs, WiTricity (Wireless Electricity) is proposed. Energy is transmitted from the energy station to the sensor nodes without wire connections, i.e. directly from the energy station to the sensor nodes. WiTricity is effectively achieved by an RF-based wireless power transfer system: a wireless power transmission system using radio frequency is designed for sensor nodes that require low power to operate. RF-based wireless power transmission focuses on passively powered sensor networks by improving the conversion efficiency, and attempts to maximize the output power by designing efficient antennas. Antennas can be made more directional, allowing longer-distance power beaming.

2.2.1 Wireless Power Transfer System

Figure 2. Transmitter Block Diagram

The wireless power transfer system consists of two parts: the transmitter side and the receiver side. The block diagram of a typical transmitter unit of a WPT system is shown in Figure 2. There is always a large amount of signal power loss while the RF signal propagates through free space. To compensate for this loss at the receiver side, the transmitter of the wireless power transfer system should be capable of transmitting high power; for this reason the transmitting antenna should have high performance [6].
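The scale of this free-space loss can be estimated with the standard Friis transmission equation, which the paper does not state explicitly; in the rough Python sketch below, the antenna gains are illustrative assumptions:

import math

def friis_received_power(p_tx_w, g_tx, g_rx, freq_hz, distance_m):
    # standard far-field Friis estimate: P_rx = P_tx*G_tx*G_rx*(lambda/(4*pi*d))**2
    wavelength = 3.0e8 / freq_hz
    return p_tx_w * g_tx * g_rx * (wavelength / (4.0 * math.pi * distance_m)) ** 2

# illustrative numbers only: 6 W at 56 MHz over 2 m with unity-gain whip antennas;
# at this wavelength (~5.4 m) a 2 m link is inside the near field, so the
# far-field figure is only a rough indication
print(friis_received_power(6.0, 1.0, 1.0, 56e6, 2.0))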

The receiver section contains the receiving antenna, rectifier, DC-DC converter and battery, as shown in Figure 3. The receiver's main function is to receive the RF signal from the transmitter and convert it to a DC signal, which is used to charge the connected device's battery. A simple battery-charging principle is to run current through the battery, applying a voltage difference between its terminals to reverse the chemical process. In this paper, the current is obtained from the radio-wave signal coming from the antenna [8].

Figure 3. Receiver Block Diagram

III. SYSTEM IMPLEMENTATION

System implementation is based on designing the transmitter and the receiver. The transmitter part consists of an oscillator, a power amplifier and an antenna. The receiver part consists of a receiving antenna, a rectifier, a DC-DC converter and a battery.

3.1 Oscillator


Figure 4. Oscillator Circuit Diagram

The oscillator circuit is based on the Hartley oscillator, a particularly useful circuit for producing good-quality sine wave signals in the RF range. The Hartley design can be recognized by its use of a tapped inductor. The frequency of oscillation can be calculated in the same way as for any parallel resonant circuit, using:

f_r = 1 / (2π√(LC)), where L = L1 + L2.

A 12V power supply is given to the oscillator circuit. The 82pF capacitor and the 120uH inductor, which has 25 turns of 6mm diameter, act as the tank circuit; the tank circuit is responsible for the oscillation frequency. The center tap of the inductor is connected to the amplifier circuit, so that, with two inductors and one capacitor, the circuit works on the principle of the Hartley oscillator. The oscillator is designed for 56MHz with 2V amplitude. A 1K resistor and a 0.5nF capacitor are used to provide the biasing voltage to the transistor.
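As a quick numerical check, the idealised resonance formula can be evaluated directly; note that it ignores the tap position, mutual coupling and stray reactances, so it need not match the oscillator's actual operating frequency:

import math

def hartley_resonant_frequency(l1_henry, l2_henry, c_farad):
    # fr = 1 / (2*pi*sqrt(L*C)) with L = L1 + L2, mutual coupling ignored
    return 1.0 / (2.0 * math.pi * math.sqrt((l1_henry + l2_henry) * c_farad))

# quoted tank values: 120 uH total inductance (taken here as split at the
# centre tap) and 82 pF; the ideal LC formula gives roughly 1.6 MHz for these values
print(hartley_resonant_frequency(60e-6, 60e-6, 82e-12))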

3.2 Pre-Amplifier

The output from the oscillator circuit, i.e. 56MHz with 2V amplitude, is given as the input signal to the power amplifier circuit, and a 12V power supply is given to the amplifier. The BFW16A is an RF transistor rated to operate at a center frequency of 100MHz. This amplifier circuit is biased as a voltage divider by the 47K and 5.6K resistors, and a gain of 2 is achieved. The BFW16A is a multi-emitter silicon planar epitaxial NPN transistor with excellent intermodulation properties and high power gain; for this reason, the BFW16A was chosen.

Figure 5. Power Amplifier Circuit Diagram


Oscillator, Pre-Amplifier Output

In Figure 6, channel 1 shows the output of the oscillator and channel 2 shows the output of the amplifier; the scale for both channels is 5V. The oscillator output was measured as 5V and the amplifier output was measured as 13V.

Figure 6. Oscillator and Amplifier Output

3.3 Power Amplifier

A push-pull amplifier is an amplifier whose output stage can drive a current in either direction through the load. Advantages of the push-pull amplifier are low distortion, absence of magnetic saturation in the coupling transformer core, and cancellation of power supply ripples, which results in the absence of hum. The push-pull power amplifier is shown in Figure 7. The MJE15032 and MJE15033 are 8.0 A, 250 V, 50 W complementary silicon power transistors with high DC current gain and a high current gain-bandwidth product; for this reason, the MJE15032 and MJE15033 were chosen. The output from the pre-amplifier, with a frequency of 62 MHz and a voltage of 6V, is given to the power amplifier.

Figure 7. Push-Pull Power Amplifier Circuit Diagram

Power Amplifier Output


The push-pull power amplifier circuit is shown in Figure 8. 10V was given to this circuit as the input power supply. The output current and voltage were measured as 300mA and 20V respectively, so the total power achieved from the circuit was 6W.

Figure 8. Power Amplifier Output

3.4 Antenna

The telescopic (whip) antenna is used as both the transmitting and the receiving antenna, as shown in Fig. 9. The

whip antenna can be considered half of a dipole antenna, and like a vertical dipole has an

omnidirectional radiation pattern, radiating equal radio power in all azimuthal directions

(perpendicular to the antenna's axis), with the radiated power falling off with elevation angle to zero

on the antenna's axis.

Figure 9. Telescopic Antenna

3.5 Rectifier

A bridge rectifier is used to convert the RF energy to a DC voltage, as shown in Fig. 10. Rectenna elements are usually arranged in a multi-element phased array with a mesh-pattern reflector element to make the array directional. A simple rectifier can be constructed from a switching diode placed between antenna dipoles; the diode rectifies the current induced in the antenna by the RF signal. Switching diodes are used because they have the lowest voltage drop and highest speed, and therefore waste the least power in conduction and switching. The rectifier is highly efficient at converting RF energy to electricity.


Figure 10. Rectifier Circuit Diagram

The telescopic antenna is used to receive the RF signal. This AC signal is given to the bridge rectifier

circuit. The bridge rectifier has two ceramic capacitors of value 0.1pF, two electrolytic capacitors of value 50uF and four 1N4148 PN diodes. This converts the signal from the antenna to a DC voltage.

3.6 DC-DC Converter

The output voltage of the rectifying circuit is too low for charging a battery. A DC-to-DC voltage

converter will adapt the rectified voltage to a stable output voltage high enough for charging a

rechargeable battery [3].

Figure 11. DC to DC Converter Circuit Diagram

The DC to DC converter is used to boost the DC signal to a fixed DC voltage. A 1V DC voltage is given to the DC to DC converter circuit and the output DC voltage is measured as 9V, owing to the LT1073 DC to DC module. The LT1073 is a gated-oscillator switcher; this type of architecture has a very low supply current because the switch is cycled only when the feedback pin voltage drops below the reference voltage.
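The gated-oscillator behaviour can be illustrated with a toy discrete-time loop; all the numbers below are illustrative and are not LT1073 characteristics:

def gated_oscillator_boost(v_ref=9.0, steps=2000, charge_step=0.05, load_droop=0.01):
    # crude model: switching bursts occur only while the feedback voltage is
    # below the reference, so the output hovers just around v_ref
    v_out, trace = 0.0, []
    for _ in range(steps):
        if v_out < v_ref:          # feedback below reference: oscillator gated on
            v_out += charge_step   # one burst of switching lifts the output
        v_out -= load_droop        # the load continuously discharges the output
        trace.append(v_out)
    return trace

print(round(gated_oscillator_boost()[-1], 2))   # settles close to 9 V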

DC to DC Converter Output


Figure 12. DC to DC Converter Output

Figure 12 shows the simulation output of the DC to DC converter module. A 1V input was given to the DC to DC converter and the output was measured as 9V; in the practical experiment, 8V was measured from the DC to DC converter circuit.

IV. RESULT

In this paper, the transmitter produced a radio signal with a voltage of 20V and a current of 300mA. This power was transmitted through the telescopic antenna. On the receiver side, another antenna was placed at a distance of 2m from the transmitter to receive the radio signal, which was fed to the rectifier circuit. The output of the rectifier was measured as 1.15V and was given to the DC to DC converter, whose output was measured as 8V.

V. CONCLUSION AND FUTURE WORK

The WiTricity for wireless sensor nodes project proposes a wireless charging system for sensor networks. This project presents the design and implementation of the oscillator, pre-amplifier, push-pull power amplifier, rectifier and DC to DC converter. The transmitter produced a radio signal of 6W, which was transmitted over a distance of 2m, and the output of the rectifier circuit was measured as 1.15V.

The design of the antennas at the transmitter and the receiver plays an important role in transferring the 1V voltage level to the DC to DC converter. The design of either a T or π matching network at both the transmitter and the receiver end also plays a major role in the power transfer. Integration of all the stages leads to impedance mismatch, which can be overcome using EDA tools such as Advanced Wave Research (AWR), HFSS, etc. A very low power DC to DC converter can be designed to obtain a reasonable voltage and current.

REFERENCES

[1]. A.M. Zungeru, L.-M. Ang, S.R.S. Prabaharan, and K.P. Seng, “Radio Frequency Energy
Harvesting and Management for Wireless Sensor Networks”, in: Energy Scavenging and Optimization
Techniques for Mobile Devices, V. Hrishikesh and G.-M. Mountean (Eds.), Chapter 13, pp. 341-367,
CRC Press, Taylor and Francis Group, USA, 2011. (ISBN-13: 978-1439859896).

[2]. Guocheng Liu, Nezih Mrad, George Xiao, Zhenzhong Li, and Dayan Ban, “RF-based Power

Transmission for Wireless Sensors Nodes”, SMART MATERIALS, STRUCTURES & NDT in

AEROSPACE Conference, NDT in Canada 2011, 2 - 4 November 2011, Montreal, Quebec, Canada.

[3]. Hubregt J. Visser, “Indoor Wireless RF Energy Transfer for Powering Wireless Sensors”,

Radioengineering, Dec. 2012, Vol. 21, Issue 4, pp. 963-973.

[4]. Ke Wu, Debabani Choudhury, Hiroshi Matsumoto, “Wireless Power Transmission, Technology, and

Applications”, Proceedings of the IEEE, Vol. 101, No. 6, June 2013.


[5]. Liguang Xie, Yi Shi, Y. Thomas Hou, and Wenjing Lou, “Wireless power transfer and applications to

sensor networks”, IEEE Wireless Communications Magazine (accepted Dec. 2012).

[6]. Md M. Biswas, Member, IACSIT, Umama Zobayer, Md J. Hossain, Md Ashiquzzaman, and Md Saleh,

“Design a Prototype of Wireless Power Transmission System Using RF/Microwave and Performance

Analysis of Implementation”, IACSIT International Journal of Engineering and Technology, Vol. 4, No.

1, February 2012.

[7]. M. Venkateswara Reddy, K. Sai Hemanth, Ch. Venkat Mohan, “Microwave Power Transmission – A Next

Generation Power Transmission System” IOSR Journal of Electrical and Electronics Engineering (IOSR-

JEEE), e-ISSN: 2278-1676 Volume 4, Issue 5 (Jan. - Feb. 2013), PP 24-28.

[8]. Olakanmi O. Oladayo, Department of Electrical and Electronic Engineering, University of Ibadan,

Ibadan Nigeria, “A Prototype System for Transmitting Power through Radio Frequency Signal for

Powering Handheld Devices”, International Journal of Electrical and Computer Engineering (IJECE)

Vol.2, No.4, August 2012.

[9]. P. Powledge, J.R. Smith, A. Sample, A. Mamishev, S. Roy, "A wirelessly powered platform for sensing

and computation,” Proceedings of Ubicomp 2006: 8th International Conference on Ubiquitous

Computing, Orange County, CA, USA, September 17-21, 2006, pp. 495-506.
[10]. Yang Peng, Zi Li, Wensheng Zhang, Daji Qiao, "Prolonging Sensor Network Lifetime Through Wireless

Charging", In Proc. IEEE RTSS'10, San Diego, CA, November 30 - December 3, 2010.

BIBLIOGRAPHY OF AUTHORS

M KARTHIKA received her B.Tech in Information Technology from Bannari Amman

Institute of Technology, Sathyamangalam, in 2010. She started her career as a Test
Engineer in Wipro Technologies, Chennai (2010 to 2012). She is presently a student of M.E

Computer and Communication Engineering in Erode Builder Educational Trust’s Group of

Institutions, Kangayam.

C. VENKATESH, graduated in ECE from Kongu Engineering College in the year 1988,

obtained his master degree in Applied Electronics from Coimbatore Institute of

Technology, Coimbatore in the year 1990. He was awarded PhD in ECE from Jawaharlal

Nehru Technological University, Hyderabad, in 2007. He has two decades of experience, including around 3 years in industry, and 18 years of teaching experience. He is supervising 12 Ph.D. research scholars. He is a Member of IEEE, CSI and ISTE, a Fellow of the Institution of Engineers (FIE) and a Fellow of IETE. He is presently a Professor and Dean, Faculty of Engineering, in

Erode Builder Educational Trust’s Group of Institutions.


STUDY OF SWELLING BEHAVIOUR OF BLACK COTTON SOIL

IMPROVED WITH SAND COLUMN

Aparna1, P.K. Jain2 and Rakesh Kumar3
1M.Tech. Student, Department of Civil Engineering, M.A.N.I.T. Bhopal, (M.P.), India
2Professor, Department of Civil Engineering, M.A.N.I.T. Bhopal, (M.P.), India
3Assistant Professor, Department of Civil Engineering, M.A.N.I.T. Bhopal, (M.P.), India

ABSTRACT

This paper presents the results of an experimental study conducted to evaluate the effect of the size of a sand column on the swelling of expansive soil. Sand columns of diameters 25mm, 37.5mm and 50mm were made in black cotton soil test beds in a cylindrical mould of diameter 100mm. The test beds were prepared at different water contents (14, 18, 22, 26, 30, 36, 40 and 44% by weight of dry soil), keeping the dry density of the soil constant. The soil with the sand column was submerged and the swelling of the composite material was observed. The test results show that the presence of a sand column in expansive black cotton soil reduces the swelling. The reduction in swelling depends on the size of the sand column and the initial moisture content of the soil. A column of diameter 50mm reduces swelling more than the smaller ones. For 14% initial moisture content in the black cotton soil, the sand columns of diameters 25mm, 37.5mm and 50mm reduced swelling by 11.5%, 23% and 42% respectively in comparison to that exhibited by the raw soil. Soil with high initial moisture content shows less swelling than soil with low moisture content. Thus, by manipulating the initial moisture content and the diameter of the sand column, expansive soil reinforced with sand columns can be made volumetrically stable.

KEYWORDS: Sand column, swelling, expansive soil, composite ground, ground improvement.

I. INTRODUCTION

Expansive soils are encountered in arid and semi-arid regions of the world, where annual evaporation

exceeds annual precipitation. In India, expansive soils cover about 20% of the total land area (Ranjan

and Rao 2005, Shelke and Murthy 2010). These soils increase in volume on absorbing water during

rainy seasons and decrease in volume when the water evaporates from them (Chen, 1988). If the volume increase (swell) is resisted by a structure resting on the soil, a vertical swelling pressure is exerted by the soil on the structure. This pressure, if not controlled, may cause uplift and distress in the structure (Shelke and Murthy 2010). The strength loss on wetting is another severe problem with such soils. Due to this peculiar behaviour, many civil engineering structures constructed on expansive soils get severely distressed. Pavements, in particular, are susceptible to damage by expansive soils because they are lightweight and extend over large areas. Dwelling houses transferring light loads to such soils are also subjected to severe distress. Similarly, earth structures such as embankments and canals built with these soils suffer slips and damage (Mishra et al., 2008).

Soil stabilization techniques are widely used for stabilizing expansive soils. Physico-chemical stabilization using lime, cement, fly ash, enzymes and other chemicals controls the swelling of expansive soil (Lopez-Lara et al., 1999); in these techniques, uniform mixing of the stabilizers in the soil must be ensured, or else erratic results may come. Mechanical stabilization of soil (without altering its chemical properties) includes controlled compaction (Sridharan and Gurtug, 2004), pre-wetting (Chen, 1988), mixing with sand (Sridharan and Gurtug, 2004; Mishra et al., 2008), using cohesive non-swelling soil (Katti et al., 1983), EPS geofoam (Shelke and Murthy 2010), and reinforcing the soil


using geosynthetics (Sharma and Phanikumar, 2005; Ikizler et al., 2009) or polypropylene fibers (Muthukumar 2012). Special foundation techniques such as lime under-reamed piles, belled piers and granular pile anchors have also been suggested (Phanikumar 1997). Recently, Kumar and Jain (2013) and Kumar (2014) have demonstrated that the concept of the granular pile,

that is popular in improving the weak marine clays, could be utilized to improve the load carrying

capacity of soft expansive black cotton soils too. In the granular pile, also known as stone column,

technique about 10 to 35% weak soil is removed and replaced with granular material in the form of

piles. Kumar (2014) performed model test in the laboratory on granular piles of sand constructed in

soft expansive soil of different UCS values. End bearing granular Piles were casted in the soil and the

load test were performed. The test results showed that load carrying capacity of a footing resting on

granular pile is significantly more than the corresponding value for the footing placed directly on the

soft soil bed. The increase in load carrying capacity is observed for all soil consistencies of the

expansive soil. It is concluded from his study that loss in strength and excessive settlement of the

expansive soil due to wetting could be minimized to a large extent by installation of granular piles in

the soil.

As reported above, besides the strength loss, swelling and volume instability are the other severe problems with these soils; the granular pile studies cited above addressed strength, while the effect of sand columns on swelling remained to be quantified. The present work is an attempt to fill this gap. The size of the sand column and the consistency of the soil play an important role in changing the behaviour of the composite ground; hence these two aspects are varied in the present work, and the influence of the sand column diameter and the initial moisture content on the swelling of an expansive black cotton soil has been studied. The details of the experimental program, the results of the tests and the conclusions drawn from the study are described below.

II. EXPERIMENTAL PROGRAM

The experiments were carried out in a cylindrical mould. The soil beds of black cotton soil were

prepared in the mould at a dry density of 15kN/m3. The initial mixing water content in the soil and the diameter of the sand column were the variables of the study. The swelling of the composite soil (black cotton soil reinforced with a sand column) on wetting was recorded. Soil beds were prepared with 14%, 18%, 22%, 26%, 30%, 36%, 40% and 44% water content, and for each test bed three diameters of sand column (25 mm, 37.5 mm, 50 mm) were installed. One series of swell measurements was also taken for black cotton soil beds prepared at the above water contents, i.e. without sand columns.
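The quantity of mixing water for each bed follows directly from the target dry density. A small illustrative sketch (mould dimensions and density from the text; the water-content definition w = m_w / m_s is standard):

import math

mould_dia_m, mould_height_m = 0.100, 0.128
dry_unit_weight_kn_m3 = 15.0

volume_m3 = math.pi * (mould_dia_m / 2) ** 2 * mould_height_m
dry_soil_mass_kg = dry_unit_weight_kn_m3 * 1000.0 / 9.81 * volume_m3   # W = gamma*V, m = W/g

for w_percent in (14, 18, 22, 26, 30, 36, 40, 44):
    water_kg = (w_percent / 100.0) * dry_soil_mass_kg   # w = m_w / m_s
    print(f"{w_percent:2d}% water content: add {water_kg:.3f} kg water to "
          f"{dry_soil_mass_kg:.3f} kg dry soil")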

III. MATERIAL PROPERTIES

Two basic materials were used for this study: the black cotton soil, representing the soft soil to be improved, and fine river sand as the column-forming material. The properties of these materials are as follows:

(i) Black cotton soil: The black cotton soil was taken from the M.A.N.I.T. Bhopal campus. Its properties are given in Table 1.

Table 1: Properties of Black Cotton Soil

Properties Values

Liquid limit (L.L.), % 54

Plastic limit (P.L.)% 29

Plasticity index (P.I.), % 25

Maximum dry density (MDD), kN/m3 15

Optimum water content (OMC), % 23.5

Differential free swell (DFS), % 40

Specific gravity (G) 2.64

Clay and silt content, % 95.0

Soil Classification (IS:1498-1970) CH, clay of high plasticity

(ii) Sand: The properties of the river sand used for the sand columns are listed in Table 2.


Table 2: Properties of Sand

Properties Values

Particle size corresponding to 10% finer, D10, mm 0.32

Particle size corresponding to 20% finer, D20, mm 0.41

Particle size corresponding to 30% finer, D30, mm 0.44

Particle size corresponding to 50% finer, D50, mm 0.51

Particle size corresponding to 60% finer, D60, mm 0.54

Coefficient of curvature, CC 1.12

Coefficient of uniformity CU 1.69

Minimum dry density, ρmin, kN/m3 16.00

Maximum dry density, ρmax, kN/m3 17.15

Specific gravity, G 2.76

Brown’s suitability Number (Brown 1976) 8.87

Soil Classification (IS:1498-1970) SP (poorly graded sand)

IV. TEST SETUP AND PROCEDURE

A typical test arrangement with a single column of sand is shown in Fig. 1. A swelling mould of 100 mm diameter and 128 mm height, with one collar and two porous plates, was used. The soil was oven-dried, mixed with a predetermined amount of water, and compacted in three layers to attain a dry density of 15kN/m3. A porous plate with filter paper was placed below the soil sample. Then a hole of the required diameter was formed using an auger and a casing pipe. The granular material (fine river sand) was filled into the hole and compacted in layers to obtain the required density. A porous plate with filter paper was placed above the soil sample and the collar was fitted to the mould. A heave stake was placed on the soil sample inside the swelling mould and a dial gauge was fixed on top of the heave stake to measure the swelling. This entire arrangement was placed inside a tank filled with water. The swelling was monitored continuously by taking dial gauge readings from time to time until the dial reading ceased to change. Tests were conducted for all the specimens prepared with the different water contents and the different sizes of sand columns.

V. RESULTS AND DISCUSSIONS

The variation of swelling with time, for 14% water content in the soil, is plotted in Fig. 2. It was observed that swelling continued for about 96 hours, beyond which there was no further change. A similar trend was obtained for test beds at the other initial water contents.

Fig. 1 Experimental setup


The maximum swelling in each case was noted and its variation with the diameter of the sand column is plotted in Fig. 3. It can be observed from this figure that swelling decreases as the diameter of the sand column increases. The sharpest decrease in swelling is observed when the test bed is prepared at the lowest water content of 14%: in this case the 50mm diameter sand column reduces swelling by 42% in comparison to the raw soil, and the corresponding values for the 37.5mm and 25mm diameter columns are 23% and 11.5% respectively.

The reduction in swelling observed on installing the sand columns is mainly due to the replacement of expansive soil by non-expansive sand. A larger-diameter column replaces more soil; hence, for a particular value of the initial water content, the reduction in swelling is greater than that achieved by a smaller column.
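One way to quantify this replacement argument is the plan-area replacement ratio (d/D)^2 of the column in the 100 mm mould. The short sketch below compares it with the swell reductions reported above for 14% water content; the ratio itself is not computed in the paper:

mould_diameter = 100.0                                     # mm
observed_reduction = {25.0: 11.5, 37.5: 23.0, 50.0: 42.0}  # column dia (mm): % reduction

for dia, reduction in observed_reduction.items():
    area_replaced = 100.0 * (dia / mould_diameter) ** 2    # % of plan area replaced
    print(f"{dia:5.1f} mm column: {area_replaced:5.2f}% area replaced, "
          f"{reduction:4.1f}% reduction in swelling")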

Fig. 2 Variation of swelling of soil with time at 14% water content

Fig. 3 Effect of sand column diameter on swelling of expansive soil

Further, there is a significant reduction in swelling with increase in the initial water content of the black cotton soil; Fig. 4 is plotted to show this. It can be noted from this figure that, as the moulding water content (i.e. the initial water content) of the black cotton soil increases, the swelling decreases in all


the cases. This observation is expected because an expansive soil containing a high initial water content has little scope for imbibing more water on submergence, as its mineral particles are already nearly saturated; therefore very small, practically nil, swelling is observed at the high initial moisture content of 44%.

Fig. 4 Variation of swelling with water content for raw soil and sand column reinforced soil

VI. CONCLUSIONS

The results of the testing program show that the installation of a sand column in expansive soil controls the swelling effectively. The size of the sand column and the initial water content of the black cotton soil affect the swelling behaviour: a large sand column reduces swelling more than a smaller one, and swelling is also reduced with increasing water content, with no swelling found in the soil at 44% water content. The reduction in swelling is mainly due to the replacement of expansive soil by non-expansive sand, and also because of the presence of water in the soil. Thus, if sand columns are installed in expansive soils in wet condition, maximum benefit in terms of volume stability can be achieved.

VII. SCOPE FOR FUTURE WORK

Installation of granular piles in soil is easy in comparison to other methods of soil improvement, such as lime or cement stabilization, where proper mixing of the stabilizer with the soil and the depth of soil to be treated pose practical difficulties. The use of the sand column / granular pile technique in swelling soil is a relatively new area. The density and mineral characteristics of the expansive soil, and the properties of the pile-forming granular material, are expected to influence the performance of sand columns in expansive soils. Future research in this area will pave the way to developing a design methodology for mitigating the problems of expansive soil with sand columns with confidence.

REFERENCES

[1]. Brown, R.E. (1976). “Vibration compaction of granular hydraulic fills.” ASCE, National Water Resources

And Ocean Engineering Convention, pp. 1-30.

[2]. Chen, F.H. (1988). “Foundations on Expansive Soils.” Elsevier Scientific Publishing Co., Amsterdam.

[3]. Ikizler, S. B., Aytekin, M. and Vekli, M. (2009). “Reductions in swelling pressure of expansive soil

stabilized using EPS geofoam and sand.” Geosynthetics International, 16(3), 216–221.


[4]. IS: 1498-1970, “Classification and Identification of Soils for General Engineering Purposes”.

[5]. Harishkumar, K. and Muthukkumaran, K. (2011). “Study on swelling soil behaviour and its improvements.”

International Journal of Earth Sciences and Engineering, ISSN 0974-5904, Volume 04, No 06 SPL, October

2011, pp. 19-25

[6]. Katti, R. K., Bhangle, E. S. and Moza, K. K. (1983). “Lateral pressure of expansive soil with and without

cohesive non-swelling soil layer applications to earth pressures of cross drainage structures of canals and key

walls of dams (studies of K0 condition).” Central Board of Irrigation and Power. Technical Report 32, New

Delhi, India.

[7]. Kumar, R. and Jain P. K. (2013). “Expansive Soft Soil Improvement by Geogrid Encased Granular Pile.”

Int. J. on Emerging Technologies, 4(1): 55-61(2013).

[8]. Kumar, R. (2014). “A Study on soft ground improvement using fiber-reinforced granular piles”, Ph. D.

thesis submitted to MANIT, Bhopal (India)

[9]. Lopez-Lara, T., Zepeta- Garrido, J. A. and Castario, V. M. (1999). “A comparative study of the

effectiveness of different additives on the expansion behavior of clays.” Electronic Journal of Geotechnical

Engineering, 4(5), paper 9904.

[10]. Mishra, A. K., Dhawan, S. and Rao, M. S. (2008). “Analysis of swelling and shrinkage behavior of

compacted clays”. Geotechnical and Geological Engineering, 26(3), 289–298

[11]. Muthukumar. M (2012). “Swelling pattern of polypropylene fiber reinforced expansive soils.”

International Journal of Engineering Research and Applications, Vol. 2, Issue 3, May-Jun 2012, pp.1385-1387

[12]. Phanikumar, B.R. (1997). “A study of swelling characteristics and granular pile anchor foundation
systems in expansive soils.” Ph.D. thesis, Jawaharlal Nehru Technological Univ., Hyderabad, India.

[13]. Ranjan, G and Rao, A.S.R. (2005), “Basic and applied soil mechanics”, New Age International (P) Ltd,

New Delhi pp 753.

[14]. Sharma, R. S. and Phanikumar, B. R. (2005). Laboratory study of heave behaviour of expansive clay

reinforced with geopiles. Journal of Geotechnical and Geoenvironmental Engineering, 131 (4), 512–520.

[15]. Shelke, A.P. and Murthy, D.S. (2010). “Reduction of Swelling Pressure of Expansive Soils Using EPS

Geofoam.” Indian Geotechnical Conference (2010).

[16]. Sridharan, A. and Gurtug, Y. (2004). “Swelling behavior of compacted fine-grained soils”. Engineering

Geology, 72(1-2), 9–18.

AUTHORS

Aparna was born in Lucknow (U.P.), India, in 1992. She received the Bachelor degree in

civil engineering from Babu Banarasi Das National Institute of Technology & Management,

in 2012, and she is currently pursuing her Master's degree in geotechnical engineering at Maulana Azad National Institute of Technology, Bhopal, which she will complete in July 2014.

Her research interests include geotechnical engineering.

Pradeep Kumar Jain was born in Jhansi (U.P), India, in 1964. He received his Bachelor's degree in civil engineering from MITS, Gwalior, in 1986 and his Master's degree in construction technology and management from MITS, Gwalior, in 1988. He completed

his PhD from IIT Roorkee, Roorkee, in 1996. His research interests include geotechnical

engineering.

Rakesh Kumar was born in Gajrola (U.P), India, in 1977. He received his Bachelor's degree in civil engineering from MMMEC, Gorakhpur, in 1999 and his Master's degree in geotechnical engineering from IIT Roorkee, Haridwar, in 2001. He completed his PhD

from Maulana Azad National Institute of Technology, Bhopal, in 2014. His research

interests include geotechnical engineering.


EFFECTIVE FAULT HANDLING ALGORITHM FOR LOAD

BALANCING USING ANT COLONY OPTIMIZATION IN CLOUD

COMPUTING

Divya Rastogi and Farhat Ullah Khan
ASET, Amity University, Noida, India

ABSTRACT

Cloud computing is an emerging technology in distributed computing. It is a collection of interconnected virtual machines that facilitates a pay-per-use model according to the demand and requirements of the user. The primary aim of cloud computing is to provide efficient access to remote and geographically distributed resources without losing reliability. In order to keep these virtual machines hassle-free, the workload must be distributed properly (load balancing). Since in cloud computing the processing is done on remote computers, there are more chances of errors, due to the undetermined latency and loss of control over the computing node, so it is very necessary to maintain the reliability of the remote computers. That is why it is mandatory to create a fault-tolerant infrastructure for cloud computing. This paper focuses on balancing the load of the entire system while maintaining its reliability by making it fault tolerant. Our objective is to study the existing ACO approaches and to develop an effective fault-tolerant system using ant colony optimization. The paper describes an algorithm to make the system more reliable and fault tolerant.

KEYWORDS: Ant colony optimization (ACO), cloud computing, fault tolerance, virtual machines.

I. INTRODUCTION

The Internet and its technologies are fast growing and very popular nowadays. With the growing popularity of the Internet, cloud computing has become a hot topic in industry and academia as an emerging computing mechanism. It is expected to provide computing as a utility to meet the everyday needs of the general community [2]. Cloud computing delivers services such as infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). In SaaS, a software application is provided to the user by the cloud provider. In PaaS, an application development platform is provided as a service to the developer to create web-based applications. In IaaS, computing infrastructure is provided as a service to the requester in the form of virtual machines (VMs). The above-mentioned services are made available to customers on a subscription basis using a pay-per-use model, regardless of their location.

Cloud computing is still in its development stage and has many issues and challenges. One of the major issues in cloud computing is fault management.

A fault is a situation in which the system deviates from its expected behavior. Fault management is to manage the fault and take corrective action in order to make the system fault tolerant and reliable. Fault tolerance, or graceful degradation, is the property that enables a system to continue operating properly in the event of the failure of one or more of its components [6]. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, whereas in a naïvely designed system even a small failure can cause total breakdown. Fault management maintains the fault tolerance of the system without degrading its performance.

A fault-managing mechanism enables the system to continue its intended operation, possibly at a reduced level, rather than failing completely when some part of the system fails.


The main aim of fault management is to provide highly reliable computing and the best system throughput [3]. A fault manager handles failure situations and can take corrective actions in order to maintain the flow.

This paper presents an algorithm for fault management during load balancing in an ACO (ant colony optimization) system in cloud computing. The rest of the paper is organized as follows: Section II discusses the literature review, Section III covers the basics of ant colony optimization, Section IV discusses the proposed work, Section V contains the expected outcome, and the last section concludes the paper with future prospects.

II. LITERATURE REVIEW

In this section, we describe related work on fault management in cloud computing environments. The authors of paper [3] modeled a system that tolerates faults and makes decisions on the basis of the reliability of the processing nodes (virtual machines). The scheme is adaptive because the reliability of a virtual machine changes after every computing cycle: if a virtual machine manages to produce a correct result within the time limit, its reliability increases, but if it fails, its reliability decreases. The authors of paper [14] proposed a low-latency fault tolerance (LLFT) system that provides fault tolerance for distributed applications within a local-area network by using a leader-follower replication strategy. It provides application-transparent replication, and its novel system model enables LLFT to maintain a single consistent infinite computation despite faults and asynchronous communication. The authors of paper [15] gave a model named “Vega-warden”, a uniform user management system that supplies a global user space for different virtual infrastructures and application services in a cloud computing environment. This model is designed for virtual-cluster-based cloud computing environments, to resolve the usability and security issues that arise at the time of infrastructure sharing.

Fumio Machida [12] proposed a component-based availability modeling framework known as Candy, which creates a comprehensive availability model in a semi-automatic fashion from a description in a system modeling language; the basis of this model is the high-availability assurance of cloud services. The authors of paper [11] proposed a model to overcome the limitations of existing on-demand service methodologies. To achieve reliability and resilience, they proposed an innovative perspective on creating and managing fault tolerance, by which a user can specify and apply the desired level of fault tolerance without requiring any knowledge of its implementation. The FTM architecture can primarily be viewed as an assemblage of several web service components, each with a specific functionality. The authors of paper [8] gave a model named Magi-Cube to provide high reliability and low redundancy in a storage architecture for cloud computing. The system is built on top of HDFS and uses HDFS as the storage system for file read/write and metadata management; a file scripting and repair component is also built to work in the background independently. Magi-Cube is constructed with the view that high reliability, high performance and low cost (space) are the three conflicting requirements of a storage system. The authors of paper [16] modeled “FT-Cloud”, a component-ranking-based framework and architecture for building cloud applications. FT-Cloud employs the component invocation structure and frequency to identify significant components, and an algorithm automatically determines the fault tolerance strategy.

Paper [8] suggested an Ant Colony Optimization (ACO) algorithm for load balancing. In this approach, incoming ants update the entries in the pheromone table of a node; for instance, an ant traveling from a source to a destination will update the corresponding entry in the pheromone table. If an ant is at a choice point where there is no pheromone, it makes a random decision. Here, tasks are assigned to servers with non-preemptive scheduling, so the waiting time of high-priority tasks is increased.

III. ANT COLONY OPTIMIZATION

Dorigo, M. introduced the ant algorithm, a new heuristic algorithm for solving combinatorial optimization problems, based on the behavior of real ants, in 1996 [14][15]. ACO is inspired by ant colonies that work together in foraging. Investigations show that ants have the collective intelligence to find an optimal path from the nest to a source of food. On the way of searching, ants act as


expert agents and lay pheromone on the ground along their direction of movement; when an isolated ant runs into a previously laid trail, it can detect it and decide to follow it with high probability. The probability that an ant chooses a way is proportional to the concentration of that way's pheromone: the more ants choose a way, the denser its pheromone becomes, and vice versa. Through this feedback technique, ants can finally find an optimal way [14][15]. The ants work together in search of new sources of food while simultaneously using the existing food sources to shift the food back to the nest [14][15].

In paper [8], the author suggested that each node in the ant-based control system designed to solve load balancing in a cloud environment be configured with:

1) a capacity, i.e. the load it can accommodate;

2) a probability of being a destination;

3) a pheromone (or probabilistic routing) table.

Figure 1. Pheromone table updating by incoming ants

As shown in Figure 1 [8], incoming ants update the entries in the pheromone table of a node. For instance, an ant traveling from a source to a destination will update the corresponding entry in the pheromone table. Consequently, the updated routing information can only influence the routing of ants and calls that have the same destination. However, for asymmetric networks the costs in the two directions may be different; hence this approach to updating pheromone is only appropriate for routing in symmetric networks.
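A minimal sketch of these two mechanisms, probabilistic next-hop selection and the per-arrival table update, is given below; the table layout and the reinforcement rule are illustrative and are not taken from [8]:

import random

# pheromone[node][destination] maps each neighbour to its pheromone entry
pheromone = {"n1": {"dest": {"n2": 0.7, "n3": 0.3}}}

def choose_next_hop(node, destination):
    # pick a neighbour with probability proportional to its pheromone entry;
    # with no pheromone at the choice point, make a random decision
    table = pheromone[node][destination]
    neighbours = list(table)
    weights = [table[n] for n in neighbours]
    if sum(weights) == 0:
        return random.choice(neighbours)
    return random.choices(neighbours, weights=weights)[0]

def reinforce(node, destination, arrived_via, delta=0.1):
    # an incoming ant strengthens the entry it arrived through, and the row is
    # renormalised so the entries remain a probability distribution
    table = pheromone[node][destination]
    table[arrived_via] += delta
    total = sum(table.values())
    for n in table:
        table[n] /= total

reinforce("n1", "dest", "n2")
print(choose_next_hop("n1", "dest"))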

IV. PROPOSED WORK

Cloud computing is a distributed system, and for a distributed system to work properly it should remain in a safe state. In case of any fault, there should be a technique to handle it so that the working of the system is not affected.

In paper [9], authors did not explain any method or mechanism for fault handling. It does not provide

any solution in case of any failure or error in system. Here in this paper, we propose a model for fault

management.

As discussed before, ants follow routes according to the updated entries in the pheromone table, so it is necessary to store the status of this pheromone trail. To make ACO more reliable, with the capacity for fault tolerance, we construct the system with the following features:
(i) Each ant has a small memory element.
(ii) Every node in the system has knowledge about its neighbors.
(iii) Each node can flood a message into the network.
The memory element is installed in each ant at the time the ACO system is constructed and works as a knowledge base. It stores information about each movement of the ant in the system (the distance covered by the ant), information about its neighboring ants, and routing information (based on the pheromone table). The knowledge base is consulted when a fault occurs and helps to take corrective action.

A) Working model


Suppose there are N nodes in the system. Some nodes act as source nodes (overloaded with work) while others act as destination nodes (lightly loaded). Our task is to provide a fault-management technique covering the transfer from source nodes to destination nodes, and we provide a model that improves the reliability of the ACO system. Fault management is a two-step process:

(i) Fault detection

(ii) Fault handling

(i) Fault Detection: for fault detection, a fault detector is installed into the system. This fault detector works on the concept of the partially stochastic Petri nets algorithm given by Aghasarayan et al. [17]. The partially stochastic Petri nets developed in that work give independent behaviors to regions of the net that are not directly interacting. The diagnosis problem addressed assumes a sequence of alarms; by generating the connecting tiles that match the observed sequence of alarms and their causal dependence relations, we can find the fault generated in the system [17].

(ii) Fault Handling: to control the fault and to improve the reliability of the system, we construct a modified ACO algorithm with checkpoints. Checkpoints (CP) are the points from which backward error recovery is done in case of complete failure of the system; adding checkpoints also helps to provide automatic forward recovery. If an ant fails to route the data properly during load balancing, the system will not fail: it continues to operate with the remaining nodes. This mechanism works properly until all the nodes fail.

B) Modified Algorithm

This algorithm helps to make the cloud environment more reliable. Whenever a request arrives, the state of the system is checked: if the resource can be allocated without any deadlock, the resource is allotted; otherwise the request is transferred to a waiting state.

Start
N := number of nodes
dis_cov := 0 (distance covered by the ant)
Set checkpoint := 0 (no distance covered so far)
Do (formation of load balancing using ACO)
    Input processing-node status
    If processing-node status = pass then
        dis_cov := dis_cov + 1
        Set checkpoint := dis_cov
While (not all nodes are balanced)
If (failure = yes) then
    If (dis_cov < 1) then
        Roll back the process
    Else if (dis_cov > 1) then
        Check for the next hop and the nearest checkpoint
        Update the value according to the nearest checkpoint
    Else
        Give a pass
End

In the algorithm above, every time an ant moves, the covered distance is stored in the ant's memory element. This distance records the path value covered by each individual ant in the system from source to destination. If a fault occurs, the ant looks into its pheromone table for the next hop and the distance to that next hop; the fault management system then finds the nearest covered checkpoint and rolls the system back to it.
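A minimal Python rendering of this checkpoint-and-rollback logic is sketched below; the Ant class, the node_ok status check and the way a failed node is skipped are illustrative assumptions layered on the pseudocode above:

class Ant:
    """Ant carrying the small memory element described above (illustrative)."""
    def __init__(self, route):
        self.route = route        # planned node sequence from source to destination
        self.dis_cov = 0          # distance covered so far
        self.checkpoint = 0       # last safely completed position

def balance_with_checkpoints(ant, node_ok):
    """node_ok(node) -> True if the node processed the transferred load (status = pass)."""
    while ant.dis_cov < len(ant.route) - 1:
        nxt = ant.route[ant.dis_cov + 1]
        if node_ok(nxt):                           # status = pass
            ant.dis_cov += 1
            ant.checkpoint = ant.dis_cov           # set checkpoint
        else:                                      # failure = yes
            if ant.dis_cov < 1:
                ant.dis_cov = ant.checkpoint = 0   # roll back the whole process
            else:
                ant.dis_cov = ant.checkpoint       # resume from nearest checkpoint
            ant.route.remove(nxt)                  # route around the failed node
    return ant.route

For instance, balance_with_checkpoints(Ant(['N1', 'N2', 'N3']), lambda n: n != 'N2') skips the unresponsive node N2 and completes the transfer via N3, mirroring the worked example given later.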

V. EXPECTED OUTCOME

The given algorithm continuously allocates resources according to the requests generated by clients. Before any allocation, it checks whether the system is in a safe state.


Suppose there are N active (current) nodes. At each movement of an ant, the distance is stored in the ant's memory element, and when the ant reaches a checkpoint, the checkpoint value is stored as well.
Suppose the system contains nodes N1, N2, N3, N4 and N5. On receiving a request, the server allocates it to one of the nodes according to the matrix described in [9][10]; the node with minimum cost is selected. A fault may arise while the server is busy balancing the load among the nodes. Suppose the server selects node N2 at time t1 for handling request r1 (by consulting the load matrix of all nodes), and at the time of transferring the load N2 stops responding to the server. At this stage the checkpoint and the nearest neighbours are checked: as the nearest neighbours are N1 and N3, the load is transferred to one of them by considering their load matrices.

VI. CONCLUSION AND FUTURE WORK

Many algorithms exist that help make a system fault tolerant, but no technique had been provided to resolve the problem when a fault occurs during load balancing. In this paper, we have given an algorithm that can deal with such faults and failures, and we have tried to make the system reliable through its implementation. In future work we will calculate the complexity of this algorithm so that a more optimized approach can be found.

REFERENCES

[1] Ginu Chako, S. Induja, A. Abuthahir, S. Vidya, "Improving Performance of Fault Tolerant Mechanism by Optimal Strategy Selection in Cloud", IJST, March 2014.
[2] G. Shobha, M. Geetha, R. C. Suganthe, "Load Balancing by Preemptive Task Scheduling Instigated Through Honey Bee Behaviour in Cloud Datacenter", ICSCN, March 2014.
[3] Shivam Nagpal, Praveen Kumar, "A Study on Adaptive Fault Tolerance in Real Time Computing", International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE), Volume 3, Issue 3, March 2013.
[4] Prasinjit Kumar Patra, Haspreet Singh, Gurpreet Singh, "Fault Tolerance Techniques and Comparative Implementation in Cloud Computing", IJCA, February 2013.
[5] Anjali D. Meshram, A. S. Sanbare, S. D. Zade, "Fault Tolerance Model for Reliable Cloud Computing", Vol. 1, Issue 7, July 2013.
[6] Ravi Jhawar, Vincenzo Piuri, Marco Santambrogio, "Fault Tolerance Management in Cloud Computing: A System Level Perspective", IEEE Systems Journal, Vol. 7, No. 2, June 2013.
[7] N. Chandrakala, P. Sivaprakasam, "Analysis of Fault Tolerance Approaches in Dynamic Cloud Computing", IJARCSSE, Volume 3, Issue 2, March 2013.
[8] Qingqing Feng, Jizhong Han, Yun Gao, Dan Meng, "Magicube: High Reliability and Low Redundancy Storage Architecture for Cloud Computing", 2012 IEEE Seventh International Conference on Networking, Architecture, and Storage.
[9] Ratan Mishra, Anant Jaiswal, "Ant Colony Optimization: A Solution of Load Balancing in Cloud", IJWesT, Vol. 3, No. 2, April 2012.
[10] Nidhi Jain Kansal, Inderveer Chana, "Cloud Load Balancing Techniques: A Step Towards Green Computing", IJCSI, Vol. 9, Issue 1, No. 1, January 2012.
[11] Ravi Jhawar, Vincenzo Piuri, Marco Santambrogio, "A Comprehensive Conceptual System-Level Approach to Fault Tolerance in Cloud Computing", IEEE.
[12] Fumio Machida, Ermeson Andrade, Dong Seong Kim, Kishor S. Trivedi, "Candy: Component-based Availability Modeling Framework for Cloud Service Management Using SysML", 2011 30th IEEE International Symposium on Reliable Distributed Systems.
[13] Arvind Kumar, Rama Shankar Yadav, Ranvijay, Anjali Jain, "Fault Tolerance in Real Time Distributed System", IJCSE, Vol. 3, No. 2, February 2011.
[14] W. Zhao, P. M. Melliar-Smith, L. E. Moser, "Fault Tolerance Middleware for Cloud Computing", in Proceedings of the 2010 IEEE 3rd International Conference on Cloud Computing (CLOUD '10), Washington, DC, USA: IEEE Computer Society, 2010, pp. 67-74.
[15] Jian Lin, Xiaoyi Lu, Lin Yu, Yongqiang Zou, Li Zha, "Vega Warden: A Uniform User Management System for Cloud Applications", 2010 Fifth IEEE International Conference on Networking, Architecture, and Storage.
[16] Zibin Zheng, Tom Chao Zhou, Michael R. Lyu, Irwin King, "FTCloud: A Component Ranking Framework for Fault-Tolerant Cloud Applications", 2010 IEEE 21st International Symposium on Software Reliability Engineering.
[17] Armen Aghasaryan, Eric Fabre, Albert Benveniste, et al., "Fault Detection and Diagnosis in Distributed Systems: An Approach by Partially Stochastic Petri Nets", Discrete Event Dynamic Systems: Theory and Applications, 8, pp. 203-231, 1998.

AUTHORS

Divya Rastogi is an M.Tech student at Amity University. She also works as an Assistant Professor at G.L. Bajaj Institute of Technology & Management, Greater Noida. Her expertise includes networking, network security and cloud computing.


HANDLING SELFISHNESS OVER MOBILE AD HOC NETWORK

Madhuri D. Mane and B. M. Patil
P.G. Department, MBES College of Engineering, Ambajogai, Maharashtra, India

ABSTRACT
A Mobile Ad hoc Network (MANET), also called a mobile mesh network, is a self-configuring network of mobile devices connected by wireless links. It is usually assumed that all mobile nodes cooperate fully with the functionalities of the network; in reality, however, some nodes may selfishly decide to cooperate only partially, or not at all, with other nodes [1]. Such selfish nodes reduce the overall data accessibility and increase delay in the network. In this paper we detect selfish nodes and recover from them, examine the impact of these selfish nodes, and reduce the delay.

KEYWORDS: MANET, Proactive, Reactive, selfish, SCF, eSCF, SCE-DS, SCF-CN.

I. INTRODUCTION

Wireless cellular systems have been in use since the 1980s. A wireless system operates with the aid of a centralized supporting structure such as an access point; these access points assist the wireless users in staying connected to the system as they roam from one place to another, and devices communicate via radio channels to share resources and information. However, the presence of a fixed supporting structure limits adaptability in situations where easy and quick deployment of a wireless network is required. Recent advances in wireless technologies such as Bluetooth and IEEE 802.11 have introduced a new type of wireless system known as the mobile ad hoc network (MANET) [13], which operates in the absence of a central access point. It provides high mobility and device portability, enabling nodes to join the network and communicate with each other, and it allows devices to maintain connections to the network while being easily added to and removed from it. Users have great flexibility to set up such a network at minimal cost and in minimal time.
MANETs show distinct characteristics, such as:

Weaker security

Device size limitation

Battery life

Dynamic topology

Bandwidth and slower data transfer rate

Ad hoc routing protocols can be broadly classified as:

Proactive (table-driven)

Reactive (on-demand)

In proactive protocols, the nodes of a MANET keep track of routes to all possible destinations, so that when a packet needs to be forwarded the route is already known and can be used immediately. Reactive protocols, on the other hand, employ a lazy approach whereby nodes discover routes only on demand: a node does not need a route to a destination until that destination is to be the sink of data packets sent by the node [14]. In a MANET it is mostly assumed that all mobile nodes cooperate fully in the network functionalities, but some nodes may decide not to cooperate at all; nodes in an ad hoc network have a tendency to become selfish. Selfish nodes are reluctant to spend their resources, such as memory, battery power and CPU time, for others, but they are not malicious nodes [14]. The problem becomes more complicated when, with the passage of time, nodes have only a small amount of residual power and want to conserve it for their own purposes.


Figure 1. Selfishness in MANET

In general, while handling selfishness in a MANET we want to improve data accessibility and reduce query delay (query response time). If the mobile nodes in a MANET together have sufficient memory space to hold both all the replicas and the original data, the response time of a query can be substantially reduced, because the query can access a data item that has a locally stored replica. However, there is often a trade-off between data accessibility and query delay, since most nodes in a MANET have only limited memory space [15]. For example, to reduce its own query delay a node may hold some of the frequently accessed data items locally; but if memory space is limited and many nodes hold the same replica locally, some data items will be replaced and go missing, and the overall data accessibility decreases. Hence, to maximize data accessibility, a node should not hold a replica that is also held by many other nodes [15].

In [1], three types of behavioral states are defined for nodes from the viewpoint of selfish replica allocation:
Type-1 nodes: non-selfish nodes, i.e., nodes that hold replicas allocated by other nodes within the limits of their memory space.
Type-2 nodes: fully selfish nodes. These nodes do not hold replicas allocated by other nodes, but allocate replicas to other nodes for their own accessibility.
Type-3 nodes: partially selfish nodes. These nodes use part of their memory space for replicas allocated by other nodes; their memory space may be divided logically into a selfish and a public area. They also allocate replicas to other nodes for their own data accessibility.

The detection of type-3 nodes is complex, because they are not always selfish. In some sense, a type-3 node might be considered non-selfish, since it shares part of its memory space; here it is considered (partially) selfish, because it still leads to the selfish replica allocation problem. Selfish and non-selfish nodes perform the same procedure when they receive a data access request, although they behave differently in using their memory space [1].

II. LITERATURE REVIEW

MANETs rely on the cooperation of all the participating nodes. The more nodes cooperate to transfer traffic, the more powerful a MANET becomes. But supporting a MANET is a cost-intensive activity for a mobile node: detecting routes and forwarding packets consumes local CPU time, memory, network bandwidth, and, last but not least, energy [16]. Therefore there is a strong motivation for a node to deny packet forwarding to others while at the same time using their services to deliver its own data. Some resources, notably battery power (energy), are scarce in a mobile environment and can be depleted quickly as the device is utilized. This can lead to selfish behavior by the device owner, who may attempt to benefit from the resources provided by other nodes without, in return, making the resources of his own device available. In this scenario, open MANETs will likely resemble social environments: a group of persons can provide benefits to each of its members as long as everyone contributes. In our particular case, each member of a MANET will be called upon to forward messages and to participate in routing protocols. A selfish behavior threatens the entire


community: optimal paths may not be available, and in response other nodes may also start to behave in the same way.
The existing strategy in [1] consists of three parts: 1) detecting selfish nodes, 2) building the SCF-tree, and 3) allocating replicas. Its strength is that, without forming any group or engaging in lengthy negotiations, each node can detect selfish nodes and perform replica allocation at its own discretion [15].

1) Detecting Selfish Nodes
The notion of credit risk can be described by the following equation:
CR = expected risk / expected value        (1)
Here the expected risk is calculated from the number of requests served by the node, and the expected value is calculated from the amount of memory space shared by the node. In the existing strategy, each node calculates a CR score for each of the nodes to which it is connected [1]. The calculated CR value is called the degree of selfishness and indicates how selfish a node appears to be. Each node estimates the selfishness degree of all its connected nodes based on the CR score. Selfish features that may lead to the selfish replica allocation problem are first described in order to determine both the expected value and the expected risk [15].
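As a small illustration of equation (1), the Python fragment below computes a CR score per connected node and flags those whose score exceeds a chosen limit; the counters and the limit of 1.0 are made-up values for illustration, not the exact bookkeeping of [1]:

def credit_risk(expected_risk, expected_value):
    # CR = expected risk / expected value, equation (1); a node that
    # shares no memory space is treated as maximally selfish.
    return expected_risk / expected_value if expected_value else float('inf')

# node -> (expected risk from served requests, expected value from shared memory)
connected = {'N2': (5.0, 2.0), 'N3': (1.0, 4.0)}
selfish = [n for n, (risk, value) in connected.items()
           if credit_risk(risk, value) > 1.0]       # degree-of-selfishness test
# -> ['N2']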

2) Building the SCF-Tree
The SCF-tree is built by analogy with human friendship management in the real world, where each person makes their own friends, forming a web, and manages those friendships by themselves: they do not have to discuss them with others to maintain the friendship [1]. The decision is solely at their discretion. The main goal of the replica allocation techniques discussed here is to reduce traffic overhead while achieving maximum data accessibility; if a replica allocation technique can allocate replicas without negotiating with other nodes, it decreases the traffic overhead.

3) Allocating Replicas
After building the SCF-tree, a node allocates replicas at every relocation period. Within its SCF-tree [1], each node asks non-selfish nodes to hold replicas when it cannot hold them in its own local memory space. Each node determines replica allocation individually, without any communication with other nodes, since SCF-tree-based replica allocation is performed in a fully distributed manner. First, a node determines the priority for allocating replicas; this priority follows the Breadth-First Search (BFS) order of the SCF-tree, sketched below (the dotted arrow in the corresponding figure of [1] represents this priority).

III. PROPOSED SYSTEM

Although network issues are important in a MANET, replica allocation is also crucial, since the ultimate goal of using a MANET is to provide data services to users. A selfish node may not share its own memory space to store replicas for the benefit of other nodes; such cases are easily found in typical peer-to-peer applications. Our techniques are based on the concept of a self-centered friendship tree (SCF-tree) and its variations, which achieve high data accessibility with low delay in the presence of selfish nodes. We first describe the selfish features that may lead to the selfish replica allocation problem in order to determine both the expected value and the expected risk. Here we consider a threshold value as the main criterion for detecting selfishness: if a node forwards the packet within the threshold, it is non-selfish; if it exceeds the threshold, it is a selfish node.

The credit risk value can thus be calculated by means of the threshold value. Every node stores the data it sends to another node; if a selfish node is present and the data are lost, a neighboring node still has a copy of the data, so we can step one hop backward and route the data along another path. As stated in [1], the detection of type-3 nodes is complex because they are not always selfish, and in some sense a type-3 node might be considered non-selfish since it shares part of its memory space; for this reason we consider only two types of nodes here, selfish and non-selfish. A selfish node can silently drop some or all of the data packets sent to it for further forwarding, even when no congestion occurs. Selfish-node attacks present a serious threat to wireless ad hoc networks, since these networks lack physical protection and strong access control mechanisms. An adversary can easily join the network or capture a mobile node and then start to disrupt network communication by


silently dropping packets. Such attacks threaten the routing infrastructure of both MANETs and the Internet, since they are easy to launch and difficult to detect. To counter this, a session key is used for security.
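The threshold test and the one-hop-back recovery described above can be sketched as follows; the delay bookkeeping and the 4 ns threshold (taken from our simulation settings) are simplified assumptions:

THRESHOLD = 4e-9    # 4 ns, the forwarding-delay threshold used in our simulation

def is_selfish(forward_delay, threshold=THRESHOLD):
    # A node forwarding within the threshold is non-selfish;
    # exceeding it marks the node as selfish.
    return forward_delay > threshold

def deliver(path, delays):
    """Walk the path; on meeting a selfish node, step one hop back so the
    previous node (which keeps a copy of the data) can reroute."""
    for i, node in enumerate(path[1:], start=1):
        if is_selfish(delays[node]):
            return ('reroute-from', path[i - 1])
        # otherwise the node forwards the data and keeps a copy
    return ('delivered', path[-1])

# e.g. deliver(['N2', 'N10', 'N5'], {'N10': 9e-9, 'N5': 1e-9})
# -> ('reroute-from', 'N2'): node 10 exceeded the threshold.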

IV. SIMULATION

Simulations are made using the NS-2 [17] simulator, which provides a collection of network protocols for simulating many of the existing network topologies. The number of mobile nodes is set to 50. Each node has its own local memory space and moves with a velocity of 0 ~ 1 m/s over a 1600 x 1000 m flat area. The movement pattern of nodes follows the random waypoint model, in which each node remains stationary for a pause time, then selects a random destination and moves to it [1]; after reaching the destination, it again stops for a pause time and repeats this behavior. The radio communication range of each node is a circle with a radius of 1 ~ 19 m. We suppose that there are 50 individual pieces of data, each of the same size; node Ni (1 <= i <= 50) holds data item Di as the original. The data access frequency is assumed to follow a Zipf distribution, and the threshold value is set to 4 ns. The default relocation period is 256 units of simulation time, varied from 64 to 8,192 units. Nodes use 802.11 radios with 2 Mbps bandwidth and 250 m nominal range. We considered only a static scenario, so link breakage due to mobility is zero. The simulated time was 100 seconds. Table 1 describes the simulation parameters.

Table 1. Simulation Parameters

Parameter (unit)                        Value (default)
Number of nodes                         50
Number of data items                    50
Radius of communication range (m)       1 ~ 19 (7)
Size of network (m)                     1600 x 1000
Percentage of selfish nodes (%)         0 ~ 100 (70)
Relocation period (ms)                  64 ~ 8,192
Threshold (ns)                          0 ~ 5 (4)

V. RESULT

Figure 2. Forwarding Data to handle selfishness


Figure 2 shows how the data flow is maintained when node 10 becomes selfish. Due to this selfish behavior, packets sent over the link between nodes 2 and 10 are lost. To handle this situation, the link between these nodes is cut and node 2 sends a request to node 3 for further data delivery; hence the selfishness is handled. The effect of handling selfishness over a MANET in the NS-2 simulator is given below.

Figure 3. Query delay with respect to relocation period

Figure 3 shows the average query delay with respect to the relocation period. As expected, our proposed technique shows the best performance in terms of query delay, since most successful queries are served from local memory space. In [1], SAF shows the best result, giving minimum delay; our curve performs better still, as it passes below the SAF technique. The DCG technique shows the worst performance. This can be explained as follows: the average distance among group members in the DCG technique is longer than in the SCF technique, and since most successful queries are served by group members in these techniques, the long distance among group members affects query delay negatively [1].

VI. CONCLUSION AND FUTURE WORK

In this paper, we have assumed an environment in which nodes simultaneously issue access requests to correlated data items in MANETs. We have proposed a method to handle the selfishness condition, which is an extension of previously proposed methods adapted to such an environment. The simulation results show that the method proposed in this paper gives lower query delay than the corresponding methods used in [1]; it performs better than the DCG, SCF and SAF methods, with the minimum delay among all of them.
As part of our future work, we plan to address data replication in an environment where access requests for correlated data items are issued at some intervals, to increase data accessibility further, and to extend the proposed method to handle data updating.

REFERENCES

[1] Jae-Ho Choi, Kyu-Sun Shim, "Handling Selfishness in Replica Allocation over a Mobile Ad Hoc Network", IEEE Transactions on Mobile Computing, Vol. 11, No. 2, pp. 278-291, Feb 2012.
[2] Takahiro Hara and Sanjay Kumar Madria, "Consistency Management Strategies for Data Replication in Mobile Ad Hoc Networks", IEEE Transactions on Mobile Computing, Vol. 8, No. 7, pp. 950-967, July 2009.
[3] Nikolaos Laoutaris, Orestis Telelis, Vassilios Zissimopoulos, Ioannis Stavrakakis, "Distributed Selfish Replication", IEEE Transactions on Parallel and Distributed Systems, Vol. 17, No. 12, pp. 1401-1413, Dec 2006.
[4] Takahiro Hara and Sanjay Kumar Madria, "Data Replication for Improving Data Accessibility in Ad Hoc Networks", IEEE Transactions on Mobile Computing, Vol. 5, No. 11, pp. 1515-1532, Nov 2006.
[5] Alessandro Mei, Luigi V. Mancini, Sushil Jajodia, "Secure Dynamic Fragment and Replica Allocation in Large-Scale Distributed File Systems", IEEE Transactions on Parallel and Distributed Systems, Vol. 14, No. 9, September 2003.
[6] G. Tamilarasi, Devi Selvam, "Allocation of Replicas by Solving Selfishness in MANET", International Journal of Engineering Research & Technology (IJERT), Vol. 2, Issue 1, January 2013, ISSN: 2278-0181.
[7] Jim Solomon, Immanuel John, "A Survey on Selfishness Handling in Mobile Ad Hoc Network", International Journal of Emerging Technology and Advanced Engineering, Volume 2, Issue 11, November 2012, ISSN 2250-2459.
[8] Shiow-yang Wu and Yu-Tse Chang, "A User-Centered Approach to Active Replica Management in Mobile Environments", IEEE Transactions on Mobile Computing, Vol. 5, No. 11, November 2006.
[9] Martin Schutte, "Detecting Selfish and Malicious Nodes in MANETs", Seminar: Sicherheit in selbstorganisierenden Netzen, HPI/University of Potsdam, Summer Semester 2006.
[10] Kashyap Balakrishnan, Jing Deng, Pramod K. Varshney, "TWOACK: Preventing Selfishness in Mobile Ad Hoc Networks", IEEE Conference, 0-7803-8966-2/05, 2005.
[11] Prasanna Padmanabhan, Le Gruenwald, Anita Vallur, Mohammed Atiquzzaman, "A Survey of Data Replication Techniques for Mobile Ad Hoc Network Databases", The VLDB Journal, pp. 1143-1164, May 2008.
[12] Takahiro Hara, "Effective Replica Allocation in Ad Hoc Networks for Improving Data Accessibility", IEEE INFOCOM, 2001.
[13] Y. Hu, A. Perrig, D. Johnson, "Ariadne: A Secure On-demand Routing Protocol for Ad Hoc Networks", in Proceedings of ACM MOBICOM '02, 2002.
[14] Devi Selvam and Tamilarasi G., "Selfish Less Replica Allocation in MANET", International Journal of Computer Applications, Vol. 63, No. 19, pp. 33-37, February 2013.
[15] K. P. Shanmuga Priya and V. Seethalakshmi, "Replica Allocation in Mobile Ad Hoc Network for Improving Data Accessibility Using SCF-Tree", International Journal of Modern Engineering Research (IJMER), Vol. 3, Issue 2, pp. 915-919, March-April 2013.
[16] T. V. P. Sundararajan and A. Shanmugam, "Performance Analysis of Selfish Node Aware Routing Protocol for Mobile Ad Hoc Networks", ICGST-CNIR Journal, Volume 9, Issue 1, July 2009.
[17] "The Network Simulator - ns-2", http://www.isi.edu/nsnam/ns/.

AUTHORS

Madhuri D. Mane is presently a P.G. student in the Department of Computer Science & Engineering at MBES College of Engineering, Ambajogai, India. She completed her Bachelor's degree in Computer Science & Engineering at M.B.E. Society's College of Engineering, Ambajogai, under Dr. B.A.M. University, Aurangabad, India, and is pursuing her Master's degree at the same college. Her areas of research interest include computer networks and wireless networks.

B.M. Patil is currently working as a Professor in P.G. Computer Science & Engineering

Department in M.B.E. Society’s College of Engineering, Ambajogai, India. He received his

Bachelor’s degree in Computer Engineering from Gulbarga University in 1993, MTech

Software Engineering from Mysore University in 1999, and PhD Degree from Indian

Institute of Technology, Roorkee, 2011. He has authored several papers in various

international journals and conferences of repute. His current research interests include data

mining, medical decision support systems, intrusion detection, cloud computing, artificial

intelligence, artificial neural network, wireless network and network security.


A NEW APPROACH TO DESIGN LOW POWER CMOS FLASH

A/D CONVERTER

C Mohan¹ and T Ravisekhar²
¹M.Tech (VLSI) Student, Sree Vidyanikethan Engineering College (Autonomous), Tirupati, India
²Assistant Professor, ECE Department, Sree Vidyanikethan Engineering College (Autonomous), Tirupati, India

ABSTRACT
The present investigation proposes an efficient low power encoding scheme intended for a flash analog-to-digital converter. The design of the thermometer-code-to-binary-code encoder is one of the challenging issues in the design of a high speed, low power flash ADC. The encoder circuit translates the thermometer code into an intermediate gray code to reduce the effects of bubble errors, and its implementation in pseudo-NMOS logic is presented. To maintain high speed with low power dissipation, a CMOS inverter is used as the comparator: by adjusting the ratio of channel width to length, the switching threshold of the CMOS inverter is varied to detect the input analog signal. For the same reason, the ADC itself is also implemented in pseudo-NMOS logic. The proposed ADC is designed in micron technology with a 5 V power supply using the PSPICE tool.

KEYWORDS: Analog-to-digital converter, Flash ADC, Pseudo NMOS logic, Pseudo dynamic CMOS logic, Multi-threshold CMOS inverters.

I. INTRODUCTION

1.1 Concept of ADC

The flash ADC offers the fastest conversion speed of all ADC architectures. It is therefore used for high-speed and very large bandwidth applications such as radar processing, digital oscilloscopes, and so on. The flash ADC is also known as the parallel ADC because of its parallel architecture. Figure 1 illustrates a typical flash ADC block diagram. As shown in Figure 1, this architecture needs 2^n - 1 comparators for an n-bit ADC; for example, a set of 7 comparators is used for a 3-bit flash ADC. Each comparator has a reference voltage provided by an external reference source; these reference voltages are equally spaced by VLSB from the largest reference voltage down to the smallest reference voltage V1. The analog input is connected to all comparators, so that each comparator output is produced in one cycle. The digital output of the set of comparators is called the thermometer code; it is first converted to gray code (to minimize bubble errors) and then changed into binary code by the encoder [1]. However, the flash ADC needs a large number of comparators as the resolution increases: a 6-bit flash ADC needs 63 comparators, but 1023 comparators are needed for a 10-bit flash ADC. This exponentially increasing number of comparators requires a large die size and a large amount of power [3].
The encoder is designed in pseudo-NMOS logic style to achieve a high sampling frequency of 5 GS/s with low power dissipation.


Figure 1: Flash ADC Block Diagram.

1.2 Design of the Encoder

Conversion of the thermometer code output to binary code is one of the bottlenecks in high speed flash ADC design [2]. A bubble error usually results from timing differences between clock and signal lines, and is a situation where a '1' is found above a '0' in the thermometer code. For very fast input signals, small timing differences can cause bubbles in the output code; depending on the number of successive zeroes, the bubbles are characterized as first, second or higher order. To reduce the effect of bubbles in the thermometer code, one widely used method is to convert the thermometer code to gray code first [5, 6]. The truth table corresponding to the 2-bit gray code is presented in Table 1. The relationship between the thermometer code, gray code and binary code is given below (note that T2 does not enter the equations, which is what makes the code tolerant of a bubble in T2):
G1 = T1
G0 = T1 XOR T0
B1 = G1
B0 = G1 XOR G0
These equations are derived from the truth table provided in Table 1.

Table 1. Gray Code Encoder Truth Table

T2 T1 T0 G1 G0 B1 B0

0 0 0 0 0 0 0

0 0 1 0 1 0 1

0 1 0 1 1 1 0

0 1 1 1 0 1 1

1 0 0 0 0 0 0

1 0 1 0 1 0 1

1 1 0 1 1 1 0

1 1 1 1 0 1 1
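The encoding equations can be checked directly against Table 1 with a few lines of Python (bits held as integers; this is only a functional check, not the pseudo-NMOS circuit itself):

def encode(t2, t1, t0):
    g1 = t1            # G1 = T1 (T2 is ignored, which suppresses first-order bubbles)
    g0 = t1 ^ t0       # G0 = T1 XOR T0
    b1 = g1            # B1 = G1
    b0 = g1 ^ g0       # B0 = G1 XOR G0
    return (g1, g0), (b1, b0)

# reproduces every row of Table 1, e.g. encode(0, 1, 0) -> ((1, 1), (1, 0))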


1.3 Implementation of Encoder

There are different logic styles in which the encoder can be designed. Generally the implementation is done in static CMOS logic style, whose advantage is the lowest power consumption, but at a lower speed. To achieve low power together with high speed, other logic styles are preferred; here the design is implemented using pseudo-NMOS logic [8].
A pseudo-NMOS logic circuit consists of a PMOS transistor with its gate connected to ground, a bank of NMOS transistors implementing the logic function in the pull-down network, and an inverter. For a specific logic circuit with N inputs, pseudo-NMOS logic requires N+1 transistors instead of the 2N transistors of static CMOS logic. Pseudo-NMOS logic thus trades extra power dissipation and reduced robustness for a lower transistor count.

Figure 2. Schematic of a two-input AND gate using pseudo NMOS logic

The basic structure of a two-input AND gate in pseudo-NMOS logic style is shown in Figure 2. The gate of the PMOS transistor in the pull-up network is connected to ground, so the pull-up network is on all the time; the output is evaluated conditionally, depending on the values of the inputs in the pull-down network. The inverter on the output transforms the inverting gate into a non-inverting one. Since the voltage swing on the output and the overall functionality of the gate depend on the ratio of the NMOS and PMOS sizes, transistor sizing is crucial in the implementation. The disadvantage of pseudo-NMOS logic is its static power consumption: a direct current flows between VDD and ground whenever both the pull-up and pull-down networks are switched on simultaneously. The nominal high output voltage (VOH) of pseudo-NMOS logic is VDD, and the inverter added at the output side improves the noise margin of the circuit. In spite of the static power dissipation, pseudo-NMOS logic consumes less power overall because of the reduced number of transistors and the absence of other components (resistors) needed in, for example, current-mode logic.

The transistor sizes are given in Table 2.

Table 2. Transistor Sizes
(W/L) PMOS 300 um / 100 um
(W/L) NMOS 120 um / 100 um


1.4 Simulation Results

Figure 3: Simulation results for the general ADC.

II. MODIFIED FLASH ADC

A traditional n-bit flash ADC architecture uses 2^n resistors and 2^n - 1 comparators to convert an analog signal to digital. This architecture has drawbacks such as heavy input signal driving, high reference accuracy requirements, high reference driving voltage and circuit complexity [10], [11]. CMOS inverters have been reported in ADC designs [9]-[10], and in this work the idea of employing CMOS inverters instead of analog comparators is applied to a flash ADC. A CMOS pair is a combination of an n-MOSFET (NMOS) and a p-MOSFET (PMOS). The CMOS inverter switching threshold (Vth) is the point at which the input voltage equals the output voltage (Vin = Vout); in this region both the PMOS and the NMOS operate in the saturation region. If the input reaches this threshold voltage, the output state changes. Vth can be obtained as [9]-[10]

Vth = (Vtn + sqrt(kp/kn) * (VDD - |Vtp|)) / (1 + sqrt(kp/kn))

where kn = k'n (Wn/Ln) and kp = k'p (Wp/Lp), k'n and k'p being the constant process transconductance parameters, and Vtn and Vtp the threshold voltages of the NMOS and PMOS respectively. As these voltages are constant, Vth depends on the kn and kp values, which decide the transition point of the CMOS inverter [11]. If the ratio kn/kp is decreased, the switching threshold voltage becomes higher; otherwise it becomes lower. kn and kp can be controlled by adjusting the width (W) and length (L) of the NMOS and PMOS respectively. Based on this concept, CMOS inverters with various width/length ratios are designed to set their threshold voltages.
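The trend can be illustrated numerically with the threshold expression above; the parameter values used here (VDD = 5 V, Vtn = |Vtp| = 1 V) are assumptions chosen only to show how the kn/kp ratio moves the switching point:

import math

def switching_threshold(kn, kp, vdd=5.0, vtn=1.0, vtp_abs=1.0):
    # Vth = (Vtn + r*(VDD - |Vtp|)) / (1 + r), with r = sqrt(kp/kn)
    r = math.sqrt(kp / kn)
    return (vtn + r * (vdd - vtp_abs)) / (1.0 + r)

print(switching_threshold(kn=1.0, kp=4.0))   # 3.0 V: small kn/kp -> higher Vth
print(switching_threshold(kn=4.0, kp=1.0))   # 2.0 V: large kn/kp -> lower Vth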

Each CMOS inverter thus has a specified threshold depending on this ratio. The W/L ratios are defined as Zn = Wn/Ln and Zp = Wp/Lp; by changing the ratio of Zn to Zp, we obtain the various switching threshold voltages of the CMOS inverters used to quantize the input level. All inverter inputs are tied together to detect the analog input level, and when the input reaches a particular inverter's threshold voltage, that inverter's output state changes. The basic architecture of the proposed flash ADC is shown in Figure 4. In a 4-bit ADC, 15 CMOS inverters are tied in parallel to detect the input signal level, with the inverter outputs arrayed from MSB to LSB. For the LSB bit, the Zn/Zp value should be made small to raise the threshold voltage, and for the MSB bit, the Zn/Zp value should be made large to lower the threshold voltage. The encoder is designed in the same way as above, using the same technology.

Figure 4. The architecture of the proposed ADC.

Table 3. The transition points of 3 inverters in a flash ADC.


2.1 Simulation Results

Figure 5: Simulation results for the modified flash ADC.

III. SIMULATION RESULTS AND DISCUSSION

The results show that the new design consumes less power than the traditional flash ADC [2]. The power dissipation is reduced because fewer op-amps and resistors are used to implement the logic, and in comparison with the traditional flash ADC the proposed ADC also costs less.

Table 4: Comparison with the traditional flash ADC
Results              Flash ADC          CMOS flash ADC
Architecture         Flash              Flash
Resolution           2 bits             2 bits
Technology           Micro technology   Micro technology
Sampling frequency   1 kHz              1 kHz
Vdd                  5 V                5 V
Power consumption    1.88E+01 W         1.59E+01 W

IV. CONCLUSION

A low power architecture for a 2-bit CMOS-inverter-based flash ADC has been presented using 90 nm technology. The proposed ADC design achieves very low power dissipation; compared with the traditional flash ADC, the proposed method reduces power consumption as well as silicon area, and hence the cost of the proposed ADC is reduced.

ACKNOWLEDGEMENTS

The authors would like to thank T. Ravisekhar, Department of ECE, for the valuable guidance.



REFERENCES

[1] D. Lee, J. Yoo, K. Choi, J. Ghaznavi, "Fat-tree encoder design for ultra-high speed flash analog-to-digital converters", in Proc. IEEE Midwest Symp. Circuits Syst., pp. 233-236, Aug 2002.
[2] S. Sheikhaei, S. Mirabbasi, A. Ivanov, "An Encoder for a 5 GS/s 4-bit Flash A/D Converter in 0.18um CMOS", Canadian Conference on Electrical and Computer Engineering, pp. 698-701, May 2005.
[3] R. Baker, H. W. Li, D. E. Boyce, CMOS Circuit Design, Layout and Simulation, Prentice Hall, 2000.
[4] Sunghyun Park, Yorgos Palaskas, Ashoke Ravi, Ralph E. Bishop, Michael P. Flynn, "A 3.5 GS/s 5-b Flash ADC in 90 nm CMOS", IEEE Custom Integrated Circuits Conference, 2006.
[5] Niket Agrawal, Roy Paily, "An Improved ROM Architecture for Bubble Error Suppression in High Speed Flash ADCs", Annual IEEE Conference, pp. 1-5, 2005.
[6] Mustafijur Rahman, K. L. Baishnab, F. A. Talukdar, "A Novel ROM Architecture for Reducing Bubble and Meta-stability Errors in High Speed Flash ADCs", 20th International Conference on Electronics, Communications and Computer, pp. 15-19, 2010.
[7] Vinayashree Hiremath, "Design of High Speed ADC", M.S. Thesis, Wright State University, 2010.
[8] Jan M. Rabaey, Anantha Chandrakasan, Borivoje Nikolic, Digital Integrated Circuits: A Design Perspective, second edition, Prentice Hall, 2011.
[9] S. Chang Hsia, Wen-Ching Lee, "A Very Low Power Flash A/D Converter Based on CMOS Inverter Circuit", IDEAS '05, pp. 107-110, July 2005.
[10] A. Tangel and K. Choi, "CMOS Inverter as a Comparator in ADC Design", in Proc. ICEEE, 2001, pp. 1-5.
[11] S. M. Kang and Y. Leblebici, CMOS Digital Integrated Circuits: Analysis and Design, Tata McGraw-Hill Edition, 2003.

Authors

C. Mohan received the B.Tech degree in Electronics and Communication Engineering from VITT College of Engineering, Thanapalli, in 2011. He is presently pursuing the M.Tech in VLSI at Sree Vidyanikethan Engineering College (Autonomous), Tirupati, and will graduate in 2014. His research interests include digital logic design, VLSI and FPGA.

T. Ravisekhar, M.Tech, is currently working as an Assistant Professor in the ECE Department of Sree Vidyanikethan Engineering College (Autonomous), Tirupati. She completed the M.Tech in VLSI Design at Sathyabama University and received the B.Tech from Sri Kalahastheeswara Institute of Technology, Srikalahasti. Her research areas are RFIC design, digital design and VLSI signal processing.


OPTIMIZATION AND COMPARATIVE ANALYSIS OF NON-RENEWABLE AND RENEWABLE SYSTEM

Swati Negi¹ and Lini Mathew²
¹ME Student, Department of Electrical Engineering, NITTTR, Chandigarh, India
²Associate Professor, Department of Electrical Engineering, NITTTR, Chandigarh, India

ABSTRACT
A hybrid renewable energy system for rural electrification is a better solution for reducing environmental pollution. Renewable energy systems offer power supply to areas where grid electricity is not feasible or the cost of grid extension is relatively large. The main objective of this paper is to compare a non-renewable system with a renewable energy system for a remotely located site in Tirunelveli, Tamil Nadu. The current system is operated using a diesel generator and batteries; the proposed system integrates a hybrid combination of wind and solar energy into the existing diesel generator and battery system to serve the same load. This paper also performs cost optimization of the proposed system, and the optimal cost analysis of the HRES is done using the Hybrid Optimization Model for Electric Renewables (HOMER). HOMER is energy modelling software for designing and analyzing hybrid power systems, which may contain a combination of conventional generators, wind turbines, solar photovoltaics, hydropower, batteries, fuel cells, biomass and other inputs; it is currently used all over the world by tens of thousands of people. The results show that the proposed hybrid renewable energy system is more cost effective than the non-renewable system. The proposed system significantly reduces the running time of the diesel generator, which helps to reduce the emission level.

KEYWORDS: Hybrid Renewable Energy System, Non Renewable Energy System, Feasibility Study,

Photovoltaic, Wind Turbine Generator.

I. INTRODUCTION

Rapid depletion of fossil fuel resources on a worldwide basis has necessitated an urgent search for alternative energy sources to cater to present-day demand. Another key reason to reduce our reliance on fossil fuels is the growing evidence of global warming. It is imperative to find energy sources that cover the continuously increasing demand for energy while minimizing negative environmental impacts [1].
A Hybrid Renewable Energy System (HRES) is composed of one or more renewable sources combined with a conventional energy source, and works in stand-alone or grid-connected mode [2]. The most popular alternative sources are wind and photovoltaic, and research indicates that hybrid wind/PV/battery systems are a reliable source of electricity [3-5]. Since these sources are intermittent in nature, in most cases a diesel generator and batteries are integrated for backup and storage respectively [6].
The objective of this work is to optimize and compare a non-renewable energy system (the existing system) with a hybrid renewable energy system (the proposed system) for a site in Tirunelveli. The current system uses a diesel generator and batteries for power generation, and the proposed system adds hybrid wind and solar generation to it. HOMER is used to obtain the most feasible configuration [7].
The rest of the paper is organized as follows. Section II describes the input variables: solar irradiation, wind speed and load data. Section III explains the simulation of the current system and the proposed system using HOMER, as well as the cost summary of the system components. In Section IV


the comparison between the current system and the proposed system is explained and discussed. Section V gives some of the future trends of hybrid renewable energy systems.

II. SYSTEM DESCRIPTION

The design of an HRES is based on certain important sensitivity variables that allow the cost and size to be optimized effectively. Hence, before designing the system, parameters such as solar irradiation, wind speed and load profile must be evaluated; they are presented in the following subsections.

2.1. Solar Radiation

The latitude and longitude of Tirunelveli are 8.73°N and 77.7°E respectively. The hourly solar radiation was collected for one year from the NASA website [8]; the average solar radiation is 4.91 kWh/m2/d. The clearness index and average daily solar irradiation for the year are shown in Table I, while Figure 1 shows the solar irradiation over the year as produced by HOMER.

Figure 1. Monthly solar radiation

Table I Clearness Index and Average Daily Irradiation for a Year

Month Clearness Index Daily Radiation (kWh/m2/d)

January 0.537 4.840

February 0.578 5.580

March 0.598 6.140

April 0.526 5.520

May 0.495 5.130

June 0.438 4.460

July 0.443 4.530

August 0.469 4.870

September 0.498 5.130

October 0.451 4.420

November 0.435 3.970

December 0.479 4.200

Average 0.495 4.895

2.2. Wind Speed Data

The second renewable source implemented in the system is wind. Wind data for this site were collected for one year from the NASA website; the average wind speed is 5.17 m/s. The monthly average speeds are shown in Table II, and Figure 2 shows the wind speed over the year as produced by HOMER.

Page 298: Ijaet volume 7 issue 3 july 2014

International Journal of Advances in Engineering & Technology, July, 2014.

©IJAET ISSN: 22311963

932 Vol. 7, Issue 3, pp. 930-937

Figure 2. Monthly wind speed

Table II Wind Speed for a Year
Month Wind Speed (m/s)
January 5.010
February 3.810
March 3.690
April 4.030
May 5.820
June 7.330
July 6.460
August 6.460
September 5.800
October 4.530
November 4.000
December 5.090
Average 5.18

2.3. Load Profile

Load is an important consideration in any power generating system. In this case study we consider a remote village in Tirunelveli, which lacks access to the utility grid. The measured consumption is taken as 97 kWh/d in the present study. Figure 3 shows the monthly average load profile. The peak load requirement decides the size of the system; here the peak load is 9.7 kW.

Figure 3. Load profile for a village in Tirunelveli

III. SYSTEM OPTIMIZATION

The non-renewable energy system (existing system) and the hybrid renewable energy system

(proposed system) are simulated in HOMER software.

3.1. Non-Renewable Energy System



The existing system model, which consists of a diesel generator and batteries to power the load, has been modelled using the micro-grid optimization software HOMER, as shown in Figure 4.

Figure 4. Existing power system at Tirunelveli

3.2. Renewable Energy System

The proposed hybrid renewable energy system, which consists of the existing power system plus a wind turbine and photovoltaics, is shown in Figure 5. The proposed system reduces diesel fuel consumption and the associated operation and maintenance cost. In this system, PV and the wind turbines are the primary power sources, the diesel generator is used as a backup, and batteries are used for storage.

Figure 5. Proposed hybrid power system for Tirunelveli

3.3. Homer Input Summary

Tables III and IV summarize the costs and other technical details of the components, together with the other parameters given as inputs to the HOMER hybrid model.

Table III Cost summary of the system components

Component      Size      Capital Cost ($)   Replacement Cost ($)   O&M Cost
PV system      10 kW     26,700             20,000                 10 $/year
Wind turbine   10 kW     20,000             20,000                 500 $/year
Battery        360 Ah    450                440                    10 $/year
Generator      10 kW     5,500              5,475                  0.5 $/hr
               15 kW     6,600              6,600                  0.6 $/hr
               21 kW     7,500              7,500                  0.7 $/hr
               25 kW     8,000              8,000                  0.8 $/hr
               30 kW     8,800              8,800                  0.9 $/hr
Converter      30 kW     23,000             23,000                 10 $/year


Table IV Specifications of the components used

PV System
Model: Canadian Solar CS6P-240P
Peak power: 240 W
Derating factor: 80%
Slope: 6.81°
Azimuth: 0°
Ground reflectance: 20%
Temperature coefficient: -0.43%/°C
Nominal operating temperature: 45°C
Efficiency at standard test conditions: 14.92%
Lifetime: 20 years

Wind Turbine
Model: BWC Excel-S
Rated power: 10 kW
Hub height: 15 m
Lifetime: 20 years

Battery
Nominal voltage: 6 V
Nominal capacity: 360 Ah
Lifetime throughput: 1075 kWh
Round trip efficiency: 85%
Min. state of charge: 30%
Float life: 10 years
Maximum charge rate: 1 A/Ah
Maximum charge current: 18 A
Batteries per string: 2 (12 V DC bus)

Diesel Generator
Lifetime: 25,000 hours
Minimum load ratio: 50%
Fuel: Diesel
Fuel cost: $1.19/L

Converter
Lifetime: 10 years
Efficiency: 90%

Economics
Annual interest rate: 5%
Project lifetime: 20 years

IV. RESULTS AND DISCUSSION

Both systems are simulated in the HOMER software, which finds the optimal configuration in each case. The optimization result for the non-renewable energy system is shown in Figure 6. As shown in the figure, the total Net Present Cost (NPC) is $339,909. The diesel generator burns 14,872 L of fuel per year and the annual generator run time is 5,291 hours; over twenty years the diesel generator will burn 297,440 L of fuel. Fuel prices are moreover likely to increase, whereas the total cost here is calculated with a constant fuel price of $1.19 per litre. The total fuel cost over these 20 years will be $353,953.6, and the total cost for the whole system will be $693,862.6. Figure 7 shows the monthly average electric production of the system, which is produced entirely by the diesel generator.


Figure 6. Optimized result for the non-renewable energy system
Figure 7. Monthly average electric production for the non-renewable energy system

The hybrid renewable energy system was also simulated in HOMER with four sensitivity variables: wind speed, solar irradiation, load, and diesel price. Figure 8 shows the optimized results for the proposed system. The total Net Present Cost (NPC) is $270,514. The system will consume only 3,010 litres of diesel fuel per year, and the annual generator run time is expected to be 1,034 hours. The lifetime of this system is 25 years, but a 20-year life is used to make the comparison between the two systems. Over twenty years the diesel generator will burn 60,200 L of fuel, which will cost $71,638, and the total cost of the system will be around $342,152. Figure 9 shows the monthly average electric production of the system: photovoltaic production is 29% with 13,482 kWh/yr, diesel generator production is 18% with 8,731 kWh/yr, and the wind turbine is expected to supply the rest of the load, 53% with 25,068 kWh/yr.

Figure 8. Optimized result for the renewable energy system

Figure 9. Monthly average electric production for renewable energy system

The cost difference between the two systems is $351,710.6, which is a very significant amount for a small system. Diesel generator run time is reduced, and the diesel generator in the proposed system will


produce only 18% of the total power. Moreover, the reduction of yearly diesel fuel consumption from 14,872 L to 3,010 L has a large positive impact on the environment and reduces the net cost of the system. The diesel generator will also require less maintenance and operation cost and a longer period of service before replacement.
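The lifetime figures quoted above follow from simple arithmetic, reproduced below in Python as a check (with the fuel price held constant at $1.19 per litre, as in the analysis):

FUEL_PRICE = 1.19      # $/L, held constant over the project life
YEARS = 20

def lifetime_cost(npc, fuel_litres_per_year):
    # total cost = HOMER net present cost + fuel burned over the project life
    return npc + fuel_litres_per_year * YEARS * FUEL_PRICE

existing = lifetime_cost(339909, 14872)   # -> 693862.6 (dollars)
proposed = lifetime_cost(270514, 3010)    # -> 342152.0 (dollars)
print(existing - proposed)                # -> 351710.6 saved by the hybrid system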

V. FUTURE TRENDS AND LIMITATIONS

Renewable technologies have come a long way in terms of research and development; however, there are still obstacles to their efficient and optimal use. The following are challenges faced by the designer.

The renewable energy sources, such as solar PV and fuel cells, need innovative technology to harness more useful power from them; the poor efficiency of solar cells is a major obstruction to encouraging their use.
The manufacturing cost of renewable energy sources needs a significant reduction, because the high capital cost leads to an increased payback time.
It should be ensured that the power loss in the power electronic devices is minimal.
The storage technologies need longer life-cycles through inventive technologies.
These stand-alone systems are less adaptable to load fluctuations; a large variation in load might even lead to collapse of the entire system.

VI. CONCLUSIONS

This paper compares two different systems for providing uninterruptible power to a remote site. One is the non-renewable energy system, which consists of a diesel generator and batteries; the other is the proposed system, which combines the existing system with a hybrid wind and PV system. HOMER software is used for the comparison, based on a pre-feasibility study of each system. The proposed system will save the extra cost associated with transporting diesel and with maintenance. The analysis indicates that the renewable energy system will cost $351,710.60 less over its expected life than the existing diesel generator system. A hybrid renewable energy based system is therefore recommended for the Tirunelveli site.



AUTHORS BIOGRAPHY

Swati Negi is currently pursuing her M.E. (Regular) at the National Institute of Technical Teachers' Training & Research, Sector 26, Chandigarh. She completed her B.Tech at Graphic Era Institute of Technology, Dehradun, where she also has two years of academic experience. Her areas of interest are power electronics and renewable energy technology.

Lini Mathew is presently working as Associate Professor in the Electrical Engineering Department of the National Institute of Technical Teachers' Training and Research, Chandigarh, India. She holds a Bachelor's degree in Electrical Engineering from Kerala University, and a Master's and Ph.D. from Panjab University. She has 28 years of experience, of which 2 years are industrial and the rest teaching. She has guided more than 50 Master's degree theses and has more than 50 articles to her credit. Her areas of specialization are Digital Signal Processing, Power Systems, ANN and Fuzzy Logic, Virtual Instrumentation, MATLAB, etc.



A FEED FORWARD ARTIFICIAL NEURAL NETWORK BASED SYSTEM TO MINIMIZE DOS ATTACK IN WIRELESS NETWORK

Tapasya Pandit & Anil Dudy
Department of Electronics and Communication Engineering, Baba Mastnath College of Engineering and Technology, Rohtak, Haryana, India

ABSTRACT

Security has become more and more important to our lives as technology has developed. Over the past years, the concept of security has involved assessing computer systems, networks and files; scanning and analyzing system information from various areas; and observing and analyzing both user and system activities to identify possible security violations, which include both intrusions (attacks from outside) and misuse (attacks from inside the organization). A technology developed to assess the security of computer systems or networks, and one of the most popular types of security management system for computers and networks, is the intrusion detection system.

INDEX TERMS— Artificial Neural Network (ANN), Back Propagation Neural Network, Denial of Service (DoS), Feed Forward Neural Network, Intrusion Detection System (IDS), Network Security

I. INTRODUCTION

The preservation of security has become more difficult over time because the available attack technologies are becoming more sophisticated. At the same time, less technical ability is required of a novice intruder, because proven attack methods are easily accessible. The main idea is to protect information through an encrypted channel and to confirm the identity of the connected device through a firewall, which will not accept any connection from a stranger; however, firewalls do not provide full protection for the system (Rung-Ching, Kai-Fan and Chia-Fen, 2009). Network security capabilities therefore need to be extended by complementing them with other tools such as an intrusion detection system (an IDS is not a replacement for either a good antivirus program or a firewall). Since it is technically impossible to create computer systems (hardware and software) without any defect or security failure, intrusion detection is regarded as a particularly important area of computer systems research. An IDS is a protective system that can detect disorders occurring on the network: it can report and control intrusion attempts, which are composed of phases including collecting data, probing ports, gaining control of computers, and finally hacking. In this paper, we consider several agents, each of which can detect one or two DoS attacks. These agents interact in a way that does not interfere with each other. Parallelization is used to increase system speed: since the designed agents act separately and the result of each agent has no impact on the others, each can run on a discrete CPU (depending on how many CPUs are used in the IDS computers) to speed up performance. The purpose of an intrusion detection system is not to prevent an attack, but to discover and possibly identify attacks, to recognize security problems in systems or computer networks, and to report them to the system administrator [5]. Intrusion detection systems are generally used with firewalls as their security complement. Detecting anomalous inbound network traffic and reporting it to the administrator, or blocking suspicious connections, is another feature of an IDS. An IDS is capable of detecting attacks by both internal and external users [9].


Wang et al. have presented their ideas about intrusion detection and have explored different neural network methods. There are many intelligent techniques for designing intrusion detection systems, such as machine learning, data mining, and fuzzy sets (the latter divided into the two groups of fuzzy set techniques and fuzzy anomaly detection), to mention a few. Neural network algorithms are likewise divided into two groups: supervised learning and unsupervised learning [14].

II. INTRUSION DETECTION SYSTEM

Nowadays, intrusion detection systems are among the most essential and complete parts of a network monitoring system. Intrusion detection system technologies are relatively new and promising for detecting network intrusions. Intrusion detection is the process in which events and incidents on a system or network are monitored, and intrusions into the network or system are detected [1].

The goal of intrusion detection is the screening, evaluating and reporting of network activity. The system acts on data packets that have already passed the access control tool. Due to reliability limitations, internal threats, and the inevitable doubt and hesitation involved, the intrusion prevention system should allow some suspected attacks to pass in order to decrease the probability of false detections (false positives). On the other hand, most IDS methods are intelligent and use different techniques to detect potential attacks, intrusions and abuses. Usually, an IDS uses bandwidth in a way that lets it keep operating without affecting accounting or the network architecture. Intrusion detection systems (IDS) are responsible for identifying and detecting any unauthorized use of the system, abuse, or damage caused by both internal and external users [4].

Intrusion detection systems try to detect anomalous intrusions into the network using special algorithms, which can be divided into three categories: misuse-based, anomaly-based, and specification-based. By analyzing user behavior in the network, an anomaly-based system can discover intrusions. In the anomaly-based method, an index of normal behavior is created, and an abnormality may be an indication of an intrusion. Indexes of normal behavior are created based on approaches such as neural networks and machine learning methods. To detect anomalous behaviors, normal behaviors must first be identified, and specific patterns and rules designed for them. Behaviors that follow these patterns are considered normal, and events that deviate beyond the normal statistics of these patterns are detected as abnormal. It is extremely difficult to detect abnormal intrusions, because there is no consistent pattern to monitor for them. Typically, an event that deviates from normal behavior by more than two standard deviations is assumed to be abnormal.

With the rapid expansion of networks in recent decades, system protection has become one of the most important issues in computer systems, owing to gaps in most components of protection systems such as firewalls. In recent years, several research efforts have proposed, developed and designed intrusion detection systems based on a variety of techniques to protect the system and to analyze and predict the behavior of users. Misuse intrusion detection is the process of searching for attack patterns in the data source to identify instances of network attacks, by comparing current activity against the expected actions of an intruder. Intrusion detection systems (IDS) are thus used as secondary protectors of computer systems, to identify and prevent illegal activities or exploitation of gaps. The intrusion detection problem is treated as pattern recognition, and the artificial neural network must be trained to distinguish between normal and unusual patterns (DoS, Probe, R2L, U2R) [2].
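As an illustration of the two-standard-deviation rule mentioned above, the following Python sketch flags a scalar traffic metric as anomalous when it deviates from a normal-behavior baseline by more than two standard deviations (the metric, the sample values and the function name are hypothetical, not from the paper):

import statistics

def is_anomalous(value, baseline, k=2.0):
    # Flag an event whose metric deviates from the normal-behavior
    # baseline by more than k standard deviations.
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(value - mu) > k * sigma

normal_rps = [98, 103, 97, 101, 99, 102, 100, 96]   # requests/s under normal load
print(is_anomalous(350, normal_rps))   # True: consistent with a flooding attack
print(is_anomalous(101, normal_rps))   # False: within normal variation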

Unfortunately, anomaly-based intrusion detection and IDSs of this kind cause many false alarms (false positives), because the behavior patterns of users and of the system are highly variable. On the other hand, since they do not rely on signatures of previous attacks, abnormal-behavior detection methods can detect new kinds of attacks. The misuse-based technique, usually known as signature-based detection, stores pre-designed intrusion templates (signatures) as rules, such that each template captures the different variants of a specific intrusion; once a template of this kind appears in the system, an intrusion is alarmed. Usually, in these methods, the detector has a database of attack signatures or templates and tries to detect patterns similar to those stored in its own database. Methods of this kind can detect known intrusions, but if new attacks appear anywhere in the network, they are unable to detect them; the administrator must continuously add the templates (patterns) of new attacks to the intrusion detection system. One advantage of this method is the high accuracy in detecting intrusions whose templates have been precisely given to the system.

III. DATASET

The Information System Technology Group at the Massachusetts Institute of Technology Lincoln Laboratory, sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory, collected and evaluated the first standard corpora for the evaluation of computer network intrusion detection systems. This is called the DARPA Intrusion Detection Evaluation [7].

The data sets used in this research are from the 1998 DARPA Intrusion Detection Evaluation Program. The following attacks are present in the dataset, classified according to the actions and goals of the attacker. Each attack type falls into one of four main categories:

A. Probing

Probing is a class of attacks in which an attacker scans a network to gather information or find known vulnerabilities. An attacker with a map of the machines and services available on a network can use this information to look for exploits. There are different types of probes: some abuse the computer's legitimate features, while others use social engineering techniques. This class of attacks is the most commonly encountered and requires very little technical expertise. The attacks used were IP sweep, Mscan, Nmap, Saint and Satan [2].

B. Denial of Service Attacks

Denial of Service (DoS) is a class of attacks in which an attacker makes some computing or memory resource too busy or too full to handle legitimate requests, thus denying legitimate users access to a machine. There are different ways to launch DoS attacks:

• Abusing the computer's legitimate features.
• Targeting implementation bugs.
• Exploiting the system's misconfigurations.

DoS attacks are classified based on the services that an attacker renders unavailable to legitimate users. The attacks used were Apache2, Back, Mail bomb, Neptune, Ping of death, Process table, Smurf, Syslogd and UDP storm [2].

C. User to Root Attacks

User to root exploits are a class of attacks in which an attacker starts out with access to a normal user account on the system and is able to exploit a vulnerability to gain root access. The most common exploits in this class are regular buffer overflows, caused by common programming mistakes and environment assumptions. The attacks used were Perl and Xterm [2].

Remote to User Attacks

A remote to user (R2L) attack is a class of attacks in which an attacker sends packets to a machine over a network and then exploits a vulnerability of that machine to illegally gain local access as a user. There are different types of R2L attacks; the most common attack in this class uses social engineering. The attacks used were Dictionary, FTP-write, Guest, Imap, Named, Phf, Sendmail, Xlock and Xnsnoop [2].

IV. ARTIFICIAL NEURAL NETWORK

Artificial neural networks were born after McCulloch and Pitts introduced a set of simplified neurons in 1943. These neurons were presented as models of biological neurons and as conceptual components of circuits that could perform computational tasks. The basic model of the artificial neuron is founded upon the functionality of the biological neuron [10]. By definition, "neurons are basic signaling units of the nervous system of a living being, in which each neuron is a discrete cell whose several processes arise from its cell body". One can differentiate between two basic types of networks: networks with feedback and those without it. In networks with feedback, the output values can be traced back to the input values. In networks without feedback, for every input vector presented to the network an output vector is calculated, which can be read from the output neurons; there is no feedback, and hence only a forward flow of information is present. Networks with this structure are called feed forward networks, and various nets come under this type. Consider a multilayer feed forward back propagation network with one layer of Z hidden units. Each output unit Yk has a bias w0k and each hidden unit Zj has a bias v0j, so both the output units and the hidden units have biases. The bias acts like a weight on a connection from a unit whose output is always 1. This network has one input layer, one hidden layer and one output layer (in general there can be any number of hidden layers). The input layer is connected to the hidden layer, and the hidden layer to the output layer, by means of interconnection weights. The bias is provided for both the hidden and the output layer and acts upon the net input to be calculated [14].

V. TRAINING ALGORITHM

The training algorithm of back propagation involves four stages [14], viz.:

1. Initialization of weights
2. Feed forward
3. Back propagation of errors
4. Updating of the weights and the biases

During the first stage, the initialization of weights, small random values are assigned. During the feed forward stage, each input unit (Xi) receives an input signal and transmits it to each of the hidden units Z1, …, Zp. Each hidden unit then calculates its activation and sends its signal Zj to each output unit. Each output unit calculates its activation to form the response of the net for the given input pattern. During back propagation of errors, each output unit compares its computed activation yk with its target value tk to determine the associated error for that pattern with that unit. Based on the error, the factor δk is computed and used to distribute the error at output unit yk back to all units in the previous layer; similarly, a factor δj is computed for each hidden unit zj. During the final stage, the weights and biases are updated using the δ factors and the activations.

The notation used is as follows [14]:

x : input training vector, x = (x1, …, xi, …, xn)
t : output target vector, t = (t1, …, tk, …, tm)
δk : error at output unit yk
δj : error at hidden unit zj
α : learning rate
v0j : bias on hidden unit j
zj : hidden unit j
w0k : bias on output unit k
yk : output unit k

The training algorithm used in the back propagation network is as follows, given in phases:

D. Initialization of Weights

Step 1: Initialize the weights to small random values.
Step 2: While the stopping condition is false, do Steps 3-10.
Step 3: For each training pair, do Steps 4-9.

E. Feed Forward

Step 4: Each input unit receives the input signal xi and transmits it to all units in the layer above, i.e. the hidden units.
Step 5: Each hidden unit (zj, j = 1, …, p) sums its weighted input signals,

z_inj = v0j + Σi xi vij    (1)

applies its activation function,

zj = f(z_inj)    (2)


and sends this signal to all units in the layer above, i.e. the output units.

Step 6: Each output unit (yk) sums its weighted input signals,

y_ink = w0k + Σj zj wjk    (3)

and applies its activation function to calculate the output signal,

yk = f(y_ink)    (4)

F. Back Propagation of Errors

Step 7: Each output unit receives a target pattern corresponding to the input pattern, and its error information term is calculated as

δk = (tk − yk) f′(y_ink)    (5)

Step 8: Each hidden unit (zj) sums its delta inputs from the units in the layer above,

δ_inj = Σk δk wjk    (6)

and its error information term is calculated as

δj = δ_inj f′(z_inj)    (7)

G. Updating of Weights and Biases

Step 9: Each output unit (yk) updates its bias and weights (j = 0, …, p). The weight correction term is

ΔWjk = α δk zj    (8)

and the bias correction term is

ΔW0k = α δk    (9)

so that

Wjk(new) = Wjk(old) + ΔWjk,  W0k(new) = W0k(old) + ΔW0k    (10)

Each hidden unit (zj, j = 1, …, p) updates its bias and weights (i = 0, …, n). The weight correction term is

ΔVij = α δj xi    (11)

and the bias correction term is

ΔV0j = α δj    (12)

so that

Vij(new) = Vij(old) + ΔVij,  V0j(new) = V0j(old) + ΔV0j    (13)

Step 10: Test the stopping condition. The stopping condition may be the minimization of the error, a number of epochs, etc.
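The training loop of Steps 1-10 can be sketched compactly in Python with NumPy. This is a minimal illustration assuming a logistic (sigmoid) activation, for which f′(x) = f(x)(1 − f(x)); the function and variable names are ours, not from the paper:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bpnn(X, T, n_hidden=3, alpha=0.5, epochs=10_000):
    # One-hidden-layer back propagation network following Steps 1-10.
    n_in, n_out = X.shape[1], T.shape[1]
    rng = np.random.default_rng(0)
    V = rng.uniform(-0.5, 0.5, (n_in, n_hidden))    # weights v_ij   (Step 1)
    v0 = rng.uniform(-0.5, 0.5, n_hidden)           # hidden biases v_0j
    W = rng.uniform(-0.5, 0.5, (n_hidden, n_out))   # weights w_jk
    w0 = rng.uniform(-0.5, 0.5, n_out)              # output biases w_0k
    for _ in range(epochs):                         # Step 2: stopping condition
        for x, t in zip(X, T):                      # Step 3: each training pair
            z = sigmoid(v0 + x @ V)                 # Steps 4-5: eqns (1)-(2)
            y = sigmoid(w0 + z @ W)                 # Step 6: eqns (3)-(4)
            delta_k = (t - y) * y * (1 - y)         # Step 7: eqn (5)
            delta_j = (W @ delta_k) * z * (1 - z)   # Step 8: eqns (6)-(7)
            W += alpha * np.outer(z, delta_k)       # Step 9: eqns (8)-(10)
            w0 += alpha * delta_k
            V += alpha * np.outer(x, delta_j)       # eqns (11)-(13)
            v0 += alpha * delta_j
    return V, v0, W, w0

For example, train_bpnn(np.array([[0,0],[0,1],[1,0],[1,1]]), np.array([[0],[1],[1],[0]])) typically learns the XOR mapping with three hidden units.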

Fig. 1 Feed Forward Networks

Fig. 2 Back Propagation of Errors

VI. LEARNING

Learning is the process by which a neural network adapts itself to a stimulus, eventually producing the desired output after making the proper parameter adjustments to itself. It is a continuous classification process of input stimuli: when a stimulus appears at the input nodes of the network, the network either recognizes it or develops a new classification. During the learning process the synaptic weights are adjusted in response to the input so that the actual output response converges to the desired, or target, output response. When the actual output response is the same as the desired response, the network is said to have learned, or acquired, knowledge. There are different learning methods, and each is described by a set of equations.

A. Supervised Learning

Training or learning of a neural network is accomplished by presenting a sequence of training vectors (samples or patterns), each with an associated target output vector. The weights are adjusted according to the learning algorithm. This process is known as supervised learning, and Fig. 3 represents it.

Fig. 3 Supervised Learning

B. Unsupervised Learning

In unsupervised training, a sequence of input vectors is provided but no target vectors are specified. The net modifies its weights so that the most similar input vectors are assigned to the same output (cluster) unit; it organizes the patterns into categories on its own [14]. Even though unsupervised learning does not require a teacher or target output, it requires guidelines to determine the formation of clusters. Its representation is shown in Fig. 4.

Fig. 4 Unsupervised Learning

VII. THE PROPOSED METHOD

In this section, the implementation of the proposed intrusion detection system, implementation steps

and evaluation criteria are described.


H. Implementation

Network attacks can be divided into four groups: DoS, R2L, U2R and Probe. The designed IDS can detect DoS-type attacks with a very high detection rate; in fact, this kind of IDS is responsible for detecting the attacks that fall into the DoS category.

To design this type of IDS, we identified the DoS attacks and designed a separate IDS for each one to detect that specific attack. Overall, the designed IDS will detect DoS attacks in the network (if there are any). The whole process of the system is shown in Figure 5.

Fig. 5. The process of generating the intrusion detection system

As shown in Figure 5, in order to train the neural network and obtain a better-quality process, we made some changes to the database; these did not affect the integrity of the database and were done only to improve the functioning of the neural network.

I. Evaluation Criteria

To measure the efficiency of the designed IDSs, or the exact degree of their assurance and correctness, the following criteria can be used [2]:

True negative (TN) = normal data correctly detected as normal
True positive (TP) = attacks correctly detected
False positive (FP) = normal events detected as attacks
False negative (FN) = attack incidents detected as normal

TNR = TN / (TN + FP) = the number of normal incidents that are correctly detected / the total number of normal incidents.
TPR = TP / (TP + FN) = the number of attack incidents that are correctly detected / the total number of attack incidents.
FNR = FN / (FN + TP) = the number of attack incidents that are detected as normal / the total number of attack incidents.
FPR = FP / (FP + TN) = the number of normal incidents that are detected as attacks / the total number of normal incidents.

In this implementation we used the TPR criterion and, as the chart above shows, the efficiency of this implementation is approximately 98% or more.
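A small helper illustrates how these rates are computed from the four counts; the function name and the example counts are hypothetical:

def detection_rates(tp, tn, fp, fn):
    # The four criteria defined above, each a ratio over the actual class.
    return {
        "TPR": tp / (tp + fn),   # attacks correctly detected
        "TNR": tn / (tn + fp),   # normal traffic correctly accepted
        "FPR": fp / (fp + tn),   # normal traffic raised as alarms
        "FNR": fn / (fn + tp),   # attacks that were missed
    }

# e.g. 985 of 1,000 attacks caught and 20 false alarms in 2,000 normal events
print(detection_rates(tp=985, tn=1980, fp=20, fn=15))
# {'TPR': 0.985, ...} - a TPR consistent with the ~98% reported above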


VIII. RESULTS AND DISCUSSION

After executing the training program of the neural network, the values of Y, W, W0, V and V0 are as follows:

Y = 0.7593 0.9990 0.0949 0.9694 0.9234 0.1133 1.0000 1.0000 0.1660 0.8952 0.8883 1.0000 0.6756 1.0000 0.0000 0.0000 0.9569 0.0504

EPOCH = 990000

W = [ 69.7724
      -92.1417
       75.1616 ]

W0 = [ 5.7396 ]

V = [ -101.2418  -48.9699  238.5258
       180.6523  215.8780   -0.1458
      -245.0327 -305.1403    9.9378 ]

V0 = [ 0.9382  4.7101  -1.8095 ]

The graphical representation of epoch and error is shown in Fig. 6, which plots the error against the epoch number. As the number of iterations increases, the error between the target value and the output value decreases, as is clear from Fig. 6.

Fig. 6 Plot between Epoch Number and Error


Similarly, as the number of iterations increases the output value approaches the target value, as clearly shown in Fig. 7, which plots the output value against the epoch number.

Fig. 7 Plot between Epoch Number and Output Value

Using these values of W, W0, V and V0 in the program for forecasting attack or normal traffic by the neural network, the forecast value of y is 1, which clearly indicates an attack. This is therefore an effective technique for forecasting attack or normal traffic.
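The forecasting step can be illustrated with the reported weights. The sketch below assumes a logistic activation, a 3-element input feature vector x, and the row/column orientation of V shown above; the 0.5 decision threshold is also an assumption:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

V = np.array([[-101.2418,  -48.9699, 238.5258],    # input-to-hidden weights
              [ 180.6523,  215.8780,  -0.1458],
              [-245.0327, -305.1403,   9.9378]])
V0 = np.array([0.9382, 4.7101, -1.8095])           # hidden biases
W = np.array([69.7724, -92.1417, 75.1616])         # hidden-to-output weights
W0 = 5.7396                                        # output bias

def classify(x):
    # Forward pass y = f(W0 + W . f(V0 + V x)); y near 1 => attack traffic.
    z = sigmoid(V0 + V @ x)
    y = sigmoid(W0 + W @ z)
    return ("attack" if y > 0.5 else "normal"), y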

IX. FUTURE SCOPE

As mentioned above, there has been a great deal of research on intrusion detection, and also on the use of neural networks in intrusion detection. As shown in this work, back propagation neural networks can be used successfully to detect attacks on a network. The same experiments should also be conducted with other types of neural networks, to see whether they can improve on the detection rate we obtained with a back propagation neural network.

X. LIMITATIONS

As in many studies, various challenges arise in intrusion detection systems. The limitations faced in this study can be summarized as follows:

1) Intrusion detection systems need periodic updates to the training set and profiles.
2) A static training data set may become outdated and deficient for prediction.
3) The classification accuracy for the data does not reach 100%.

XI. CONCLUSION

There are various artificial neural network techniques that can be applied to an intrusion detection system, each suitable for specific situations. The BPNN is an easy-to-implement, supervised learning artificial neural network. The number of epochs required to train the network is high compared with other ANN techniques, but the detection rate is very high. A BPNN can be used when one wants not only to detect an attack but also to classify it into a specific category so that preventive action can be taken. By combining different ANN techniques, one can reduce the number of epochs required and hence the training time. As DoS attacks are resolved by this work, the throughput of the network will improve and the network delay will be reduced. The work does not require any additional hardware and is software based. In the future, this system could be extended to an online system with little effort.

REFERENCES

[1] Bhavin Shah and Bhushan H. Trivedi, "Artificial Neural Network based Intrusion Detection System", International Journal of Computer Applications, Volume 39, No. 6, February 2012.
[2] Manoranjan Pradhan, Sateesh Kumar Pradhan and Sudhir Kumar Sahu, "Anomaly Detection using Artificial Neural Network", International Journal of Engineering Sciences & Emerging Technologies, April 2012.
[3] Zahra Moradi and Mohammad Teshnehlab, "Intrusion Detection Model in MANETs using ANNs and ANFIS", 2011 International Conference on Telecommunication Technology and Applications, Singapore.
[4] Mehdi Moradi and Mohammad Zulkernine, "A Neural Network Based System for Intrusion Detection and Classification of Attacks".
[5] Przemysław Kukiełka and Zbigniew Kotulski, "Adaptation of the neural network-based IDS to new attacks detection".
[6] M. Dondo and J. Treurniet, "Investigation of a Neural Network Implementation of a TCP Packet Anomaly Detection System", Defence Research and Development Canada, May 2004.
[7] V. Sivakumar, T. Yoganandh and R. Mohan Das, "Preventing Network From Intrusive Attack Using Artificial Neural Networks", International Journal of Engineering Research and Applications (IJERA), Vol. 2, Issue 2, Mar-Apr 2012, pp. 370-373.
[8] Samaneh Rastegari, M. Iqbal Saripan and Mohd Fadlee A. Rasid, "Detection of Denial of Service Attacks against Domain Name System Using Neural Networks", IJCSI International Journal of Computer Science Issues, Vol. 6, No. 1, 2009.
[9] S. Devaraju and S. Ramakrishnan, "Detection of Accuracy for Intrusion Detection System using Neural Network Classifier", International Journal of Emerging Technology and Advanced Engineering (IJETAE).
[10] Afrah Nazir, "A Comparative Study of different Artificial Neural Networks based Intrusion Detection Systems", International Journal of Scientific and Research Publications, Volume 3, Issue 7, July 2013.
[11] Sudhakar Parate, S. M. Nirkhi and R. V. Dharaskar, "Application of Neural Forensics for Detection of Web Attack using Neural Network", National Conference on Innovative Paradigms in Engineering and Technology (NCIPET-2013).
[12] Przemysław Kukiełka and Zbigniew Kotulski, "Analysis of Neural Networks usage for detection of a new attack in IDS", Annales UMCS Informatica AI X, 1 (2010), 51-59.
[13] Tariq Ahamad and Abdullah Aljumah, "Hybrid Approach using Intrusion Detection System", International Journal of Computer Networks and Communications Security, Vol. 2, No. 2, February 2014, 87-92.
[14] Amit Garg and Ravindra Pratap Singh, "Voltage Profile Analysis in Power Transmission System based on STATCOM using Artificial Neural Network in MATLAB/SIMULINK", International Journal of Applied Information Systems (IJAIS), Foundation of Computer Science, New York, USA, Volume 6, No. 1, September 2013.

AUTHORS BIOGRAPHY

Tapasya Pandit completed her B.Tech in Electronics and Communication Engineering at BPR College of Engineering, Gohana, Haryana, India, and is now pursuing her M.Tech in Electronics and Communication Engineering at Baba Mastnath College of Engineering and Technology, Rohtak, Haryana, India. Her interests include intrusion detection systems and artificial neural networks.

Anil Dudy is working as Assistant Professor in the Electronics and Communication Engineering Department of Baba Mastnath College of Engineering and Technology, Rohtak, Haryana, India, and is also pursuing a Ph.D. in Electronics and Communication Engineering at Baba Mast Nath University, Rohtak, Haryana, India.


IMPROVING PERFORMANCE OF DELAY AWARE DATA COLLECTION USING SLEEP AND WAKE UP APPROACH IN WIRELESS SENSOR NETWORK

Paralkar S. S.1 and B. M. Patil2
1P.G. Department, M. B. E. Society's College of Engineering, Ambajogai, Maharashtra, India
2M. B. E. Society's College of Engineering, Ambajogai, Maharashtra, India

ABSTRACT

Even though there have been many advancements and developments in wireless sensor networks, the nodes of a wireless sensor network are battery-powered devices; energy saving is therefore crucial to the lifetime of the network. The objective of the current work is a sleep and wake up method in which nodes save energy while asleep and wake up only to send or receive data. A sub cluster head method is used, which saves average energy and average delay of the network. The simulation results in this paper show that the sub cluster head method using sleep and wake up requires the minimum average energy as well as the minimum average delay of all the networks compared.

KEYWORDS: Sleep and Wake up Approach, Wireless Sensor Network.

I. INTRODUCTION

Wireless sensor networks consist of a number of independent nodes that are by nature low-cost, small, lightweight, battery-powered devices. Nowadays WSNs are used in many locations for transmitting and retrieving data, with thousands of nodes working simultaneously on a specific operation. Even as the technology improves, WSNs must reduce delay and energy consumption in order to increase network lifetime.

A wireless sensor network (WSN) is used to monitor conditions such as temperature, sound, vibration, pressure and pollutants, and to send the collected data through the network to a central location.

Wireless sensor networks are classified into flat, hierarchical and location-based network structures. These are further classified into multipath-based, query-based, negotiation-based, QoS-based and coherent-based, depending on the protocol operation [1].

The current work concentrates on improving the energy and delay required by the network. A node consumes energy when sending data to another node; otherwise it should be in sleep mode, which saves network energy. The delay and energy are minimized by the proposed method, which improves the energy and delay required by the wireless sensor network.

1.1 Design issues of a wireless sensor network

Several design issues must be considered when configuring a wireless sensor network, some of which are as follows:

Fault Tolerance: Node failure is a common problem in WSNs, as nodes can fail due to hardware problems or physical damage. The protocols designed for the network should be able to detect these failures as soon as possible and be robust enough to handle a considerable number of node failures while maintaining the functionality of the network. The routing protocol should be designed so that alternative paths are available for rerouting packets.

Scalability: Sensor networks vary greatly in scale, and the protocols deployed in them must maintain performance as the network grows.

Sensor Network Topology: Although WSNs have evolved in many respects, they continue to be networks with constrained resources in terms of energy, computing power, memory and communications capabilities. Topology maintenance is one way to reduce energy consumption in wireless sensor networks.

Power Consumption: In a WSN, the size of the node limits the size of the battery, so hardware and software design needs to consider energy use carefully. The required energy depends on the application: in some applications nodes can be turned off when their operation is over in order to conserve energy, while other applications require all nodes to operate continuously. Wireless communication consumes much of a node's energy, so long transmission distances between nodes and the base station should be avoided.

II. LITERATURE SURVEY

Heinzelman et al. [2] proposed LEACH, a clustering algorithm. LEACH is a cluster-based protocol that selects cluster heads which collect data from the nodes and send it to the base station; it uses a TDMA/CDMA MAC to reduce collisions. Lindsey and Raghavendra [3] introduced the PEGASIS algorithm, in which each node communicates only with its nearest neighbour and the nodes take turns transmitting to the base station, reducing the energy required for each round. Tan and Körpeogˆlu [4] proposed PEDAP, which is defined by a minimum spanning tree. Fonseca et al. [5] proposed the collection tree protocol. A. Manjeshwar and D. P. Agarwal [6, 7] proposed the Threshold-Sensitive Energy Efficient Sensor Network Protocol (TEEN) and APTEEN. TEEN is a protocol for reactive networks: a cluster head sends its members a hard threshold and a soft threshold value. APTEEN is a hybrid routing protocol. Chi-Tsun Cheng and Francis C. M. Lau [8] proposed a top-down approach and a bottom-up approach. A network consists of several clusters; in each cluster, one of the sensor nodes works as the cluster head (CH) while the others are cluster members (CMs). Wireless sensor nodes are involved in long-distance transmission in such a manner that the energy required by the network is reduced.

Top-down approach: The top-down approach is a centralized control algorithm. The base station collects data from all sensor nodes in the network and then performs the further operations.

Bottom-up approach: The bottom-up approach joins clusters of the same size together. It can be implemented in either a centralized or a decentralized fashion.

G. Lu, N. Sadagopan, B. Krishnamachari and A. Goel [9] proposed a sleep scheduling method that increases network lifetime and minimizes end-to-end delay in the network. Marjan Baghaie and Bhaskar Krishnamachari [10] proposed a delay-constrained minimum energy broadcast method, in which a cooperative algorithm is used to calculate delay and power efficiency and is then compared with a smart non-cooperative algorithm.

P. M. Lokhande and A. P. Thakare [11] proposed anycast forwarding schemes that forward the data packet to the next-hop node so as to minimize the expected packet-delivery delay from the sensor nodes to the sink node. Abhay Raman et al. [12] proposed a sleep-wake scheduling protocol and an anycast packet-forwarding protocol that not only maximize the network lifetime but also improve the expected end-to-end packet-delivery delay. Guofang Nan et al. [13] proposed a coverage-guaranteed distributed sleep/wake scheduling scheme (CDSWS) that considers both network lifetime and network coverage, keeping more than one node per cluster active through a dynamic node selection method. Chih-Min Chao et al. [14] proposed a quorum-based MAC protocol that enables sensor nodes to sleep longer under light loads, which saves energy and keeps transmission latency to a minimum.

Bo Jiang et al. [15] proposed the SSMTT method to support multiple-target tracking sensor networks, which saves energy on proactive wake-up communication. B. Chen et al. [17] proposed Span, which saves network energy and increases system lifetime; Span is a distributed, randomized algorithm in which nodes decide whether to sleep or to join a forwarding backbone as a coordinator. Abtin Keshavarzian et al. [18] proposed an algorithm consisting of multi-parent schemes that tries to maintain a longer network lifetime while satisfying the latency constraints. Yuanyuan Zhou and Muralidhar Medidi [19] proposed a method that not only reduces the end-to-end delay but also extends the network lifetime. Arjan Durresi, Vamsi Paruchuri and Leonard Barolli [20] proposed the DEAP method, which enables a flexible range of trade-offs between packet delay and energy use; DEAP improves both the packet delay and the system lifetime.

B. Liang, J. Frolik and X. S. Wang [21] proposed a predictive QoS control strategy for wireless sensor networks. Giuseppe Anastasi et al. [22] proposed an Adaptive Staggered sleep Protocol (ASLEEP) for efficient power management in WSNs targeted at periodic data acquisition, which dynamically adjusts the sleep schedule of nodes without affecting the network; this protocol reduces the energy required by the nodes and thus increases the network lifetime. Vamsi Paruchuri et al. [23] proposed Random Asynchronous Wakeup (RAW), a power-saving technique for sensor networks that reduces energy consumption without significantly affecting the latency or connectivity of the network; each node decides locally whether to sleep or be active.

Shunfu Jin et al. [24] proposed a sleep/wakeup protocol introduced in IEEE 802.15.4, with a handover ratio and a cost function for minimizing the energy consumption of the sensor node. S. Kavitha and S. Lalitha [25] introduced a protocol that makes local monitoring parsimonious in its energy consumption and integrates it with any existing sleep-wake protocol in the network. P. Kaur and A. Nayyar [26] proposed an energy-efficient dynamic power management technique that shuts down a sensor node when there is no work and wakes it up when necessary, which yields better energy savings and an enhanced lifetime. Zhihui Chen and Ashfaq Khokhar [27] proposed a protocol based on the Time Division Multiple Access (TDMA) principle, in which a sensor uses its assigned slot only when it is sending or receiving information and otherwise turns its receiver and transmitter off.

N. Shrestha et al. [28] proposed SWAP to reduce the packet latency of delay-sensitive packets and evaluated the energy efficiency and performance of the network; a swap scheduling scheme ensures that the active periods of two neighbouring nodes overlap at least once within a cycle of the sleep and wake-up slots, so that the two nodes can communicate. A. Sharma et al. [29] proposed an energy measurement system based on a node's current consumption. Y. S. Bae et al. [30] proposed a new RF wakeup sensor, a dedicated small RF module that checks for potential communications by sensing the presence of an RF signal; with the RF wakeup sensor, a node no longer requires duty cycling, eliminating both sleep delay and idle listening.

The rest of the paper is organized in the following manner: Section III describes the proposed method, Section IV presents the simulation and analysis, Section V shows the experimental results, and finally the conclusion of the newly proposed scheme is given.

III. SLEEP/WAKE UP METHOD

The source initiates route discovery by broadcasting an RREQ packet and then waits for RREPs from the nodes. If the source receives an RREP, it sends data to that node; if not, it sends the RREQ to all other nodes. Nodes wake up when they are ready to send or receive data; otherwise they stay in sleep mode, so the network requires minimum energy. Cluster members send data to the cluster head nearest to them using the sleep and wake up method, and the cluster head forwards the data to the next level.

IV. SIMULATION AND ANALYSIS

All the work in this paper has been implemented and validated in NS-2.34 [16] on the Red Hat Linux operating system, using the Network Simulator with NAM, AWK and XGRAPH.

4.1 Method and Implementation

As stated above, the method was implemented in NS-2.34 and installed as an extension class in NS. The top-down approach, the bottom-up approach, SCHP and SCHP(SW) are included in this simulation for calculating average energy and average delay.

4.2 Work and Analysis

The work in this paper mainly focuses on the sleep and wake up method using the proposed SCHP method in NS2, for which values have been set for the parameters shown in the table below.

Table 1. Simulation Parameters

Simulation Parameter        Value
Simulator                   NS-2.34
Area                        500 m × 500 m
Number of Nodes             30
MAC                         802.11
Queue Type                  Queue/DropTail/PriQueue
Initial Energy              90 joules
Queue Length                200

4.3 Algorithm

Step 1: The source sends an RREQ to all the nodes.
Step 2: If an RREP is sent from a node back to the start node, the start node sends its data; otherwise the RREQ is updated and retransmitted to all other nodes.

For data transmission:
Step 3: If the receiving node is the source, continue.
Step 4: If the data received is not an RREQ, set flag = 0.
Step 5: If flag = 0, drop the RREQ; otherwise transmit the RREQ. Else, if the receiving node is the destination, continue.
Step 6: If the node_id of the RREQ equals the node_id of the RREP, initialize the RREP.

For sleep (RREP):
Step 7: If an RREP is received, continue.
Step 8: If the node_id of the receiver does not equal that of the RREP, set flag = 1; otherwise transmit the RREP acknowledgement and the data, then drop the RREP; else set flag = 1.
Step 9: If the receiving node is not the source, transmit a new RREP and set flag = 1; otherwise check the count: if count = 0, transmit; else stop the RREP.

Algorithm 1. Sleep and wake up method.
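The route-discovery and wake-up behaviour of Algorithm 1 can be sketched in Python. This is an illustrative reconstruction, not the NS-2 implementation; the class and function names are ours:

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.awake = False              # nodes sleep by default to save energy

    def wake_up(self):
        self.awake = True

def discover_route(source, nodes, destination_id, max_retries=3):
    # Broadcast an RREQ (Step 1); a node that matches the destination wakes
    # up and answers with an RREP, after which data transfer can start.
    for _ in range(max_retries):
        for node in nodes:                          # RREQ reaches every node
            if node.node_id == destination_id:
                node.wake_up()                      # wake only the nodes involved
                source.wake_up()
                return node                         # RREP back to the source
        # no RREP received: update and retransmit the RREQ (Step 2)
    return None                                     # route discovery failed

# e.g. discover_route(Node(0), [Node(i) for i in range(1, 31)], destination_id=7)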


4.4 Procedure for subcluster head

1. Set the cluster heads and normal nodes.
2. For i = 0 to nn and for j = 0 to nn, set flag = 0.
3. For l = 0 to countnodes:
4. If j equals normalnodes(l), set flag = 0. If flag ≠ 1, compute the correction terms temp1 to temp4 as follows:

temp1 = c1 · r1 · (cluster_head1(j) − X1(j))    (1)
temp2 = c2 · r2 · (cluster_head3 − X1(j))    (2)
temp3 = c1 · r1 · (cluster_head2(j) − X2(j))    (3)
temp4 = c2 · r2 · (cluster_head4 − X2(j))    (4)

5. If xid1(j) < 0 or xid2(j) < 0, then set

xid1(j) = 1    (5)
xid2(j) = 1    (6)

6. If xid1(j) > 90 or xid2(j) > 90, then set

xid1(j) = 90    (7)
xid2(j) = 90    (8)

7. If xid1(j) equals X1(j) and xid2(j) equals X2(j), then set count = count + 1, X1(j) = xid1(j) and X2(j) = xid2(j).
8. For k = 0 to anchor, set

disx = CLUSTER_X1(k) − X1(j)    (9)
disy = CLUSTER_X2(k) − X2(j)    (10)

9. Compute disxsq, disysq, distsq and dist:

disxsq = disx · disx    (11)
disysq = disy · disy    (12)
distsq = disxsq + disysq    (13)
dist = sqrt(distsq)    (14)

10. If dist ≤ transrange, then recompute the distance from the node's initial position:

disx = initposX1(i) − X1(i)    (15)
disy = initposX2(i) − X2(i)    (16)
disxsq = disx · disx    (17)
disysq = disy · disy    (18)
distsq = disxsq + disysq    (19)
dist = sqrt(distsq)    (20)
Est = Est + dist    (21)

Set normalnodes(countnodes) = j, print the cluster head normalnodes(countnodes), and set countnodes = countnodes + 1.
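A hedged Python sketch of this procedure follows. The procedure above computes the correction terms and the distance test but does not show how the terms are combined, so the additive position update below is an assumption, as are the function names; the [1, 90] clamp and the transmission-range test mirror steps 5, 6 and 10:

import math

def update_position(x1, x2, heads, c1, r1, c2, r2):
    # Correction terms of step 4, combined additively (assumed) and
    # clamped to the [1, 90] area as in steps 5-6.
    temp1 = c1 * r1 * (heads[0] - x1)
    temp2 = c2 * r2 * (heads[2] - x1)
    temp3 = c1 * r1 * (heads[1] - x2)
    temp4 = c2 * r2 * (heads[3] - x2)
    new_x1 = min(max(x1 + temp1 + temp2, 1.0), 90.0)
    new_x2 = min(max(x2 + temp3 + temp4, 1.0), 90.0)
    return new_x1, new_x2

def within_range(node, head, transrange):
    # Euclidean distance test of steps 8-10: the node joins the cluster
    # head only if it lies inside the transmission range.
    disx, disy = head[0] - node[0], head[1] - node[1]
    return math.sqrt(disx * disx + disy * disy) <= transrange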

V. SIMULATION AND RESULTS

The following figures show the simulation results for energy spent and average delay per node. A total of 30 nodes were considered in the simulation.


Figure 1. Average energy spent: top-down, bottom-up and SCHP

Figure 1 compares the energy spent by SCHP with the top-down and bottom-up approaches. The X-axis represents average energy and the Y-axis represents nodes. The average energy spent by 30 nodes in SCHP is less than in the top-down and bottom-up approaches. As the number of nodes increases, the average energy required also increases; SCHP takes the minimum average energy, as shown in Figure 1, and lower average energy is better.

Figure 2. Average delay: top-down, bottom-up and SCHP

Figure 2 compares the average delay of SCHP with the top-down and bottom-up approaches. The X-axis represents average delay and the Y-axis represents nodes. As the figure shows, the average delay for 30 nodes in SCHP is minimal compared with the top-down and bottom-up methods; lower average delay is better.

Figure 3. Average energy: top-down, bottom-up, SCHP and SCHP(SW)


Figure 3 compares the energy spent by SCHP(SW) with the top-down, bottom-up and SCHP approaches. The X-axis represents average energy and the Y-axis represents nodes. The average energy required by 30 nodes with the sleep and wake-up method is the minimum compared with the top-down, bottom-up and SCHP approaches. As the number of nodes increases, the average energy also increases; lower average energy is better.

Figure 4 compares the average delay of SCHP(SW) with the top-down, bottom-up and SCHP approaches. The X-axis represents average delay and the Y-axis represents nodes. As the number of nodes increases, the average delay also increases; the sleep and wake-up method gives the network the minimum average delay compared with the top-down, bottom-up and SCHP methods, and lower average delay is better.

Figure 4. Average delay: top-down, bottom-up, SCHP and SCHP(SW)

Figure 5. Average energy spent: SCHP and SCHP(SW)

Figure 5 compares the energy spent by SCHP(SW) with SCHP. The X-axis represents average energy and the Y-axis represents nodes. The average energy required by 30 nodes with the sleep and wake-up method is less than with the SCHP method; as the number of nodes increases, the average energy also increases, and lower average energy is better.

Figure 6 compares the average delay of SCHP(SW) with SCHP. The X-axis represents average delay and the Y-axis represents nodes. For 30 nodes, the sleep and wake-up method gives a smaller average delay than the SCHP method; lower average delay is better.


Figure 6. Average delay: SCHP and SCHP(SW)

VI. CONCLUSION AND FUTURE WORK

The subcluster head method saves average energy and minimizes average delay. This paper shows how the sleep and wake up method in a WSN minimizes the average delay, increasing the performance of the network, and saves average energy compared with the subcluster head, top-down and bottom-up methods. The sleep and wake up method requires the minimum average energy and average delay for the network, thereby increasing the network lifetime. The values derived by this objective method can be tuned according to different method implementations and applications.

REFERENCES

[1] J. N. Al-Karaki, the Hashemite universityA. E. Kamal, Iowa State University“Routing Techniques In

Wireless sensor network: a survay” IEEE Wireless Communications December 2004.

[2] W. B. Heinzelman, A. P. Chandrakasan, and H. Balakrishnan, “An application- specific protocol

architecture for wireless microsensor networks,” IEEE Trans. Wireless Commun., vol. 1, no. 4, pp. 660–

670, Oct. 2002.

[3] S. Lindsey and C. S. Raghavendra, “PEGASIS: Power-efficient gathering in sensor information systems,”

in Proc. IEEE Conf. Aerosp., Big Sky, MT, USA, vol. 3, pp. 1125–1130, ,Mar. 2002.

[4] H. Ö. Tan and Í. Körpeogˆlu, “Power efficient data gathering and aggregation in wireless sensor networks,”

ACM SIGMOD Record, vol. 32, no. 4, pp. 66–71, Dec. 2003.

[5] R. Fonseca, O. Gnawali, K. Jamieson, S. Kim, P. Levis, and A. Woo, “The collection tree protocol,” TinyOS

Enhancement Proposals (TEP), vol. 123, Dec. 2007.

[6] A. Manjeshwar and D. P. Agarwal, “TEEN: a Routing Protocol for Enhanced Efficiency in Wireless Sensor

Networks,” 1st Int’l. Wksp. on Parallel and Distrib. Comp. Issues in Wireless Networks and Mobile Comp.,

April 2001.

[7] A. Manjeshwar and D. P. Agarwal, “APTEEN: A HybridProtocol for Efficient Routing and Comprehensive

Information Retrieval in Wireless Sensor Networks,” Proc. Int’l. Parallel and Distrib. Proc. Symp., pp.

195–202 Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS.02)2002.

[8] C.T. Cheng, Member, IEEE, C. K. Tse, Fellow, IEEE, and F. C. M. Lau, Senior Member, IEEE “A Delay-

Aware Data Collection Network Structure for Wireless Sensor Networks” IEEE Sensor Journal, Vol. 11,

No. 3, March 2011.

[9] G. Lu, N. Sadagopan, B. Krishnamachari, and A. Goel, “Delay Efficient Sleep Scheduling in Wireless

Sensor Networks,” Proc. 24th IEEE Int’l Conf. Computer Comm., pp. 2470-2481, Mar. 2005.

[10] M. Baghaie, and B. Krishnamachari, “Delay Constrained Minimum Energy Broadcast in Cooperative

Wireless Networks”, IEEE International Conference on Computer Communication. 2008.

[11] P.M.Lokhande, A.P.Thakare “An Efficient Scheduling Technique for the Improvement of WSN with

Network Lifetime & Delay Constraint” International Journal of Recent Technology and Engineering

(IJRTE) ISSN: 2277-3878, Volume-2, Issue-1, March 2013.

[12]A.Raman, A. Kr. Singh, A. Rai “Mininize delay and maximize lifetime for wireless sensor network with

anycast”International Journal of Communication and Computer Technologies Volume 01 – No.26, Issue: 04

April 2013

Page 322: Ijaet volume 7 issue 3 july 2014

International Journal of Advances in Engineering & Technology, July, 2014.

©IJAET ISSN: 22311963

956 Vol. 7, Issue 3, pp. 948-956

[13] G. Nan, G. Shi, Z. Mao and M. Li “CDSWS: coverage-guaranteed distributed sleep/ wake scheduling for

wireless sensor networks”, EURASIP Journal on Wireless Communications and Networking 2012.

[14] C.M. Chao1 and Y.W. Lee, “A Quorum-Based Energy Saving MAC Protocol Design for Wireless Sensor

Networks”, IEEE, 2009.

[15] B. Jiang, B. Ravindran and H. Cho “Energy Efficient Sleep Scheduling in Sensor Networks for Multiple

Target Tracking”,2009.

[16]http://nsnam.isi.edu/nsnam/index.php/Downloading_and_installing_ns-2.

[17] B. Chen, K. Jamieson, H. Balakrishnan, and R. Morris, \Span: An energye _cient coordination algorithm

for topology maintenance in ad hoc wireless networks," in Mobile Computing and Networking, pp. 85-96,

2001.

[18] A. Keshavarzian, H. Lee, L. Venkatraman “Wakeup Scheduling in Wireless Sensor Networks” Proc.

Seventh ACM Int’l Conf. Mobile Ad Hoc Networking and Computing, pp. 322-333, May 2006.

[19] Y. Zhou, M. Medidi “Sleep-based Topology Control for Wakeup Scheduling in Wireless Sensor Networks”

Washington State University, Pullman, WA 99163 USA

[20] A. Durresi, V. Paruchuri, L. Barolli “Delay-Energy Aware Routing Protocol for Sensor and Actor

Networks” Proceedings of the 2005 11th International Conference on Parallel and Distributed Systems

(ICPADS’05), 2005.

[21] B. Liang, J. Frolik, and X. S. Wang, “A predictive QoS control strategy for wireless sensor networks," in

Second IEEE International Conference on Mobile Ad Hoc and Sensor Systems (MASS), November 2005.

[22] G. Anastasi, M. Conti, M. D. Francesco, “Extending the Lifetime of Wireless Sensor Networks through Adaptive Sleep,” IEEE Transactions on Industrial Informatics.

[23] V. Paruchuri, S. Basavaraju, A. Durresi, R. Kannan and S.S. Iyengar “Random Asynchronous Wakeup

Protocol for Sensor Networks” Louisiana State University Department of Computer Science Baton Rouge,

LA 70803, USA Proceedings of the First International Conference on Broadband Networks

(BROADNETS’04), 2004.

[24] S. Jin, W. Yue and Q. Sun “Performance analysis of the sleep/wakeup protocol in a wireless sensor

network” ICIC International c 2012 ISSN 1349-4198 Volume 8, Number 5(B), pp. 3833-3844, May 2012.

[25] S. Kavitha, S. Lalitha, “Sleep scheduling for critical event monitoring in wireless sensor networks,” International Journal of Advanced Research in Computer and Communication Engineering, vol. 3, issue 1, January 2014.

[26] P. Kaur, A. Nayyar, “Conceptual representation and Survey of Dynamic Power Management (DPM) in Wireless Sensor Network,” International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, issue 3, March 2013.

[27] Z. Chen, A. Khokhar “Self Organization and Energy Efficient TDMA MAC Protocol by Wake Up For

Wireless Sensor Networks” IEEE University of Illinois at Chicago 851 S Morgan St., Chicago, IL, USA

2004.

[28] N. Shrestha, J. H. Youn, and N. Sharma “A Code-Based Sleep and Wakeup Scheduling Protocol for Low

Duty Cycle Sensor Networks” Journal of Advances in Computer Networks, Vol. 2, No. 3, September 2014.

[29] A. Sharma, K. Shinghal, N. Srivastava, R. Singh “Energy Management for Wireless Sensor Network

Nodes” International Journal of Advances in Engineering & Technology, Vol. 1, Mar 2011.

[30] Y. S. Bae, S. H. Lee, B. J. Park and L. Choi “ RF Wakeup Sensor for Wireless Sensor Networks”

International Journal of Multimedia and Ubiquitous Engineering Vol. 7, No. 2, April, 2012.

AUTHORS BIOGRAPHY

Paralkar S.S. completed his Bachelor’s degree in Computer Science & Engineering at BMIT, Solapur, under Solapur University, Solapur, India. He is pursuing his Master’s Degree at the College of Engineering, Ambajogai. His areas of research interest include Computer Networks and Wireless Sensor Networks.

B.M. Patil is currently working as a Professor in P.G. Computer Science & Engineering

Department in M.B.E.Society’s College of Engineering, Ambajogai, India. He received his

Bachelor’s degree in Computer Engineering from Gulbarga University in 1993, M.Tech in Software Engineering from Mysore University in 1999, and PhD from the Indian Institute of Technology, Roorkee, in 2011. He has authored several papers in various

international journals and conferences of repute. His current research interests include data

mining, medical decision support systems, intrusion detection, cloud computing, artificial

intelligence, artificial neural network, wireless network and network security.


IMPROVED NEW VISUAL CRYPTOGRAPHIC SCHEME USING

ONE SHARED IMAGE

Gowramma B.H1, Shyla M.G1, Vivekananda2 1PG Student, 2Assistant Professor

Department of Computer Science and Engineering,

Adichunchanagiri Institute of Technology, Chikmagalur, Karnataka, India

ABSTRACT Visual cryptography scheme is a cryptographic technique which allows visual information (e.g. printed text, handwritten notes, and pictures) to be encrypted in such a way that the decryption can be performed by the human visual system, without the aid of computers. The Improved Visual Cryptography Scheme (VCS) differs from existing visual cryptography schemes: it encodes one binary secret image into a single innocent-looking shared image. The secret image is visually revealed by superimposing one portion of the share on its other portion. The Improved VCS has the advantages of resisting geometric distortion and of easy recognition and management, and the revealed image is the same size as the original secret image.

KEYWORDS: Visual cryptography scheme, geometric distortion.

I. INTRODUCTION

Digital information and data are transmitted more often over the Internet now than ever before. The

availability and efficiency of global computer networks for the communication of digital information

and data have accelerated the popularity of digital media. Digital images, video, and audio have been

revolutionized in the way they can be captured, stored, transmitted, and manipulated, and this gives

rise to a wide range of applications in education, entertainment, media and military, as well as other

fields. Computers and networking facilities are becoming less expensive and more widespread.

Creative approaches to storing, accessing and distributing data have generated many benefits for the

digital multimedia field, mainly due to properties such as distortion-free transmission, compact

storage and easy editing.

Visual Cryptography Scheme (VCS) was introduced by Naor and Shamir in 1995. The basic concept

of the conventional (k, n)-threshold VCS [1] is that one binary secret image is encoded into n random-

looking images called shares or shadows which are then distributed to n corresponding participants.

Any k or more participants print their shares on transparencies and superimpose the transparencies

together; the secret image is visually reconstructed. However, any k-1 or fewer participants cannot get

any information about the secret image.
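For intuition, here is a minimal Python sketch of the classic (2, 2) pixel-expansion construction described above (an illustration only; the pattern table and function names are ours, not part of the scheme proposed in this paper):

    import random

    # 1 = black subpixel, 0 = transparent.  Each secret pixel expands to a
    # 2-subpixel pattern on each share: identical patterns for a white secret
    # pixel (stacking shows one black subpixel), complementary patterns for a
    # black one (stacking shows two black subpixels).
    PATTERNS = [(0, 1), (1, 0)]

    def split_pixel(secret_bit):
        """Return the subpixel pairs written to share 1 and share 2."""
        s1 = random.choice(PATTERNS)
        s2 = s1 if secret_bit == 0 else tuple(1 - v for v in s1)
        return s1, s2

    def stack(p1, p2):
        """Physically superimposing transparencies is a pixel-wise OR."""
        return tuple(a | b for a, b in zip(p1, p2))

    s1, s2 = split_pixel(1)    # encode one black secret pixel
    print(stack(s1, s2))       # -> (1, 1): fully black after stacking

Either share alone is a uniformly random pattern, which is why fewer than k participants learn nothing about the secret.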

Many visual cryptography schemes (VCSs) [2][3][4] have been proposed to meet different

requirements. Various k-out- of-n VCS were also proposed by different researchers to support

variable threshold numbers, in which the secret image is split into n share images and the secret can

be retrieved by the cooperation of at least k of them. If the number of shares stacked is less than k, the

original image is not revealed. The other schemes are 2-out-of-n and n-out-of-n VCS. In the 2-out-of-n scheme, n shares are produced to encrypt an image, and any two shares must be stacked to

decrypt the image. In the n-out-of-n scheme, n shares will be produced to encrypt an image, and n

shares must be stacked to decrypt the image. If the number of shares stacked is less than n, the

original image is not revealed. Increasing the number of shares or participants will automatically


increase the level of security of the encrypted message. The researchers also extended visual

cryptography schemes to support grayscale images and color images.

A novel EVCS [5] using one shared image encodes one secret image into one innocent-looking share. The secret image is visually revealed by copying the share, shifting the copy and superimposing the

shifted copy and the original share together.

In this paper, an improved VCS using one shared image is proposed. The proposed scheme encodes

one secret image into one innocent-looking share. The secret image is visually revealed by

superimposing one portion of the share on its other portion. The self-decryption characteristic of the proposed scheme makes it more robust against geometric distortion than conventional VCSs and EVCSs. Section 2 discusses the related literature on visual cryptography schemes. Section 3 discusses the general access structure. Section 4 presents the proposed scheme. Section 5 gives the results and discussion, and Section 6 concludes and outlines future enhancements of the work.

II. LITERATURE SURVEY

Moni Naor and Adi Shamir [1] introduced a new type of cryptographic scheme, which can decode

concealed images without any cryptographic computations. The scheme is perfectly secure and very

easy to implement. They extend it into a visual variant of the k out of n secret sharing problem, in

which a dealer provides a transparency to each one of the n users; any k of them can see the image by

stacking their transparencies, but any k - 1 of them gain no information about it.

The main results of [1] include practical implementations of a k out of n visual secret sharing scheme

for small values of k and n, as well as efficient asymptotic constructions which can be proven optimal

within certain classes of schemes.

P. Tsai and M. Wang [3] proposed an improved (3, 3)-visual secret sharing scheme, which can be used to embed three secret messages into three shares and improve security. The first share image is generated randomly, and the other two share images are derived from the first share and two coding tables designed by the authors. Whereas the conventional (3, 3)-visual secret sharing scheme usually embeds only one confidential message, the proposed scheme extends it to encrypt three secret images; it also provides increased security.

Rijmen and Preneel [11] have proposed a visual cryptography approach for color images. In their

approach, each pixel of the color secret image is expanded into a 2×2 block to form two sharing

images. Each 2×2 block on the sharing image is filled with red, green, blue and white (transparent),

respectively, and hence no clue about the secret image can be identified from any one of these two

shares alone. Rijmen and Preneel claimed that there would be 24 possible combinations according to

the permutation of the four colors. Because human eyes cannot detect the color of a very tiny

subpixel, the four-pixel colors will be treated as an average color. When stacking the corresponding

blocks of the two shares, there would be 24² variations of the resultant color for forming a color

image.

L. Duo and D. Yi-Qi [4] proposed a new scheme for hiding a secret image in a single shadow image, which can also be used as a digital watermarking technique. Differing from existing visual hiding schemes that use one secret image, the new scheme is based on rotation of the shadow image: the single shadow image acts as both encoder and decoder, so the secret image can be recovered by stacking the shadow image with a copy of itself rotated 90 degrees anticlockwise. The new method is resistant to compression, distortion and shrinking, and it makes full use of the shadow image.

Xiaotian Wu and Wei Sun [5] proposed a novel extended visual cryptography scheme using one shared image. Differing from existing visual cryptography schemes, their scheme encodes one secret image into a single innocent-looking shared image. The secret image is visually revealed by copying the share, shifting the copy and superimposing the shifted copy on the original share. This extended visual cryptography scheme using one shared image has the advantage of resisting geometric distortion. Compared to the two existing one-share-based VCSs [4], the share generated by the proposed scheme is easy to recognize and manage, and its innocent appearance draws less attention from attackers.


III. GENERAL ACCESS STRUCTURE

An access structure is a rule [6], which defines how to share a secret. The most familiar examples are

(n, n) and (t, n) threshold access structures. A (t, n) threshold access structure rules that any t or more

out of n participants can cooperate to reveal the secret image and any less than t participants together

get nothing about the secret image. Obviously, a (n, n) threshold access structure is one instance of the

(t, n) threshold access structure: it demands that all participants cooperate for secret recovery, and hence nothing can be seen even if only one attendee is absent. It is easy to see that a (t, n) threshold access structure is fault-tolerant because the secret can still be restored from the remaining t shares even though up to (n − t) shares are corrupted.

However, threshold access structure is only one special case of the so-called general access structure.

Usually, a general access structure is denoted as Γ = {A0, A1}, where A0 and A1 are sets of subsets

of all participants and A0 ∩ A1 = ∅. Furthermore, A0 denotes a collection of forbidden sets and A1

denotes a collection of qualified sets. It is easily known that stacking all the shares held by the

participants of any qualified set can recover the secret image; but stacking all the shares held by the

participants of any forbidden set cannot reveal any information about the secret image. For example,

in a system with four participants, we let A1 = {{1, 2}, {2, 3}, {3, 4}, {1, 2, 3}, {1, 2, 4}, {1, 3, 4},

{2, 3, 4}, {1, 2, 3, 4}}, which implies that A0 = {{1}, {2}, {3}, {4}, {1, 3}, {1, 4}, {2, 4}}.

Therefore, we can learn that stacking share 1 and share 2 can recover the secret image; however,

stacking share 1 and share 4 can reveal nothing about the secret image.

It is easy to see that a general access structure should follow the monotone property: if γ ∈ A1 and γ ⊆ γ′, then γ′ ∈ A1; if λ ∈ A0 and λ ⊇ λ′, then λ′ ∈ A0. So the fact that {1, 2} ∈ A1 implies that {1, 2, 3} ∈ A1, {1, 2, 4} ∈ A1 and {1, 2, 3, 4} ∈ A1; the fact that {1, 4} ∈ A0 implies that {1} ∈ A0 and {4} ∈ A0. Furthermore, by the monotone property we can let A1− = {{1, 2}, {2, 3}, {3, 4}} and A0+ = {{1, 3}, {1, 4}, {2, 4}} represent the above-mentioned A1 and A0, respectively. In fact, A1− is usually named the family of minimal qualified sets and A0+ the family of maximal forbidden sets; in many situations it is more convenient to refer to them instead of A1 and A0.
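As a concrete illustration of the monotone property, here is a short Python sketch using the four-participant example above (the names A1_minus and A0_plus stand for the text's A1− and A0+; the function names are ours):

    # Four-participant example from the text: A1_minus is the family of
    # minimal qualified sets, A0_plus the family of maximal forbidden sets.
    A1_minus = [{1, 2}, {2, 3}, {3, 4}]
    A0_plus = [{1, 3}, {1, 4}, {2, 4}]

    def is_qualified(participants):
        """Monotone property: any superset of a minimal qualified set qualifies."""
        return any(q <= set(participants) for q in A1_minus)

    def is_forbidden(participants):
        """Any subset of a maximal forbidden set reveals nothing."""
        return any(set(participants) <= f for f in A0_plus)

    print(is_qualified({1, 2, 3}))   # True  - contains the minimal set {1, 2}
    print(is_forbidden({1, 4}))      # True  - a listed maximal forbidden set
    print(is_qualified({1, 4}))      # False - stacking shares 1 and 4 reveals nothing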

IV. THE PROPOSED SCHEME

In this section, an improved VCS using one shared image is proposed. The binary secret image is encoded into one innocent-looking share [5]. The secret image is visually revealed by superimposing one portion of the share on its other portion. The decryption of the proposed scheme is very simple: selecting the uppermost portion of the share and the lowermost portion of the share and superimposing them together gives the revealed secret image, and XOR-ing each pixel within 2×2 blocks of the revealed secret image improves the reconstructed image.

4.1. Encryption Process

In the encryption process, a binary image with m × n pixels is considered as the secret image. Then an innocent-looking cover image is selected; the cover image has to be larger than the secret image. Here the size of the cover image is assumed to be M × N, where M > m and N >= n, or M >= m and N > n. The encryption of the proposed scheme is divided into two parts: Part 1 of the encryption algorithm encodes Area 1 of the share, and Part 2 encodes Area 2. Areas 1 and 2 of the share are shown in Figure 1. The share is generated from the secret image and the cover image, and each pixel of the secret image is expanded into a 2×2 subpixel block; therefore the size of the output share is 2M × 2N.

The basis matrices of the proposed scheme used in the encryption are illustrated in Table 1. Matrices

used in Area 2 are generated based on the three conditions of EVCS [7]. In the basis matrices, 1

represents black pixel and 0 represents white pixel.


Figure 1. Area1 and Area2 of the share

Table 1. Basis matrices of the EVCS (each Area 2 matrix is written as [first row; second row])

Area of the share    Basis matrices

Area 1               Tw = [0 0 1 1]               Tb = [0 1 1 1]

Area 2               Twww = [0 0 1 1; 1 0 1 0]    Twwb = [0 0 1 1; 1 1 0 0]
                     Twbw = [0 0 1 1; 1 0 1 1]    Twbb = [0 0 1 1; 1 1 1 0]
                     Tbww = [1 0 1 1; 0 0 1 1]    Tbwb = [1 1 0 0; 0 0 1 1]
                     Tbbw = [1 0 1 1; 1 0 1 1]    Tbbb = [0 1 1 1; 1 1 1 0]

In the encryption algorithm, steps 2-7 encode Area 1 of the share and steps 8-13 encode Area 2; Area 1 is encoded first, then Area 2. In Area 1 the subpixel block is determined by the corresponding pixel of the cover image, whereas in Area 2 it is determined by the cover image pixel, the secret image pixel and the corresponding subpixel block in Area 1.



Algorithm for the encryption process

Input: Secret image S (m × n pixels), cover image C (M × N pixels)
Output: Innocent-looking share SH (2M × 2N pixels)

Step 1: Compute p = M − m and q = N − n.
Step 2: For i = 1 to p and for j = 1 to N, repeat steps 3-4.
Step 3: Obtain the corresponding basis matrix TC(i,j) from Table 1, Area 1, according to C(i, j).
Step 4: Randomly permute the columns of the chosen basis matrix and assign the elements of the permuted matrix to the pixels SH(2i−1, 2j−1), SH(2i, 2j−1), SH(2i−1, 2j), SH(2i, 2j) of the share.
Step 5: For i = p+1 to M and for j = 1 to q, repeat steps 6-7.
Step 6: Obtain the corresponding basis matrix TC(i,j) from Table 1, Area 1, according to C(i, j).
Step 7: Randomly permute the columns of the chosen basis matrix and assign the elements of the permuted matrix to the pixels SH(2i−1, 2j−1), SH(2i, 2j−1), SH(2i−1, 2j), SH(2i, 2j) of the share.
Step 8: For i = p+1 to M and for j = q+1 to N, repeat steps 9-13.
Step 9: Find the corresponding basis matrices T^{S(i,j)}_{C(i,j),D} and T^{S(i,j)}_{D,C(i,j)} from Table 1, Area 2, according to C(i, j) and S(i, j), where D can be black or white.
Step 10: Permute the columns of the corresponding basis matrices T^{S(i,j)}_{C(i,j),D} and T^{S(i,j)}_{D,C(i,j)} in all possible ways. All these basis matrices and their permuted matrices form a matrix set MS.
Step 11: Fetch one matrix from MS. Check whether the first row of this matrix equals SH(2(i−p)−1, 2(j−q)−1), SH(2(i−p), 2(j−q)−1), SH(2(i−p)−1, 2(j−q)), SH(2(i−p), 2(j−q)). If yes, put the second row of this matrix into the candidate vector set CVS. If not, move to the second row and conduct the above check.
Step 12: Repeat step 11 until all the matrices in MS are checked and the candidate vector set CVS is formed.
Step 13: Randomly choose one vector in CVS and assign it to the pixels SH(2i−1, 2j−1), SH(2i, 2j−1), SH(2i−1, 2j), SH(2i, 2j) of the share.
Step 14: Output the innocent-looking share SH.

The vital parts of the encryption algorithm are finding the corresponding basis matrices and forming the candidate vector set when Area 2 is encoded. For instance, assume that the pixels C(i, j) and S(i, j) are 1 and 0, respectively. The basis matrices obtained from Table 1, Area 2, are

Twbw = [0 0 1 1; 1 0 1 1], Tbww = [1 0 1 1; 0 0 1 1], Tbbw = [1 0 1 1; 1 0 1 1].

Suppose the corresponding four pixels SH(2(i−p)−1, 2(j−q)−1), SH(2(i−p), 2(j−q)−1), SH(2(i−p)−1, 2(j−q)), SH(2(i−p), 2(j−q)) in Area 1 of the share are 1, 0, 0, 1, respectively. The formed candidate vector set is {1011, 1101}. Then one vector is randomly selected from the candidate vector set and assigned to the pixels SH(2i−1, 2j−1), SH(2i, 2j−1), SH(2i−1, 2j), SH(2i, 2j) of the share.

Let us consider an example. Let S be a secret image of size 2×2 and C a cover image of size 3×3:

S =
1 1
1 0


C =
1 1 1
0 1 0
1 1 1

The generated share is of size 6×6:

Share =
1 1 1 1 1 1
1 0 1 0 0 1
1 1 0 1 0 1
0 0 1 1 0 1
1 1 0 1 0 1
1 0 1 1 1 0

4.2. Decryption Process

The decryption process of the proposed scheme is very simple. Selecting the uppermost portion of the share and the lowermost portion of the share and superimposing them together gives the revealed secret image; XOR-ing each pixel within 2×2 blocks of the revealed image then improves the reconstruction. The decryption process is illustrated in Figure 2: the uppermost 2m × 2n portion of the share (the plain area in the figure) and the lowermost 2m × 2n portion (the shaded area) are selected, and superimposing them reveals the secret image.

Figure 2: Decryption Process
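A hedged NumPy sketch of this decryption (the function name is ours), modelling superimposition as pixel-wise OR on 0/1 arrays where 1 = black:

    import numpy as np

    def decrypt(share, m, n):
        """OR the uppermost and lowermost 2m x 2n portions of the share, then
        XOR within each 2x2 block and negate, recovering an m x n image."""
        upper = share[:2 * m, :2 * n]
        lower = share[-2 * m:, -2 * n:]
        revealed = upper | lower                      # superimposition = OR
        out = np.zeros((m, n), dtype=np.uint8)
        for i in range(m):
            for j in range(n):
                b = revealed[2 * i:2 * i + 2, 2 * j:2 * j + 2]
                out[i, j] = 1 - (b[0, 0] ^ b[0, 1] ^ b[1, 0] ^ b[1, 1])
        return out

    # The 6x6 share from the worked example decrypts to the 2x2 secret:
    share = np.array([[1, 1, 1, 1, 1, 1],
                      [1, 0, 1, 0, 0, 1],
                      [1, 1, 0, 1, 0, 1],
                      [0, 0, 1, 1, 0, 1],
                      [1, 1, 0, 1, 0, 1],
                      [1, 0, 1, 1, 1, 0]], dtype=np.uint8)
    print(decrypt(share, 2, 2))   # -> [[1 1], [1 0]]

The same example is now walked through by hand below.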

Considering the above mentioned example, the share generated in the encryption process is



Share =
1 1 1 1 1 1
1 0 1 0 0 1
1 1 0 1 0 1
0 0 1 1 0 1
1 1 0 1 0 1
1 0 1 1 1 0

The uppermost portion of the share is

1 1 1 1
1 0 1 0
1 1 0 1
0 0 1 1

The lowermost portion of the share is

0 1 0 1
1 1 0 1
0 1 0 1
1 1 1 0

Superimposing the two portions of the share reveals the secret image of size 2m × 2n; XOR-ing each pixel within 2×2 blocks of the revealed image and then negating the result improves the reconstructed image quality and also reduces its size to that of the original secret image.

Revealed secret image =
1 1 1 1
1 1 1 1
1 1 0 1
1 1 1 1

Final revealed image =
1 1
1 0

V. RESULTS AND DISCUSSION

To demonstrate the effectiveness of the proposed scheme, examples are illustrated in this section. The secret image with 200×200 pixels is shown in Figure 3 and the cover image with 249×260 pixels in Figure 4. The innocent-looking share of size 498×520 is shown in Figure 5, and Figure 6 shows the superimposed image of size 400×400. The final reconstructed image after XOR-ing each pixel in


blocks of size 2×2 is shown in Figure 7. The PSNR value between the secret image and the reconstructed image is 11.6052 dB, and the histogram error between the secret image and the final reconstructed image is 0.00078408. The histogram error between the superimposed image and the secret image is 0.1185. The histogram error of the proposed VCS is smaller than that of the existing VCS, which shows that the proposed scheme yields an image more similar to the secret image.
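For reference, a small sketch of how a PSNR figure like the one above can be computed (assuming 8-bit images; the function name is ours):

    import numpy as np

    def psnr(reference, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between two same-size images."""
        diff = reference.astype(np.float64) - test.astype(np.float64)
        mse = np.mean(diff ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)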

Figure 3. Secret image of size 200x200

Figure 4. Cover image of size 249x260.

Figure 5. Share image of size 498x520.


Figure 6. Superimposed image of size 400x400.

Figure 7. Final secret image of size 200x200.

VI. CONCLUSION

In this paper, an improved VCS using a single shared image is presented. The proposed scheme hides one secret image in one innocent-looking share, and the secret image is visually revealed by superimposing one portion of the share on its other portion. The proposed scheme has the advantage of resisting geometric distortion, and the revealed secret image is the same size as the original secret image. Compared to the two existing one-share-based VCSs, the share generated by the proposed scheme is easy to recognize and manage; its innocent appearance draws less attention from attackers, and the image quality is also improved.

VII. FUTURE ENHANCEMENT

Several future enhancements can be made to this VCS. The proposed scheme can be enhanced by using morphological operations and filters to remove noise in the image. The proposed system is designed to run on a single host machine; the work can be extended to run on multiple host machines.

REFERENCES

[1] Moni Naor and Adi Shamir, “Visual Cryptography”, Advances in Cryptology – Eurocrypt, pp. 1-12, 1995.


[2] H. Wu and C. Chang, “Sharing visual multi-secrets using circle shares,” Computer Standards & Interfaces,

vol. 28, no. 1, pp. 123– 135, 2005.

[3] P. Tsai and M. Wang, “An (3, 3)-visual secret sharing scheme for hiding three secret data,” in Proceedings of

the 2006 Joint Conference on Information Sciences, 2006.

[4] L. Duo and D. Yi-Qi, “A New Visual Hiding Scheme Using One Secret Image,” Chinese Journal Of

Computers, vol. 32, no. 11, pp. 2247–2251, 2009.

[5] Xiaotian Wu and Wei Sun, “A Novel Extended Visual Cryptography Scheme Using One Shared Image”, In

ICITIS, pp.216-220, 2010.

[6] G. Ateniese, C. Blundo, A. Santis, and D. Stinson, “Extended capabilities for visual cryptography,”

Theoretical Computer Science, vol. 250, no. 1-2, pp. 143–161, 2001.

[7] Carlo Blundo, Alfredo De Santis, Douglas R. Stinson, “On the contrast in visual cryptography schemes”,

Journal of Cryptology, 12: pp. 261-289, 1999.

[8] C.C. Wu, L.H. Chen, “A Study On Visual Cryptography”, Master Thesis, Institute of Computer and

Information Science, National Chiao Tung University, Taiwan, R.O.C., 1998.

[9] E. Verheul and H. V. Tilborg, “Constructions And Properties Of K Out Of N Visual Secret Sharing

Schemes.” Designs, Codes and Cryptography, 11(2) , pp.179–196, 1997.

[10] Tzung-Her Chen, Kai-Hsiang Tsao, and Kuo-Chen Wei, “Multi-Secrets Visual Secret Sharing”,

Proceedings of APCC2008, IEICE, 2008.

[11] V. Rijmen, B. Preneel, “Efficient color visual encryption for shared colors of Benetton”, Eurocrypto’96,

Rump Session, Berlin, 1996.

[12] Rafel C. Gonzalez, ‘Digital image processing’. Englewood cliffs: Pearson education, 2002.

AUTHORS

Gowramma B H, is currently Pursuing 4th Semester, Master of Technology in Computer

Science and Engineering at AIT, Chickmagalur. She has completed her Bachelor of

Engineering from PES Institute of Technology and management, Shivamogga. She had

published a paper. Her areas of interests include visual cryptography and Information

Security. Shyla M G, is currently Pursuing 4th Semester, Master of Technology in Computer Science

and Engineering at AIT, Chickmagalur. She has completed her Bachelor of Engineering

from Adichunchanagiri Institute of Technology, Chikmagalur. She had published a paper.

Her areas of interests include image processing and Information Security. Vivekananda is presently working as assistant professor in the Department of Computer

Science and Engineering, Adichunchanagiri Institute of Technology, Chikmagalur,

Karnataka, India. He had 9 years of teaching experience. He obtained his Master of

Technology degree from SJCE Mysoor under Visveswaraya Technological University,

Belgaum. His research interests include Image Stegonography, Visual Cryptography, and

Language Processing.


SEGMENTATION OF BRAIN TUMOUR FROM MRI IMAGES BY

IMPROVED FUZZY SYSTEM

Sumitharaj.R and Shanthi.K, Assistant Professors, Department of ICE,

Sri Krishna College of Technology, Coimbatore, India

ABSTRACT The objective of this work is to detect brain tumours using symmetric analysis. In this paper the brain tumour is first detected and then segmented; the tumour is quantified from an MRI image, and the problem of segmenting it with MRI is addressed in multiple steps. Segmentation of images occupies a significant position in the field of image processing, and it becomes more significant with medical images: Magnetic Resonance Imaging (MRI) gives more, and more accurate, information during medical examination than other medical imaging modalities such as ultrasound, CT images and X-rays. Here we propose a novel method of segmenting brain tumours: an image containing a brain tumour is taken as input, and the tumour is then detected and segmented from the particular area. In the first step the tumour is detected; it is then segmented. In parts of the process, morphological analysis is used as an image processing tool for sharpening the regions and filling the gaps in the binarised image.

KEY WORDS: MRI image, Segmentation, Tumour detection, Fuzzy system.

I. INTRODUCTION

Image processing is a technique in which an image is digitized and mathematical operations are applied to enhance it for recognition tasks. Images are used in many areas such as pattern recognition, TV screens, and 2D and 3D imaging, and an image can be processed optically or digitally with a computer. Image processing is a well diversified field used in medicine, defense, quality control and entertainment. A brain tumour is defined as an abnormal growth of cells

within the brain or the central spinal canal. Brain tumours include all tumours inside the central spinal

canal. Uncontrolled cell division may occur in neurons, glial cells, myelin or blood vessels, in the cranial nerves, or in the brain envelopes. Any brain tumour is inherently serious and life-threatening

because of its invasive and infiltrative character in the limited space of the intracranial cavity.

However, brain tumours (even malignant ones) are not invariably fatal. Brain tumours or intracranial neoplasms can be cancerous (malignant) or non-cancerous (benign); however, the definitions of malignant and benign neoplasms differ from those commonly used for other types of cancerous or non-cancerous neoplasms in the body. The severity level depends on a combination of factors such as the type of tumour, its location, its size and its state.

In earlier periods, brain tumours were detected using CT scans, in which not only the tumour but the whole surrounding area is highlighted; with MRI, only the tumour region is detected. Primary brain tumours commence within the brain itself, whereas a secondary or metastatic brain tumour occurs when cancer cells spread to the brain from a primary cancer in a different part of the body. Brain tumours are therefore classified as primary and secondary tumours.


Primary brain tumours originate in the brain where they occur; they are often benign, that is, non-cancerous, and do not attack neighbouring cells. The secondary type, in contrast, can affect the whole region of the brain. The cancers that most frequently spread to the brain and cause secondary brain tumours begin in the lung, breast and kidney, or arise from melanomas in the skin. In this work, a systematic brain tumour analysis on MRI images of the brain is performed with a Type-II fuzzy system. The proposed Type-II fuzzy system has two modules: one for preprocessing and the other for segmentation. In the preprocessing module a rule base is developed; in the segmentation module possibilistic C-means (PCM) clustering is analyzed.

II. DESCRIPTION – EXISTING METHODOLOGY

In the existing methodology, a brain tumour can be diagnosed by CT scan, ultrasound or MRI scan. With CT scans the whole brain area is highlighted, whereas MRI detects the tumour region specifically. With the older techniques the tumour is diagnosed together with the whole surrounding region, so it cannot be segmented accurately. For these reasons the brain tumour is detected with MRI, where it can be detected and segmented with accurate measurements.

The Segmentation of an image entails the division or separation of the image into regions of similar

attribute. The ultimate aim in a large number of image processing applications is to extract important

features from the image data, from which a description, interpretation, or understanding of the scene

can be provided by the machine. The brain is first segmented to remove non-brain data. However, in pathological cases standard segmentation methods fail, in particular when the tumour is located very close to the brain surface.

Therefore we propose an improved segmentation method relying on the approximate symmetry plane. Fuzzy systems come in two types: Type-I and Type-II. In a Type-I fuzzy system, the framework of fuzzy sets, systems and relations is very useful for dealing with the absence of sharp boundaries between the sets of symptoms, diagnoses and disease phenomena. However, images contain many uncertainties and kinds of vagueness that are very difficult to handle with Type-I fuzzy sets, which cannot model such uncertainties directly because their membership functions are crisp. Type-II fuzzy sets, by contrast, are able to model such uncertainties because their membership functions are themselves fuzzy; therefore Type-II fuzzy logic systems have the potential to provide better performance. For these reasons, Type-II fuzzy modeling is used. The proposed Type-II expert system has been tested and validated to show its accuracy in the real world, and the results show that it is superior to Type-I fuzzy expert systems in recognizing the brain tumour and its grade.

In this paper the image is first preprocessed and then segmented. In the segmentation step the image is processed and then segmented through symmetric analysis, or according to the rule-base methods. In pre-processing, some fundamental image enhancement and noise reduction procedures are applied. Noise is reduced after conversion to a gray scale image, which is then passed to a filter; we use a high pass filter (imfilter in Matlab), which replaces each pixel of the image with a weighted average of the surrounding pixels. The weights are determined by the values of the filter, and the number of surrounding pixels by the size of the filter used. The gray image and the filtered image are then merged to enhance the image quality. We also use median filtering, a nonlinear operation often used in image processing to reduce "salt and pepper" noise; a median filter is more effective than convolution when the goal is to simultaneously reduce noise and preserve edges. Finally, we convert the filtered image into a binary image by the thresholding method, which computes a global threshold that can be used to convert an intensity image to a binary image with normalized intensity values between 0 and 1.

III. MATERIALS AND METHODS

The basic concept in detecting the tumour is that the component of the image holding the tumour generally has higher intensity than the other segments, so we can estimate the area, shape and radius of the tumour in the image; the area is calculated in pixels. Noise in the image can decrease the ability of a region-growing filter to grow large regions, or may result in false edges.


Figure 1a. Tumour with noise. Figure 1b. Tumour without noise.

Figure 1a shows the tumour with noise and Figure 1b the tumour without noise. The basic related terms are anaplasia, neoplasia and necrosis (Table 1); more generally, a neoplasm may cause the release of metabolic end products.

Table 1. Tumour-related terms

Anaplasia    Loss of differentiation of cells
Neoplasia    Uncontrolled division of cells
Necrosis     Premature death of cells

For the Type-II fuzzy system, the astrocytoma tumour of Figure 1b is taken as an example. Astrocytoma is graded into four types: Grade I, Grade II, Grade III and Grade IV. These grades are tabulated with the diagnosis list in Table 2.

Table 2. Astrocytoma grades

Grade        % of brain tumours    Survival (months)
Grade I      1.8                   91
Grade II     1.3                   67
Grade III    4.3                   46
Grade IV     22.6                  9

Astrocytoma can be subdivided into four grades: Grade I (Pilocytic Astrocytoma), Grade II (Diffuse Astrocytoma), Grade III (Anaplastic Astrocytoma), and Grade IV (Glioblastoma Multiforme). These grading systems correlate with survival: the survival ranges are more than 5 years for Grade II, between 2 and 5 years for Grade III, and less than 1 year for Grade IV.

3.1 Proposed methodology: Type-II fuzzy system

In this paper a Type-II fuzzy system is proposed. Fuzzy systems are divided into two types, Type-I and Type-II. A Type-I fuzzy system has crisp membership functions, so it cannot model uncertain data sets directly; in a Type-II fuzzy system the data sets and membership functions are themselves fuzzy, so the Type-II approach is used wherever images contain uncertainty or vagueness.

Figure 2 Fuzzy System Classifications

Step 1 -- Defining the initial membership functions: to tune the parameters of the initial interval Type-II membership functions, the output data are clustered. In this process the cerebrospinal fluid (CSF) is the black area, so its membership function covers the dark intensities. The abnormality, on the other hand, is the brightest part of the MRI, so its initial membership



function must include the white intensities. The White Matter (WM) is bright and the Gray Matter

(GM) is gray, but their intensity levels are very similar. Table 3: Normal tissue in brain image

CSF dark

GM gray

WM bright

CSF: cerebrospinal fluid; GM: gray matter; WM: white matter

Step 2 -- Tuning the parameters of the Type-II membership functions: tuning the parameters of a Type-I membership function can be expressed by a formula, but the output of a Type-II membership function cannot be represented this way. Given an input and output training pair (x_i, µ_ij), an interval Type-II fuzzy set is designed so that the error function e_ij is minimized:

e_ij = (1/2) (µ_j(x_i) − µ_ij)²

where µ_j(x_i) is the real membership value of the ith datum in the jth class and µ_ij is the calculated membership value of the ith datum in the jth class.
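A one-line numeric illustration of this error term (the function name and sample values are ours):

    def membership_error(mu_real, mu_calc):
        """e_ij = (1/2) * (mu_j(x_i) - mu_ij)^2 from the tuning step above."""
        return 0.5 * (mu_real - mu_calc) ** 2

    print(membership_error(0.9, 0.7))   # -> 0.02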

3.2 Approach for segmenting the tumour image

The tumour processing method consists of a series of systematic image-processing blocks; here salt and pepper noise is added for testing.

Figure 3. Flow Chart of the Proposed Tumour Segmentation

In this system the input image is first taken as a tumour image and then binarised; the binarised image is filtered, and the resulting edge image is used to obtain the segmented image.

3.3 Algorithm for detecting the brain tumour

Input: MRI brain image.
Output: Tumour portion of the image.

Step 1: Read the input colour or grayscale image.
Step 2: Convert the input colour image into a grayscale image by forming a weighted sum of the three (RGB) components, eliminating the saturation and hue information while retaining the luminance; the result is a grayscale image.
Step 3: Resize the image into a 200 × 200 image matrix.
Step 4: Filter the multidimensional array with a multidimensional filter; output elements that exceed the range of the integer type are truncated, and fractional values are rounded.
Step 5: Add the step 2 and step 4 images and an integer value of 45, and pass the result into a median filter to obtain the enhanced image.
Step 6: Compute a global threshold that converts the intensity image of step 5 to a binary image with normalized intensity values between 0 and 1.
Step 7: Compute the watershed segmentation with the Matlab command watershed (on the step 6 image).

(Figure 3 flow: input image → tumour image → binarised image → edge image → segmented image)


Step 8: Compute the morphological operations with the Matlab commands imerode and imdilate, using strel with an arbitrary shape.
Step 9: Store the size of the step 8 image (the number of rows and columns in pixels) into var1 and var2: [var1 var2] = size(step8 image).
Step 10: Convert to a binary image and trace the exterior boundaries of objects, as well as the boundaries of holes inside these objects, then convert to an RGB colour image for visualizing the labeled regions.
Step 11: Show only the tumour portion of the image by removing small-area objects.
Step 12: Compute edge detection using the Sobel edge detection technique.
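For readers who prefer code, a hedged Python/OpenCV sketch of steps 1-12 follows. The Matlab calls named above (imfilter, graythresh, watershed, imerode/imdilate, bwboundaries) are approximated with close OpenCV equivalents, the file names are placeholders, and the watershed step (step 7) is omitted for brevity:

    import cv2
    import numpy as np

    img = cv2.imread('mri_brain.png', cv2.IMREAD_GRAYSCALE)          # steps 1-2
    img = cv2.resize(img, (200, 200))                                # step 3
    enhanced = cv2.medianBlur(cv2.convertScaleAbs(img, beta=45), 3)  # steps 4-5
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # step 6
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.dilate(cv2.erode(binary, kernel), kernel)           # step 8
    # steps 9-11: keep only the largest foreground object (assumed tumour)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(opened)
    areas = stats[1:, cv2.CC_STAT_AREA]                              # skip background
    tumour = np.uint8(labels == 1 + int(np.argmax(areas))) * 255
    gx = cv2.Sobel(tumour, cv2.CV_64F, 1, 0, ksize=3)                # step 12: Sobel
    gy = cv2.Sobel(tumour, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    cv2.imwrite('tumour_edges.png', edges)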

IV. RESULTS

This paper presents an interactive segmentation method that enables users to quickly and efficiently segment tumours in brain MRI. Tumours are a general problem in the medical field; there are primary and secondary tumours, and in the secondary type the variety of affected tissue largely depends on the primary one. Based on these types we can detect and segment the tumour.

Figure 4 Tumour image

The tumour image is binarised. Salt and pepper noise is added; when the median filter is applied, the noise in the image is reduced.

Figure 5 binary image

After median filtering, the edge image is extracted to optimise the feature data.

Figure 6 Edge image

Finally, the tumour is segmented.


Figure 7 Segmented image

Using the Type-II fuzzy system, the tumour image is segmented and the denoised threshold image is obtained; the tumour part is then clustered.

Figure 8 Denoised threshold image

The threshold image is denoised. Fuzzy clustering is more advantageous than crisp clustering because full assignment of every vector to a single class is not required in each iteration. The thresholding method is used to recognize the characteristics of the pixel values.

Figure 9 clustered tumour image

V. CONCLUSION

In this project an interactive method is presented for detecting and segmenting tumours from MRI images. Segmentation is used to extract a particular area or part of the image; here the goal is to detect and segment the tumour from brain MRI. A Type-II fuzzy system is used rather than a Type-I system because, when images contain uncertainty or vagueness, Type-I fuzzy sets offer only crisp membership functions, whereas in Type-II fuzzy sets the membership functions are themselves fuzzy.

ACKNOWLEDGEMENTS

I would like to thank my Head of the Department, my institution, my parents and my colleagues.

REFERENCES

[1] Sudipta Roy, Samir K. Bandyopadhyay, “Detection and Quantification of Brain Tumor from MRI of Brain and its Symmetric Analysis”, in International Journal of Information and Communication Technology Research, Volume 2, No. 6, June 2012.

[2] T. Logeswari and M. Karnan, “An improved implementation of brain tumour detection using segmentation

based on soft computing” Journal of Cancer Research and Experimental Oncology Vol. 2(1) pp. 006-014,

March, 2010.



[3] M. H. Fazel Zarandi, M. Zarinbal, M. Izadi, “Systematic image processing for diagnosing brain tumor: A type II fuzzy

expert system approach”,Applied Soft Computing 11 (2011) 285–294.

[4] J. J. Corso, E. Sharon, and A. Yuille, “Multilevel Segmentation and Integrated Bayesian Model Classification

with an Application to Brain Tumour Segmentation,” in Medical Image Computing and Computer Assisted

Intervention, vol. 2, 2006, pp. 790–798.

[5] A. W. Chung Liew, H. Yan, Current methods in the automatic tissue segmentation of 3D magnetic resonance

brain images, Current Medical Imaging Reviews2 (2006).

[6] Dou, W., Ruan, S., Chen, Y., Bloyet, D., and Constans, J. M. (2007), “A framework of fuzzy information

fusion for segmentation of brain tumour tissues on MR images”, Image and Vision Computing, 25:164–171.

[7] Hassan Khotanlou, Olivier Colliot and Isabelle Bloch, “Automatic brain tumour segmentation using

symmetry analysis and deformable models”, GET-EcoleNationale Superior des Telecommunications, France.

[8] T. Logeswari and M. Karnan, “An Improved Implementation of Brain Tumour Detection Using Segmentation

Based on Hierarchical Self Organizing Map”, International Journal of Computer Theory and Engineering, Vol.

2, No. 4, August, 2010,pp.1793-8201.

[9] R. Rajeswari and P. Anandhakumar, “Segmentation and Identification of Brain Tumour MRI Image with

Radix4 FFT Techniques”, European Journal of Scientific Research, Vol.52 No.1 (2011), pp.100-109.

[10] P.Narendran, V.K. Narendira Kumar, K. Somasundaram, “3D Brain Tumours and Internal Brain Structures

Segmentation in MR Images”, I.J. Image, Graphics and Signal Processing, 2012, 1, 35-43.

[11] T. Logeswari and M. Karnan, “An improved implementation of brain tumour detection using segmentation

based on soft computing” Journal of Cancer Research and Experimental Oncology Vol. 2(1) pp. 006-014,

March, 2010.

[12] E. Nasibov, G. Ulutagay, A new unsupervised approach for fuzzy clustering, Fuzzy Sets and Systems 158

(2007) 2118–2133.

[13] M. H. Fazel Zarandi, M. Zarinbal, I. B. Turksen, Type-II possibilistic C-mean clustering, IFSA-EUSFLAT

(2009) 30–35.

[14] R.N. Strickland, Image Processing Techniques for Tumour Detection, Marcel-Dekker, 2002.

[15] R. Krishnapuram, J.M. Keller, A possibilistic approach to clustering, IEEE Transactions on Fuzzy Systems

1 (1993).

[16] J.M. Mendel, R.I. John, F. Liu, Interval Type-2 fuzzy logic systems made simple,IEEE Transactions on

Fuzzy Systems 14 (2006) 808–821.

ABOUT AUTHORS

Sumitharaj.R is an Assistant Professor in the Department of Instrumentation and Control Engineering, Sri Krishna College of Technology, Coimbatore. She received her M.E. from the Department of Electrical and Electronics Engineering, with specialization in Control and Instrumentation Engineering, at Anna University Regional Centre, Coimbatore, and her B.E. from the Department of Electronics and Instrumentation Engineering, Bannari Amman Institute of Technology, Sathyamangalam.

Shanthi.K is an Assistant Professor in the Department of Instrumentation and Control Engineering, Sri Krishna College of Technology, Coimbatore. She received her M.E. from the Department of Electronics and Communication Engineering, with specialization in Communication Systems, at Anna University Regional Centre, Coimbatore, and her B.E. from the Department of Electronics and Instrumentation Engineering, Maharaja Engineering College, Coimbatore.


IMPLEMENTATION OF CLASSROOM ATTENDANCE SYSTEM

BASED ON FACE RECOGNITION IN CLASS

Ajinkya Patil1, Mrudang Shukla2 1M.Tech (E&TC), 2Assistant Professor

Symbiosis institute of Technology, Pune, Maharashtra, India

ABSTRACT The face is the identity of a person, and the methods that exploit this physical feature have changed greatly since the advent of image processing techniques. Attendance is taken in every school, college and library. In the traditional approach the professor calls each student's name and records attendance, which takes time: if the duration of a class is about 50 minutes, recording attendance can take 5 to 10 minutes of it, a loss repeated for every lecture. To avoid these losses, we use an automatic process based on image processing. In this novel approach we use a face detection and face recognition system. Face detection differentiates faces from non-faces and is therefore essential for accurate attendance; the second stage, face recognition, marks each student's attendance. A Raspberry Pi module is used for face detection and recognition, with the camera connected to the module. A student database is collected, which includes the names of the students, their images and their roll numbers. The Raspberry Pi module is installed at the front of the class so that the entire class can be captured. With the help of this system time is saved, recording attendance is convenient, and attendance can be taken at any time.

KEYWORDS: Viola Jones algorithm, PCA, LDA, Image processing, Raspberry pi

I. INTRODUCTION

Organizations of all sizes use attendance systems to record when students or employees start and stop

work, and the department where the work is performed. Some organizations also keep detailed

records of attendance issues such as who calls in sick and who comes in late. An attendance system

provides many benefits to organizations. There was a time when the attendance of the students and

employees was marked on registers.

However, those who have been a part of the classes when attendance registers were used know how

easy it was to abuse such a method of attendance and mark bogus attendances for each other. Of

course, technology had to play its role in this field just as it has done in other fields. The attendance monitoring system was created and it changed the way attendance was marked, making the lives of teachers and employers easier by turning the attendance marking procedure into a simple task.

When it comes to schools and universities, the attendance monitoring system is a great help for both parents and teachers. Parents are always informed of their children's presence in class if the university uses an attendance monitoring system. The registers could easily be

exploited by students and if information was mailed to the parents, there were high chances that mails

could be made to disappear before parents even saw them. With the monitoring system in place, the

information can easily be printed or a soft copy can be sent directly to parents in their personal email

accounts.

The system started with two basic processes: manual and automatic. Manual processes are being eliminated because of the staff needed to maintain them. It is often difficult to comply with

regulation, but an automated attendance system is valuable for ensuring compliance with regulations

regarding proof of attendance.


II. HISTORY

Naveed Khan Balcoh proposes that taking students' attendance in the classroom is a very important task which, if done manually, wastes a lot of time. Many automatic methods are available for this purpose, e.g. biometric attendance, but these also waste time because students have to queue to touch their thumb to the scanning device. His work describes an efficient algorithm that automatically marks attendance without human intervention: attendance is recorded by a camera attached at the front of the classroom that continuously captures images of students, detects the faces in the images, compares the detected faces with the database and marks the attendance. The paper reviews the related work in the field of attendance systems and then describes the system architecture, software algorithm and results. Erik Hjelmas notes that face detection is a necessary first step in face recognition systems, with the purpose of localizing and extracting the face region from the background. It also has several applications in areas such as content-based image retrieval, video coding, video conferencing, crowd surveillance and intelligent human-computer interfaces. However, it was not until recently that the face detection problem received considerable attention among researchers. The human face is a dynamic object with a high degree of variability in its appearance, which makes face detection a difficult problem in computer vision. A wide variety of techniques have been proposed, ranging from simple edge-based algorithms to composite high-level approaches utilizing advanced pattern recognition methods.

III. METHODOLOGY

For this system we use a two-step mechanism: face detection followed by face recognition. For face detection we use the Viola-Jones face detection algorithm, while for face recognition we use a hybrid algorithm combining PCA and LDA.

1) Viola-Jones algorithm

There are three major blocks in the Viola-Jones algorithm: integral images, the AdaBoost algorithm and the attentional cascade. The integral image computes, at each pixel (x, y), the sum of the pixel values above and to the left of (x, y); it is computed in one pass through the image. The Viola-Jones algorithm uses Haar-like features, which are nothing but scalar products between the image and Haar-like structures; features are selected through AdaBoost, which provides an effective learning algorithm and strong bounds on generalization performance. The overall form of the

effective learning algorithm and strong bounds on generalization performance. The overall form of the

detection process is that of a degenerate decision tree, what we call a “cascade”. A positive result

from the first classifier triggers the evaluation of a second classifier which has also been adjusted to

achieve very high detection rates. A positive result from the second classifier triggers a third

classifier, and so on. A negative outcome at any point leads to the immediate rejection of the sub-

window. The cascade training process involves two types of tradeoffs. In most cases classifiers with

more features will achieve higher detection rates and lower false positive rates. At the same time

classifiers with more features require more time to compute. In principle, the following quantities can be traded off.

Figure 1: Cascade classifier

i) the number of classifier stages, ii) the number of features in each stage, and iii) the threshold of each stage are traded off in order to minimize the expected number of evaluated features.

Unfortunately finding this optimum is a tremendously difficult problem. In practice a very simple

framework is used to produce an effective classifier which is highly efficient. Each stage in the


cascade reduces the false positive rate and also decreases the detection rate. A target is selected for the minimum reduction in false positives and the maximum allowable decrease in detection rate. Each stage is trained by adding features until the target detection and false positive rates are met; these rates are determined by testing the detector on a validation set. Stages are added until the overall targets for the false positive and detection rates are met.
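A minimal OpenCV sketch of running such a cascaded detector, using the library's pre-trained frontal-face Haar cascade (file names and parameter values are illustrative, not the authors' settings):

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    frame = cv2.imread('classroom.jpg')
    gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    # Every candidate sub-window passes through the attentional cascade;
    # the cheap early stages reject most non-face windows immediately.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(30, 30))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite('detected.jpg', frame)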

2) Flow chart

Figure 2: face detection & recognition

The flowchart is shown in Figure 2.

2.1) Histogram Normalization

Captured images sometimes contain excessive brightness or darkness which should be removed for good results. First the RGB image is converted to a gray scale image for enhancement. Histogram normalization is a good technique for contrast enhancement in the spatial domain.

2.2) Noise Filtering

Many sources of noise may exist in the input image when captured from the camera. There are many

techniques for noise removal. Low pass filtering in the frequency domain may be a good choice but

this also removes some important information from the image. In our system, median filtering is used for noise removal in the histogram-normalized image.

2.3) Skin classification

This is used to increase the efficiency of the face detection algorithm; the Viola-Jones algorithm is then used for detection.

2.4) Face Detection

Haar classifiers have been used for detection. Initially, the face detection algorithm was tested on a variety of images with different face positions and lighting conditions, and then the algorithm was applied to detect faces in real-time video. The algorithm is trained on images of faces and then applied to the classroom image to detect multiple faces in the image.
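A minimal detection sketch, assuming MATLAB's Computer Vision Toolbox (its built-in Viola-Jones cascade) and a hypothetical classroom frame:

detector = vision.CascadeObjectDetector('FrontalFaceCART');   % trained Haar/CART cascade
frame    = imread('classroom.jpg');                           % hypothetical image
bboxes   = step(detector, rgb2gray(frame));                   % one row [x y w h] per face
out = insertObjectAnnotation(frame, 'rectangle', bboxes, 'Face');
imshow(out);                          % Figure 3 shows a comparable detection result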


Figure 3: Face Detection

2.5) Face Recognition and Attendance

After the face detection step, the next step is face recognition. This is achieved by cropping the first detected face from the image and comparing it with the database; this is called selection of the region of interest. In this way the faces of students are verified one by one against the face database using the eigenface method, and attendance is marked on the server.
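A minimal eigenface sketch (variable names are assumptions; uses implicit expansion and vecnorm from recent MATLAB releases): each cropped, resized face is projected onto a PCA basis learned from the enrolment images, and the nearest neighbour in that space gives the student identity.

% trainFaces: d x N matrix, one vectorised enrolment face per column.
meanFace = mean(trainFaces, 2);
A = trainFaces - meanFace;                   % centre the training set
[U, ~, ~] = svd(A, 'econ');                  % columns of U are the eigenfaces
W = U' * A;                                  % enrolment weights in face space
w = U' * (double(testFace(:)) - meanFace);   % weights of the probe face
[~, id] = min(vecnorm(W - w));               % nearest neighbour => student id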

The system consists of a camera that captures images of the classroom and sends them to the image enhancement module. After enhancement, the image passes to the face detection and recognition modules, and the attendance is then marked on the database server. At the time of enrollment, templates of the face images of individual students are stored in the face database. All the faces are detected in the input image and the algorithm compares them one by one with the face database; whenever a face is recognized, the attendance is marked on the server, from where anyone can access and use it for different purposes. The system uses a protocol for attendance: a timetable module is attached to the system, which automatically obtains the subject, class, date and time. The teacher comes into the class and just presses a button to start the attendance process, and the system records the attendance automatically, without any intervention from students or teacher. In this way a lot of time is saved, and the process is highly secure: no one can mark the attendance of another. Attendance is maintained on the server, so anyone, such as the administration, parents or the students themselves, can access it for their own purposes. The camera takes images continuously in order to detect and recognize all the students in the classroom. To avoid false detections we use the skin classification technique, which enhances the efficiency and accuracy of the detection process: first the skin is classified, so that only skin pixels remain and all other pixels in the image are set to black, which greatly enhances the accuracy of the face detection


process. Two databases are shown in the experimental setup figure: the face database, which is the collection of face images and features extracted at the time of the enrollment process, and the attendance database, which contains information about the teachers and students and is also used to mark attendance.

Figure 4: Face recognition System in Class

IV. FUTURE SCOPE

For security reasons, the detection and recognition system can also be used to identify culprits at bus stations, railway stations and other public places; this would be a helping hand for the police. In such a system we would use a GSM module: if a culprit is detected, the detection signal can be transmitted through the GSM module to the central control room of the police station, and with the help of the GSM ISDN number the area where the culprit is present can be identified.

V. CONCLUSION

There is a wide range of existing attendance methods, such as biometric and RFID-based systems, which are time-consuming and inefficient. To overcome this, the system described above is a better and more reliable solution from every perspective of time and security. We have thus managed to develop a reliable and efficient attendance system that implements an image processing algorithm to detect faces in a classroom and to recognize those faces accurately in order to mark attendance.

REFERENCES

[1]. Arulogun O. T., Olatunbosun, A., Fakolujo O. A., and Olaniyi, O. M. RFID-Based Students

Attendance Management System. International Journal of Scientific & Engineering Research Volume

4, Issue 2, February-2013.

[2]. Chitresh Saraswat and Amit Kumar. An Efficient Automatic Attendance System using Fingerprint

Verification Technique. International Journal on Computer Science and Engineering Vol. 02, No. 02,

2010, 264-269.


[3]. Jobin J., Jiji Joseph, Sandhya Y.A., Soni P. Saji, Deepa P.L. Palm Biometrics Recognition and

Verification System. International Journal of Advanced Research in Electrical, Electronics and

Instrumentation Engineering Vol. 1, Issue 2, August 2012.

[4]. Nirmalya Kar, Mrinal Kanti Debbarma, Ashim Saha, and Dwijen Rudra Pal. Study of Implementing

Automated Attendance System Using Face Recognition Technique. International Journal of Computer

and Communication Engineering, Vol. 1, No. 2, July 2012.

[5]. Yogesh Tayal, Ruchika Lamba, Subhransu Padhee. Automatic Face Detection Using Color Based

Segmentation. International Journal of Scientific and Research Publications, Volume 2, Issue 6, June

2012.

[6]. Omaima N. A. AL-Allaf. Review Of Face Detection Systems Based Artificial Neural Networks

Algorithms. The International Journal of Multimedia & Its Applications (Ijma) Vol.6, No.1, February

2014.

[7]. Deepak Ghimire and Joonwhoan Lee. A Robust Face Detection Method Based on Skin Color and

Edges. J Inf Process Syst, Vol.9, No.1, March 2013.

[8]. Sanjay Kr. Singh, D. S. Chauhan, Mayank Vatsa, Richa Singh. A Robust Skin Color Based Face

Detection Algorithm. Tamkang Journal of Science and Engineering, Vol. 6, No. 4, pp. 227-234 (2003).

[9]. Viola, P. & Jones, J. Robust Real-Time Face Detection. International Journal of Computer Vision 57(2), 137-154, 2004.

[10]. Mayank Agarwal, Nikunj Jain, Mr. Manish Kumar and Himanshu Agrawal. Face Recognition Using

Eigen Faces and Artificial Neural Network. International Journal of Computer Theory and Engineering,

Vol. 2, No. 4, August, 2010.

[11]. Bayan Ali Saad Al-Ghamdi, Sumayyah Redhwan Allaam. Recognition of Human Face by Face

Recognition System using 3D. Journal of Information & Communication Technology Vol. 4, No. 2,

(Fall 2010) 27-34.

[12]. Hafiz Imtiaz and Shaikh Anowarul Fattah. A Face Recognition Scheme Using Wavelet - Based

Dominant Features. Signal & Image Processing: An International Journal (SIPIJ) Vol.2, No.3,

September 2011.

[13]. Jagadeesh H S, Suresh Babu K, and Raja K B. DBC based Face Recognition using DWT. Signal &

Image Processing: An International Journal (SIPIJ) Vol.3, No.2, April 2012

[14]. Aruna Bhat. Medoid Based Model For Face Recognition Using Eigen And Fisher Faces. International

Journal of Soft Computing, Mathematics and Control (IJSCMC), Vol. 2, No. 3, August 2013.

[15]. Kandla Arora. Real Time Application of Face Recognition Concept. International Journal of Soft

Computing and Engineering (IJSCE) ISSN: 2231-2307, Volume-2, Issue-5, November 2012.

AUTHORS BIOGRAPHY

Ajinkya Patil is pursuing the M.Tech. degree in Electronics & Telecommunication Engineering at Symbiosis International University, Pune. His research interests include image processing, robotics and embedded systems.

Mrudang Shukla is an assistant professor at Symbiosis Institute of Technology in the Electronics and Telecommunication department. His research interest is in image processing and in defining vision and path planning for automated vehicles in agricultural fields.


A TUNE-IN OPTIMIZATION PROCESS OF AISI 4140 IN RAW

TURNING OPERATION USING CVD COATED INSERT

C. Rajesh
Department of Mechanical Engineering, Srikalahasteeswara Institute of Technology, Srikalahasthi-517640, Andhra Pradesh, India

ABSTRACT

In today's rapidly changing scenario, every manufacturing industry aims at producing a large number of quality products in relatively less time. This invites optimization methods in metal cutting processes, which are considered a vital tool for continual improvement of output quality. In order to maximize the gains from the raw turning operation of AISI 4140 alloy steel, an accurate model of the process must be constructed. In this paper, an approach which incorporates surface roughness, material removal rate and power consumption is presented for optimizing the cutting parameters in finish turning, and a tutorial is given on the constructed optimization process, describing the Taguchi method, artificial neural networks and a genetic algorithm (GA) developed specifically for problems with multiple objectives by using a specialized fitness function; GONNs are adopted for better acceptance. ANOVA is used to analyze the effect of the cutting parameters on the quality characteristics of the machined workpiece. The approach is suitable for fast determination of optimum cutting parameters during machining, when there is not enough time for deep analysis.

KEYWORDS: CVD coated insert, AISI 4140 alloy steel, Taguchi, ANOVA, ANN, GA, GONNs

I. INTRODUCTION AND LITERATURE SURVEY

The recent developments in science and technology have put tremendous pressure on manufacturing industries, which are trying to decrease cutting costs, increase the quality of the machined parts and machine more difficult materials. Machining efficiency is improved by reducing the machining time through high speed machining. When cutting ferrous and hard-to-machine materials such as steels, cast iron and superalloys, the softening temperature and the chemical stability of the tool material limit the cutting speed. High speed machining has been a main objective of mechanical engineering through the ages, and the drive to increase productivity has been instrumental in the invention of ever newer cutting tools with respect to both materials and designs [1].

The development of machining technologies and practices over recent years has meant that designs that were difficult to manufacture can now be produced relatively easily. Also, tolerances and the resultant component alterations that only a short time ago were achievable only by the most highly perfected facilities can now be attained by much more ubiquitous equipment. Generally, industries with metal cutting operations have been suffering from serious problems because the optimum operating conditions for the machine tools cannot be easily achieved; industrial practitioners and researchers have been working in this area to overcome such problems [2].

In machining, determining the optimal cutting conditions or parameters for a given machining situation is difficult in practice. The conventional way of selecting these conditions, such as speed and feed rate, has been based on data from machining handbooks and/or on the experience and knowledge of the operator. Turning is the most common method for cutting, and especially for finishing, machined parts: it is a cutting operation in which the part is rotated as the tool is held against it on a machine called a lathe [4]. The raw stock used on a lathe is usually cylindrical, and the parts machined on it are rotational parts. In a turning operation, it is an important task to select cutting parameters that achieve high cutting performance. The cutting parameters are reflected in the surface roughness, surface texture and dimensional deviations of the


product (Nalbant, 2006). The surface finish obtained in manufacturing processes mainly depends on the combination of two aspects: the ideal surface finish, given by the marks that the manufacturing process produces on the surface, and the actual surface finish, which is generated taking into account the irregularities and deficiencies that may appear in the process and change the initial manufacturing conditions (Arbizu and Perez, 2002).

The surface texture of an engineering component is very important. The surface of every part has some type of texture created by any combination of the following factors: the microstructure of the material, the action of the cutting tool, cutting tool instability, errors in tool guideways, and deformations caused by stress patterns in the component. It is affected by the machining process through changes in the condition of either the component or the machine (Abouelatta and Madl, 2000).

A machined surface carries a lot of valuable information about the process, including tool wear, built-up edge, vibrations, damaged machine elements, etc. Consequently, a machined surface is a replica of the cutting edge, carrying valuable information related to the tool condition, i.e., its sharpness or bluntness (Kasim et al., 2006). Chien and Chou (2001) applied neural networks for modeling surface roughness, cutting forces and cutting-tool life, and applied a GA to find optimum cutting conditions for maximizing the material removal rate under constraints on the expected surface roughness and tool life [7]. Cus and Balic (2003) also applied a GA for optimizing a multi-objective function based on the minimum time necessary for manufacturing, minimum unit cost and minimum surface roughness; all the process models applied in their research were empirical formulas from machining handbooks fitted through regressions [11]. More complex models have also been applied for surface roughness and tool wear modeling to optimize off-line cutting parameters. Zuperl and Cus (2003) applied and compared feed-forward neural networks for learning a multi-objective function similar to the one presented in (Cus & Balic, 2003). It is nearly impossible to discuss all the works related to Taguchi methods; we have tried to mention the main articles that discuss the pros and cons of Taguchi's contributions, and several other papers are listed in the bibliography but not specifically discussed here.

Organization of the work:

Step 1: Design of experiments
Step 2: Taguchi method optimization
Step 3: Test analysis of variance
Step 4: Multi-objective optimization - GA
Step 5: Artificial intelligence (AI): simulation of ANN
Step 6: Hybridization: GA + ANN = GONNs
Step 7: Is the objective satisfied?
Step 8: If no, tune in the optimization process
Step 9: If yes, store the optimum solution
Step 10: Compare the optimum solutions and report the best solution

II. MATERIALS AND EXPERIMENTATION

Machine: The experiments were carried out on a precision centre lathe (PSG A141), which enables high precision machining and production of jobs. The main spindle runs on high precision taper roller bearings and is made from hardened and precision drawn nickel chromium steel. Technical specifications: centre height 177.5 mm, main motor power 3 hp, 30 longitudinal and transverse feeds.

Work piece: Work pieces of standard dimensions were used for machining: diameter 50 mm, length 300 mm (approx.).

The cutting tool insert: The cutting tool selected for machining AISI 4140 alloy steel was a Chemical Vapor Deposition (CVD) coated cermet insert with 0.4 and 0.8 mm nose radii and a 2-15 µm thick coating. Coated tips typically have lives 10 times greater than uncoated tips, and common coating materials include titanium nitride, titanium carbide and aluminum oxide. The CVD tool was chosen because it had been found to be the best choice for machining chromium-molybdenum steels, owing to its high wear resistance.

Lathe tool dynamometer: The instrument used for the measurement of the cutting forces was a multi-component force indicator. This instrument comprises an independent DC excitation supply for feeding


strain gauge bridges, and signal processing systems to process and compute the respective force values for direct independent display. The instrument operates on 230 V, 50 Hz AC mains.

Surface roughness measurement: The instrument used to measure surface roughness was a SURFTEST (Mitutoyo make). For a probe movement of 5 mm, surface roughness readings were recorded at three locations on the work piece, and the average value was used for analysis.

Table 1: Process variables and their limits

Factor         Units    Level 1  Level 2  Level 3
Speed          rpm      740      580      450
Feed           mm/rev   0.09     0.07     0.05
Depth of cut   mm       0.25     0.20     0.10

Fig 1: Experimental setup

Table 2: Chemical composition of AISI 4140 [3]

Element  C          Si         Mn         S      P      Cr         Ni         Mo         Fe
wt. %    0.20(max)  0.35(max)  0.50-1.00  0.040  0.040  0.75-1.25  1.00-1.50  0.08-0.15  Balance

In this experiment, a CVD coated tool was used in order to investigate the surface roughness of the machined work piece, the material removal rate and the power consumption during cutting. A view of the cutting zone and the experimental setup is shown in Fig. 1. The surface roughness of the finished work surface was measured with the help of a surface roughness tester, and the material removal rate and power consumption were calculated as described below. The working ranges of the parameters for the subsequent design of experiments, based on Taguchi's L27 orthogonal array (OA) design, were then selected. In the present experimental study, spindle speed, feed rate and depth of cut have been considered as the process variables; the variables with their units are listed in Table 1.

2.1 Experimental procedure

Turning is a popularly used machining process, and lathe machines play a major role in the modern machining industry in enhancing product quality as well as productivity. In the present work, three levels, three factors and twenty-seven experiments were identified. Appropriate selection of the orthogonal array is the first step of the Taguchi approach; accordingly, the L27 orthogonal array was selected. Cutting tests were carried out on the lathe under dry conditions. A pre-cut with a 1 mm depth of cut was performed on each work piece prior to the actual turning. This was done in order to remove the rust layer or hardened top layer from the outside surface and to minimize any effect of inhomogeneity on the experimental results. After that, the weight of each sample was measured accurately with the help of a high precision digital balance meter. The samples were then turned in the lathe using the different levels of the process parameters, and the machining time and cutting forces for each sample were recorded. After machining, the weight of each machined part was again measured precisely with the digital balance meter. The surface roughness was then measured with a portable Mitutoyo SJ-201P Surftest. The results of the experiments are shown in Tables 3.a, 3.b and 3.c.

2.2 Calculation of the material removal rate

The material removal rate (MRR) has been calculated from the difference in weight of the work piece before and after the experiment, using the following formula.



MRR = (Wi - Wf) / (ρ · t)   mm³/min

where Wi is the initial weight of the work piece in g, Wf is the final weight of the work piece in g, t is the machining time in minutes, and ρ is the density of the alloy steel (7.8 × 10⁻³ g/mm³).
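A minimal sketch of this calculation (the sample values are hypothetical, not measurements from this study):

Wi  = 925.40;                  % weight before turning, g (hypothetical)
Wf  = 924.78;                  % weight after turning, g (hypothetical)
t   = 1.25;                    % machining time, min (hypothetical)
rho = 7.8e-3;                  % density of the alloy steel, g/mm^3
MRR = (Wi - Wf) / (rho * t);   % material removal rate, mm^3/min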

2.3 Calculation of the power consumed

The cutting force system in 3D turning consists of three forces: Fc, the largest force, which accounts for about 99% of the power required; Fa, which requires very little power because feed rates are very small; and the radial force Fr, which also contributes very little because the velocity in the radial direction is negligible. A tool dynamometer is used for determining the forces. Ignoring the thrust and radial forces, the total input power to cutting is given by

PC = Fc · V

where V is the cutting velocity, obtained here from the volume of material removed per unit machining time divided by the uncut chip cross-section (feed × depth of cut).

III. DATA COLLECTION

The results of the experiments are shown in Tables 3 (a) to (c), and the analysis based on these experimental data is presented in the following section. Optimization of the surface roughness, material removal rate and power consumption has been carried out by the Taguchi method and by a genetic algorithm coupled with regression analysis, analysis of variance and the entropy concept. Confirmatory tests have also been conducted to validate the optimal results.

Table 3.a: Experimental results, L27 orthogonal array (factor levels coded 1-3)

Speed Feed DOC  Ra (µm)      Speed Feed DOC  Ra (µm)
  1    1   1    3.7624         2    2   3    2.8209
  1    1   2    2.5408         2    3   1    4.2722
  1    1   3    2.6463         2    3   2    1.9945
  1    2   1    4.0304         2    3   3    1.9765
  1    2   2    2.2705         3    1   1    3.1275
  1    2   3    2.4472         3    1   2    1.9613
  1    3   1    4.3392         3    1   3    2.2436
  1    3   2    2.3151         3    2   1    3.5635
  1    3   3    2.5792         3    2   2    2.0062
  2    1   1    4.1644         3    2   3    2.8849
  2    1   2    2.3815         3    3   1    3.0145
  2    1   3    2.9360         3    3   2    2.2580
  2    2   1    4.3271         3    3   3    2.8830
  2    2   2    2.6254         --   --  --   --

Table 3.b: Measurement of material removal rate

Sample  MRR (mm³/min)    Sample  MRR (mm³/min)
  1       0.6234           15      0.1417
  2       0.2564           16      0.6594
  3       0.2685           17      0.3846
  4       0.7792           18      0.1948
  5       0.3550           19      0.1699
  6       0.3568           20      0.1687
  7       0.8727           21      0.0886
  8       0.3550           22      0.3108
  9       0.4918           23      0.3422
  10      0.5647           24      0.2331
  11      0.5273           25      0.1626
  12      0.5367           26      0.1553
  13      0.5648           27      0.1608
  14      0.1417           --      --


Table 3.c: Measurement of power consumed

Sample  Power (kW)    Sample  Power (kW)
  1       9.8829        15      3.3306
  2       3.1600        16     11.2389
  3      10.5607        17      3.7078
  4      20.8746        18      5.0348
  5       4.6130        19      3.6868
  6      10.1059        20      3.3031
  7      15.3542        21      2.1369
  8       3.6462        22      6.5768
  9      11.7180        23      4.2897
  10     17.9277        24      5.5792
  11      9.6427        25      2.9818
  12      5.0271        26      1.9107
  13     13.8077        27      3.1278
  14      1.4118        --      --

IV. DATA ANALYSES

The experiments were conducted to assess the effect of spindle speed, feed rate and depth of cut on the surface finish, material removal rate and machine power consumption. Table 3 lists the experimental results for Ra, MRR and PC.

A. Taguchi method

The Taguchi method was developed by Dr. Genichi Taguchi, a Japanese quality management consultant. He developed both the philosophy and the methodology for the application of factorial design experiments, taking the design of experiments out of the exclusive world of the statistician and bringing it more fully into the world of manufacturing. His contributions have also made the practitioner's work simpler by advocating the use of fewer experimental designs and by providing a clearer understanding of the nature of variation and of the economic consequences of quality engineering in manufacturing. The method uses a statistical measure of performance called the signal-to-noise (S/N) ratio. Taguchi methods seek to remove the effect of noise; Taguchi pointed out that the key element for achieving high quality and low cost is parameter design. The S/N ratio takes both the mean and




the variability into account. The ratio depends on the quality characteristics of the product/process to be optimized.

The optimal setting is the parameter combination which has the highest S/N ratio. The standard S/N ratios generally used are: nominal is best (NB), lower the better (LB) and higher the better (HB). The Taguchi approach offers potential savings in experimental time and cost in product or process development and quality improvement. Quality is measured by the deviation of a functional characteristic from its target value. Through parameter design, the levels of the product and process factors are determined such that the product's functional characteristics are optimized and the effect of noise factors is minimized.
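For reference, the standard textbook forms of these ratios (not quoted from this paper) over n replicates y_1, ..., y_n, with mean \bar{y} and standard deviation s, are:

\mathrm{HB}:\; S/N = -10\,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^{2}}\right),\qquad
\mathrm{LB}:\; S/N = -10\,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n}y_i^{2}\right),\qquad
\mathrm{NB}:\; S/N = 10\,\log_{10}\!\left(\frac{\bar{y}^{2}}{s^{2}}\right)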

Taguchi’s ideas can be distilled into two fundamental concepts:

(a) Quality losses must be defined as deviations from targets, not conformance to specifications

(b) Quality is designed, not manufactured, into the product.

Main effect plots:

The analysis was made with the help of the software package MINITAB 16. The main effect plots are shown in Figs. 2, 3 and 4; they show the variation of each individual response with the three parameters (speed, feed and depth of cut) separately. In the plots, the x-axis indicates the value of each process parameter at its three levels and the y-axis the response value, with the horizontal line indicating the mean value of the response. The main effect plots are used to determine the optimal design conditions; Fig. 2, for example, shows the main effect plot for surface roughness.

Fig. 2: Main effect plot for Ra; Fig. 3: Main effect plot for MRR; Fig. 4: Main effect plot for PC

According to the main effect plots for the S/N ratios (Figs. 2-4), the optimal conditions for minimum Ra, maximum MRR and minimum PC are:

Table 4: Optimal turning conditions

Response         Best levels (S-F-D)
Ra               3-1-2
MRR              1-1-1
Power consumed   3-1-2

B. Mathematical correlation

Linear polynomial models were developed using the commercially available Minitab 16 software for the various turning parameters, with speed, feed and depth of cut as predictors. Linear regression equations were used to develop a statistical model with the objective of establishing a correlation between the selected turning parameters and the quality characteristics of the machined work piece. The regression equations are:

Ra  = 1.07 + 0.218·S - 0.023·F + 0.277·D                  (Eq. 1)
MRR = 1.67 × 10⁻⁵ + 1.76·S + 0.110·F + 0.586·D            (Eq. 2)
PC  = 3.04 × 10⁻⁵ + 1.79·S - 0.409·F + 0.175·D            (Eq. 3)

where Ra denotes the arithmetic average roughness height (µm), D the depth of cut (mm), S the spindle speed (rpm), F the feed (mm/rev) and R the coefficient of regression.


C. Analysis of variance (ANOVA)

Analysis of variance (ANOVA) is a powerful analysis tool used to identify the significance of each parameter for the output response. This is achieved by separating the total variability of the S/N ratio, measured by the sum of squared deviations (SST) from the total mean S/N ratio, into contributions from each design parameter and the error. The F-value of each design parameter is the ratio of its mean squared deviation to the mean squared error. The Minitab 16 software was used to compute the various terms in the ANOVA. Tables 5, 6 and 7 show the ANOVA for surface roughness, MRR and power consumption. Study of the ANOVA table for a given analysis helps to determine which of the factors need control and which do not; the table shows the individual effect of each parameter and the interaction effects of the parameters on the output response.

ANOVA terms and notations: D.F = degrees of freedom, S.S = sum of squares, M.S = mean square, F = variance ratio and C = percentage contribution.

Table 5: ANOVA result for surface roughness Ra [95% confidence level]

SOURCE D.F S.S M.S F C%

(S) 2 0.8110 0.4055 1.89 5.69

(F) 2 0.1219 0.0609 0.28 0.86

(D) 2 12.499 6.2493 29.24 87.67

SXF 4 0.0379 0.0095 0.05 0.27

SXD 4 0.6894 0.1724 0.80 4.84

FXD 4 0.0953 0.0239 0.11 0.67

ERROR 8 1.6495 0.2062
TOTAL 26 14.254 100

Table 6: ANOVA result for Material removal rate (MRR) [95% confidence level]

SOURCE DOF S.S M.S F C%

(S) 2 0.3963 0.1982 0.93 48.03

(F) 2 0.0037 0.0018 0.08 0.45

(D) 2 0.3384 0.1691 0.81 41.02

SXF 4 0.0203 0.0101 0.04 2.46

SXD 4 0.0561 0.0280 0.14 6.80

FXD 4 0.0101 0.0050 0.03 1.24

ERROR 8 0.3611 0.0452
TOTAL 26 0.824997 100

Table 7: ANOVA result for Power consumption (PC) [95% confidence level]

SOURCE DOF S.S M.S F C%

(S) 2 204.0423 102.0212 477.38 36.46

(F) 2 16.2641 8.1320 38.05 2.90

(D) 2 247.1854 123.5927 578.33 44.18

SXF 4 54.142 13.5355 63.34 9.67

SXD 4 20.7271 5.1818 24.25 3.70

FXD 4 17.2881 4.3220 20.23 3.09

ERROR 8 207.5954 25.9494
TOTAL 26 559.6492 100

From the ANOVA table for surface roughness it is clear that depth of cut (87.67%) is the major factor to be controlled in order to obtain a good surface finish, and the interaction S×D (4.84%) has more influence than the other two interactions. For material removal rate, speed (48.03%) is the major factor for achieving a high removal of material, with the interaction S×D (6.80%) again the most influential. For power consumption, depth of cut (44.18%) is the major factor for achieving low machining power, and the interaction S×F (9.67%) has more influence than the other two interactions.


D. Artificial neural networks (ANN)

Artificial neural networks (ANN) are a branch of the field known as "artificial intelligence" (AI), which also includes fuzzy logic (FL) and genetic algorithms (GA). An ANN is based on a basic model of the human brain, with the capability of generalization and learning. The purpose of simulating a simple model of the human neural cell is to acquire the intelligent features of these cells. The term "artificial" means that neural nets are implemented as computer programs able to handle the large number of calculations required during the learning process [12].

After conducting the DOE, an experimental database composed of 27 runs over the different cutting conditions was generated, with each sample characterized by its surface roughness (Ra), material removal rate (MRR) and power consumption (PC). The experimental database was used to learn three ANN process models: (1) an Ra model, (2) an MRR model and (3) a PC model. Before training the ANNs, a statistical study was conducted for each model in order to discard those input variables that were not significant. The final input variables applied for each model and the main characteristics of the ANN models are shown in Table 8.
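A minimal training sketch for one of these models, assuming MATLAB's Neural Network Toolbox and the Table 8 settings; X and T are assumed variable names holding the 3 × 27 input matrix [speed; feed; doc] and the 1 × 27 vector of measured Ra values:

net = feedforwardnet(3, 'trainlm');     % 1 hidden layer, 3 neurons, Levenberg-Marquardt
net.layers{1}.transferFcn = 'tansig';   % tan-sigmoid mapping function (Table 8)
net.trainParam.epochs = 300;            % 300 training epochs (Table 8)
% (trainlm adapts its own step-size mu; Table 8's learning rate of 0.1
%  applies to gradient-descent training functions such as traingd.)
net = train(net, X, T);                 % learn the Ra process model
RaPred = net(X);                        % network predictions at the 27 runs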

Table 8: Characteristics of the ANN models

Characteristic     Ra model             MRR model            PC model
Type               Back propagation     Back propagation     Back propagation
Inputs             S, F, doc            S, F, doc            S, F, doc
Output             Ra                   MRR                  PC
Hidden layers      1                    1                    1
Neurons            3                    3                    3
Mapping function   Tan-sig              Tan-sig              Tan-sig
Training method    Levenberg-Marquardt  Levenberg-Marquardt  Levenberg-Marquardt
Epochs             300                  300                  300
Learning rate      0.1                  0.1                  0.1

E. Genetic algorithm (GA)

MATLAB is a software package with many capabilities for solving engineering problems, especially constrained optimization problems. Genetic algorithms (GA) belong to the class of stochastic search optimization methods; the concept was developed by Holland in the 1960s and 1970s [11]. These algorithms are based on the mechanics of natural selection and natural genetics, and are more robust and more likely to locate a global optimum. Chromosomes are made of discrete units called genes, here assumed to be binary digits, and a GA operates on a collection of chromosomes called a population, which is normally initialized randomly. Three genetic operators are used: reproduction, crossover and mutation. Reproduction involves the selection of chromosomes for the next generation and is also called the selection process. The crossover operator is the most important operator of a GA: two chromosomes, called parents, are combined to form new chromosomes, called offspring. The parents are selected among the existing chromosomes in the population with preference towards fitness, so that the offspring are expected to inherit the good genes which make the parents fitter. The mutation operator introduces random changes into the characteristics of chromosomes and is generally applied at the gene level; in typical GA implementations the mutation rate is very small and depends on the length of the chromosome. Mutation nevertheless plays a critical role, reintroducing genetic diversity into the population and helping the search escape from local optima. The foregoing three steps are repeated for successive



generations of the population until no further improvement in fitness is attainable. Besides the weighting factors and constraints, suitable GA parameters are required for efficient operation.

Genetically optimized neural networks (GONNs) are a new and inviting method for the complex optimization of cutting parameters: the multi-objective optimization of the cutting conditions is performed by means of the neural networks, taking into consideration the technological, economic and organizational limitations. The GA parameters, along with the relevant objective functions and the set of machining performance constraints, are imposed on the GA optimization methodology to provide the optimum cutting conditions. The multi-objective fitness function is of the following form:

% Weighted-sum fitness combining the three power-law response models;
% w1-w3 are the entropy-method weights, k, a, b, c the fitted coefficients.
function y = simple_fitness(x, w1, w2, w3, k, a, b, c)
f = k * x(1)^a * x(2)^b * x(3)^c;   % power-law response at x = [S F D]
y = w1/f + w2*f + w3/f;
end
FitnessFunction = @(x) simple_fitness(x, w1, w2, w3, k, a, b, c);
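A hypothetical driver wiring this fitness function into MATLAB's ga() with the settings of Table 9 (the bounds come from the experimental levels; option names follow the classic gaoptimset interface):

lb = [450 0.05 0.10];  ub = [740 0.09 0.25];          % speed, feed, depth-of-cut bounds
opts = gaoptimset('PopulationSize', 10, 'Generations', 15, ...
                  'CrossoverFraction', 0.8, 'EliteCount', 2);
[xOpt, yOpt] = ga(FitnessFunction, 3, [], [], [], [], lb, ub, [], opts);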

E.1 Multi-objective GA

The GA optimization methodology is based on machining performance prediction models developed from a comprehensive system of theoretical analysis, an experimental database and numerical methods. Being population-based, GAs are well suited to solving multi-objective optimization problems: a generic single-objective GA can be modified to find a set of multiple non-dominated solutions in a single run. There are several multi-objective evolutionary algorithms; here the weight-based genetic algorithm (WBGA) is implemented, and its characteristics are shown in Table 9.

E.2 Weighted-sum approach to the fitness function

The classical approach to solving a multi-objective optimization problem is to assign a weight wi to each normalized objective function fi(x), so that the problem is converted to a single-objective problem with the scalar objective function

y = w1·f1(x) + w2·f2(x) + ... + wk·fk(x)          (1)

where the fi(x) are the normalized objective functions and Σ wi = 1. This is called the a priori approach, since the user is expected to provide the weights; here the weights are estimated by the entropy method [13]. Solving a problem with objective function (1) for a given weight vector W = {w1, w2, ..., wk} yields a single solution, and if multiple solutions are desired the problem must be solved multiple times with different weight combinations. The main difficulty with this approach is selecting a weight vector for each run. To automate the process, in the weight-based genetic algorithm for multi-objective optimization each solution in the population uses a different weight vector W = {w1, w2, ..., wk} in the calculation of the summed objective function (1). The weight vector W is embedded within the chromosome of solution xi, so multiple solutions can be searched simultaneously in a single run; in addition, the weight vectors can be adjusted to promote diversity of the population.

Table 9: Characteristics of the genetic search algorithm

Variables to optimize   Speed, feed, depth of cut
Population size         10
Generations             15
Crossover fraction      0.8
Elite count             2
Mutation function       Gaussian
Selection function      Stochastic
Boundary ranges         S = [450, 580, 740], F = [0.05, 0.07, 0.09], DOC = [0.10, 0.20, 0.25]
Stopping criterion      Stall time: 6 s; stall generations: 7

Table 10: Optimal turning conditions and pinpoint optimal values

Response        S    F     D     EXP    MODEL (Eqs. 1-3)   GA     GONNs
Ra (µm)         450  0.09  0.20  1.96   2.18               2.24   2.10
MRR (mm³/min)   740  0.09  0.25  0.62   0.59               0.66   0.62
PC (kW)         450  0.09  0.20  3.30   3.41               3.53   3.35


F. Confirmation experiment

Table 10 shows the turning conditions, the pinpoint optimal values of the controlling factors, the results obtained from the confirmation test, and the values calculated from the developed models [Eqs. 1-3], allowing the results to be compared. Equations (1-3) therefore correlate the arithmetic mean surface roughness, material removal rate and power consumed with the turning conditions (depth of cut, speed and feed) to a realistic degree of approximation.

V. FUTURE WORK

AI models are so-called black-box models and cannot be optimized with conventional optimization methods. Due to this limitation, a cutting parameter optimization methodology based on AI models requires advanced search methods for global optimization, such as genetic algorithms (GA) and mesh adaptive direct search (MADS) algorithms. For better acceptance, hybrids of GA-ANN and MADS-ANN can be used to find the global optimal solution in complex multidimensional search spaces.

VI. CONCLUSION

There is general agreement that off-line experiments during the product or process design stage are of great value, and reducing quality loss by designing products and processes to be insensitive to variation in noise variables is a novel concept for statisticians and quality engineers.

The variation of the average surface roughness (Ra), material removal rate (MRR) and power consumed (PC) resulting from machining is explained by the fact that they depend on the turning parameters.

Multiple regression produced a satisfactory correlation for the prediction of the output responses with a reasonable degree of approximation. The ANOVA revealed that the depth of cut is the dominant parameter for surface roughness, followed by speed; for the MRR response, speed is dominant, followed by depth of cut; and for the PC response, depth of cut is dominant, followed by feed.

The results in Table 10 therefore lead to the conclusion that, within the studied domain, genetically optimized neural networks have the better capability. GONNs have become a popular heuristic approach to multi-objective design and optimization problems. The aforesaid methods (Taguchi, regression, artificial neural networks and genetic algorithms) can be applied for continuous quality improvement of the product/process and for off-line quality control.

REFERENCES

[1] Meyers A.R. and Slattery T.J., Basic Machining Reference Handbook, Industrial Press Inc, (2001)

[2] Shaw M., Metal Cutting Principles, Oxford University Press, 1984.

[3] Kalpakjian S. and Schmid S.R., Manufacturing Processes for Engineering Materials, 5th ed., Pearson

Education, Inc., SBN 9-78013227-271-1 (2008)

[4] Boothroyd G. and Knight W.A., Fundamentals of Machining and Machine Tools, 3rd Ed., CRC Publication.

ISBN 1-57444-659-2, (2006)

[5] Groover M.P., Fundamentals of Modern Manufacturing, Prentice-Hall, Upper Saddle River, NJ (now

published by John Wiley, New York), 634, 1996.

[6] Kalpakjian S. and Schmid S.R., Manufacturing Processes for Engineering Materials, 5th ed., Pearson

Education, Inc., SBN 9-78013227-271-1, 2008.

[7] Chien, W. T. and Chou, C. Y. (2001). The predictive model for machinability of 304 stainless steel.

Journal of Materials Processing Tech., 118, 1-3, (2001), 442–447, ISSN 0924-0136.

[8] Rodrigues L.L.R., Kantharaj A.N., Effect of Cutting Parameters on Surface Roughness and Cutting Force in

Turning Mild Steel., Research Journal of Recent Sciences- ISSN 2277-2502, Vol. 1(10), 19-26, 2012.

[9] Hardeep Singh1*, Rajesh Khanna2, M.P. Garg2., Effect of Cutting Parameters on MRR and Surface

Roughness in Turning EN-8., Current Trends in Engineering Research Vol.1, No.1, 2011.

[10] Anirban Bhattacharya, Santanu Das, P. Majumder (2009), Ajay Batish, Estimating the effect of cutting

parameters on surface finish and power consumption during high speed machining of AISI 1045 steel using

Taguchi design and ANOVA, Prod. Eng. Res. Devel, vol. 3, pp 31–40


[11] Cus, F. and Balic, J. (2003). Optimization of cutting process by GA approach. Robotics and Computer-Integrated Manufacturing, 19(1-2), 113-121, ISSN 0736-5845.

[12] Asiltürk I. and Çunkas M., Modeling and prediction of surface roughness in turning operations using

artificial neural network and multiple regression method, Expert Systems with Applications, 38(5), 5826-5832

(2010)

[13] K. Srinivasa Raju & D. Nagesh Kumar: Multicriterion Analysis in Engineering and Management, textbook.

ABOUT THE AUTHOR

Chittam Rajesh was born on 25th September 1988. He is currently working as a Lecturer in the Mechanical Engineering Department of Srikalahasteeswara Institute of Technology, Srikalahasthi (A.P.). He has two years of experience in teaching and one year in industry. He received his B.Tech. in Mechanical Engineering from Jawaharlal Nehru Technological University, Anantapur (A.P.) in 2009, completed his M.Tech. at Sri Venkateswara University, Tirupathi (A.P.) in 2012, and is planning for a research programme. His research interests are advanced machining and manufacturing technologies.


A MODIFIED SINGLE-FRAME LEARNING BASED SUPER-

RESOLUTION AND ITS OBSERVATIONS

Vaishali R. Bagul¹ and Varsha M. Jain²
¹Student, ²Professor, PG Department, MBES College of Engineering, Ambajogai, Maharashtra, India

ABSTRACT

In this paper we focus on various super-resolution image reconstruction techniques. Super-resolution is the conversion of a low-resolution image into a high-resolution one using a set of training images. We discuss the learning-based and total variation regularization methods for super-resolution, including their advantages and disadvantages, and compare the performance of each method in terms of PSNR values and computational time. A higher number of training images increases the computational time of a method, which in turn makes the method difficult to implement in real time. We also combine a single-image learning-based method with total variation regularization and report its results. Super-resolution technology has applications in many areas of image processing, such as medical imaging, image sensing systems, satellite imaging, space technology and surveillance systems. From the comparison, we point out the areas where improvements are needed, along with the future scope.

I. INTRODUCTION

Nowadays image enhancement is applied in various image processing areas, be it surveillance, imaging systems or TV signal conversion. Low-resolution cameras give images from which one cannot extract exact or minute details; in order to extract those details, the resolution of the image must be increased. There are various upsampling and interpolation techniques for increasing the resolution of an image [1]-[12]. 'Super-resolution' is a technique for increasing the resolution of an image in which pairs of low- and high-resolution training images are utilized. Increasing the resolution means increasing the size of the image and the number of pixels [2].

The main aim of this paper is to evaluate the performance of a single-image learning-based super-resolution method [8]-[12] combined with the total variation method [13]-[18] and to compare it with existing methods. We examine whether the computational time and complexity of the system can be reduced while still obtaining a good quality image; to do so, we extend the method described in [2]. It also has to be checked that artifacts are reduced when a super-resolved image is reconstructed. Image super-resolution is also referred to as image upscaling, image zooming, image magnification, image upsampling, etc.

Li et al. in [1] proposed a non-iterative adaptive interpolation scheme for natural image sources, in which a switch between bilinear interpolation and covariance-based adaptive interpolation is made in order to reduce the computational complexity of the system. Sun et al. in [4] proposed a Bayesian approach to image hallucination, where primal sketch priors are constructed and used to enhance the quality of the hallucinated high-resolution image. Chang et al. [5] generated a high-resolution image from a single low-resolution image with the help of a set of one or more training images from scenes of the same or different types.


Various methods are used to recover a high-resolution (HR) image from one or more low-resolution (LR) images [2]-[12]. Pixel replication and bilinear interpolation are standard interpolation techniques that increase the pixel count without actually adding image detail; they blur edges and other sharp details of the images but perform well in smoother regions [3]. In conventional multi-frame super-resolution methods [2]-[4], many low-resolution images of the same scene with varying pixel shifts are taken as inputs, and the correspondence between high- and low-resolution patches is learned from a database consisting of low- and high-resolution image pairs; by applying this knowledge to a new low-resolution (LR) image, a high-resolution (HR) image is reconstructed.

Super-resolution image reconstruction has applications in many areas related to image processing: medical imaging, satellite imaging, TV signal conversion (NTSC to HDTV), surveillance systems, and various computer tools such as printers, media players and image editing software [11]. Super-resolution (SR) can be achieved using a single image or multiple images (image sets); on that basis, SR image reconstruction methods may be classified into two classes: (i) classical multi-image and (ii) example-based super-resolution. Glasner et al. [7] combined the two methods so as to obtain super-resolution from a single image. Single-frame super-resolution methods are also called 'example-based super-resolution', as described by Freeman et al. in [6], where generating an upscaled image with the desired number of pixels takes two steps: interpolation to double the number of pixels, followed by prediction of the missing details.

Purkait et al. in [10] proposed an image zooming technique using fuzzy rule based prediction. In [18], a single-image super-resolution algorithm based on the spatial and wavelet domains is presented, and in [19] a survey of the techniques and challenges in super-resolution image reconstruction is given. Regularization essentially means removing the noise from an image; total variation regularization reduces artifacts while maintaining sharp edges [20]. In this paper, a learning-based method using a single image is combined with total variation regularization in order to obtain improved image quality. It differs from [2] in that a single image is used for learning: a huge database of images is not needed for checking the correspondence between image patches, so there is no problem of data redundancy, as only high-resolution patches constitute the database.

The rest of the paper is organized as follows. In Section II, we describe the learning-based method, the total variation regularization method, and the single-image learning-based method using total variation regularization. In Section III, we present the experimental results of the combined total variation regularization and learning-based method and our approach towards it. In Sections IV and V, we conclude the paper and outline its future scope.

II. COMPARISON OF SUPER-RESOLUTION METHODS

This section describes the learning-based and total variation regularization methods for super-resolution of an image.

2.1. Learning based methods

Learning-based super-resolution can be achieved using either multiple pairs of LR-HR images or a single image. Single-image super-resolution can be categorized into three classes: functional interpolation, reconstruction-based and learning-based, of which learning-based methods are the most widely used for super-resolution; several such methods have been proposed [2]-[11]. As described in [2], [3], [4], etc., multiple-image super-resolution needs a huge database of low- and high-resolution images, which increases the computational complexity and time. This paper therefore describes a single-image learning-based method for super-resolution. The general flow of the learning-based method is given below:

1. Take a low-resolution input image.
2. Pass it through a high-pass filter to obtain the HF and LF components of the image.
3. Upsample the low-frequency (LF) component using an interpolation method.
4. Upsample the high-frequency (HF) LR component using the training image sets from the database.
5. Perform a correlative search between the LR and HR components of the training images.
6. Select the corresponding patches and add them to the upsampled low-frequency HR output to get the final HR output image.

In this method, the low-resolution patch size used is 3 × 3 pixels, and a better-quality super-resolved image is obtained. The drawback of this method is patch or database redundancy and the large processing time required for searching for similar patches in the database [2].
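A minimal sketch of the correlative patch search in steps 4-6, for a magnification factor of 2 (all variable names are assumptions: lrPatches is a 9 × K matrix of vectorised 3 × 3 LR training patches, hrPatches the 36 × K matrix of the co-located 6 × 6 HR detail patches, and lowFreqLR the high-pass-filtered LR input):

highFreqHR = zeros(2 * size(lowFreqLR));             % HR detail layer to fill in
for r = 1:3:size(lowFreqLR,1) - 2
    for c = 1:3:size(lowFreqLR,2) - 2
        p = reshape(lowFreqLR(r:r+2, c:c+2), [], 1); % current 3x3 LR patch
        d = sum(bsxfun(@minus, lrPatches, p).^2, 1); % distance to every training patch
        [~, k] = min(d);                             % nearest LR training patch
        R = 2*r - 1;  C = 2*c - 1;                   % corresponding HR location
        highFreqHR(R:R+5, C:C+5) = reshape(hrPatches(:,k), 6, 6);   % paste HR detail
    end
end

The exhaustive search over all K training patches is what makes the processing time grow with the database size, as noted above.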

2.2. Total variation regularization methods

Total variation regularization, also known as total variation decomposition, is a method used to separate the low- and high-frequency components of an image. Regularization, in mathematics and statistics, and particularly in the fields of machine learning and inverse problems, refers to the process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting [17]. As per Rudin et al. [12], it is based on the principle that signals with excessive and possibly spurious detail have high total variation, that is, a high integral of the absolute gradient of the signal [20]. Various regularization methods are described in [11]-[16]. The terms well-posed and ill-posed were introduced by Hadamard in [15]: a problem is said to be well-posed if it has a unique solution that depends continuously on the data, and ill-posed when this is not the case. The flow of this method is as follows [2]:

[2]:

1. Take a low-resolution image as input.
2. Apply total variation decomposition to obtain the structure and texture components.
3. Upsample the low-frequency structure (cartoon) part using TV upsampling.
4. Upsample the high-frequency texture part using linear interpolation.
5. Finally, add the high-resolution structure and texture components to obtain the high-resolution output image.

The structure part consists of the low-frequency components together with the edge components, whereas the texture part consists of the high-frequency components minus the edge components. For the total variation regularization, the ROF method [12] or the Chambolle method [13] can be used; both are iterative, and the Chambolle method is generally preferred because it converges faster. In [12], non-linear methods for noise removal are used, in which images are denoised by minimizing the total variation norm of the estimated solution. The structure and texture components can both be upsampled using either TV upsampling or linear interpolation; when both components are TV-upsampled, the image is quite sharp and looks clear and natural.
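A minimal sketch of an ROF-style total variation decomposition by explicit gradient descent (the file name, step size and fidelity weight are assumptions; production code would typically use Chambolle's projection algorithm for faster convergence):

f = im2double(rgb2gray(imread('input.png')));  % hypothetical LR input image
u = f;  lambda = 0.1;  dt = 0.2;  eps2 = 1e-6;
for k = 1:100
    [ux, uy] = gradient(u);
    mag = sqrt(ux.^2 + uy.^2 + eps2);          % regularised gradient magnitude
    [dxx, ~] = gradient(ux ./ mag);            % x-derivative of normalised gradient
    [~, dyy] = gradient(uy ./ mag);            % y-derivative of normalised gradient
    u = u + dt * ((dxx + dyy) - lambda * (u - f));   % curvature flow + fidelity term
end
v = f - u;   % u: structure (cartoon) component, v: texture component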

2.3. Single image learning based method using TVR

In our method, a low-resolution image Xi is taken as input; Figure 1 shows a general schematic of the process. The input image Xi is decomposed using the total variation regularization method into structure (cartoon) and texture components [2]: the structure component (u0) is the low-frequency component and the texture component (v0) contains the high frequencies. The structure component is upsampled using the total variation upsampling method to give the HR structure component (U0), and the texture component is upsampled using bilinear interpolation to give the HR texture component (V0). The texture component undergoes the learning process, for which we take a training image that is the same as the input image, with a patch size of 3 × 3 pixels and no overlapping of pixels. The corresponding HR patches from the learning image are added to the upsampled texture component, and finally the HR structure and HR texture components are added to produce the output HR image.

The structure and texture components can both be upsampled using simple linear interpolation, total variation upsampling, or another good interpolation method, and various linear or non-linear filters that help sharpen the image can be used at the output. In this paper, the output image obtained after this processing goes through a 2-D Haar wavelet transform; since the input image has three dimensions, the wavelet transform is applied to two of them [18]. Instead of Haar wavelets, various filters such as those used in [11] could be applied in order to get sharper edges. Our method differs from [2] in that we use a single image for the learning-based processing and in that our output image has smoother edges, which can later be sharpened using various techniques.


Figure 1. Proposed system

III. EXPERIMENTAL RESULTS

This method has been implemented on various images using MATLAB 2013a. The test system has a dual-core processor running at 2.20 GHz and 4 GB of RAM. Images with lower resolutions need less time to process. Table 1 compares the PSNR and processing time for images of two different sizes. Figure 2 (a), (b) and (c) respectively show a flower image to which the learning based method using a single image, the TVR method, and LBM + TVR using a single image are applied. Figure 3 (a), (b), (c) and (d) display the learning based image, the TVR image, the combined learning and total variation method [2] image, and finally the image obtained from the combination of single-image learning and total variation regularization, respectively.
The processing speed can be increased by using a system with a higher configuration for the implementation. We have used images of various sizes and converted them into images of 128 × 128 pixels and 256 × 256 pixels. With a 128 × 128 pixel image, one obtains the results shown in Table 1. The number of iterations for processing in this method is 30. The patch size is taken as 3 × 3 pixels for the LR image. As the resolution of the input image increases, the processing time increases and more artifacts are introduced. Using this method, one can reconstruct images zoomed up to 2X (magnification factor = 2). As seen in Table 1, high PSNR values are obtained and the time required is also low.
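For reference, the PSNR figures in Tables 1 and 2 follow the standard definition for 8-bit images; a minimal sketch of the computation in Python (an illustrative helper, not our MATLAB code):

    import numpy as np

    def psnr(reference, test):
        # Standard PSNR for 8-bit images: 10*log10(MAX^2 / MSE) with MAX = 255
        mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)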

Table 2 compares the PSNR values and processing time required for two variants of the total variation regularization method on different input images. In the first TV method, both the structure and texture components are upsampled by total variation; in the second method, the structure component is upsampled using TV upsampling and the texture component using bicubic interpolation [2]. Using TVR, one obtains sharp edges and a clear image. The super-resolved image is very similar to the original image.

Table 1. Comparison of PSNR and processing time of LBM + TVR using the 2-D Haar wavelet transform

Image        128 × 128 pixels          256 × 256 pixels
             PSNR (dB)   Time (sec)    PSNR (dB)   Time (sec)
Sono 1       46.4692     22.621522     56.6856     90.053187
Red rose     46.1070     24.007834     57.1931     91.943678
CT image     51.4386     23.304213     62.9225     93.818613
Pink rose    47.7972     23.661270     57.1931     92.304604


Table 2. Comparison of PSNR and processing time for the TVR method using 2 different methods

Input Image   TV method 1               TV method 2
              Time (sec)   PSNR (dB)    Time (sec)   PSNR (dB)
Red rose      18.800081    66.8998      30.225065    62.8053
Old man       16.046236    64.7839      30.346805    60.3586
Baby          23.692761    63.6995      32.417537    60.9683
Mango         29.777048    65.3266      55.420420    62.1022
Pink rose     19.397311    70.0629      30.902189    64.8487

Figure 2. Flower image processed using (a) Learning based method (b) TVR method and (c) LBM + TVR

method using Haar wavelet transform

Figure 3. Lighthouse image processed using (a) Learning based method (b) TVR method (c) LBM + TVR [2]

(d) LBM+TVR using Single image

IV. CONCLUSIONS

When we combined the single-image learning based method with the total variation regularization method, the resulting images had smoother edges. To sharpen the image we used the 2-D Haar wavelet transform, which helped us combine the structure and texture components so that the image looks better. The computational time required for processing is reduced. In Tables 1 and 2 we have compared the PSNR values and computational time for every method. Our method is simple to execute, though it needs image sharpening and contrast enhancement. The system complexity is reduced, and so is the computational time. We obtain high PSNR values for the output images. Figure 2 shows the comparison of images super-resolved and reconstructed using the learning based method, the total variation regularization method, and our method. This method is quite useful in medical imaging for processing CT images. We get reduced artifacts using this approach.

V. FUTURE WORK

Super-resolution techniques can be applied in various image processing areas. We can also apply fuzzy rule based prediction to the individual components so as to obtain improved image quality. The process is computationally simple, and no huge databases are required for processing the images. A further developed algorithm can also improve the image quality. In order to obtain a good quality image, we can combine Tikhonov regularization with TV regularization so that the images are not too smooth. We can also change the colour model in the case of colour image processing: convert the RGB model into the YIQ or YCbCr model and then apply interpolation to the individual components for better results.
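A minimal sketch of the colour-model idea just mentioned (OpenCV's YCrCb conversion used as a stand-in for YCbCr handling; the input filename and 2X bilinear upsampling are illustrative assumptions):

    import cv2

    bgr = cv2.imread('input.png')                    # hypothetical input file
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)   # move out of RGB/BGR space
    channels = cv2.split(ycrcb)
    # interpolate each component individually, then merge and convert back
    up = [cv2.resize(c, None, fx=2, fy=2, interpolation=cv2.INTER_LINEAR)
          for c in channels]
    out = cv2.cvtColor(cv2.merge(up), cv2.COLOR_YCrCb2BGR)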



ACKNOWLEDGEMENTS

The authors would like to thank Dr. B. I. Khadakbhavi, Principal, College of Engineering, Ambajogai, and Dr. V. G. Kasbegoudar, PG Dean, College of Engineering, Ambajogai, for their timely and valuable guidance. We would also like to express our sincere gratitude to Dr. B. M. Patil for his immense support and encouragement.

REFERENCES

[1]. X. Li and M. T. Orchard, “New edge-directed interpolation” (2001) IEEE Transactions on Image Processing, Vol. 10, No. 10, pp. 1521-1527.

[2]. T. Goto, Y. Kawamoto, Y. Sakuta, A. Tsutsui , M. Sakurai, “Learning-based Super-resolution Image

Reconstruction on Multi-core Processor” (2012) IEEE Transactions on Consumer Electronics, Vol. 58,

No. 3, pp. 941- 946.

[3]. P. P. Gajjar and M. V. Joshi, “New learning based super-resolution: Use of DWT and IGMR-F prior”

(2010) IEEE Trans. on Image Processing, Vol. 19, No. 5, pp. 1201-1213

[4]. J. Sun , N. N. Zheng, H. Tao and H. Y. Shum, “ Image Hallucination with primal sketch priors” (2003)

IEEE Computer society Conference on Computer Vision and Pattern Recognition, Vol. 2, pp. 729-736.

[5]. H. Chang, D. Y. Yeung and Y. Xiong, “Super-resolution through neighbor embedding” (2004) IEEE

Computer society Conference on Computer Vision and Pattern Recognition, Vol.1, pp. 275-282.

[6]. S. Dai, M. Han, W. Xu, Y. Wu, and Y. Gong, “Soft edge smoothness prior for alpha channel super

resolution” (2007) IEEE Computer Vision and Pattern Recognition, pp. 1-8.

[7]. A. Lukin, A. S. Krylov and A. Nasonov, “Image Interpolation by Super-resolution” (2006) International

conference Graphicon, Novosibirsk Akademgorodok, Russia.

[8]. W. T. Freeman, T. R. Jones, and E. C. Paztor, “Example based super- resolution” (2002) IEEE Computer

Graphics and Applications, Vol. 22, No. 2, pp. 56-65.

[9]. D. Glasner, S. Bagon, M. Irani, “Super-Resolution from a Single Image” (2009) ICCV.

[10]. P. Purkait and B. Chanda, “Fuzzy- Rule based Approach for Single Frame Super Resolution” (2013) In

Proceedings of the IEEE International Conference on Fuzzy Systems (Fuzz-IEEE ’13).

[11]. M. Sakurai , Y. Sakuta, M. Watanabe, T. Goto, S. Hirano, “Super-resolution through Non-linear

Enhancement filters” (2013) IEEE International Conference on Image Processing, pp. 854-858

[12]. S. Naik and N. Patel, “Single image Super- resolution in Spatial and Wavelet domain” (2013) The

International Journal of Multimedia & its Applications, Vol. 5, No. 4, pp. 23-32.

[13]. H. A. Aly and Eric Dubois, “Image Up-sampling using Total –Variation Regularization with a New

Observation Model” (2005) IEEE Transactions on Image Processing, Vol. 14, No. 10, pp. 1647-1659.

[14]. L. Rudin, S. Osher, E. Fatemi, “Nonlinear total variation based noise removal algorithm” (1992)

Physica D, Vol. 60, pp. 259 – 268.

[15]. A. Chambolle , “An algorithm for total variation minimization and applications” (2004) J. Mathematical

Imaging and Vision, Vol. 20, No. 1, pp. 89-97.

[16]. F. Guichard and F. Malgouyres, “Total variation based interpolation” (1998) Proceedings of Eusipco’98,

pp. 1741-1744.

[17]. J. Hadamard (1952) Lectures on Cauchy’s Problem in Linear Partial Differential Equations, New York:

Dover.

[18]. Michael. K. Ng, H. Shen, E. Y. Lam, L. Zhang, “A Total Variation Regularization based Super-

resolution Reconstruction Algorithm for Digital Video” (2007) EURASIP Journal on Advances in Signal

Processing, Volume 2007, Article ID 74584,2007.

[19]. H. Pandey, P. Swadas, M. Joshi, “A Survey of Techniques and Challenges in Image Super-resolution

Reconstruction” (2013), International Journal of Computer Science and Mobile Computing, Vol. 2,

Issue. 4, pp. 317 -325.

[20]. Wikipedia contributors, 11 April 2014, Total variation denoising,

en.wikipedia.org/wiki/Total_variation_denoising

AUTHORS

Vaishali R. Bagul received her B.E. degree from the Department of Electronics and Telecommunications Engineering, GECA, Aurangabad, India in 2010. She is currently pursuing her M.E. degree from COE, Ambajogai, India. Her interests are image and signal processing. Her research interests are image segmentation, image enhancement, etc.


Varsha M. Jain received her B.E. degree from the Department of Electronics and Telecommunications, WIT, Solapur, India in 1996. She completed her M.Tech at the Department of Electronics Engineering, VJTI, Mumbai, India in 2007. She is currently working as an Associate Professor with the Department of Electronics Engineering, COE, Ambajogai. Her interest areas include MATLAB signal and image processing. Her research interests are VLSI and embedded systems.


DESIGN MODIFICATION AND ANALYSIS OF TWO WHEELER COOLING FINS – A REVIEW

Mohsin A. Ali1 and S. M. Kherde2
1Mechanical Engineering Department, KGIET, Amravati, India
2Professor, Mechanical Engineering Department, KGIET, Amravati, India

ABSTRACT
Engine life and effectiveness can be improved with effective cooling. The cooling mechanism of an air cooled engine depends mostly on the fin design of the cylinder head and block. Insufficient removal of heat from the engine leads to high thermal stresses and lower engine efficiency. The cooling fins allow the wind and air to carry the heat away from the engine. A low rate of heat transfer through the cooling fins is the main problem in this type of cooling. The main aim of this work is to study the various researches done in the past to improve the heat transfer rate of cooling fins by changing the cylinder block fin geometry and climate conditions.

KEYWORDS: cooling fins, Heat transfer, Convection & Thermal Stresses.

I. INTRODUCTION

When fuel is burned in an engine, heat is produced. Additional heat is also generated by friction between the moving parts. Only approximately 30% of the energy released is converted into useful work; the remaining 70% must be removed from the engine to prevent the parts from melting. For this purpose engines have a cooling mechanism: some heavy vehicles use a water-cooling system, while almost all two wheelers use air-cooled engines, since air-cooled engines are the only practical option owing to advantages such as lighter weight and smaller space requirements. The heat generated during combustion in an IC engine should be maintained at a high level to increase thermal efficiency, but to prevent thermal damage some heat must be removed from the engine. In an air-cooled engine, extended surfaces called fins are provided at the periphery of the engine cylinder to increase the heat transfer rate; the analysis of fins is therefore important for increasing the heat transfer rate. Computational Fluid Dynamics (CFD) analyses have shown improvements in fin efficiency from changing the fin geometry, fin pitch, number of fins, fin material and climate conditions.
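For orientation before the review, the standard one-dimensional result for a straight fin of uniform cross-section (a textbook relation, stated here for reference and not taken from the papers reviewed below) shows how the quantities discussed in this review enter the heat transfer rate. For an adiabatic-tip fin of length L,

\[ q_{fin} = \sqrt{h\,P\,k\,A_c}\;\theta_b \tanh(mL), \qquad m = \sqrt{\frac{hP}{kA_c}}, \qquad \theta_b = T_b - T_\infty, \]

where h is the convective heat transfer coefficient, P the fin perimeter, k the fin thermal conductivity, A_c the cross-sectional area, T_b the base temperature and T_∞ the ambient temperature; geometry, material and flow conditions all appear directly.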

II. LITERATURE REVIEW

In the research of J. Ajay Paul and Sagar Chavan Vijay [2], a parametric study of extended fins in the optimization of internal combustion engines, it was found that thicker fins provide better efficiency for high-speed vehicle engines. When the fin thickness increases, the gap between the fins reduces, resulting in swirls being created that help increase the heat transfer. A large number of fins with less thickness is preferable in high-speed vehicles to a small number of thick fins, as it helps induce greater turbulence. [2]

The authors plotted the experimental results in Figure 1, which shows the variation of the heat transfer with respect to velocity. ANSYS Fluent software was used to predict the behaviour of the wind flow and for the analysis. At zero velocity the heat transfer from the 4 mm and 6 mm fins is seen to be the same. When the velocity is increased, the heat transfer increases due to forced convection and also due to the swirl generated between two fins, which induces turbulence and hence higher heat transfer. For a larger fin thickness, the corresponding fin spacing is comparatively small; as a consequence, the generated swirled flow may mingle with the main flow and result in a higher heat transfer performance. [2]


The heat transfer from the 6 mm fins is found to be the highest at high velocities, confirming that for high-speed vehicles thicker fins provide better efficiency through the swirls created in the reduced gap between fins. [2]

Figure 1. Results of Analysis

In the research of J. C. Sanders et al. [3], to find the overall heat transfer coefficient from the barrel, cooling tests were conducted on two cylinders, one with the original steel fins and one with 1-inch spiral copper fins brazed onto the barrel. The copper fins improved the overall heat transfer coefficient from the barrel to the air by 115 percent. They also concluded that, in the range of practical fin dimensions, copper fins having the same weight as the original steel fins give at least 1.8 times the overall heat transfer of the original steel fins.

On the other hand, Kumbhar D. G. et al. [4] studied heat transfer augmentation from a horizontal rectangular fin by triangular perforations, with bases parallel to and towards the fin base, under natural convection using ANSYS. They concluded that the heat transfer rate increases with perforation as compared to fins of similar dimensions without perforation. Perforating the fin enhances the heat dissipation rate and at the same time decreases the expenditure on fin material.

N. Nagarani and K. Mayilsamy [5] performed an experimental heat transfer analysis on annular circular and elliptical fins, analysing the heat transfer rate and efficiency of circular and elliptical annular fins under different environmental conditions. The elliptical fin efficiency is higher than that of the circular fin. If there is a space restriction along one particular direction while the perpendicular direction is relatively unrestricted, elliptical fins can be a good choice. Normally the heat transfer coefficient depends upon the space, time, flow conditions and fluid properties. If the environmental conditions change, the heat transfer coefficient and the efficiency change as well. [5]

To compare the rate of heat transfer of solid and permeable fins, Ashok Tukaram Pise and Umesh Vandeorao Awasarmol [7] conducted an experiment. Permeable fins were formed by modifying the solid rectangular fins of a two-wheeler cylinder block, drilling three inclined holes per fin at one half of the fin length. The solid-fin and permeable-fin blocks were kept in an isolated chamber, and the effectiveness of each fin of these blocks was calculated. The engine cylinder blocks with solid and permeable fins were tested for different heat inputs (75 W, 60 W, 45 W, 30 W, 15 W). It was found that the permeable-fin block improves the average heat transfer rate by about 5.63% and the average heat transfer coefficient by 42.3% as compared to the solid fins, with a 30% reduction in material cost.

In “Optimal Design of an IC engine cylinder fin array using a binary coded genetic algorithm” by G. Raju, Dr. Bhramara Panitapu and S. C. V. Ramana Murty Naidu [8], the study also includes the effect of the spacing between fins on various parameters such as the total surface area, heat transfer coefficient and total heat transfer. The aspect ratios of a single fin and of the corresponding arrays for the two profiles were also determined. Finally, the heat transfer through both arrays was compared on a weight basis. The results show the advantage of the triangular-profile fin array: the heat transfer per unit mass through a triangular fin array is more than that through a rectangular fin array. Therefore triangular fins are preferred over rectangular fins for automobiles, central processing units, aeroplanes, space vehicles, etc., where weight is the main criterion. At wider spacing, shorter fins are preferred over longer fins. The aspect ratio of an optimized fin array is greater than that of a single fin for both the rectangular and the triangular profiles. [8]

R. P. Patil and H. M. Dange [9] conducted CFD and experimental analyses of elliptical fins for the heat transfer parameters, heat transfer coefficient and tube efficiency under forced convection. The experiment was carried out at different air flow rates with varying heat input. The CFD temperature distribution for all cases verifies the experimental results. At an air flow rate of 3.7 m/s, the heat transfer rate decreases as the heat input increases. Also, h is higher above atmospheric temperature and lower below it. At an air flow rate of 3.7 m/s the efficiency increases as the heat input increases.

Magarajan U., Thundil Karuppa Raj R. and Elango T., in “Numerical study on heat transfer of IC engine cooling by extended fins using CFD” [10], numerically calculated the heat release of IC engine cylinder cooling fins with six fins having pitches of 10 mm and 20 mm, using the commercially available CFD tool ANSYS Fluent. The IC engine is initially at 150 °C (423 K) and the heat release from the cylinder is analysed at a wind velocity of 0 km/h. It is observed from the CFD results that it takes 174.08 seconds (pitch = 10 mm) and 163.17 seconds (pitch = 20 mm) for the ethylene glycol domain to cool from 423 K to 393 K. The experimental results show that the heat released by the ethylene glycol through cylinder fins of pitch 10 mm and 20 mm is about 28.5 W and 33.90 W respectively. [10]

Pulkit Agarwal et al. [11] simulated the heat transfer in motorcycle engine fins using CFD analysis. It is observed that when the ambient temperature reduces to a very low value, the result is overcooling and poor efficiency of the engine. They concluded that overcooling affects the engine efficiency because overcooling causes excess fuel consumption. This necessitates reducing the air velocity striking the engine surface to lower the fuel consumption, which can be done by placing a diffuser in front of the engine.

Mr. N. Phani Raja Rao and Mr. T. Vishnu Vardhan, in “Thermal Analysis of Engine Cylinder Fins by Varying Its Geometry and Material” [12], implement the principle of increasing the heat dissipation rate by using the invisible working fluid, namely air. The main aim of the work is to vary the geometry and material. In the study, Aluminium alloy 6061 and magnesium alloy are used and compared with Aluminium alloy A204. Various parameters (the shape and geometry of the fin) are considered: shape (rectangular and circular) and thickness (3 mm and 2.5 mm). By reducing the thickness and also by changing the shape of the fin to circular, the weight of the fin body reduces, thereby increasing the heat transfer rate and the efficiency of the fin. The weight of the fin body is also reduced when magnesium alloy is used. The results show that a circular fin in Aluminium alloy 6061 is better, since the heat transfer rate, efficiency and effectiveness of the fin are higher. By using circular fins, the weight of the fin body reduces compared with the existing engine cylinder fins.

S. M. Wange and R. M. Metkar [13] have done experimental and computational analyses of fin arrays and shown that the heat transfer coefficient is higher in a notched fin array than in a fin array without notches. The geometric parameters of the fin affect its performance, so proper selection of geometric parameters such as the fin length, fin height, spacing between fins and depth of notch is needed.

In “Heat Transfer Augmentation of Air Cooled 4-stroke SI Engine through Fins – A Review Paper” [14], the author studied a number of research papers and concluded that the process by which heat transfer takes place through engine fins must frequently be improved. Fins are extended surfaces used to cool various structures via the process of convection. Generally, heat transfer by fins is limited by the design of the system, but it may still be enhanced by modifying certain design parameters of the fins. The aim of that paper is therefore to study, from different literature surveys, how heat transfer through extended surfaces (fins) and the heat transfer coefficient are affected by changing the cross-section, climatic conditions, materials, etc. It is noted that the heat transfer of a fin can be augmented by modifying the fin pitch, geometry, shape, material and wind velocity. As per the available literature surveyed, little work is available to date on wavy fin geometry in the current research area, so there is scope for research on the heat transfer of wavy fins on the cylinder head and block assembly of a 4-stroke SI engine.



2.1 Examples of Commercially Available Engines
Figure 2. Examples of engines (courtesy of KGN Automobiles, Gachibowli, Hyderabad, AP, India)

III. CONCLUSION

The summary of the present literature review is as follows:
1. The design of the fin plays an important role in heat transfer. The fin geometry and cross-sectional area affect the heat transfer coefficient; there is scope for improvement in the heat transfer of an air-cooled engine cylinder fin if the mounted fin's shape is varied from the conventional one.
2. From all the research and experiments covered in this paper, it can be concluded that the contact time of the air flowing over the fin is also an important factor in the heat transfer rate. If we can increase the turbulence of the air by changing the design and geometry of the fins, the rate of heat transfer will increase; it is found that curved and zig-zag fin shaped cylinder blocks can be used to increase the heat transfer from the fins by creating turbulence in the oncoming air. Improvements in heat transfer with the new curved and zig-zag fin designs can be compared with the conventional one by CFD analysis (ANSYS Fluent).

REFERENCES

[1]. Shri. N. V. Hargude and Dr. S. M. Sawant, “Experimental Investigation of Four Stroke S.I. Engine using Fuel Energizer for Improved Performance and Reduced Emissions”, International Journal of Mechanical Engineering & Technology (IJMET), Volume 3, Issue 1, 2012, pp. 244-257, ISSN Print: 0976-6340, ISSN Online: 0976-6359.
[2]. J. Ajay Paul, Sagar Chavan Vijay, Magarajan & R. Thundil Karuppa Raj, “Experimental and Parametric Study of Extended Fins In The Optimization of Internal Combustion Engine Cooling Using CFD”, International Journal of Applied Research in Mechanical Engineering.

[3]. J.C.Sanders, et al. Cooling test of an air-cooled engine cylinder with copper fins on the barrel, NACA

Report E-103

[4]. D.G.Kumbhar, et al. (2009). Finite Element Analysis and Experimental Study of Convective Heat Transfer

Augmentation from Horizontal Rectangular Fin by Triangular Perforations. Proc. Of the International

Conference on Advances in Mechanical Engineering.

[5].N.Nagarani and K. Mayilsamy (2010). "EXPERIMENTAL HEAT TRANSFER ANALYSIS ON

ANNULAR CIRCULAR AND ELLIPTICAL FINS." International Journal of Engineering Science and

Technology 2(7): 2839-2845.


[6]. S.S.Chandrakant, et al. (2013). "Numerical and Experimental Analysis of Heat Transfer through Various

Types of Fin Profiles by Forced Convection." International Journal of Engineering Research & Technology

(IJERT)

[7]. A. T. Pise and U. V. Awasarmol (2010). "Investigation of Enhancement of Natural Convection Heat

Transfer from Engine Cylinder with Permeable Fins." International Journal of Mechanical Engineering &

Technology (IJMET) 1(1): 238-247.

[8]. G.Raju, Dr. BhramaraPanitapu, S. C. V. RamanaMurty Naidu. “Optimal Design of an I C engine cylinder

fin array using a binary coded genetic algorithm”. International journal of Modern Engineering Research. ISSN

2249-6645,Vol. 2 Issue.6, Nov-Dec.(2012),pp.4516-4520

[9]. R.P. Patil and P. H. M. Dange (2013). "Experimental and Computational Fluid Dynamics

Heat Transfer Analysis on Elliptical Fin by Forced Convection." International Journal of Engineering

Research & Technology (IJERT) 2(8).

[10]. Magarajan U., Thundilkaruppa Raj R. and Elango T., “Numerical Study on Heat Transfer of

Internal Combustion Engine Cooling by Extended Fins Using CFD”, Research Journal of Recent Sciences ISSN

2277-2502 Vol. 1(6), pp.32-37, June (2012).

[11]. P. Agarwal, et al. (2011). Heat Transfer Simulation by CFD from Fins of an Air Cooled Motorcycle

Engine under Varying ClimaticConditions. Proceedings of the World Congress on Engineering.

[12]. Mr. N. Phani Raja Rao, Mr. T. Vishnu Vardhan. “Thermal Analysis Of Engine Cylinder Fins By Varying

Its Geometry And Material.” International journal of Engineering Research & Technology.ISSN:2278-

0181,Vol. 2 Issue 8, August(2013)

[13]. S. Wange and R. Metkar (2013). "Computational Analysis of Inverted Notched Fin Arrays Dissipating

Heat by Natural Convection." International Journal of Engineering and Innovative Technology (IJEIT) 2(11)

[14]. Heat Transfer Augmentation of Air Cooled 4 stroke SI Engine through Fins – A Review Paper. International

Journal of Recent Development in Engineering and Technology. ISSN 2347 – 6435, Volume 2, Issue 1, January

(2014)

[15]. N.Nagarani and K. Mayilsamy, Experimental heat transfer analysis on annular circular and elliptical fins.”

International Journal of Engineering Science and Technology 2(7): 2839-2845

[16]. Islam Md. Didarul, Oyakawa Kenyu, Yaga Minoru and Senaha Izuru, “Study on heat transfer and fluid

flow characteristics with short rectangular plate fin of different pattern” Experimental Thermal and Fluid

Science, Volume 31, Issue 4, February 2007

[17]. P. R. Kulkarni, “Natural Convection heat transfer from horizontal rectangular fin arrays with triangular

notch at the center.” Paper presented at NCSRT-2005. (Nov 18-19, 2005), Pg. No.: 241-244.

ABOUT AUTHOR

Mohsin A. Ali received his B.Tech (Mechanical) degree from GCOE, Amravati, and is pursuing an M.E. (CAD/CAM) from SGBA University, Amravati.


VIRTUAL WIRELESS KEYBOARD SYSTEM WITH CO-ORDINATE MAPPING

Souvik Roy, Ajay Kumar Singh, Aman Mittal, Kunal Thakral
Department of Electronics and Communication Engineering, Guru Gobind Singh Indraprastha University, New Delhi, India

ABSTRACT
This paper presents a more efficient algorithm to detect finger strokes on a keyboard layout projected onto any flat, non-reflecting surface. The virtual keyboard consists of a variable-intensity projector for projecting the keyboard layout; a camera with an infra-red filter for capturing only objects reflecting light at infra-red wavelengths; an infra-red diode for object detection; and a photo-diode with simplified circuitry to switch the keystroke detection and keyboard layout projection on and off. The camera is connected to a PC or laptop through a wireless connection operating at the IEEE-standard 2.4 GHz frequency. An image processing algorithm, designed with open-source tools, extracts the keystroke on the surface and displays the exact key on the screen. The integration of the components with software designed to run on any operating system, even with a lower-end processor, gives the expected keystroke. A comparison algorithm calculates and checks the nearest value suitable for key injection into the system API. The height of the device has been kept as low as possible so that it does not interrupt the view of the display screen of the laptop or PC. The present work can be upgraded further with more gesture-based features that control the virtual keyboard device and with an extra surface for a virtual mouse option.

KEYWORDS: Keyboard, Object Detection, Finger Detection, Camera, Infra-Red Filter, Frame Capturing

I. INTRODUCTION

There is an increasing demand for switching from the old physical environment, where hardware wear-out has been a problem due to continuous use, to a new, virtual environment. Virtualization [1] of the keyboard and display has been pursued for decades, but the precision of the functionality has remained a problem. The virtual keyboard is the most important part of this virtualization, as the keyboard cannot be eliminated from the computer. Hence there is a need to upgrade the method of keystroke detection to make it more precise and adapted to the human tendency of pressing keys.

II. DESIGN

The design of the virtual keyboard system involves minimal use of hardware devices and maximum dependence on the software algorithm.

2.1. System design

The virtual keyboard system is designed keeping in mind the maximum efficiency of the algorithm [2] and a low error rate, so that the user does not feel a response rate different from that of a physical keyboard. The design consists of the following modules:

CMOS wireless camera

Infra-red band pass filter

Keyboard Projector with intensity control

Infra-red laser diode

Double Concave lens

Diffraction grating

Holographic film


Current controller circuit

Software for Key-stroke detection

LM-1117 voltage convertor

All these modules are used to design a proper and highly efficient system.

2.2. Hardware design

The hardware is the most important part of the system, as an exact configuration of components is required to simplify the algorithm and reduce the error rate. The first sensing part of the system, the CMOS [3] wireless camera, is integrated with an infra-red band pass filter which limits the camera's sensing capacity to infra-red light only. Limiting the sensing capacity of the camera is useful for detecting a finger touching the virtually projected keyboard layout, as infra-red is the least interfering light in any environment that could otherwise generate errors and make the algorithm more complex. The second part of the design is the keyboard projector [4], which is used to project a fixed keyboard design with characters of two different languages printed on it. This projector projects a visible laser beam passing through a biconcave lens [5] and a holographic [6] keyboard layout plate, thus projecting the shadow of the keyboard. The intensity of the laser diode is controlled by limiting the current flowing to it; the current limiting is handled by a current controlling circuit which uses a variable resistor to change the intensity of the laser diode. The third part of the design is the infra-red laser diode, which is used to generate a layer of linear infra-red light parallel to the surface on which the keyboard layout is projected. Since the infra-red laser cannot emit a linearly diffracted [7] beam covering the whole keyboard layout without diverging, the laser light is passed through a biconcave lens, adjusted at such a distance that the emerging parallel laser beam passes through a diffraction plate, which spreads the beam linearly so that it hovers over the whole surface of the keyboard layout.

2.3. Software/Algorithm Design

The design of the software starts from the image processing algorithm, which is the most important part of keystroke detection. The camera is only able to detect objects blocking infra-red light from the infra-red laser [16] beam plane, or objects emitting infra-red light within the camera's field of view. The process starts with capturing image frames from the CMOS camera and transmitting them through the Wi-Fi [8] network to the laptop or PC using the frame capturing code. The capture is arranged so that the generated image is the mirror image of the original. Image frames from the CMOS camera consist of infra-red images, as the camera lens is integrated with the infra-red band pass filter. The image contains a lot of noise and high-frequency regions from which object edges must be extracted, so two steps are performed before keystroke detection. First, a Gaussian [9] filter is used to remove the white noise that is present across the whole frequency range of the image spectrum [10]; its response has no sharp or abrupt change across the frequency band, thereby increasing the accuracy of detecting the sharp edges of objects, which generally lie in the high-frequency region of the image histogram. Second, a threshold function is used to omit the non-object regions and reveal the regions of objects touching the surface. Keystroke detection then starts with point-of-interest extraction, where the centroid of each touching point is calculated using the cvFindContour [11] function. This function provides a co-ordinate that is the average area of the fingertip. The extracted co-ordinate value must be matched to the actual key pressed at that location, which is done through a predefined [12] keyboard whose fixed key co-ordinate locations are stored in a one-dimensional array. The mapping of the generated co-ordinate onto the preserved keyboard co-ordinates is done by comparing co-ordinates to find the minimum displacement between them; this comparison is computed very efficiently, as it uses a tree [13] structure. After the comparison of co-ordinates, the nearest key value is found and injected into the operating system through the API available for Windows systems, which allows injecting the nearest key found. Finally, the pressed key is identified.
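A minimal sketch of this detection chain in Python with OpenCV 4 (cv2.findContours is the modern counterpart of the cvFindContour call above; the key map, kernel size, threshold value and the linear nearest-neighbour search in place of the tree-based comparison are all illustrative assumptions):

    import cv2

    # Hypothetical key map: centre (x, y) of each key in the camera frame,
    # standing in for the pre-defined one-dimensional co-ordinate array.
    KEY_CENTRES = {'A': (40, 120), 'S': (80, 120), 'D': (120, 120)}

    def detect_keys(ir_frame):
        # 1. Gaussian filter removes broadband white noise before edge extraction
        blurred = cv2.GaussianBlur(ir_frame, (5, 5), 0)
        # 2. Threshold keeps only the bright blobs of fingers crossing the IR plane
        _, mask = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)
        # 3. Contours of the touching blobs; the centroid of each is a stroke point
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        pressed = []
        for c in contours:
            m = cv2.moments(c)
            if m['m00'] == 0:
                continue
            cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
            # 4. Map the centroid to the stored key with minimum displacement
            key = min(KEY_CENTRES, key=lambda k: (KEY_CENTRES[k][0] - cx) ** 2
                                               + (KEY_CENTRES[k][1] - cy) ** 2)
            pressed.append(key)
        return pressed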

III. IMPLEMENTATION

The virtual keyboard uses an image processing algorithm in OpenCV to decrease the processing time. It gives the user a way to connect with their laptop or PC through a virtual environment.


3.1. Hardware

The components described above are used to build a prototype that matches the requirements of the software design. The positions of the three main components are kept at a fixed height and angle such that the camera can capture the whole keyboard image. The height between the projector and the camera is kept at 3 cm, which keeps the linear relationship between the camera view and the projector constant. The camera uses a wide-angle lens for a complete view of the keyboard layout at minimum height. The heights of both the projector and the camera are fixed according to the predefined keyboard key locations stored in the one-dimensional array. The single laser-plane generating component is kept at the lowest section, touching the surface, so as to minimize the distance between the surface and the laser plane. The keyboard-generating laser diode is connected to the current controlling circuit, whose potentiometer is used to control the intensity of the keyboard layout. The intensity of the keyboard layout is independent of the image processing algorithm. The whole hardware is powered by a rechargeable 9 V battery.

Figure 1. Working prototype

3.2. Algorithm

The algorithm must be integrated with the hardware for the hardware to work properly. The Wi-Fi enabled camera is connected to the laptop or PC for the transmission of video frames. The rate of sending video frames matches the available transfer bandwidth, allowing 30 fps to be transferred easily. The software is completed by writing the code in OpenCV and compiling it with the g++ compiler, generating an object file, a configuration file and an application file. The application file performs the real-time image processing on the data sent from the Wi-Fi camera. When the application starts, an option pops up for selecting the camera and then checking that a proper connection with the camera has been established. The real-time video frames [14] coming from the wireless camera are processed continuously; if any key event is found, the API informs the OS to print the corresponding key. The application runs in the background to make the user feel as if using an actual keyboard.
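A hedged sketch of this capture-and-process loop in Python/OpenCV (a stand-in for the compiled application described above; camera index 0 and the detect_keys helper from the sketch in Section 2.3 are assumptions):

    import cv2

    cap = cv2.VideoCapture(0)          # the selected camera device
    while cap.isOpened():
        ok, frame = cap.read()         # ~30 fps frames from the wireless camera
        if not ok:
            break
        frame = cv2.flip(frame, 1)     # mirror the image, as required above
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for key in detect_keys(gray):  # see the sketch in Section 2.3
            print(key)                 # stand-in for injecting the key via the OS API
    cap.release()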


Figure 2. Key Stroke Identification process

3.3. Wireless Communication

The Wi-Fi CMOS camera is connected to the laptop or PC simply by using the Wi-Fi transmitter at 2.4 GHz to establish a connection: the CMOS camera acts as the transmitter and the laptop or PC Wi-Fi connects to that network.

3.4. Efficiency

The efficiency of the algorithm is found to be 90%. It is calculated using test cases in which three different paragraphs are typed and their accuracy is calculated. At the same time, the keystroke location mapping efficiency is found to be 80% in giving the correct result.

Figure 3. Probability of correct key stroke event v/s Area of Key

The probability of a correct keystroke event within a key area of the projected keyboard layout slopes down from 1 to 0 as the finger stroke position shifts from the centre to the boundary of the key.

3.5. New Feature


The virtual keyboard has an auto switch-off feature, which switches off the keyboard projector laser diode until the user's finger comes near the keyboard region. A photodiode [15] placed just above the laser plane continuously checks the signal; if no signal is received for 300 seconds, the keyboard layout projector is switched off to save power. It switches on again automatically when a hand hovers over the keyboard layout.

3.6. Advantage
The most important advantage of this system is that it makes the system independent of on-board processing, which consumes most of the battery power; moreover, the independence of each module makes it easier to upgrade to new features. The wireless feature makes it useful for working from a distant location while virtually controlling the laptop or PC.

IV. CONCLUSIONS

The virtual keyboard system with keyboard co-ordinate mapping has been implemented with minimal use of complex hardware and an algorithm that provides good results without much complexity. More gesture-based features can be added to bring it even closer to a fully virtual device.

REFERENCES

[1]. Celluon keyboard (http://www.gizmag.com/celluon-epic-laser-keyboard-review/28342/) An

Introduction to Virtualization by Amit http://www.kernelthread.com/publications/virtualization/

[2]. www.wikipedia.org (http://en.wikipedia.org/wiki/Laser_projection_keyboard) Yael Ben-Haim & Elad

Tom-Tov “A Streaming Parallel Decision Tree Algorithm”IBM Haifa Research Lab, Journal of

Machine Learning Research 11 (2010) 849-872

[3]. www.robopeak.com/blog/ Igor Brouk, Kamal Alameh “Design and Characterization of CMOS/SOI

Image Sensors”, IEEE TRANSACTIONS ON ELECTRON DEVICES, VOL. 54, NO. 3, MARCH

2007

[4]. www.dfrobot.com

http://www.dfrobot.com/index.php?route=product/product&product_id=931#.U0rB4vTIuuI)

[5]. BI-CONCAVE LENS (BCC), http://www.lambda.cc/bi-concave-lens-bcc/

[6]. http://www.slideshare.net/priyalbhagat/laser-protection-virtual-keyboard

[7]. Kolsch, M. and Turk, M.” Keyboards without Keyboards: A survey of Virtual Keyboards,” n. pag.

University of California, Nov. 05, 2003 (http://www.create.ucsb.edu/sims/PDFs/Koelsch_an

d_Turk_SIMS.pdf).

[8]. Lawrence Erlbaum Associates Inc, “Human Computer Interaction,” pp 89-

129,2002(http://www.almaden.ibm.com/u/zhai/papers/Zhai HunterSmithHCIGalley.pdf).

[9]. Alpern, M., “Projection Keyboards,” Oct. 28, 2003 (http://www.alpern.org/weblog/stories/2003/01/09/

projectionKeyboards.html).

[10]. Baxes, G. A., “Digital Image Processing – Principles and Applications,” America, 1994.
[11]. Gonzalez, R. and Woods, R., “Digital Image Processing,” India, 2002.

[12]. Wijewantha N. S.,” VistaKey: A Keyboard Without A Keyboard – A Type Of Virtual Keyboard,”

Final year project thesis, Informatics Institute of Technology, Wellawatta, Sri Lanka, April 2004.

[13]. Hirsh, L., “Virtual Keyboard Replicates Real Typing,” Oct. 23, 2003

(http://www.wirelessnewsfactor.com/perl/story/147 62.html).

[14]. Wrolstad, J., “Laser Device Projects Full-Size Keyboard for Handhelds,” Oct. 28, 2003

(http://www.wirelessnewsfactor.com/perl/story/177 56.html).

[15]. Khar, A., “Virtual Typing,” Oct. 28, 2003 (http://www.pcquest.com/content/technology/10303

0402.asp).

[16]. Aid Inc, “LightKey Visualize A Virtual Keyboard. One With No Moving Parts,” Nov. 12,

2003(http://www.advancedinput.com/AIDpdfDownloads/AIDLightKey.pdf)


AUTHORS

Aman Mittal was born in 1992. He is pursuing a bachelor's degree in Electronics and Communication Engineering from Guru Gobind Singh Indraprastha University, New Delhi. His main research interests include robotic systems, computer vision and system integration. He has participated in several projects and is currently working on a generic device.

Kunal Thakral was born in 1992. He is pursuing a bachelor's degree in Electronics and Communication Engineering from Guru Gobind Singh Indraprastha University, New Delhi. His main research interests include robotic systems, micro-controllers and gesture control. He has participated in several projects and is currently working on a generic device and UART.

Souvik Roy was born in 1992. He is pursuing a bachelor's degree in Electronics and Communication Engineering from Guru Gobind Singh Indraprastha University, New Delhi. His main research interests include robotic systems, computer vision and machine learning. He has filed a patent on a new technique to save water during irrigation. He has implemented several ideas as working prototypes and is currently working on Freescale Semiconductor's smart car coding competition using an ARM microcontroller.

Ajay Kumar Singh was born in 1992. He is pursuing a bachelor's degree in Electronics and Communication Engineering from Guru Gobind Singh Indraprastha University, New Delhi. His main research interests include robotic systems, UAVs, computer vision, embedded systems and system integration. He has participated in several projects and is currently working on a generic device.


SECURE KEY MANAGEMENT IN AD-HOC NETWORK: A REVIEW

Anju Chahal1 and Anuj Kumar2, Auradha2
1Department of Computer Science Engineering, AMITY University, Haryana, India
2Assistant Professor, AMITY University, Haryana, India

ABSTRACT
An ad hoc network is a decentralized type of network; it is called ad hoc because it does not have a pre-existing infrastructure. An ad hoc network is a common wireless network whose nodes can communicate with each other without any centralized administration or pre-existing infrastructure. Due to the inconstant nature of the wireless medium, data transfer is a major problem in ad hoc networks, which lack security and reliability of data. Cryptographic techniques are often used for secure data transmission in wireless networks. Most cryptographic techniques are symmetric or asymmetric, depending on the way they use keys. However, any cryptographic technique is good for nothing if the key management is weak. Various key management schemes have been proposed for ad hoc networks. In this survey, we present a complete study of various key management techniques, in order to find an efficient key management scheme for secure and reliable data transmission in ad hoc networks.

KEYWORDS: Ad-Hoc network, Security issues, Key Management.

I. INTRODUCTION

An ad hoc network is a collection of wireless nodes that communicate with each other without any centralized administration (node). An ad hoc network is akin to a peer-to-peer network, where there is no fixed infrastructure (i.e., the network is formed on demand) and the network topology is fully dynamic. There is no central authority, and the ad hoc network is self-organizing and adaptive. The nodes forming an ad hoc network are often low-energy, small portable devices. An ad hoc network may differ from other networks: in a computer science classroom, for example, an ad hoc network could form between the students' PDAs and the workstation of the teacher [1].
An ad hoc network is dynamic in nature. Consider, for example, the 8 nodes shown in Figure 1. They are connected to the other nodes within their individual ranges. Now suppose node 3 moves from its present position and comes near node 7. Its previous link to node 4 is then broken, and it forms a new link through its new neighbour, node 7. This scenario shows that an ad hoc network is dynamic in nature.

II. SECURITY ISSUES

2.1 Security Goal for Ad-Hoc Network [2]

2.1.1 Confidentiality: Confidentiality ensures that only authorized persons can have access to certain information. In applications that use ad hoc networks for classified information, such as military operations, certain information can be sensitive, so disclosure of such information can come at a high price.


Figure 1. Ad-Hoc Network

2.1.2 Availability: Availability ensures that a requested service is available when requested. Availability thus opposes Denial of Service (DoS): with a denial of service attack, an adversary can bring down important services such as key management. So availability is an important security goal that should be achieved with any kind of ad hoc network application.
2.1.3 Integrity: Integrity implies that a message should be unaltered during its transmission from source to destination. A message can be modified unintentionally during transmission because of radio propagation, and a malicious attacker can also modify a message intentionally during its transmission.
2.1.4 Authentication: Authentication is the process of identification by which a receiving entity is assured that the message it receives comes from an authorized source. In an ad hoc network, a mobile node is vulnerable to compromise; without proper authentication an attacker can impersonate an authenticated user and thus gain full control of the entire network.
2.1.5 Non Repudiation: Non repudiation implies that once a message has been sent, the sender cannot deny having sent it, nor the receiver having received it. It is an important security service by which a compromised node can be detected and isolated.

2.2 Security Attacks [3]
2.2.1 Passive Attack: In passive attacks an originator captures the data without altering it. The attacker does not modify the data and does not add any additional data. The main goal of the attacker is to obtain the information being transmitted.
2.2.2 Active Attack: In active attacks an attacker actively participates in distorting the normal operation of the network services. An attacker can mount an active attack by modifying packets, modifying the information, or giving false information.
Active attacks can be divided into two major groups:
a) Internal attacks: come from compromised nodes that were once an authorized part of the network. Since they are already part of the network as authorized nodes, they are much harder to detect than external attacks.
b) External attacks: are carried out by nodes that are not an authorized part of the network.

2.3 Key Issues and Challenges [4]


2.3.1 Link Level Security: Nodes in an ad hoc network communicate by wireless links, which are highly vulnerable to various active and passive attacks. The absence of security mechanisms such as firewalls and access control leaves a node in an ad hoc network open to attack from any direction. Attacks like impersonating a node, traffic redirection and denial of service make it hard to achieve the prime security goals.
2.3.2 Secure Routing: Much of the research in the area of ad hoc networking aims to obtain a secure routing protocol for mobile ad hoc networks, which is hard to achieve. In most routing protocols for mobile ad hoc networks, intermediate nodes act as disseminators. So if a node is compromised, it can generate false routing information, spread stale routing information, or insert new information among the existing information, which will finally break down the whole network. Sometimes a node can act selfishly to save its own battery power. A compromised node can also send malicious information to other nodes, which in turn attack other nodes in the network.
2.3.3 Key Management: Key management is one of the prime requirements in any secure network. Since ad hoc networks have no fixed infrastructure, no central authority, and no guaranteed connectivity, key management becomes a key issue for securing them. Key management issues are discussed in detail below.
2.3.4 Dynamic Mobility: One of the major characteristics of an ad hoc network is that its nodes are dynamic in nature. Nodes can join or leave the ad hoc network at any time, and thus there is no guaranteed connectivity between nodes. Static security mechanisms are not always suitable for ad hoc networks. This property therefore makes it difficult for researchers to come up with secure key management and routing protocols.

III. KEY MANAGEMENT IN AD HOC NETWORK

Cryptography is a powerful tool for achieving security, and it relies on a secure, robust and efficient key management subsystem. Key management is a basic part of the security of an ad hoc network, and several symmetric and asymmetric key management schemes have been proposed for it. Key management covers key generation, key distribution, key storage, updating of keys, revocation, deletion, archiving, and the secure use of keys.

The key management schemes for ad hoc networks fall into four families, summarized in Figure 2:
– Symmetric key management schemes: 1. DKPS, 2. PKIE, 3. INF
– Asymmetric key management schemes: 1. SRP, 2. URSA, 3. INF, 4. SOKM, 5. SEKM, 6. Z&H, 7. SOKS, 8. ID-C, 9. Identity based, 10. Three level key
– Group key management schemes: 1. SGEK, 2. PGSK
– Hybrid composite key management services: 1. Cluster based composite key, 2. Hybrid zone based scheme

Figure 2: Key Management Schemes in Ad hoc Networks


3.1 Symmetric Key Management in Ad hoc Network

In symmetric key management the same key is used by the sender and the receiver; this key is used for encrypting the data as well as for decrypting it. If n nodes want to communicate in an ad hoc network, k keys are required, where k = n(n-1)/2 (a quick check of this count follows below). In public key cryptography, two keys are used: one private key and one public key. Different keys are used for encryption and decryption: the private key is used for decryption, while the public key is used for encryption and is available to the public. In each communication a new pair of public and private keys is created. It requires fewer keys compared with symmetric key cryptography. Symmetric keys are used for long messages. We now discuss some of the symmetric key management schemes for ad hoc networks.
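Before that, a quick numerical check of the key-count formula above (a worked example, not from the surveyed literature):

    # pairwise symmetric keys needed for n nodes: k = n(n-1)/2
    for n in (4, 10, 50):
        print(n, 'nodes ->', n * (n - 1) // 2, 'keys')
    # 4 nodes -> 6 keys, 10 nodes -> 45 keys, 50 nodes -> 1225 keys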

3.1.1. Distributed Key Pre-distribution Scheme (DKPS): DKPS consists of three important phases:
a) Distributed Key Selection (DKS) – in this phase every node takes a random key from the universal set by using the exclusive property.
b) Secure Shared-key Discovery (SSD) – the second phase of DKPS, in which every node establishes a shared key with another node. A node cannot find out which keys on the ring are in common with which node. This method does not by itself provide security, but it is easy to evaluate, since eavesdropping can occur in the DKS phase.
c) Key Exclusion Property Testing (KEPT) – the last phase of DKPS. A matrix with binary values is used to represent the relationship between the mobile nodes' keys and the shared keys. The KEPT phase tests that all keys of the mobile nodes fulfil the exclusive property of the CFF.
The features of DKPS are that no TTP is needed, that DKPS needs less storage compared to the pair-wise key agreement approach, and that the scheme is more efficient compared to group key agreement [5].

3.1.2. Peer Intermediaries for Key Establishment (PIKE): This scheme uses sensor nodes as intermediaries to establish shared keys. PIKE is a symmetric key agreement scheme that uses a unique secret key within a set of nodes. The model uses the concept of random key pre-distribution; in the 2-D case every mobile node shares a unique secret key with each of O(√n) nodes in the horizontal and vertical dimensions. The scheme can be extended to 3-D or any other dimension. The features of this model are good security services and fair scalability [6].

3.1.3. Key Infection (INF): This model is simple, and every mobile node participates equally in the key establishment process. The INF model has no need for collaborative effort: each node acts as a trusted component and broadcasts its symmetric key. The model has weak security services, but INF has low storage cost, low encryption cost and low operation cost. It has fair scalability, with the problem of late entry of mobile nodes [7].

3.2 Asymmetric key management in Ad hoc Network

Asymmetric keys use a two-part key: public and private. Each recipient has a private key that is kept secret and a public key that is published for everyone. The sender obtains the recipient's public key and uses it to encrypt the message; the recipient uses the private key to decrypt the message and never publishes or transmits the private key to anyone. Thus the private key is never passed over the network and remains invulnerable. This reduces the risk of data loss and improves compliance management when the private keys are properly managed.
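A minimal sketch of this asymmetric flow in Python, using the third-party cryptography package (an illustration of the encrypt-with-public/decrypt-with-private pattern, not a scheme from the surveyed literature):

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Recipient generates a key pair; only the public key is shared.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b'route update', oaep)   # sender side
    plaintext = private_key.decrypt(ciphertext, oaep)        # recipient side
    assert plaintext == b'route update'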

3.2.1 Secure Routing Protocol (SRP): This scheme is composed of three node types and an administrative authority which acts as the dealer in the model. The dealer is the entity which provides the initial certificates to the mobile nodes. The three node types are: 1. client node, 2. server node, 3. combiner node. Each node type plays an important task in the SRP model.


3.2.2 Ubiquitous and Robust Access Control (URSA): URSA is efficient and provides reliable availability, with the feature of encrypted local communication. The model uses an efficient threshold scheme to distribute the certificate (RSA certificate) signing keys to all nodes. Each mobile node of the ad hoc network updates its certificates periodically. The scheme suffers from communication delay and search failure, which degrade the system security. To protect the network from DoS attacks and from compromise of the signing key, URSA uses verifiable and proactive secret sharing mechanisms [10].

3.2.3 Mobile Certificate Authority (MOCA): MOCA nodes are mobile nodes with greater computational power that are physically more secure; when all nodes are equally equipped, the MOCA nodes are selected randomly. The scheme is decentralized, and the services of a CA are distributed over the MOCA nodes [11]. A node can locate k+α MOCA nodes either randomly or through the shortest paths in its route cache. The critical question is how nodes can discover those paths securely, since most secure routing protocols themselves depend on an established key service [11].

3.2.4 Partially Distributed Threshold CA Scheme: This scheme was introduced by Zhou, L. and Haas, Z. in 1999. When the mobile Ad-Hoc network is constructed, the scheme distributes the CA functionality in a threshold fashion. This asymmetric key management scheme provides security services such as off-line authentication, good intrusion tolerance, and trust management by the CA (certification authority). The keys generated by this model are accepted by self-organized networks and by the partially distributed threshold CA. The scheme offers scalability of the CRL (certificate revocation list) and of certification [12].

3.2.5 Self-Organized Key Scheme (SOKS): In a self-organized network each mobile node acts as its own CA. SOKS was introduced by Capkun, S., Buttyan, L., and Hubaux, J.-P. in 2003. It has poor scalability and poor resource efficiency, but it provides off-line authentication and limited intrusion-tolerance security services. SOKS incurs many intermediate encryption operations and a high storage cost [13].

3.2.6 Key Distribution Technique (ID-C): In this scheme the nodes create or initialize the Ad-Hoc network using a threshold private key generator based on an identity-based scheme. The generated keys are accepted by the self-organized network. This asymmetric key management scheme provides security services such as off-line authentication, trust management and intrusion tolerance. Scalability is provided through an ID revocation list, with good resource efficiency. The scheme has medium storage cost, operation count and encryption load [14].

3.2.7 Identity-Based Asymmetric Key Management Scheme: This secured ID-based key management scheme for Ad-Hoc networks allows nodes to derive their public keys directly from their known network identities together with some other common information. The scheme provides an online Certification Authority (PKI) to share a secret key. It also provides end-to-end authentication and enables a mobile user to verify the authenticity of the user at the peer node. The significant advantage of the solution is that users need not generate their own public keys and then distribute these keys throughout the network, which removes a major security problem. The scheme solves the security problem in the Ad-Hoc network and is also suitable for application to other wired and wireless networks [15].

3.2.8 Three Level Key Management Scheme: A secure and highly efficient three-level key management scheme for Ad-Hoc networks was proposed by Wan An Xiong and Yao Huan Gong in 2011. To achieve three levels of security in Ad-Hoc networks, the model combines ID-based cryptography with threshold secret sharing, Elliptic Curve Cryptography (ECC) and bilinear pairing computation. ECC gives mobile nodes short keys with a high security level: a 160-bit ECC key offers strength equivalent to 1024-bit RSA. Key generation and key distribution, and their protection against adversary attacks, are handled by a (t, n) threshold secret sharing algorithm. The pairing technology provides confidentiality and authentication with less computational cost and reduced communication overhead [16].


3.3. Group Key Management Scheme in Ad-Hoc Network

A group key in cryptography is a single key assigned to one group of nodes in the Ad-Hoc network. Creating a group key means creating and distributing a secret among the group members. There are three main categories of group key protocol:

a) Centralized, in which the group is controlled by a single entity.

b) Distributed, in which the group members, or any mobile node that joins the group, are equally responsible for creating and distributing the group key.

c) Decentralized, in which more than one entity is responsible for creating and distributing the group key.

Let us discuss some important group key management schemes in Ad-Hoc networks.

3.3.1 Simple and Efficient Group Key Management (SEGK): This scheme presents a reliable double multicast tree formation and maintenance protocol which ensures that all group members are covered. The initialization process is started by the group coordinator by sending a join message into the Ad-Hoc network, and the computation cost is directly proportional to the number of nodes. In the SEGK model any mobile node or group member can join and leave the network, and the group key is updated frequently to ensure backward and forward security. Two detection methods are described in the SEGK model [17]:

a) Tree links – when node mobility is not significant, detection is done through the tree links.

b) Periodic flooding of control messages – this method is used in high-mobility environments.

3.3.2 Private Group Signature Key (PGSK): Group signatures, proposed in [18], provide anonymity for signers. Any member of the group can sign messages, but the resulting signature keeps the identity of the signer secret. In some systems a third party can trace the signature, or undo its anonymity, using a special trapdoor. Some systems support revocation, where group membership can be selectively disabled without affecting the signing ability of unrevoked members. Currently the most efficient constructions are based on the Strong-RSA assumption. A private group signature key is generated by a key server for each node in the network, which ensures full anonymity: a signature does not reveal the signer's identity, yet everyone can verify its validity.

3.4 Hybrid Key Management Scheme in Ad-Hoc Network

Hybrid or composite keys are keys made from a combination of two or more keys, possibly mixing symmetric and asymmetric keys. Let us discuss some of the important hybrid key management schemes in Ad-Hoc networks.

3.4.1 Cluster Based Composite Key Management: This scheme combines the concepts of an off-line CA, mobile agents, hierarchical clustering and partially distributed key management. The public keys of the members are maintained by the cluster head, which reduces the storage problem of a PKI. The cluster head's public key is computed on the basis of the current trust value and the old public key. Using a timestamp in the key number, the key renewal process can be carried out easily. The scheme supports network extensibility through hierarchical clustering, and the model saves network bandwidth and storage space [19].

3.4.2 Zone-Based Key Management Scheme: This scheme uses the ZRP (Zone Routing Protocol) proposed in [20]; in this model a zone is defined for each mobile node, and each node is allocated a predefined number that depends on the distance in hops. A node uses symmetric key management only inside its own zone (within the zone radius), and, without depending on clustering, it uses asymmetric key management for inter-zone security. The scheme provides an efficient way of establishing public keys without losing the ability to issue certificates.


IV. COMPARATIVE SURVEY

In the previous section we discussed some of the most important key management techniques for mobile Ad-Hoc networks. In this comparative survey we compare these key management techniques on features such as reliability, security, scalability and robustness. The comparison is based on the results reported in various research works and journals. Table 1 shows the comparative survey of key management schemes in Ad-Hoc networks. Let us first discuss the features on which the schemes are compared.

4.1 Security: The central security issues are trust management and vulnerability. Trust relations may change during the network lifetime, and the system should enable the exclusion of compromised nodes. To judge the security of a key management scheme, its possible vulnerabilities must be considered. The security services provided are one or a combination of confidentiality, integrity, authentication and non-repudiation.

4.2 Scalability: Key management operations should finish in a timely manner, and the fraction of the available bandwidth occupied by key management traffic should be kept as low as possible, since any increase in management traffic reduces the bandwidth available for payload data. Hence, scalability of key management protocols is essential.

4.3 Reliability: The reliability of a key management scheme depends upon key distribution, storage and maintenance. It must be ensured that the keys are properly distributed among the nodes, safely stored where attackers cannot obtain them, and properly maintained.

4.4 Robustness: The key management system should survive denial-of-service attacks and unavailable nodes. Key management operations should complete despite faulty nodes and misbehaving nodes, that is, nodes that deliberately deviate from the protocol. Necessary key management operations caused by dynamic group changes should execute in a timely manner, and key management operations should not require network-wide, strict synchronization. Robustness also implies resistance to security attacks (e.g. man-in-the-middle).

Table 1. Comparative survey of key management schemes

Scheme             Security  Scalability  Robustness  Reliability
DKPS               Medium    Medium       Medium      High
PIKE               Medium    Low          Medium      Medium
INF                Low       High         High        Low
URSA               Medium    High         Low         High
MOCA               High      High         Low         Medium
SOKS               Medium    Medium       High        Medium
SEKM               High      Medium       High        High
Identity Based     High      High         Medium      High
SEGK               Low       High         High        Low
PGSK               High      Medium       High        High
Cluster based key  Low       Low          Low         Medium
Zone based key     Low       Low          Medium      Low

V. CONCLUSION

Different types of key management schemes are covered in this survey paper. In summary, symmetric key management schemes are described in three categories: DKPS, PIKE and INF. DKPS is a symmetric


key management scheme that is much more efficient than group key schemes and pair-wise key agreement. The PIKE scheme has good security services with fair scalability. The INF model needs no collaborative effort and has low storage cost. In this paper we observe that DKPS is highly secure and efficient compared to the other symmetric key management schemes. Identity-based key management is reliable: it solves the security problem in the Ad-Hoc network and is also suitable for application to other wired and wireless networks, where security is the major problem. SEGK is a group key scheme for Ad-Hoc networks in which a double multicast tree is constructed, and two detection methods are introduced into the scheme. Cluster-based and zone-based key schemes fall under hybrid or composite key management. In future work we will focus on identity-based secure key management using Elliptic Curve Cryptography. The advantage of elliptic curves is that they appear to offer a level of security comparable to classical systems that use much larger keys; the minimum key size for ECC should be 132 bits versus 952 bits for RSA.

ACKNOWLEDGEMENTS

The author is thankful to his guides Mr. Anuj Singh and Ms. Anuradha, who gave him the opportunity to carry out this work. The author is also thankful to NPTEL, which provides e-learning through online web and video courses in engineering, science and humanities streams.

REFERENCES

[1] P. Papadimitratos and Z. J. Haas, "Securing Mobile Ad Hoc Networks", in The Handbook of Ad Hoc Wireless Networks (Chapter 31), CRC Press LLC, 2003.
[2] J. Kong, P. Zerfos, H. Luo, S. Lu and L. Zhang, "Providing robust and ubiquitous security support for mobile ad hoc networks", in Proceedings of the 9th International Conference on Network Protocols (ICNP), November 2001, pp. 251-260.
[3] P. Vinayakray-Jani, "Security within Ad hoc Networks", Nokia Research Center, Helsinki, Finland. Position Paper, PAMPAS Workshop, Sept. 16/17 2002, London.
[4] B. Wu, J. Chen, J. Wu and M. Cardei, "A Survey on Attacks and Countermeasures in Mobile Ad Hoc Networks", Wireless/Mobile Network Security, Springer, Chapter 12, 2006.
[5] A. C.-F. Chan, "Distributed Symmetric Key Management for Mobile Ad hoc Networks", IEEE, 2004.
[6] B. Aziz, E. Nourdine and E. Mohamed, "A Recent Survey on Key Management Schemes in MANET", ICTTA'08, pp. 1-6, 2008.
[7] R. Anderson, H. Chan and A. Perrig, "Key Infection: Smart trust for smart dust", 12th IEEE International Conference on Network Protocols (ICNP), 2004.
[8] G. Valle and R. Cerdenas, "Overview the Key Management in Ad Hoc Networks", ISSADS, pp. 397-406, 2005.
[9] B. Wu, J. Wu, E. Fernandez, M. Ilyas and S. Magliveras, "Secure and Efficient Key Management in Mobile Ad Hoc Networks", Journal of Network and Computer Applications, Vol. 30, pp. 937-954, 2007.
[10] H. Luo and S. Lu, "URSA: Ubiquitous and Robust Access Control for Mobile Ad Hoc Networks", IEEE/ACM Transactions on Networking, Vol. 12, pp. 1049-1063, 2004.
[11] S. Yi, P. Naldurg and R. Kravets, "Security-aware ad hoc routing for wireless networks", MobiHoc, pp. 299-302, 2001.
[12] L. Zhou and Z. Haas, "Securing Ad Hoc Networks", IEEE Network Magazine, Vol. 13, No. 6, pp. 24-30, 1999.
[13] S. Capkun, L. Buttyan and J.-P. Hubaux, "Self-Organized Public Key Management for Mobile Ad Hoc Networks", IEEE Transactions on Mobile Computing, Vol. 2, No. 1, pp. 52-64, 2003.
[14] A. Khalili, J. Katz and W. A. Arbaugh, "Towards secure key distribution in truly ad hoc networks", IEEE Workshop on Security and Assurance in Ad Hoc Networks, in conjunction with the 2003 International Symposium on Applications and the Internet, 2003.
[15] A. Kapil and S. Rana, "Identity-Based Key Management in MANETs using Public Key Cryptography", International Journal of Security, Vol. 3, Issue 1.
[16] Wan An Xiong and Yao Huan Gong, "Secure and Highly Efficient Three Level Key Management Scheme for MANET", WSEAS Transactions on Computers, Vol. 10, Issue 10, 2011.
[17] B. Wu, J. Wu and Y. Dong, "An efficient group key management scheme for mobile ad hoc networks", International Journal of Security and Networks, 2008.
[18] D. Boneh, X. Boyen and H. Shacham, "Short group signatures", in Advances in Cryptology - Crypto'04, Lecture Notes in Computer Science, Vol. 3152, 2004, pp. 41-55.


[19] R. PushpaLakshmi and A. Vincent Antony Kumar, "Cluster Based Composite Key Management in Mobile Ad Hoc Networks", International Journal of Computer Applications, Vol. 4, No. 7, 2010.
[20] T. Khdour and A. Aref, "A Hybrid Schema Zone-Based Key Management for MANETs", Journal of Theoretical and Applied Information Technology, Vol. 35, No. 2, 2012.

AUTHORS

Anju Chahal was born in Bhiwani, Haryana, India, in 1992. She received the Bachelor of Technology degree in Computer Science Engineering from Shri Baba Mastnath Engineering College (SBMN), MDU Rohtak, in 2008-2012, and is currently pursuing the Master of Technology degree in Computer Science Engineering at AMITY University, Gurgaon, Haryana. Her research interests include Network Security, Cloud Computing, and Data Security.

Anuj Kumar is an Assistant Professor at the Department of Computer Science, AMITY University, Gurgaon, Haryana, India. His research interests include Data Storage, Network Security, and Data Security.

Anuradha Rani was born in Hansi. She is an Assistant Professor at the Department of Computer Science, AMITY University, Gurgaon, Haryana, India. She received the Master of Science degree in Information Technology from G.J.U., Hisar, in 2005, the Master of Technology degree in Computer Science from Banasthali University in 2007, and the Bachelor of Science degree from K.U.K. University in 2003. Her research interests include Network Security, Data Mining, and Networking.


PREDICTION OF STUDY TRACK BY APTITUDE TEST USING

JAVA

Deepali Joshi and Priyanka Desai Department of Computer Engineering,

Thakur College of Engineering & Technology, Mumbai, Maharashtra, India

ABSTRACT
In today's competitive world everyone wants to succeed, and to achieve this it is essential to be successful in academics. Basic education runs from the 1st to the 10th standard, and once the 10th standard is complete there are various courses a student can select. Students often get confused when selecting the appropriate field. The proposed system solves this problem by implementing an aptitude test: the system uses the test to predict the suitable stream depending upon the intellectual capability of the student. Compared to the traditional process, the aptitude test method is an effective way of finding the suitable stream, and the proposed system is beneficial because the accuracy of its results is better.

KEYWORDS: Aptitude, SSC, SSC marks, Accuracy, Streams

I. INTRODUCTION

Each and every person wants to be successful in all phases of life. Success depends on whether or not the correct field is selected: if students select the correct field they will be successful in their careers [1], and if the appropriate field is not selected students face a lot of problems. Education is categorized into various phases. Once the 10th standard is complete, a number of options are available to students, and the chosen field decides the career. After the 10th, various courses are available, such as Science, Commerce and Arts [6]. With so many options available it becomes difficult to choose the suitable field, so a method or technique is required through which students can find the suitable stream. Some solutions are available, but they do not provide appropriate results. One method used to specify the stream is the aptitude test method. The aptitude test consists of questions displayed together with candidate answers, and the student has to find the correct answer from the given options. The questions cover various streams such as Science, Commerce, Arts and Diploma. Through the aptitude test the intellectual capability of the student can be assessed and the suitable stream can be predicted for the student. The paper contains detailed information about building the model, data collection, tools, implementation, the solution, results and conclusion, followed by references.

II. BUILDING THE MODEL

Aptitude Test is an efficient method through which prediction of the field can be done. The main advantage of the aptitude test is that it depends entirely upon the intellectual capability of the student. The test is carried out on every individual student and a result is generated; the results differ from student to student because each result is specific to that student only. The model is built in Java NetBeans, connected to a database in which the questions as well as the answers are stored. The database used is SQLite.


III. DATA COLLECTION

To implement the proposed system, data is required: questions with their correct answer along with three more options. The data is gathered from 10th standard textbooks, and the internet is also utilised for this purpose. The questions and answers belong to streams such as Science, Commerce, Arts, Diploma and Engineering.

IV. TOOLS

The tools used to implement the proposed system are Java NetBeans and the SQLite database.

4.1 Java

Java is a set of several computer software products and specifications from Sun Microsystems (which

has since merged with Oracle Corporation), that together provide a system for developing application

software and deploying it in a cross-platform computing environment. Java is used in a wide variety

of computing platforms from embedded devices and mobile phones on the low end, to enterprise

servers and supercomputers on the high end. While less common, Java applets are sometimes used to

provide improved and secure functions while browsing the World Wide Web on desktop computers.

4.2 SQLite

SQLite is an in-process library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. The code for SQLite is in the public domain and is thus free for use for any purpose, commercial or private. SQLite is currently found in more applications than we can count, including several high-profile projects. SQLite is an embedded SQL database engine: unlike most other SQL databases, SQLite does not have a separate server process, and it reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers and views is contained in a single disk file. The database file format is cross-platform: a database can be freely copied between 32-bit and 64-bit systems or between big-endian and little-endian architectures. These features make SQLite a popular choice.
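A minimal sketch of how the proposed system might read a question row from SQLite over JDBC is shown below. The table and column names (a questions table with id, point, stream, number, question, ans1-ans4 and correctans) follow the description given later in Section V, and the sqlite-jdbc driver is assumed to be on the classpath; the paper does not list its actual data-access code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QuestionDao {
    public static void main(String[] args) throws Exception {
        // Open the database file directly; SQLite needs no server process.
        try (Connection con = DriverManager.getConnection("jdbc:sqlite:aptitude.db");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT question, ans1, ans2, ans3, ans4, correctans " +
                 "FROM questions WHERE stream = ? AND point = ? AND number = ?")) {
            ps.setString(1, "MED"); // MED = Science; COM, ARTS, ENG are the others
            ps.setInt(2, 1);        // level-1 questions carry one mark ('point')
            ps.setInt(3, 1);        // first question of that level
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.println(rs.getString("question"));
                    System.out.println("Correct option: " + rs.getInt("correctans"));
                }
            }
        }
    }
}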

V. IMPLEMENTATION

To have the suitable stream predicted, the student first gives input such as name, gender, phone number, email id, hobbies, railway line and nearby station. Once this is complete, a "take aptitude test" option is provided; as soon as the student clicks on it, the aptitude test screen is displayed. The aptitude test is implemented so that the student can find the suitable stream depending upon intellectual capability, and it consists of questions from various streams. For each stream there are three levels of questions, and the level is incremented only when the student answers correctly. Questions of the first level carry one mark, second-level questions carry two marks, and third-level questions carry three marks. Each level consists of three questions. If the student answers the first question of level one correctly, the level is incremented and a question of level two is displayed; if the answer is incorrect, the second question of level one is displayed and the level is not incremented. Similarly, if the first question of level two is answered correctly, the level is incremented and a question of level three is displayed. Thus a student who answers correctly at all three levels faces only three questions for that stream, after which the suitable stream is displayed.

The questions and their answers are stored in the database and are displayed to the student at the onset of the test. The attributes are id, point, stream, number, question, ans1, ans2, ans3, ans4 and correctans. The attribute id is used for identification. Point is the number of marks allotted to each question. Stream specifies to which stream the question belongs: MED specifies Science, COM specifies Commerce, ARTS, as the name suggests, specifies Arts, and ENG specifies Diploma. Number indicates the question number. Question holds the question to be displayed to the student. Every question has four answer options, one of which is correct, and correctans holds the number of the correct option.



Table 1. Database Table.

The above table contains the attributes id, point, stream, number, questions, ans1, ans2, ans3, ans4 and correctans. Id is used for identification; point specifies the marks allotted to each question; stream specifies the stream to which the question belongs (Science, Commerce, Arts or Diploma); number is the question number; questions contains the detailed question along with the answer options ans1-ans4; and correctans contains the number of the correct answer.
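The level-progression rule described above can be summarised in a short Java sketch (the method and type names here are hypothetical stand-ins for the GUI and database layers; the paper does not list its NetBeans source):

class Question {
    String text;
    String[] options;   // four answer options
    int correctAnswer;  // 1-based index of the correct option
}

public class LevelLogic {
    // Hypothetical hooks standing in for the database lookup and the test screen.
    static Question fetchQuestion(String stream, int level, int attempt) { return new Question(); }
    static int askStudent(Question q) { return 1; } // returns the chosen option, 1..4

    // One stream's mini-round: up to three questions per level, promotion
    // only on a correct answer, marks weighted by level.
    static int runStream(String stream) {
        int level = 1, attempt = 1, score = 0;
        while (level <= 3 && attempt <= 3) {
            Question q = fetchQuestion(stream, level, attempt);
            if (askStudent(q) == q.correctAnswer) {
                score += level;  // level-1 = 1 mark, level-2 = 2, level-3 = 3
                level++;         // promote to the next level
                attempt = 1;     // restart the attempt counter there
            } else {
                attempt++;       // same level, next question, no promotion
            }
        }
        return score;            // at most 1 + 2 + 3 = 6 marks per stream
    }
}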


Figure 1. Student Details

In the above figure the student enters their basic details. The first step in the proposed system is to gather student details such as Name, Gender, Phone No, Email id, Hobbies, Railway Line and Nearby Station. Once the basic details are filled in by the student, the first test screen appears, where the question, the answer options and the number of questions attempted are displayed. It also contains buttons such as Start, Next and End, which perform the operations their names suggest.


Figure 2. Main Page

The above screen is the first screen that appears when the student clicks on the aptitude option. As soon as the student clicks on the Start button, the first question is displayed along with its options, together with the question number. The screen displays "Question No. 1/12", which indicates the first question out of twelve. In the next screen the student has entered option 1 and then clicks on Next, and the next question is displayed. In this way the student answers twelve questions, after which the result is displayed; here the result generated is Engineering, and the aptitude score is displayed along with it.


Figure 3. Aptitude Question with options.

In the above figure the aptitude question along with the answer options is displayed.

Figure 4. First question with answer.


In the above figure, once the question has been read and understood, the student enters the suitable answer option.

Figure 5. Next question with options.

The second question is displayed along with its options.

Figure 6. Result

In the above figure the suitable stream is predicted.


VI. SOLUTION

The proposed system uses the aptitude test method to specify the suitable stream for the student. It takes input such as name, contact number, email-id, hobbies, railway line and nearby station for registration purposes. Once registration is complete the aptitude screen is displayed, and as soon as the student clicks on Start the questions along with the answer options are displayed. There are four answer options and the student has to select one. The questions belong to different streams: the stream for which the student gives the correct answer the maximum number of times indicates that the student has the prerequisite knowledge, and that stream is the most suitable for the student. The aptitude test method provides an efficient result because it depends upon the intellectual capability of the student.
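A plausible final step, sketched below, is to keep a per-stream tally of marks and report the stream with the highest tally together with the total aptitude score (the paper does not state its exact tie-breaking rule, so this is an assumption):

import java.util.LinkedHashMap;
import java.util.Map;

public class Predictor {
    // Returns the stream with the most marks; also prints the total score.
    public static String predict(Map<String, Integer> streamScores) {
        String best = null;
        int bestScore = -1, total = 0;
        for (Map.Entry<String, Integer> e : streamScores.entrySet()) {
            total += e.getValue();
            if (e.getValue() > bestScore) {
                bestScore = e.getValue();
                best = e.getKey();
            }
        }
        System.out.println("Aptitude score: " + total);
        return best;
    }

    public static void main(String[] args) {
        Map<String, Integer> scores = new LinkedHashMap<>();
        scores.put("Science", 4);
        scores.put("Commerce", 2);
        scores.put("Arts", 1);
        scores.put("Engineering", 6);
        System.out.println("Predicted stream: " + predict(scores));
    }
}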

VII. RESULTS

The proposed system is more efficient than the existing system and generates appropriate results. Through the aptitude test the student answers the questions, their intellectual capability for each stream is measured, and according to the correct answers the stream is predicted. The aptitude test method was conducted for twenty-five students of Abhinandan Classes. Each student personally took the aptitude test on a separate machine, and the questions displayed to them were different, to make sure that their intellectual capability was tested in an efficient way. The aptitude test method worked successfully and predicted the suitable stream for each individual student.

The pie chart summarises the result of the aptitude test over the four streams Science, Commerce, Arts and Diploma. Of the twenty-five students tested, Science was the suitable stream for eight students (32%), Commerce for five students (20%), Arts for three students (12%), and Diploma for nine students (36%).

Figure 7. Pie Chart

The above pie chart shows the percentage of the specific field being predicted.

VIII. CONCLUSION

The proposed system was tested on students from Abhinandan Classes. Each student took the test individually on a separate machine, with distinct questions displayed to them. The number of students who appeared for the aptitude test was 25. The suitable stream was predicted for them through


the aptitude test method: Science was predicted for eight students (32%), Commerce for five students, Arts for three students, and Diploma for nine students. Once the aptitude test was completed, an interactive session was carried out, and the students commented that the streams predicted for them were suitable from their point of view. The aptitude test worked successfully and predicted the suitable stream for each individual student. The students as well as their parents now have clarity about the stream and their ward's future.

IX. FUTURE WORK

In this paper we proposed and built a model to predict a suitable stream for students depending on their intellectual capability. The proposed system aims to improve the quality of education by helping students select the stream that is appropriate for them. The proposed system is implemented for SSC students and predicts their stream. The research can be enhanced by implementing it for HSC students as well, so that a suitable track can also be predicted for them.

REFERENCES

[1] Ahmad I., Manarvi I., Ashraf N., "Predicting university performance in a subject based on high school majors", 978-1-4244-4136-5/09 ©2009 IEEE.
[2] Zhiwu Liu, Xiuzhi Zhang, "Prediction and Analysis for Students' Marks Based on Decision Tree Algorithm", Intelligent Networks and Intelligent Systems (ICINIS), 2010 3rd International Conference on, DOI: 10.1109/ICINIS.2010.59, 2010, pp. 338-341.
[3] Anupama Kumar S., Vijayalakshmi M.N., "Mining of student academic evaluation records in higher education", Recent Advances in Computing and Software Systems (RACSS), 2012 International Conference on, DOI: 10.1109/RACSS.2012.6212699, 2012 IEEE, pp. 67-70.
[4] Bunkar K., Singh U.K., Pandya B., Bunkar R., "Data mining: Prediction for performance improvement of graduate students using classification", Wireless and Optical Communications Networks (WOCN), 2012 Ninth International Conference on, DOI: 10.1109/WOCN.2012.6335530, 2012 IEEE, pp. 1-5.
[5] Garcia E.P.I., Mora P.M., "Model Prediction of Academic Performance for First Year Students", Artificial Intelligence (MICAI), 2011 10th Mexican International Conference on, DOI: 10.1109/MICAI.2011.28, 2011 IEEE, pp. 169-174.
[6] Qasem A. Al-Radaideh, Ahmad Al Ananbeh and Emad M. Al-Shawakfa, "A Classification Model For Predicting The Suitable Study Track For School Students", IJRRAS 8 (2), August 2011.
[7] Pumpuang P., Srivihok A., Praneetpolgrang P., "Comparisons of classifier algorithms: Bayesian network, C4.5, decision forest and NBTree for Course Registration Planning model of undergraduate students", Systems, Man and Cybernetics, 2008. SMC 2008. IEEE International Conference on, DOI: 10.1109/ICSMC.2008.4811865, 2008 IEEE, pp. 3647-3651.

AUTHORS

Deepali Joshi, M.E. (pursuing), B.E. (CMPN). Areas of specialization: Data Mining, Operating Systems.

Priyanka Desai, Ph.D. (pursuing), M.Tech (CSE), B.E. (CSE). Areas of specialization: Networks, Web/Text Mining, Software Engineering, Database/Object Oriented Technology.


UNSTEADY MHD THREE DIMENSIONAL FLOW OF

MAXWELL FLUID THROUGH A POROUS MEDIUM IN A

PARALLEL PLATE CHANNEL UNDER THE INFLUENCE OF

INCLINED MAGNETIC FIELD

L. Sreekala¹, M. Veera Krishna²*, L. Hari Krishna³ and E. Kesava Reddy⁴
¹Assistant Professor, Department of Mathematics, CRIT, Anantapur, Andhra Pradesh, India
²Department of Mathematics, Rayalaseema University, Kurnool, Andhra Pradesh, India
³Assistant Professor, Department of Mathematics, AITS, Kadapa, Andhra Pradesh, India
⁴Professor, Department of Mathematics, JNTU, Anantapur, Andhra Pradesh, India

ABSTRACT
In this paper we discuss the unsteady hydromagnetic flow of an electrically conducting Maxwell fluid in a parallel plate channel bounded by a porous medium, under the influence of a uniform magnetic field of strength H₀ inclined at an angle θ to the normal to the boundaries. The perturbations are created by a constant pressure gradient along the plates. The time required for the transient state to decay and the ultimate steady state solution are discussed in detail. The exact solution for the velocity of the Maxwell fluid, including its steady state, is derived analytically, and its behaviour is discussed computationally with reference to the various governing parameters with the help of graphs. The shear stresses on the boundaries are also obtained analytically and their behaviour is discussed computationally in detail.

KEYWORDS: Maxwell fluids, unsteady flows, porous medium, parallel plate channels, MHD flows

I. INTRODUCTION

Several fluids, including butter, cosmetics and toiletries, paints, lubricants, certain oils, blood, mud, jams, jellies, shampoo, soaps, soups and marmalades, have rheological characteristics and are referred to as non-Newtonian fluids. The rheological properties of all these fluids cannot be explained by a single constitutive relationship between stress and shear rate, which is quite different from the case of viscous fluids [1, 2]. This understanding of non-Newtonian fluids forced researchers to propose further models. In general, non-Newtonian fluid models are classified under three categories, called the differential, the rate, and the integral types [3]; of these, the differential and rate types have been studied in more detail. In the present analysis we discuss the Maxwell fluid, the subclass of rate-type fluids which takes the relaxation phenomenon into consideration. It has been employed to study various problems because of its relatively simple structure; moreover, one can reasonably hope to obtain exact solutions for a Maxwell fluid, which motivates our choice of the Maxwell model in this study. Exact solutions are important as they provide a standard reference for checking the accuracy of many approximate solutions, whether numerical or empirical in nature; they can also be used as tests for verifying numerical schemes being developed for more complex flow problems [4-9]. On the other hand, the governing equations of non-Newtonian fluids offer exciting challenges to mathematical physicists seeking exact solutions, and the equations become even more problematic when a non-Newtonian fluid is discussed in the presence of MHD and a porous medium. Despite this, various researchers are still making interesting contributions to the field (e.g., see some recent studies [1-15]). A few investigations examining non-Newtonian fluids in a rotating frame have also been


presented [1-19]. Recently Faisal Salah [20] discussed two explicit examples of accelerated flow over a rigid plate, using the constitutive equations of a Maxwell fluid together with the modified Darcy's law; the exact solutions to the resulting problem were developed by Fourier sine transform, graphs were plotted to illustrate the variations of the embedded flow parameters, and the mathematical results of many existing situations were recovered as special cases of that study. Such studies have special relevance in meteorology, geophysics and astrophysics. Hayat et al. [21] analyzed the MHD rotating flow of a Maxwell fluid through a porous medium in a parallel plate channel. M. V. Krishna [22] constructed an analytical solution for the unsteady MHD flow of a rotating non-Newtonian fluid through a porous medium taking the Hall current into account. In this paper we examine the MHD flow of a Maxwell fluid through a porous medium in a parallel plate channel with an inclined magnetic field; the perturbations in the flow are created by a constant pressure gradient along the plates. The time required for the transient effects to decay and the ultimate steady state solution are discussed in detail. The exact solution for the velocity of the Maxwell fluid, including its steady state, is derived analytically, its behaviour is discussed computationally with reference to the various governing parameters with the help of graphs, and the shear stresses on the boundaries are also obtained analytically and discussed computationally.

II. FORMULATION AND SOLUTION OF THE PROBLEM

We consider the unsteady flow of an electrically conducting Maxwell fluid through a porous medium in a parallel plate channel, subjected to a uniform magnetic field of strength H₀ inclined at an angle θ to the normal to the channel walls. The boundary plates are taken parallel to the xy-plane, with the magnetic field lying in the transverse xz-plane. The component of the field along the z-direction induces a secondary flow in that direction, while its x-component perturbs the axial flow. At t = 0 the fluid is driven by a prescribed pressure gradient parallel to the channel walls. We choose a Cartesian system O(x, y, z) such that the boundary walls are at z = 0 and z = l; since the plates extend to infinity along the x and y directions, all physical quantities except the pressure depend on z and t alone. The unsteady hydromagnetic equations governing the electrically conducting Maxwell fluid under the influence of the inclined magnetic field are

\rho\left[\frac{\partial V}{\partial t} + (V\cdot\nabla)V\right] = -\nabla p + \mathrm{div}\,S + J\times B + R   (2.1)

\nabla\cdot V = 0   (2.2)

\nabla\cdot B = 0   (2.3)

\nabla\times B = \mu_m J   (2.4)

\nabla\times E = -\frac{\partial B}{\partial t}   (2.5)

where J is the current density, B is the total magnetic field, E is the total electric field, \mu_m is the magnetic permeability, and V = (u, v, w) is the velocity field. T is the Cauchy stress tensor, and the total magnetic field is B = B_0\sin\theta\,\hat{z} + b, where B_0 is the applied magnetic field and b is the induced magnetic field. The induced magnetic field is negligible, so that B = (0, 0, B_0\sin\theta) and the Lorentz force is J\times B = -\sigma B_0^{2}\sin^{2}\theta\,V; here \sigma is the electrical conductivity of the fluid, \rho is the density of the fluid, D/Dt is the material derivative, and R is the Darcy resistance. The extra stress tensor S for a Maxwell fluid satisfies

T = -pI + S   (2.6)

S + \lambda\left(\frac{DS}{Dt} - LS - SL^{T}\right) = \mu A_{1}   (2.7)

where -pI is the stress due to the constraint of incompressibility (p being the static fluid pressure), I is the identity tensor, \mu is the viscosity of the fluid, L = \mathrm{grad}\,V, and \lambda is the material time constant referred to as the relaxation time; it is assumed that \lambda \ge 0. The first Rivlin-Ericksen tensor A_{1} is defined as

A_{1} = (\mathrm{grad}\,V) + (\mathrm{grad}\,V)^{T}   (2.8)

It should be noted that this model includes the viscous Navier-Stokes fluid as the special case \lambda = 0. We take the velocity field in the form

V(z, t) = (u, 0, w)   (2.9)

According to Tan and Masuoka [4], the Darcy resistance in an Oldroyd-B fluid satisfies

\left(1 + \lambda\frac{\partial}{\partial t}\right) R = -\frac{\mu\varphi}{k}\left(1 + \lambda_{r}\frac{\partial}{\partial t}\right) V   (2.10)

where \lambda_{r} is the retardation time, \varphi is the porosity (0 < \varphi < 1), and k is the permeability of the porous medium. For a Maxwell fluid \lambda_{r} = 0, and hence

\left(1 + \lambda\frac{\partial}{\partial t}\right) R = -\frac{\mu\varphi}{k}\, V   (2.11)

Making use of equations (2.6), (2.7) and (2.8), equation (2.1) reduces to

\rho\frac{\partial u}{\partial t} = -\frac{\partial p}{\partial x} + \frac{\partial S_{xz}}{\partial z} - \sigma B_0^{2}\sin^{2}\theta\, u + R_x   (2.12)

\rho\frac{\partial w}{\partial t} = \frac{\partial S_{yz}}{\partial z} - \sigma B_0^{2}\sin^{2}\theta\, w + R_z   (2.13)

where R_x and R_z are the x- and z-components of the Darcy resistance R, and

\left(1 + \lambda\frac{\partial}{\partial t}\right) S_{xz} = \mu\frac{\partial u}{\partial z}, \qquad \left(1 + \lambda\frac{\partial}{\partial t}\right) S_{yz} = \mu\frac{\partial w}{\partial z}   (2.14)

Using (2.11), equations (2.12) and (2.13) reduce to

\rho\frac{\partial u}{\partial t} = -\frac{\partial p}{\partial x} + \frac{\partial S_{xz}}{\partial z} - \sigma B_0^{2}\sin^{2}\theta\, u - \frac{\mu\varphi}{k}\, u   (2.15)

\rho\frac{\partial w}{\partial t} = \frac{\partial S_{yz}}{\partial z} - \sigma B_0^{2}\sin^{2}\theta\, w - \frac{\mu\varphi}{k}\, w   (2.16)

Let q = u + iw. Combining equations (2.15) and (2.16), we obtain

\rho\frac{\partial q}{\partial t} = -\frac{\partial p}{\partial x} + \frac{\partial}{\partial z}\left(S_{xz} + iS_{yz}\right) - \sigma B_0^{2}\sin^{2}\theta\, q - \frac{\mu\varphi}{k}\, q   (2.17)

Since

\left(1 + \lambda\frac{\partial}{\partial t}\right)\left(S_{xz} + iS_{yz}\right) = \mu\frac{\partial q}{\partial z}   (2.18)

substituting equation (2.18) into equation (2.17), we obtain the equation governing the flow through the porous medium:

\left(1 + \lambda\frac{\partial}{\partial t}\right)\frac{\partial q}{\partial t} = -\frac{1}{\rho}\left(1 + \lambda\frac{\partial}{\partial t}\right)\frac{\partial p}{\partial x} + \nu\frac{\partial^{2} q}{\partial z^{2}} - \left(\frac{\sigma B_0^{2}\sin^{2}\theta}{\rho} + \frac{\nu\varphi}{k}\right)\left(1 + \lambda\frac{\partial}{\partial t}\right) q   (2.19)

The boundary and initial conditions are

q = 0, \quad t > 0, \quad z = 0   (2.20)

q = 0, \quad t > 0, \quad z = l   (2.21)

q(z, t) = 0, \quad \frac{dq(z,t)}{dt} = 0, \quad t \le 0, \ \text{for all } z   (2.22)

We introduce the following non-dimensional variables:

z^{*} = \frac{z}{l}, \quad q^{*} = \frac{ql}{\nu}, \quad t^{*} = \frac{t\nu}{l^{2}}, \quad \omega^{*} = \frac{\omega l^{2}}{\nu}, \quad \xi^{*} = \frac{\xi}{l}, \quad P^{*} = \frac{Pl^{3}}{\rho\nu^{2}}


Using the non-dimensional variables, the governing equation becomes (dropping asterisks throughout):

\left(1 + \beta_1\frac{\partial}{\partial t}\right)\frac{\partial q}{\partial t} = \left(1 + \beta_1\frac{\partial}{\partial t}\right)P + \frac{\partial^{2} q}{\partial z^{2}} - \left(M^{2}\sin^{2}\theta + D^{-1}\varphi\right)\left(1 + \beta_1\frac{\partial}{\partial t}\right) q   (2.23)

where M^{2} = \frac{\sigma\mu_e^{2}H_0^{2}l^{2}}{\rho\nu} is the Hartmann number, D^{-1} = \frac{l^{2}}{k} is the inverse Darcy parameter, \beta_1 = \frac{\lambda\nu}{l^{2}} is the material parameter related to the relaxation time, and P = -\frac{\partial p}{\partial x} is the pressure gradient.

The corresponding initial and boundary conditions are

q = 0, \quad t > 0, \quad z = 0   (2.24)

q = 0, \quad t > 0, \quad z = 1   (2.25)

q(z, t) = 0, \quad \frac{dq(z,t)}{dt} = 0, \quad t \le 0, \ \text{for all } z   (2.26)

The pressure gradient is taken in the form

P = \begin{cases} P_0 + P_1 e^{i\omega t}, & t > 0 \\ 0, & t \le 0 \end{cases}   (2.27)

Taking Laplace transforms of equations (2.23) and (2.27) and using the initial conditions (2.26), the governing equation in terms of the transformed variable reduces to

\frac{d^{2}\bar{q}}{dz^{2}} - (1+\beta_1 s)\left(s + M^{2}\sin^{2}\theta + D^{-1}\varphi\right)\bar{q} = -(1+\beta_1 i\omega)\frac{P_1}{s - i\omega} - \frac{P_0}{s}   (2.28)

Solving equation (2.28) subject to the conditions (2.24) and (2.25), we obtain

\bar{q}(z,s) = \left[\frac{(1+\beta_1 i\omega)P_1}{s - i\omega} + \frac{P_0}{s}\right]\frac{1}{\lambda_1^{2}}\left[1 - \cosh(\lambda_1 z) + \frac{\cosh\lambda_1 - 1}{\sinh\lambda_1}\,\sinh(\lambda_1 z)\right]   (2.29)

where \lambda_1^{2} = \beta_1 s^{2} + \left(1 + \beta_1\left(M^{2}\sin^{2}\theta + D^{-1}\varphi\right)\right)s + \left(M^{2}\sin^{2}\theta + D^{-1}\varphi\right) = (1+\beta_1 s)\left(s + M^{2}\sin^{2}\theta + D^{-1}\varphi\right)

Taking the inverse Laplace transform of equation (2.29) on both sides, we obtain

q(z,t) = \frac{P_0}{b_0^{2}}\left[1 - \cosh(b_0 z) + \frac{\cosh b_0 - 1}{\sinh b_0}\,\sinh(b_0 z)\right] + \frac{(1+\beta_1 i\omega)P_1 e^{i\omega t}}{b_4^{2}}\left[1 - \cosh(b_4 z) + \frac{\cosh b_4 - 1}{\sinh b_4}\,\sinh(b_4 z)\right] + \text{transient terms proportional to } e^{s_1 t},\ e^{s_2 t} \text{ and series of contributions in } e^{s_3 t},\ e^{s_4 t} \text{ summed over } n \ge 0   (2.30)

where b_0 and b_4 denote \lambda_1 evaluated at s = 0 and s = i\omega respectively, s_1 and s_2 are the roots of \lambda_1^{2} = 0, and s_3, s_4 (one pair for each n) arise from the zeros of \sinh\lambda_1.

(Where the constants are mentioned in the appendix)

The shear stresses on the upper and lower plates are given by

\tau_U = \left.\frac{dq}{dz}\right|_{z=1} \quad \text{and} \quad \tau_L = \left.\frac{dq}{dz}\right|_{z=0}   (2.31)

III. RESULTS AND DISCUSSION

We discuss the unsteady flow of an electrically conducting Maxwell fluid through a porous medium in a parallel plate channel subjected to a uniform inclined magnetic field. Starting from the unperturbed state, the perturbations are created by imposing a constant pressure gradient along the axis (OX) of the channel, giving one velocity component along the imposed pressure gradient and one normal to it. Under the boundary layer assumptions these velocity components are functions of z and t alone, where z corresponds to the direction across the channel. The transverse magnetic field gives rise to Lorentz forces resisting the flow. The constitutive equations relating the stress and the rate of strain are chosen to depict a Maxwell fluid, and the Brinkman model has been chosen to analyse the flow through the porous medium. The equations governing the two velocity components can be combined into a single equation by defining the complex velocity q = u + iw, and the expressions for the stress components follow from the stress-strain relationships. Under these assumptions the governing equation for the unsteady flow through the porous medium was formulated together with the corresponding boundary and initial conditions, and this boundary value problem was solved in non-dimensional variables using the Laplace transform technique.

The solution for the combined velocity q consists of two kinds of terms: 1. the steady state, and 2. transient terms involving exponentially decaying time dependence. The analysis of the transient terms indicates that the transient velocity decays exponentially over a dimensionless time of order t ~ max(1/|s_1|, 1/|s_3|, 1/|s_4|). This decay of the transient terms depends on the non-dimensional parameters \beta_1, M and D^{-1}. When these transient terms have decayed, the ultimate velocity consists of steady and oscillatory components.
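Since \lambda_1^{2} factors as shown after (2.29), the first two transient exponents can be read off directly (a small worked step consistent with the reconstruction above):

\lambda_1^{2} = (1+\beta_1 s)\left(s + M^{2}\sin^{2}\theta + D^{-1}\varphi\right) = 0 \;\Longrightarrow\; s_1 = -\frac{1}{\beta_1}, \qquad s_2 = -\left(M^{2}\sin^{2}\theta + D^{-1}\varphi\right)

so a larger relaxation parameter \beta_1 slows the decay of the first mode, while a stronger magnetic field or a lower permeability speeds up the second, in line with the stated dependence on \beta_1, M and D^{-1}.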


q_{\text{steady}} = \frac{P_0}{b_0^{2}}\left[1 - \cosh(b_0 z) + \frac{\cosh b_0 - 1}{\sinh b_0}\,\sinh(b_0 z)\right], \qquad b_0^{2} = M^{2}\sin^{2}\theta + D^{-1}\varphi   (3.1)
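As a consistency check (a sketch under the non-dimensionalization of Section II), (3.1) solves the steady limit of (2.23), and the steady wall stresses then follow from (2.31):

\frac{d^{2} q}{dz^{2}} - b_0^{2}\, q = -P_0, \qquad q(0) = q(1) = 0

whose general solution q = \frac{P_0}{b_0^{2}} + A\cosh(b_0 z) + B\sinh(b_0 z) with A = -\frac{P_0}{b_0^{2}} and B = \frac{P_0}{b_0^{2}}\,\frac{\cosh b_0 - 1}{\sinh b_0} reproduces (3.1). Differentiating and using \frac{\cosh b_0 - 1}{\sinh b_0} = \tanh\frac{b_0}{2} gives, for the steady part alone,

\tau_L = \left.\frac{dq}{dz}\right|_{z=0} = \frac{P_0}{b_0}\tanh\frac{b_0}{2}, \qquad \tau_U = \left.\frac{dq}{dz}\right|_{z=1} = -\frac{P_0}{b_0}\tanh\frac{b_0}{2}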

The flow is governed by the non-dimensional parameters M (the magnetic field parameter, i.e. the Hartmann number), D^{-1} (the inverse Darcy parameter) and \beta_1 (the material time parameter referred to as the relaxation time). A computational analysis has been carried out to discuss the behaviour of the velocity components u and w in the parallel plate channel with reference to variations in the governing parameters; this may be analysed from figures (1-3) and (4-6) respectively (P_0 = P_1 = 10, t = 0.1, \varphi = 1/4, \theta = \pi/3).

The effect of the magnetic field on the flow may be noted from figures (1 and 4). The magnitude of the velocity component u reduces and that of the velocity component w increases with increase in the Hartmann number M; however, the resultant velocity reduces throughout the fluid region with increase in the intensity of the magnetic field. Figures (2 and 5) present the velocity profiles for variations in the inverse Darcy parameter D^{-1}. We find that the magnitude of u reduces with decrease in the permeability of the porous medium, while the magnitude of w experiences a slight enhancement with increase in D^{-1}. It is interesting to note that the lower the permeability of the porous medium, the lower the magnitude of the resultant velocity; i.e., the resultant velocity reduces throughout the fluid region with increase in the inverse Darcy parameter D^{-1}. Both velocity components u and w are enhanced with increase in the relaxation time over the entire fluid region, as displayed in figures (3 and 6); the resultant velocity is likewise enhanced throughout the fluid region with increase in the relaxation time. The shear stresses on the upper and lower plates have been calculated with reference to variations in the governing parameters and are tabulated in Tables I-IV. On the upper plate the magnitude of the stress \tau_x is enhanced with increase in M and \beta_1, while it reduces with increase in the inverse Darcy parameter D^{-1}; the magnitude of the stress \tau_y is enhanced with increase in all the governing parameters M, D^{-1} and \beta_1 (Tables I-II). On the lower plate the magnitudes of the stresses \tau_x and \tau_y are enhanced with increase in M and \beta_1, while they reduce with increase in the inverse Darcy parameter D^{-1} (Tables III-IV).

IV. CONCLUSIONS

1. The resultant velocity reduces throughout the fluid region with increase in the intensity of the magnetic field (the Hartmann number M).
2. The lower the permeability of the porous medium, the lower the magnitude of the resultant velocity; i.e., the resultant velocity reduces throughout the fluid region with increase in the inverse Darcy parameter D^{-1}.
3. Both velocity components u and w, and the resultant velocity, are enhanced with increase in the relaxation time over the entire fluid region.
4. On the upper plate the magnitude of the stress \tau_x is enhanced with increase in M and \beta_1, while it reduces with increase in the inverse Darcy parameter D^{-1}.
5. The magnitude of the stress \tau_y on the upper plate is enhanced with increase in all the governing parameters M, D^{-1} and \beta_1. On the lower plate the magnitudes of the stresses \tau_x and \tau_y are enhanced with increase in M and \beta_1, while they reduce with increase in the inverse Darcy parameter D^{-1}.

Page 399: Ijaet volume 7 issue 3 july 2014

International Journal of Advances in Engineering & Technology, July, 2014.

©IJAET ISSN: 22311963

1033 Vol. 7, Issue 3, pp. 1027-1037

V. GRAPHS AND TABLES

Fig. 1: The velocity profile for u with M (curves for M = 2, 3, 4, 5); \beta_1 = 1, D^{-1} = 2000, E = 0.01.

Fig. 2: The velocity profile for u with D^{-1} (curves for D^{-1} = 2000, 3000, 4000, 5000); \beta_1 = 1, E = 0.01, M = 2.

Fig. 3: The velocity profile for u with \beta_1 (curves for \beta_1 = 1, 2, 3, 4); E = 0.01, D^{-1} = 2000, M = 2.

(In each figure u is plotted against z for 0 \le z \le 1.)


Fig. 4: The velocity profile for w with M (curves for M = 2, 5, 8, 10); \beta_1 = 1, D^{-1} = 2000, E = 0.01.

Fig. 5: The velocity profile for w with D^{-1} (curves for D^{-1} = 2000, 3000, 4000, 5000); \beta_1 = 1, E = 0.01, M = 2.

Fig. 6: The velocity profile for w with \beta_1 (curves for \beta_1 = 1, 2, 3, 4); E = 0.01, D^{-1} = 2000, M = 2.

(In each figure w is plotted against z for 0 \le z \le 1.)


Table I: The shear stresses (\tau_x) on the upper plate

P0 = P1    I         II        III       IV        V         VI        VII
2          0.084673  0.156783  0.246352  0.062501  0.046782  0.107466  0.145336
4          0.121453  0.186299  0.268751  0.116002  0.083146  0.144236  0.181673
6          0.146755  0.208888  0.278752  0.118208  0.121482  0.180083  0.256335
10         0.163752  0.408755  0.544799  0.127436  0.118442  0.207853  0.501652

Table II: The shear stresses (\tau_y) on the upper plate

P0 = P1    I         II        III       IV        V         VI        VII
2          -0.01467  -0.02561  -0.03216  -0.01565  -0.01682  -0.01512  -0.01811
4          -0.01814  -0.02848  -0.04821  -0.01255  -0.02845  -0.02147  -0.02533
6          -0.02107  -0.03245  -0.04552  -0.02856  -0.03215  -0.02658  -0.03275
10         -0.04251  -0.06837  -0.07550  -0.05478  -0.06253  -0.05865  -0.08314

Table III: The shear stresses (\tau_x) on the lower plate

P0 = P1    I         II        III       IV        V         VI        VII
2          0.000048  0.000054  0.000064  0.000041  0.000032  0.000052  0.000084
4          0.000066  0.000072  0.000084  0.000042  0.000035  0.000062  0.000098
6          0.000072  0.000078  0.000089  0.000052  0.000042  0.000082  0.000099
10         0.000084  0.000094  0.000132  0.000062  0.000048  0.000092  0.000147

Table IV: The shear stresses (\tau_y) on the lower plate

P0 = P1    I         II        III       IV        V         VI        VII
2          -0.00467  -0.00599  -0.00653  -0.00321  -0.00301  -0.00546  -0.00675
4          -0.00521  -0.00684  -0.00744  -0.00427  -0.00357  -0.00584  -0.00748
6          -0.00633  -0.00744  -0.00831  -0.00524  -0.00427  -0.00752  -0.00846
10         -0.00801  -0.00856  -0.00946  -0.00622  -0.00582  -0.00942  -0.00999

Parameter settings for cases I-VII (common to Tables I-IV):

           I      II     III    IV     V      VI     VII
M          2      5      8      2      2      2      2
D^{-1}     2000   2000   2000   3000   4000   2000   2000
\beta_1    5      5      5      5      5      6      8


ACKNOWLEDGEMENTS

The authors are very much thankful to the authorities of JNTU, Anantapur, Andhra Pradesh, India, for providing the necessary facilities to carry out this work, and to the IJAET journal for its support in developing this document.



Appendix

[Definitions of the auxiliary constants φ, bᵢ and sᵢ appearing in the closed-form solution, expressed in terms of M, D⁻¹, β1, ω and nπ.]

AUTHORS BIOGRAPHY

L. Sreekala is presently working as an Assistant Professor in the Department of Mathematics at Chiranjeevi Reddy Institute of Technology, Anantapur, Andhra Pradesh, India. She has seven years of experience in teaching and three years in research. She is pursuing her Ph.D. in the area of fluid dynamics and has published many papers in well-reputed national and international journals.

M. Veera Krishna received the B.Sc. degree in Mathematics, Physics and Chemistry from Sri Krishnadevaraya University, Anantapur, Andhra Pradesh, India in 1998, the M.Sc. in Mathematics in 2001, and the M.Phil. and Ph.D. degrees in Mathematics from the same university in 2006 and 2008, respectively. Currently, he is in charge of the Department of Mathematics at Rayalaseema University, Kurnool, Andhra Pradesh, India. His teaching and research areas include fluid mechanics, heat transfer, MHD flows and data mining techniques. He has supervised one Ph.D. from Monad University, Hapur (U.P.), India and 28 M.Phil. degrees from DDE, S.V. University, Tirupati, A.P. He has published 52 research papers in well-reputed national and international journals and has presented 18 papers in national and international seminars and conferences. He has attended four national-level workshops. He is a life member of the Indian Society of Theoretical and Applied Mechanics (ISTAM).

L. Hari Krishna acquired his M.Sc. degree in Mathematics from S.V. University, Tirupati in 1998. He obtained his M.Phil. from M.K. University, Madurai in 2004, and his Ph.D. in the area of fluid dynamics from JNTU, Anantapur in 2010. He has 14 years of experience in teaching engineering students and seven years of research experience. He has published 8 international publications and presented 8 papers in national and international seminars. He has attended two national-level workshops. He was an editorial and article review board member for the AES Journal in Engineering Technology and Sciences (2010-2014). He holds Periyar University guideship approval and has successfully guided 3 students to their M.Phil. degrees. He has memberships in professional bodies.

K. Keshava Reddy is presently working as a Professor of Mathematics in the JNT University College of Engineering, Anantapur. He has 14 years of experience in teaching and 10 years in research. He obtained his Ph.D. degree in Mathematics from the prestigious Banaras Hindu University, Varanasi. His areas of interest include functional analysis, optimization techniques, data mining, neural networks and fuzzy logic. He has produced 2 Ph.D.s and 1 M.Phil., and has published more than 35 research papers in national and international journals and conferences. He has authored 6 books on Engineering Mathematics and Mathematical Methods for JNTUA, at both UG and PG levels. Presently he is the Chairman of the PG Board of Studies for Mathematics of JNTUA, and a member of the Boards of Studies for Mathematics of various universities in India.


VIDEO STREAMING ADAPTIVITY AND EFFICIENCY IN

SOCIAL NETWORKING SITES

G. Divya1 and E. R. Aruna2
1Department of CSE, Vardhaman College of Engineering, India.
2Associate Professor, Department of IT, Vardhaman College of Engineering, India.

ABSTRACT While demand for video traffic over mobile networks has been surging, the wireless link capacity cannot keep up with this demand. The gap between the traffic demand and the link capacity, along with time-varying link conditions, results in poor service quality of video streaming over mobile networks, such as long buffering times and intermittent disruptions. Leveraging cloud computing technology, we propose a new mobile video streaming framework, dubbed AMES-Cloud, which has two parts: ESoV (efficient social video sharing) and AMoV (adaptive mobile video streaming). ESoV and AMoV construct a private agent to provide video streaming services efficiently for each mobile client. For a given client, AMoV lets her private agent adaptively adjust her streaming flow with a scalable video coding technique based on feedback on link quality. Likewise, efficient social video sharing monitors the social network interactions among mobile clients, and their private agents try to prefetch video content in advance. We implement a prototype of the AMES-Cloud framework to demonstrate its performance. It is shown that the private agents in the clouds can efficiently provide adaptive streaming and achieve prefetching (i.e., video sharing) based on social network analysis.

KEYWORDS: Adaptive video streaming, cloud computing, mobile networks, scalable video coding, social

video sharing.

I. INTRODUCTION

Over the past decade, more and more traffic has been accounted for by video streaming and downloading. In particular, video streaming services over mobile networks have become widespread over the past few years. Although video streaming/prefetching is not so demanding in wired networks, mobile networks have been suffering from video traffic transmission over the limited bandwidth of wireless links. Despite network operators' anxious efforts to improve the wireless connection bandwidth (e.g., 3G and LTE), the soaring video traffic load from mobile customers is quickly overwhelming the wireless link capacity. While receiving video streaming traffic through 3G/4G mobile networks, mobile customers often suffer from long buffering times and intermittent disruptions due to the limited bandwidth and link condition fluctuations caused by multi-path loss and user mobility [2]-[4]. Thus, it is vital to improve the service quality of mobile video streaming while using the networking and computing resources efficiently [5]-[8]. Recently there have been numerous studies on how to improve the service quality of mobile video streaming in two respects:

• Scalability: Mobile video streaming services should support a broad spectrum of mobile devices, which have different video resolutions, different computing power and various wireless links (like LTE and 3G). Also, the available link capacity of a mobile device may vary over time and space based on its signal strength, other clients' traffic in the same cell, and link condition variations. Maintaining multiple versions (with different bit rates) of the same video content may incur high overhead in terms of storage and communication. To tackle this concern, the Scalable Video Coding (SVC) technique of the H.264 AVC video compression standard defines a base layer (BL) with multiple enhancement layers (ELs). These sub-streams can be encoded by exploiting three scalability properties: (i) spatial scalability, by layering the image resolution (screen pixels); (ii) temporal scalability, by layering the frame rate; and (iii) quality scalability, by layering the image compression. With SVC, a video can be decoded/played at the lowest quality if only the BL is delivered. Yet the more ELs that can be delivered, the better the quality of the video stream that is attained.

• Adaptability: Conventional video streaming techniques, designed by considering relatively stable traffic links between servers and users, perform poorly in mobile environments [2]. Thus the fluctuating wireless link status should be properly dealt with in order to provide "tolerable" video streaming services. To address this, the video bit rate has to be adjusted to the currently time-varying available link bandwidth of each mobile user. Such adaptive streaming techniques can effectively reduce packet losses and bandwidth waste.

Scalable video coding and adaptive streaming techniques can be jointly combined to effectively accomplish the best possible quality of video streaming services. That is, we can dynamically adjust the number of SVC layers depending on the current link status [9], [12]; a minimal sketch of this idea follows.
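As a minimal sketch of this joint idea (the layer bit rates below are invented for illustration; the paper does not specify any), the following snippet picks how many SVC layers to stream for a measured bandwidth:

```python
# Illustrative only: choose how many SVC layers (BL + ELs) fit the
# currently estimated link bandwidth. Layer bit rates are hypothetical.
SVC_LAYER_KBPS = [400, 300, 300, 500]  # base layer followed by three ELs

def select_svc_layers(available_kbps):
    """Return the number of layers to stream within available_kbps.

    The base layer is always sent (lowest playable quality); each
    additional enhancement layer improves quality at a higher
    cumulative bit rate.
    """
    chosen, used = 0, 0
    for rate in SVC_LAYER_KBPS:
        if chosen > 0 and used + rate > available_kbps:
            break
        used += rate
        chosen += 1
    return chosen

if __name__ == "__main__":
    for bw in (350, 800, 1600):
        print(bw, "kbps ->", select_svc_layers(bw), "layer(s)")
```

Under such a policy the stream degrades gracefully: a drop in measured bandwidth simply trims enhancement layers instead of stalling playback.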

Figure 1: Growth in cloud framework usage over time

However, most of the proposals seeking to jointly utilize video scalability and adaptability rely on active control on the server side. That is, every mobile user needs to individually report the transmission status (e.g., packet loss, delay and signal quality) periodically to the server, which predicts the available bandwidth for each user. The problem is then that the server has to bear substantial processing overhead as the number of users increases.

II. PROBLEM STATEMENT

Existing system: Cloud computing promises lower costs, rapid scaling, easier maintenance, and service availability anywhere, anytime; a key challenge is how to ensure, and build confidence, that the cloud can handle user data securely. A recent Microsoft survey says that "58 percent of the public and 86 percent of businesses are excited about the potential of cloud computing. But almost 90 percent of them are worried about the security, availability, and confidentiality of their data as it rests in the cloud."

Proposed system: We put forward an adaptive mobile video streaming and sharing framework, called AMES-Cloud, which efficiently stores videos in the clouds (VC) and exploits cloud computing to construct a private agent (subVC) for each mobile client, in order to offer "non-terminating" video streaming that adapts to the variability of link quality based on the Scalable Video Coding technique. AMES-Cloud can also seek to provide a "non-buffering" experience of video streaming through background pushing functions among the localVB, subVBs and VB of mobile clients. We evaluate AMES-Cloud by prototype implementation and show that the cloud computing technique brings noteworthy improvement to mobile streaming adaptivity. We ignored the encoding workload cost in the cloud while realizing the prototype.

III. SYSTEM DEVELOPMENT

1. Admin Module
2. User Module One
3. User Module Two

1. Admin Module: This unit contains three sub-modules:
• Video Uploading: The admin can add new videos, giving users more content to view.
• User Details: The admin can view the details of the users who have registered on the website.
• Video Rating: This module keeps unwanted videos away from users; only after the admin accepts or rejects videos can users view (or not view) them.

2. User Module One: This module encloses the following sub-modules:
• News Feed: A user of the social network can view status updates from his friends, such as messages or videos.
• Search Friends: Users can search for friends, send requests to them, and view their information.
• Video Sharing: Users can share videos with their friends by adding new videos, and contribute to their status by sending messages to friends.
• Update Details: In this component, users can update their own information.

3. User Module Two: In this module, a client can register his information such as name, gender, password, age and so on. The client can make friends by accepting or sending friend requests. Users can share their status through messages/chat as well as by sharing videos with friends, and receive comments/remarks from them.

IV. RELATED WORK

A. Adaptive Video Streaming Techniques

In adaptive streaming, the video transmission rate is adjusted on the fly so that a user can experience the maximum possible video quality based on his or her link's time-varying bandwidth capacity [2]. There are mainly two types of adaptive streaming techniques, depending on whether the adaptivity is controlled by the client or the server. Microsoft's Smooth Streaming [27] is a live adaptive streaming service which can switch among different bit-rate segments, encoded with configurable bit rates and video resolutions at servers, while clients dynamically request videos based on local monitoring of link quality. Adobe and Apple have also developed client-side HTTP adaptive live streaming solutions operating in a similar manner.


Figure 2: Client downloading a tagged video (directly recommended video file)

There are also some similar adaptive streaming services where servers control the adaptive transmission of video segments, for example the Quavlive Adaptive Streaming. However, most of these solutions maintain multiple copies of the video content with different bit rates, which brings a huge storage burden on the server. Regarding rate-adaptation control techniques, TCP-friendly rate control methods for streaming services over mobile networks have been proposed [28], [29], where the TCP throughput of a flow is predicted as a function of the packet loss rate, round-trip time, and packet size. Considering the estimated throughput, the bit rate of the streaming traffic can be adjusted.
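One widely used predictor of this kind is the TFRC throughput equation of RFC 5348; the sketch below (an illustration with invented numbers, not code from [28], [29]) estimates the TCP-friendly rate and caps the streaming bit rate accordingly:

```python
from math import sqrt

def tfrc_throughput(s, rtt, p, b=1):
    """TCP-friendly throughput estimate (bytes/s) per RFC 5348.

    s   -- packet size in bytes
    rtt -- round-trip time in seconds
    p   -- loss event rate (0 < p <= 1)
    b   -- packets acknowledged per ACK (1 by default)
    """
    t_rto = 4 * rtt  # common simplification for the retransmit timeout
    denom = rtt * sqrt(2 * b * p / 3) + \
            t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2)
    return s / denom

def adapt_bitrate(current_kbps, s, rtt, p):
    # Cap the streaming rate at the estimated TCP-friendly rate.
    fair_kbps = tfrc_throughput(s, rtt, p) * 8 / 1000
    return min(current_kbps, fair_kbps)

if __name__ == "__main__":
    # e.g. 1500-byte packets, 100 ms RTT, 1% loss -> roughly 1.3 Mbps
    print(round(tfrc_throughput(1500, 0.1, 0.01) * 8 / 1000), "kbps")
```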

A rate adaptation algorithm for conversational 3G video streaming is introduced in [30]. Then, a few cross-layer adaptation techniques are discussed [31], [32], which can acquire more accurate information about link quality so that the rate adaptation can be made more accurately. However, the servers have to perform the control at all times and thus suffer from a large workload. Recently the H.264 Scalable Video Coding (SVC) technique has gained momentum [10]. An adaptive video streaming system based on SVC is deployed in [9], which studies real-time SVC decoding and encoding at PC servers. The work in [12] proposes a quality-oriented scalable video delivery using SVC, but it is only tested in a simulated LTE network. Regarding the encoding performance of SVC, CloudStream [20] mainly proposes to deliver high-quality streaming videos through a cloud-based SVC proxy, and discovered that cloud computing can significantly improve the performance of SVC coding. The above studies motivate us to use SVC for video streaming on top of cloud computing.

B. Mobile Cloud Computing Techniques

Cloud computing is well positioned to provide video streaming services, especially in the congested Internet, because of its scalability and capability [13]. For example, quality-assured bandwidth auto-scaling for VoD streaming based on cloud computing is proposed in [14], and the CALMS framework [33] is a cloud-assisted live media streaming service for globally distributed users. Yet extending cloud computing-based services to mobile environments requires more factors to be considered: wireless link dynamics, user mobility, and the limited capability of mobile devices [34], [35]. More recently, new designs for users on top of mobile cloud computing environments have been proposed, which virtualize private agents that are in charge of satisfying the requirements (e.g., QoS) of individual users, such as Cloudlets [21] and Stratus [22]. Thus, we are motivated to design the AMES-Cloud framework by using virtual agents in the cloud to provide adaptive video streaming services.

AMES-CLOUD FRAMEWORK

In this section we explain the AMES-Cloud framework, which includes Adaptive Mobile Video streaming (AMoV) and Efficient Social Video sharing (ESoV).

As shown in Fig. 1, the whole video storing and streaming system in the cloud is called the Video Cloud (VC). In the VC, there is a large-scale video base (VB), which stores most of the popular video clips for the video service providers (VSPs). A temporal video base (tempVB) is used to store new candidates for the popular videos, while tempVB counts the access frequency of each video. The VC keeps running a collector to seek videos which are already popular in VSPs, and will re-encode the collected videos into SVC format and store them into tempVB first. By this two-tier storage, AMES-Cloud can keep serving most of the popular videos perpetually. Note that management work will be handled by the controller in the VC. Specifically for each mobile user, a sub-video cloud (subVC) is created dynamically if there is any video streaming demand from the user. The subVC has a sub video base (subVB), which stores the recently fetched video segments. Note that the video deliveries between the subVCs and the VC in most cases are actually not "copy" but just "link" operations on the same file internally within the cloud data center [36]. There is also an encoding function in the subVC (actually a smaller-scale instance of the encoder in the VC), and if the mobile user demands a new video which is not in the subVB or the VB in the VC, the subVC will fetch, encode and transfer the video. During video streaming, mobile users will always report link conditions to their corresponding subVCs, and then the subVCs offer adaptive video streams. Note that each mobile device also has a temporary caching storage, called the local video base (localVB), which is used for buffering and prefetching.
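A minimal sketch of this lookup chain, assuming dictionary-backed caches and hypothetical provider/encoder objects (none of these names come from the paper), might look as follows:

```python
# Hypothetical cache-lookup chain for one video-segment request:
# localVB (on the device) -> subVB (private agent) -> VB (video cloud)
# -> fetch from the original provider and SVC-encode.

def get_segment(video_id, seg_no, local_vb, sub_vb, vb, provider, encoder):
    key = (video_id, seg_no)
    if key in local_vb:                 # already buffered/prefetched
        return local_vb[key]
    if key in sub_vb:                   # held by the user's private agent
        local_vb[key] = sub_vb[key]
        return local_vb[key]
    if key in vb:                       # popular video kept in the VC;
        sub_vb[key] = vb[key]           # a "link", not a real copy [36]
        local_vb[key] = sub_vb[key]
        return local_vb[key]
    raw = provider.fetch(video_id, seg_no)   # new video: fetch it,
    sub_vb[key] = encoder.encode_svc(raw)    # encode to SVC, transfer
    local_vb[key] = sub_vb[key]
    return local_vb[key]
```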

Note that as the cloud service may span different places, or even continents, in the case of video delivery and prefetching between different data centers an actual transmission will be carried out, which can then be called a "copy". Because of the optimal deployment of data centers, as well as the capable links among them, the "copy" of a large video file incurs only a tiny delay [36].

V. FUTURE WORK

As one important piece of future work, we will carry out a large-scale implementation with serious consideration of energy and price costs. In the future, we will also try to improve the SNS-based prefetching and address security issues in AMES-Cloud.

VI. CONCLUSION

In this paper, we discussed an adaptive mobile video streaming and sharing framework, called AMES-Cloud, which efficiently stores videos in the clouds (VC) and utilizes cloud computing to construct a private agent (subVC) for each mobile customer, in order to offer "non-terminating" video streaming that adapts to the instability of link quality based on the Scalable Video Coding technique. AMES-Cloud also seeks to offer a "non-buffering" experience of video streaming through background pushing functions among the VB, subVBs and localVB of mobile users. We evaluated AMES-Cloud by prototype implementation and showed that the cloud computing technique brings momentous improvement to the adaptivity of mobile streaming. The focus of this paper is to confirm how cloud computing can improve the streaming adaptivity and prefetching for mobile clients. We ignored the cost of the encoding workload in the cloud while implementing the prototype. As one important piece of future work, we will carry out a large-scale implementation with serious consideration of energy and price costs. We will also try to improve the SNS-based prefetching and address security issues in AMES-Cloud.

REFERENCES

[1] “Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2011–2016,” CISCO, 2012.


[2] Y. Li, Y. Zhang, and R. Yuan, “Measurement and analysis of a large scale commercial mobile Internet TV

system,” in Proc. ACM Internet Meas. conf., 2011, pp. 209–224.

[3] T. Taleb and K. Hashimoto, “MS2: A novel multi-source mobile- streaming architecture,” IEEE Trans.

Broadcasting, vol. 57, no. 3, pp. 662–673, Sep. 2011.

[4] X. Wang, S. Kim, T. Kwon, H. Kim, and Y. Choi, “Unveiling the bittorrent performance in mobile WiMAX

networks,” in Proc. Passive Active Meas. Conf., 2011, pp. 184–193.

[5] A. Nafaa, T. Taleb, and L. Murphy, “Forward error correction adaptation strategies for media streaming over

wireless networks,” IEEE Commun. Mag., vol. 46, no. 1, pp. 72–79, Jan. 2008.

[6] J. Fernandez, T. Taleb, M. Guizani, and N. Kato, “Bandwidth aggregation- aware dynamic QoS negotiation

for real-time video applications in next-generation wireless networks,” IEEE Trans. Multimedia, vol. 11, no. 6,

pp. 1082–1093, Oct. 2009.

[7] T. Taleb, K. Kashibuchi, A. Leonardi, S. Palazzo, K. Hashimoto, N. Kato, and Y. Nemoto, “A cross-layer

approach for an efficient delivery of TCP/RTP-based multimedia applications in heterogeneous wireless

networks,” IEEE Trans. Veh. Technol., vol. 57, no. 6, pp. 3801–3814, Jun. 2008.

[8] K. Zhang, J. Kong, M. Qiu, and G. L. Song, “Multimedia layout adaptation through grammatical

specifications,” ACM/SpringerMultimedia Syst., vol. 10, no. 3, pp. 245–260.

[9] M. Wien, R. Cazoulat, A. Graffunder, A. Hutter, and P. Amon, “Real-time system for adaptive video

streaming based on SVC,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, pp. 1227–1237, Sep. 2007.

[10] H. Schwarz, D. Marpe, and T. Wiegand, “Overview of the scalable video coding extension of the

H.264/AVC standard,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, pp. 1103–1120, Sep. 2007.

[11] H. Schwarz and M. Wien, “The scalable video coding extension of the H. 264/AVC standard,” IEEE Signal

Process. Mag., vol. 25, no. 2, pp. 135–141, Feb. 2008.

[12] P. McDonagh, C. Vallati, A. Pande, and P. Mohapatra, “Quality-oriented scalable video delivery using H.

264 SVC on an LTE network,” in Proc. WPMC, 2011.

AUTHORS BIOGRAPHY

G. Divya is pursuing her M.Tech in Computer Science at Vardhaman College of Engineering, Kacharam village, Shamshabad Mandal, Ranga Reddy District, A.P., India, affiliated to Jawaharlal Nehru Technological University, Hyderabad, and approved by AICTE, New Delhi.

E. R. Aruna is an Asst. Prof. of IT at Vardhaman College of Engineering, Kacharam village, Shamshabad Mandal, Ranga Reddy District, A.P., India. She graduated from the Computer Science Dept. at JNTUH, Hyderabad in 2004. She finished her master's degree (M.Tech) at Sathyabama University, Chennai, TN in 2008. She is pursuing her Ph.D. in Computer Science and Engineering at JNTU, Hyderabad.


INTRUSION DETECTION SYSTEM USING DYNAMIC AGENT

SELECTION AND CONFIGURATION

Manish Kumar1, M. Hanumanthappa2
1Assistant Professor, Dept. of Master of Computer Applications, M. S. Ramaiah Institute of Technology, Bangalore, and Research Scholar, Department of Computer Science and Applications, Bangalore University, Bangalore, India
2Dept. of Computer Science and Applications, Jnana Bharathi Campus, Bangalore University, Bangalore 560 056, India

ABSTRACT Intrusion detection is the process of monitoring the events occurring in a computer system or network and analysing them for signs of possible incidents, which are violations or imminent threats of violation of computer security policies, acceptable use policies, or standard security practices. An intrusion detection system (IDS) monitors network traffic for suspicious activity and alerts the system or network administrator. It identifies unauthorized use, misuse, and abuse of computer systems by both system insiders and external penetrators. Intrusion detection systems are essential components in a secure network environment, allowing for early detection of malicious activities and attacks. By employing information provided by an IDS, it is possible to apply appropriate countermeasures and mitigate attacks that would otherwise seriously undermine network security. However, increasing traffic and the necessity of stateful analysis impose strong computational requirements on network intrusion detection systems (NIDS) and motivate the need for architectures with multiple dynamic sensors. In a context of high traffic with heavy-tailed characteristics, static rules for dispatching traffic slices among sensors cause severe imbalance. The current high volumes of network traffic overwhelm most IDS techniques, requiring new approaches that are able to handle huge volumes of log and packet analysis while still maintaining high throughput. This paper shows that the use of dynamic agents has practical advantages for intrusion detection. Our approach features unsupervised adjustment of its configuration and dynamic adaptation to the changing environment, which improves the performance of the IDS significantly.

KEYWORDS—Intrusion Detection System, Agent Based IDS, Dynamic Sensor Selection.

I. INTRODUCTION

Intrusion detection is the process of monitoring and analysing information sources in order to detect malicious activity. It has been an active field of research for over two decades. James Anderson's "Computer Security Threat Monitoring and Surveillance", published in 1980, embarked upon this field and remains one of the earliest and most famous papers in the field. After that, in 1987, Dorothy Denning published "An Intrusion Detection Model", which provided a methodological framework that inspired many researchers around the world and laid the groundwork for early commercial products like RealSecure, Tripwire, Snort, Shadow, STAT, etc.

Intrusion detection technology has evolved and emerged as one of the most important security solutions. It has several advantages and is unique compared to other security tools. As information systems have become more comprehensive and higher-value assets of organizations, intrusion detection systems have been incorporated as elements of operating systems and networks.

Intrusion detection systems (IDS) have a few basic objectives. Among these objectives are

Confidentiality, Integrity, Availability, and Accountability.


Intrusion Detection Systems (IDS) are important mechanisms which play a key role in network security and self-defending networks. Such systems perform automatic detection of intrusion attempts and malicious activities in a network through the analysis of traffic captures and collected data in general. Such data is aggregated, analysed and compared to a set of rules in order to identify attack signatures, which are traffic patterns present in captured traffic or security logs that are generated by specific types of attacks. In the process of identifying attacks and malicious activities, an IDS parses large quantities of data searching for patterns which match the rules stored in its signature database. Such a procedure demands high processing power and fast data storage access in order to be executed efficiently in large networks. The next part of the paper discusses the classification of intrusion detection systems. Section II of the paper discusses dynamic sensor agents for improving the performance of the IDS. Section III discusses the algorithm for using dynamic agents to improve the performance of the IDS. Section IV analyses and shows the improvement in performance of the agent-based IDS implementation, followed by the conclusion and future work in Section V.

1.1 Classification of Intrusion Detection Systems

Intrusions can be divided into six main types:

1. Attempted break-ins, which are detected by atypical behaviour profiles or violations of security constraints.
2. Masquerade attacks, which are detected by atypical behaviour profiles or violations of security constraints.
3. Penetration of the security control system, which is detected by monitoring for specific patterns of activity.
4. Leakage, which is detected by atypical use of system resources.
5. Denial of service, which is detected by atypical use of system resources.
6. Malicious use, which is detected by atypical behaviour profiles, violations of security constraints, or use of special privileges.

However, intrusion detection techniques can be divided into two main types. IDSs issue security alerts when an intrusion or suspicious activity is detected through the analysis of different aspects of collected data (e.g., packet capture files and system logs). Classical intrusion detection systems are based on a set of attack signatures and filtering rules which model the network activity generated by known attacks and intrusion attempts [8]. Intrusion detection systems detect malicious activities through basically two approaches: anomaly detection and signature detection [9][21][20].

i. Anomaly Detection

This technique is based on the detection of traffic anomalies. The deviation of the monitored traffic

from the normal profile is measured. Various implementations of this technique have been proposed, based on the metrics used for measuring traffic profile deviation.

Anomaly detection techniques assume that all intrusive activities are necessarily anomalous. This

means that if we could establish a "normal activity profile" for a system, we could, in theory, flag all

system states varying from the established profile by statistically significant amounts as intrusion

attempts. However, if we consider that the set of intrusive activities only intersects the set of

anomalous activities instead of being exactly the same, we find a couple of interesting possibilities:

(1) Anomalous activities that are not intrusive are flagged as intrusive. (2) Intrusive activities that are

not anomalous result in false negatives (events are not flagged intrusive, though they actually are).

This is a dangerous problem, and is far more serious than the problem of false positives.

The main issues in anomaly detection systems thus become the selection of threshold levels, so that neither of the above two problems is unreasonably magnified, and the selection of features to monitor. Anomaly detection systems are also computationally expensive because of the overhead of keeping track of, and possibly updating, several system profile metrics. Some systems based on this technique are discussed in Section 4, while a block diagram of a typical anomaly detection system is shown in Fig 1.


Fig 1:- IDS Anomaly Detection System
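To make the "normal activity profile" idea concrete, here is a minimal statistical sketch; the monitored feature, the sample data and the 3-sigma threshold are illustrative assumptions, not details from the paper:

```python
# Minimal "normal activity profile": flag values more than k standard
# deviations away from the learned mean as possible intrusion attempts.
from statistics import mean, stdev

def build_profile(history):
    return mean(history), stdev(history)

def is_anomalous(value, profile, k=3.0):
    mu, sigma = profile
    return abs(value - mu) > k * sigma

# e.g. logins per hour observed during normal operation (fabricated)
profile = build_profile([12, 15, 11, 14, 13, 16, 12])
print(is_anomalous(14, profile), is_anomalous(55, profile))  # False True
```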

ii. Misuse Detection

This technique looks for patterns and signatures of already known attacks in the network traffic. A

constantly updated database is usually used to store the signatures of known attacks. The way this

technique deals with intrusion detection resembles the way that anti-virus software operates.

The concept behind misuse detection schemes is that there are ways to represent attacks in the form of

a pattern or a signature so that even variations of the same attack can be detected. This means that

these systems are not unlike virus detection systems -- they can detect many or all known attack

patterns, but they are of little use for as yet unknown attack methods. An interesting point to note is

that anomaly detection systems try to detect the complement of "bad" behaviour. Misuse detection

systems try to recognize known "bad" behaviour. The main issues in misuse detection systems are

how to write a signature that encompasses all possible variations of the pertinent attack, and how to

write signatures that do not also match non-intrusive activity. Several methods of misuse detection,

including a new pattern matching model are discussed later. A block diagram of a typical misuse

detection system is shown in Fig 2 below.

Fig 2:-IDS Misuse Detection System
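As a toy illustration of the signature-matching idea (the rule names and patterns below are hypothetical, and far simpler than real signature languages such as Snort rules):

```python
# Toy misuse detector: match an event payload against a signature
# database. Patterns are illustrative regular expressions only.
import re

SIGNATURES = {
    "web-traversal": re.compile(r"\.\./\.\./"),
    "sql-injection": re.compile(r"(?i)union\s+select"),
}

def match_signatures(payload):
    """Return the names of all signatures matching the payload."""
    return [name for name, pat in SIGNATURES.items() if pat.search(payload)]

print(match_signatures("GET /index.php?id=1 UNION SELECT password"))
```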

Intrusion detection systems can further be classified into two groups: Network Intrusion Detection Systems (NIDS), which are based on data collected directly from the network, and Host Intrusion Detection Systems (HIDS), which are based on data collected from individual hosts. HIDSs are composed basically of software agents which analyse application and operating system logs, file system activities, local databases and other local data sources, reliably identifying local intrusion attempts. Such systems are not affected by switched network environments (which segment traffic flows) and are effective in environments where network packets are encrypted (thwarting usual traffic analysis techniques). However, they demand high processing power, overloading the nodes' resources, and may be affected by denial-of-service attacks. In the face of the growing volume of network traffic and high transmission rates, software-based NIDSs present performance issues, not being able to analyse all the captured packets rapidly enough. Some hardware-based NIDSs offer the necessary analysis throughput, but the cost of such systems is too high in relation to software-based alternatives.

From the above, it is clear that as IDS grow in function and evolve in power, they also evolve in

complexity. Agents of each new generation of IDS use agents of the previous generation as data

sources, applying ever more sophisticated detection algorithms to determine ever more targeted

responses. Often, one or more IDS and management system(s) may be deployed by an organization

within its own network, with little regard to their neighbours or the global Internet. Just as all

individual networks and intranets connect to form "The Internet", so can information from stand-alone


internal and perimeter host- and network-based intrusion detection systems be combined to create a

distributed Intrusion Detection System (dIDS).

Current IDS technology is increasingly unable to protect the global information infrastructure due to several problems:

i. The existence of single-intruder attacks that cannot be detected based on the observations of only a single site.
ii. Coordinated attacks involving multiple attackers that require global scope for assessment.
iii. Normal variations in system behaviour and changes in attack behaviour that cause false detection and identification.
iv. Detection of attack intention and trending is needed for prevention.
v. Advances in automated and autonomous attacks, i.e. rapidly spreading worms, which require rapid assessment and mitigation.
vi. The sheer volume of attack notifications received by ISPs and host owners can become overwhelming.
vii. If aggregated attack details are provided to the responsible party, the likelihood of a positive response increases.

II. DYNAMIC SENSOR SELECTION

In our proposed architecture, IDS logs can be collected from multiple sensors or agents. In this section, we present a trust-based algorithm which dynamically determines the best aggregation agent, and also the optimal number of known malicious or legitimate behaviour instances necessary for the reliable identification of the best aggregation agent, while taking into account: (i) the past effectiveness of the individual aggregation agents, and (ii) the number of aggregation agents and the perceived differences in their effectiveness. We decided to use a trust-based approach for evaluating the aggregation agents because it not only eliminates the noise in the background traffic and the randomness of the challenge selection process, but also accounts for the fact that attackers might try to manipulate the system by inserting misleading traffic flows. An attacker could insert fabricated flows [15], hoping they would cause the system to select an aggregation agent that is less sensitive to the threat the attacker actually intends to realize. When using trust, one tries to avoid this manipulation by dynamically adapting to the more recent actions of an attacker [4][16].

The problem features a set of classifier agents A = {α1, ..., αg} that process a single, shared, open-ended sequence φ1, ..., φi, ... of incoming events and use their internal models to divide these events into two categories: normal and anomalous. The events are inherently of two fundamental types, legitimate and malicious, and the goal of the classifier agents is to ensure that the normal class as provided by the agent is the best possible match to the legitimate traffic class, while the anomalous class should match the malicious class. The classification thus has four possible outcomes [17] for each event φ, two of them being correct classifications and two of them errors (see also the

confusion matrix in Table 1).

Table 1: Confusion Matrix

                              actual class
                      legitimate        malicious
classification
  normal              true positive     false positive
  anomalous           false negative    true negative

The classifier agents actually provide more information, as they internally annotate the individual events with a continuous "normality" value in the [0, 1] interval, with the value 1 corresponding to perfectly normal events and the value 0 to completely anomalous ones. This continuous anomaly value describes an agent's opinion regarding the anomaly of the event, and the agents apply adaptive or predefined thresholds to split the [0, 1] interval into the normal and anomalous classes.
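For instance, the split induced by such a threshold can be sketched in a couple of lines (the threshold value here is an assumption; in the system it is adaptive or predefined per agent):

```python
# Split continuous "normality" values in [0, 1] into crisp classes.
def classify(normality, threshold=0.5):
    return "normal" if normality >= threshold else "anomalous"

print([classify(v) for v in (0.92, 0.47, 0.05)])
```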

Given that the characteristics of the individual classifier agents αk are unknown in the dynamically changing environment, the system needs to be able to identify the optimal classifier autonomously. Furthermore, the system can have several users with different priorities regarding the detection of specific types of malicious events. In the network monitoring use case, some of the users concentrate on major, infrastructure-type events only (such as denial-of-service attacks), while the other users seek information about more subtle attack techniques targeting individual hosts. The users are represented by user agents, and these agents are assumed to know their users' preferences. Their primary goal is to use their knowledge of user preferences to dynamically identify the optimal information source and to change the source when the characteristics of the environment or the user preferences change. To reach this goal in an environment where they have no abstract model of the classifier agents' performance, they rely on empirical analysis of the classifier agents' responses to a pre-classified set of challenges [11][19].

In the following, we analyse the problem from the perspective of a single user agent, which tries to select the best available classification agent while keeping the number of challenges as low as possible. The challenges are events with a known classification, which can be inserted into the flow of background events as observed by the system, processed by the classifier agents together with the background events, and finally removed before the system reports the results to the users. The processing of the challenges allows the user agents to identify the agent which achieves the best separation of the challenges that represent known instances of legitimate behaviour from the challenges that represent known malicious behaviour [13][7][18].

III. ALGORITHM

In this section we present a simple but adaptive algorithm for choosing the best classifier agent. For each time step i ∈ N, the algorithm proceeds as follows:

i. Let each aggregation agent classify a set of known instances of malicious behaviour drawn from different attack classes, together with selected known instances of legitimate behaviour.
ii. Update the trust value of each aggregation agent, based on its performance on these known instances in time step i.
iii. Accept the output of the aggregation agent with the highest trust value as the classification of the remaining events of time step i.

We thus challenge the detection and aggregation agents in each time step i with sets of flows for which we already know the actual class, i.e. whether they are malicious or legitimate. So, we challenge an aggregation agent α with a set of malicious events belonging to K attack classes and a set of legitimate events drawn from a single class. With respect to each class of attacks k, the performance of the agent is described by a mean and a standard deviation: (x̄k, σxk) for the set of malicious challenges and (ȳ, σy) for the set of legitimate challenges. Both means lie in the interval [0, 1], and x̄k close to 0 and ȳ close to 1 signify accurate classifications by the agent.
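A compact sketch of one round of this procedure is given below; the exponential-smoothing trust update and the scoring rule (rewarding x̄k near 0 and ȳ near 1) are simplifying assumptions for illustration, not the authors' exact update:

```python
# Sketch: trust-based selection of the best aggregation agent.
# A score in [0, 1]: 1 = perfect separation of malicious (x near 0)
# and legitimate (y near 1) challenges.

def challenge_score(malicious_scores, legit_scores):
    x_bar = sum(malicious_scores) / len(malicious_scores)
    y_bar = sum(legit_scores) / len(legit_scores)
    return ((1 - x_bar) + y_bar) / 2

def update_trust(trust, agent, score, alpha=0.2):
    # Exponential smoothing keeps the model adaptive to recent
    # behaviour, limiting the effect of inserted misleading flows.
    trust[agent] = (1 - alpha) * trust.get(agent, 0.5) + alpha * score

def select_agent(trust):
    return max(trust, key=trust.get)

# One time step for three hypothetical aggregation agents:
trust = {}
responses = {
    "A": ([0.1, 0.2], [0.9, 0.8]),   # (malicious, legitimate) outputs
    "B": ([0.4, 0.5], [0.6, 0.7]),
    "C": ([0.2, 0.1], [0.7, 0.9]),
}
for agent, (mal, leg) in responses.items():
    update_trust(trust, agent, challenge_score(mal, leg))
print("selected:", select_agent(trust))   # agent A separates best
```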

The system used to perform the experiments described in this paper incorporates five different anomaly detection [5] techniques presented in the literature [10][1][12][2]. Each of the methods works with a different traffic model based on a specific combination of aggregate traffic features, such as:

• Entropies of flow characteristics for individual source IP addresses;
• Deviation of flow entropies from the PCA-based prediction model of individual sources;
• Deviation of traffic volumes from the PCA-based prediction for individual major sources;
• Rapid surges in the number of flows with given characteristics from the individual sources; and
• Ratios between the number of destination addresses and port numbers for individual sources.

(A toy computation of the first of these features is sketched below.)
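The following snippet illustrates the entropy of a flow characteristic (here destination ports) per source IP; the flows are fabricated for the example:

```python
# Illustrative computation of per-source flow-feature entropy.
from collections import Counter
from math import log2

def entropy(values):
    counts = Counter(values)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * log2(p) for p in probs) if len(probs) > 1 else 0.0

# Flows as (src_ip, dst_port) pairs; entropy near 0 = concentrated
# traffic, while high entropy may indicate scanning behaviour.
flows = [("10.0.0.1", 80), ("10.0.0.1", 80),
         ("10.0.0.2", 21), ("10.0.0.2", 22), ("10.0.0.2", 23)]
by_src = {}
for src, port in flows:
    by_src.setdefault(src, []).append(port)
for src, ports in by_src.items():
    print(src, round(entropy(ports), 3))
```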

These algorithms maintain a model of expected traffic on the network and compare it with the real traffic to identify the discrepancies that are flagged as possible attacks. They are effective against zero-day attacks and previously unknown threats, but suffer from a comparatively higher error rate [17][10][11], frequently classifying legitimate traffic as anomalous (false positives) or failing to spot malicious flows (false negatives). The classifier agents can be divided into two distinct classes:

• Detection agents analyse raw network flows with their anomaly detection algorithms, exchange the anomalies between themselves, and use the aggregated anomalies to build and update the long-term anomaly associated with the abstract traffic classes built by each agent. Each detection agent uses one of the five anomaly detection techniques mentioned above. All agents map the same events (flows), together with the same evaluation of these events (the aggregated immediate anomaly of these events determined by their anomaly detection algorithms), into traffic clusters built using different features/metrics, thus building the aggregate anomaly hypothesis on different premises. The aggregated anomalies associated with the individual traffic classes are built and maintained using classic trust modelling techniques (not to be confused with the way trust is used in this work).

• Aggregation agents represent the various aggregation operators used to build the joint conclusion regarding the normality/anomaly of the flows from the individual opinions provided by the detection agents. Each agent uses a distinct averaging operator (based on order-weighted averaging or simple weighted averaging) to perform the R^gdet → R transformation from the gdet-dimensional space to a single real value, thus defining one composite system output that integrates the results of several detection agents. The aggregation agents also dynamically determine the threshold values used to transform the continuous aggregated anomaly value in the [0, 1] interval into a crisp normal/anomalous assessment for each flow (an illustrative computation is sketched below).
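The sketch below illustrates one such operator: an order-weighted average (OWA) over the detection agents' opinions followed by a threshold; the weights and the threshold are assumptions for the example:

```python
# Order-weighted averaging (OWA): sort the g_det opinions, then take a
# weighted sum, i.e. the R^g_det -> R transformation; finally threshold.
def owa(opinions, weights):
    ordered = sorted(opinions, reverse=True)   # weights apply to ranks
    return sum(w * v for w, v in zip(weights, ordered))

def aggregate(opinions, weights, threshold=0.5):
    value = owa(opinions, weights)
    return value, "normal" if value >= threshold else "anomalous"

# Normality opinions of five detection agents for one flow; the larger
# weights at the end emphasise the most pessimistic (lowest) opinions.
opinions = [0.9, 0.8, 0.6, 0.3, 0.7]
weights = [0.1, 0.1, 0.2, 0.3, 0.3]
print(aggregate(opinions, weights))   # (0.58, 'normal')
```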

The user agent functionality is implemented as a collection of agents. The user agent creates individual challenge agents, each of them representing a specific incident in the past; these temporary, single-purpose agents interact with the data-provisioning layers of the system in order to insert the flows related to the incident into the background traffic and to retrieve and analyse the detection results provided by the classifier agents.

IV. RESULTS AND PERFORMANCE ANALYSIS OF AGENT BASED IDS

We have simulated and tested the IDS using the KDD Cup 1999 dataset. The implementation gives us the expected results: the agent-based IDS prototype we are testing detects the simulated attacks. The question is: why is realizing the system with agents advantageous? We implemented a centralized system with local sensors that forward filtered data to a central analysis node and compared it with the agent-based IDS.

The agent-based IDS has proven itself to be capable of handling very high traffic. In such a design, the incoming network traffic is disseminated to a pool of agents, each of which processes a fraction of the whole traffic, reducing the possibility of packet loss caused by overload. The agent IDS could support a load of up to 56 Mbps (450 packets/second) with zero traffic loss. Moreover, we focus on a second important criterion for an IDS: detection delay, which is defined as the duration from the time the attack starts to the time epoch at which the attack is detected. We generated sets of packets varying from 1000 to 8000. For each set we simulated the attack and calculated the detection delay. Figure 3 plots the measurement results. The detection delay is significantly reduced; the agent IDS is much faster than the centralized IDS. For example, in the case of 8000 packets, the detection delay falls from 7.91 s to 4.4 s, i.e. to about 56% of the centralized value (a reduction of roughly 44%). This can be explained by the fact that agents operate directly on the host where an action has to be taken, so their response is faster than in systems where the actions are taken by a central coordinator.

In fact, one of the most pressing problems facing current IDSs is the processing of the enormous amounts of data generated by network traffic monitoring tools and host-based audit logs. IDSs typically process most of this data locally. Agents offer an opportunity to reduce the network load by eliminating the need for this data transfer. Instead of transferring the data across the network, agents can be dispatched to the machine on which the data resides, essentially moving the computation to the data instead of moving the data to the computation. Obviously, the code-shipping versus data-shipping argument is only valid if the agent's code and state that have to be transmitted are not larger than the amount of data that can be saved by the use of an agent. The agent IDS performs better not only in terms of effectiveness but also in terms of detection delay.


Fig 3: Performance of Centralized IDS vs. Agent-Based IDS

V. CONCLUSION AND FUTURE WORK

We take advantage of the multi-agent paradigm, especially concerning the reduction of network load. Indeed, agents offer the possibility of eliminating the need to transfer a huge amount of data for analysis. In this paper we have explained the architectural design and performance analysis of a centralized IDS versus an agent-based IDS. The experimental results were positive, and we found that this work can be continued with several further improvements and performance analyses. As network attacks are becoming more and more alarming, exploiting system faults and performing malicious actions, the need to provide effective intrusion detection methods increases. Network-based, distributed attacks are especially difficult to detect and require coordination among different intrusion detection components or systems. The experiments emphasize the aim of applying agents to detect some kinds of intrusions and to compete with other IDSs.

ACKNOWLEDGMENT

I would like to thank the MSRIT Management, my colleagues, and the Dept. of Computer Science and Applications, Bangalore University, for their valuable suggestions, constant support and encouragement.

REFERENCES

[1] A. Lakhina, M. Crovella, and C. Diot. Mining Anomalies using Traffic Feature Distributions. In ACM

SIGCOMM, Philadelphia, PA, August 2005,pages 217–228, New York, NY, USA, 2005. ACM Press.

[2] A. Sridharan, T. Ye, and S. Bhattacharyya. Connectionless port scan detection on the backbone.

Phoenix, AZ, USA, 2006.

[3] Axelsson, Stefan, “Intrusion Detection Systems: A Taxonomy and Survey”, Technical Report No 99-

15, Dept. of Computer Engineering, Chalmers University of Technology, Sweden, March 2000.

[4] Chang-Lung Tsai; Chang, A.Y.; Chun-Jung Chen; Wen-Jieh Yu; Ling-Hong Chen, "Dynamic intrusion

detection system based on feature extraction and multidimensional hidden Markov model

analysis," Security Technology, 2009. 43rd Annual 2009 International Carnahan Conference on , vol.,

no., pp.85,88, 5-8 Oct. 2009.

[5] D. E. Denning. An intrusion-detection model. IEEE Trans. Softw. Eng., 13(2):222–232, 1987.

[6] F. A. Barika, N. El Kadhi & K. Ghédira, “Agent IDS based on Misuse Approach”, Journal of Software, Vol. 4, No. 6, pp. 495-507, August 2009.

[7] Guangcheng Huo; Xiaodong Wang, "DIDS: A dynamic model of intrusion detection system in wireless

sensor networks," Information and Automation, 2008. ICIA 2008. International Conference on , vol.,

no., pp.374,378, 20-23 June 2008.


[8] H.Debar, M. Dacier, and A. Wespi, “Towards a taxonomy of Intrusion Detection Systems”, The

International Journal of Computer and Telecommunications Networking - Special issue on computer

network security, Volume 31 Issue 9, Pages 805 – 822, April 23, 1999,

[9] Jun Wu; Chong-Jun Wang; Jun Wang; Shi-Fu Chen, "Dynamic Hierarchical Distributed Intrusion

Detection System Based on Multi-Agent System," Web Intelligence and Intelligent Agent Technology

Workshops, 2006. WI-IAT 2006 Workshops. 2006 IEEE/WIC/ACM International Conference on ,

vol., no., pp.89,93, Dec. 2006

[10] K. Xu, Z.-L. Zhang, and S. Bhattacharrya. Reducing Unwanted Traffic in a Backbone Network. In

USENIX Workshop on Steps to Reduce Unwanted Traffic in the Internet (SRUTI), Boston, MA, July

2005.

[11] Kumar, G.V.P.; Reddy, D.K., "An Agent Based Intrusion Detection System for Wireless Network with

Artificial Immune System (AIS) and Negative Clone Selection," Electronic Systems, Signal Processing

and Computing Technologies (ICESC), 2014 International Conference on , vol., no., pp.429,433, 9-11

Jan. 2014.

[12] L. Ertoz, E. Eilertson, A. Lazarevic, P.-N. Tan,V. Kumar, J. Srivastava, and P. Dokas. Minds -

minnesota intrusion detection system. In NextGeneration Data Mining. MIT Press, 2004.

[13] Lin Zhao-wen; Ren Xing-tian; Ma Yan, "Agent-based Distributed Cooperative Intrusion Detection

System," Communications and Networking in China, 2007. CHINACOM '07. Second International

Conference on , vol., no., pp.17,22, 22-24 Aug. 2007.

[14] Martin Rehak, Eugen Staab, Michal Pechoucek, Jan Stiborek, Martin Grill, and Karel Bartos. 2009.

Dynamic information source selection for intrusion detection systems. In Proceedings of The 8th

International Conference on Autonomous Agents and Multiagent Systems - Volume 2(AAMAS '09),

Vol. 2. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 1009-

1016.

[15] Martin Rehak, Eugen Staab, Volker Fusenig, Michal Pechoucek, Martin Grill, Jan Stiborek, Karel

Bartos, and Thomas Engel, “Runtime Monitoring and Dynamic Reconfiguration for Intrusion

Detection Systems”, Proceedings of 12th International Symposium, RAID 2009, Saint-Malo, France,

September 23-25, pp 61-80, 2009.

[16] Paez, R.; Torres, M., "Laocoonte: An agent based Intrusion Detection System," Collaborative

Technologies and Systems, 2009. CTS '09. International Symposium on , vol., no., pp.217,224, 18-22

May 2009.

[17] S. Northcutt and J. Novak. Network Intrusion Detection: An Analyst’s Handbook. New Riders

Publishing, Thousand Oaks, CA, USA, 2002.

[18] Sun-il Kim; Nwanze, N.; Kintner, J., "Towards dynamic self-tuning for intrusion detection

systems," Performance Computing and Communications Conference (IPCCC), 2010 IEEE 29th

International, vol., no., pp.17,24, 9-11 Dec. 2010.

[19] Weijian Huang; Yan An; Wei Du, "A Multi-Agent-Based Distributed Intrusion Detection

System," Advanced Computer Theory and Engineering (ICACTE), 2010 3rd International Conference

on , vol.3, no., pp.V3-141,V3-143, 20-22 Aug. 2010.

[20] Yinan Li; Zhihong Qian, "Mobile Agents-Based Intrusion Detection System for Mobile Ad Hoc

Networks," Innovative Computing & Communication, 2010 Intl Conf on and Information Technology

& Ocean Engineering, 2010 Asia-Pacific Conf on (CICC-ITOE) , vol., no., pp.145,148, 30-31 Jan.

2010.

[21] Yu Cai, Hetal Jasani, “Autonomous Agents based Dynamic Distributed (A2D2) Intrusion Detection

System”, Innovative Algorithms and Techniques in Automation, Industrial Electronics and

Telecommunications 2007, pp 527-533

AUTHORS

Manish Kumar is working as Asst. Professor in the Department of Computer Applications, M. S. Ramaiah Institute of Technology, Bangalore, India. His areas of interest are Cryptography and Network Security, Computer Forensics, Mobile Computing and eGovernance. His specialization is in Network and Information Security. He has also worked on R&D projects on theoretical and practical issues of a conceptual framework for E-Mail, Web site and Cell Phone tracking, which could assist in curbing misuse of Information Technology and Cyber Crime. He has published several papers in International and National Conferences and Journals. He has delivered expert lectures in various academic Institutions.


M Hanumanthappa is currently working as Associate Professor in the Department of Computer Science and Applications, Bangalore University, Bangalore, India. He has over 17 years of teaching (Post Graduate) as well as industry experience. He is a member of the Board of Studies / Board of Examiners for various Universities in Karnataka, India. He is actively involved in funded research projects and guides research scholars in the fields of Data Mining and Network Security.


EVALUATION OF CHARACTERISTIC PROPERTIES OF RED

MUD FOR POSSIBLE USE AS A GEOTECHNICAL MATERIAL

IN CIVIL CONSTRUCTION

Kusum Deelwal¹, Kishan Dharavath², Mukul Kulshreshtha³
¹Ph.D. Scholar, ²Assistant Professor, ³Professor, MANIT, Bhopal, M.P., India

ABSTRACT
Red mud is a byproduct of the extraction of alumina from bauxite by the Bayer process. It is an insoluble product generated after bauxite digestion with sodium hydroxide at elevated temperature and pressure. This paper describes the characteristic properties of red mud and its possible use as a geotechnical material. Basic properties such as specific gravity, particle size distribution, Atterberg limits, optimum moisture content (OMC) and maximum dry density (MDD) are determined. Engineering properties such as shear strength, permeability and CBR values are also determined in conformity with the Indian Standard codes, and the test results are discussed from a geotechnical point of view. The results reveal that red mud behaves like a clayey soil, but with considerably higher strength than conventional clay soil.

KEY WORDS: Red mud, Bayer’s process, Bauxite residue.

I. INTRODUCTION

Industrialization and urbanization are two worldwide phenomena. Though they are necessities of society and are mostly inevitable, one has to consider their negative impacts on the global environment and social life. The major ill effect of these global processes is the production of large quantities of industrial waste and the problems related to its safe management and disposal. A second problem is the scarcity of land, materials and resources for ongoing developmental activities, including infrastructure.

Red mud is produced during alumina production. Depending on the raw material processed, 1–2.5 tons of red mud is generated per ton of alumina produced [1]. In India, about 4.71 million tons per annum of red mud is produced, which is about 6.25% of the world's total generation [2]. Red mud is a mixture of compounds originally present in the parent mineral bauxite and of compounds formed or introduced during the Bayer cycle. It is disposed of as a slurry having a solid concentration in the range of 10-30%, pH in the range of 10-13 and high ionic strength.

Considerable research and development work on the storage, disposal and utilization of red mud is being carried out all over the world [3]. This article provides an overview of the basic characteristics of red mud and summarizes the main ways of comprehensively utilizing it. It describes the progress of experimental research and comprehensive utilization, with the aim of providing some valuable information to further address the comprehensive utilization of red mud.

II. ORGANIZATION OF MANUSCRIPT

The present work is divided into two main stages: i) experimental work and ii) utilization of red mud as a geotechnical construction material. The experimental work is further divided into two parts: first, tests determining the index properties; second, tests determining the engineering properties. The second stage describes the possible utilization of red mud as a geotechnical material.


III. MATERIAL: RED MUD

Red mud is one of the major solid wastes from the Bayer process of alumina production. For the present work it was collected from HINDALCO at Renukoot, Uttar Pradesh. The conventional method of disposing of red mud in ponds often has adverse environmental impacts: during monsoons the waste may be carried by run-off to surface water courses, and leaching may contaminate the ground water. Further, the disposal of large dumped quantities of red mud poses increasing storage problems, as it occupies a lot of space.

IV. CHARACTERISTIC PROPERTIES OF RED MUD

Index Properties:
The specific gravity of the red mud was determined as per IS: 2720 (Part II) 1980. The experiment was performed using both the pycnometer method and the density-bottle method. The specific gravity of the red mud was found to be 3.04.
The particle size distribution of the red mud was determined as per IS: 1498 – 1970. As the material consists of nearly 90 percent silt and clay, wet sieve analysis was carried out. The particles passing through the 75 micron sieve were collected and subjected to hydrometer analysis to determine the particle size variation. About 87.32% of the total mass passed through the 75 micron sieve. Fig. 1 shows the plot of particle diameter against percentage finer.

Fig. 1: Particle size distribution (percentage finer vs. particle diameter)

The Standard Proctor test was carried out to determine the maximum dry density and optimum moisture content of the red mud, as per IS: 2720 (Part VII); light compaction was adopted. The variation of dry density with water content is shown in Fig. 2.

Fig. 2: Dry density vs. water content

Page 421: Ijaet volume 7 issue 3 july 2014

International Journal of Advances in Engineering & Technology, July, 2014.

©IJAET ISSN: 22311963

1055 Vol. 7, Issue 3, pp. 1053-1059

The Atterberg limits of the red mud were determined as per IS: 2720 (Part V). The corresponding liquid limit, plastic limit and plasticity index are shown in Table 1.

Table: 1. Strength and physical parameters of red mud

S.NO Tests Values

1 Maximum dry Density (g/cc) 1.53

2 Optimum moisture Content (%) 33.5

3 Specific Gravity 3.04

4 Liquid Limit (%) 45.5

5 Plastic Limit (%) 32.3

6 Classification ML

7 Cohesion (kg/cm2) 0.123

8 Angle of internal friction (degrees) 26.8
9 CBR (%): Soaked 4.2, Unsoaked 7.8

Chemical and Mineral Compositions of Red Mud

Red mud is mainly composed of fine particles. Its composition, properties and phases vary with the origin of the bauxite and the alumina production process, and change over time when stocked. Chemical analysis shows that red mud contains silicon, aluminium, iron, calcium, titanium and sodium, as well as an array of minor elements, namely K, Cr, V, Ba, Cu, Mn, Pb, Zn, P, F, S and As. Tables 2 and 3 list the chemical and mineral compositions of red mud produced by the Bayer process [4].

Table 2. Typical composition of red mud

Composition Percentage

Fe2O3 30-60%

Al2O3 10-20%

SiO2 3-50%

Na2O 2-10%

CaO 2-8%

TiO2 Trace-25%

Mineralogical Phases:

The mineralogical phases of red mud are listed below [5]:
Hematite Fe2O3
Goethite FeO(OH)
Gibbsite Al(OH)3
Diaspore AlO(OH)
Quartz SiO2
Cancrinite (NaAlSiO4)6·CaCO3
Kaolinite Al2O3·2SiO2·2H2O
Calcite CaCO3

Engineering Properties.

The permeability test was carried out as per IS: 2720 (Part XVII). The coefficient of permeability of the red mud specimen was found using the falling head method and was 5.786 × 10-7 cm/s.
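For reference, the standard falling-head relation behind this value is k = 2.303 (aL/(At)) log10(h1/h2), where a is the cross-sectional area of the standpipe, A and L are the area and length of the specimen, t is the elapsed time, and h1 and h2 are the initial and final heads.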

The triaxial compression test is best suited for clayey soil. Samples of size 38 mm diameter × 76 mm height were used. After applying a confining pressure (e.g. 0.5, 1.0 or 1.5 kg/cm2), the deviator stress is applied till failure. With a minimum of two readings, Mohr's stress circles are plotted; the line tangent to the circles is the failure envelope, from which the shear parameters, cohesion and angle of internal friction, are obtained. Results are shown in Table 1.
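Equivalently, the failure envelope is the Mohr-Coulomb line τ = c + σ tan φ: its intercept gives the cohesion c = 0.123 kg/cm2 and its slope the angle of internal friction φ = 26.8° reported in Table 1.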

Unconfined compressive strength: Samples of 38 mm diameter and 76 mm height were prepared by the static compaction method to achieve maximum dry density at optimum moisture content. Unconfined compressive strength tests were conducted at a strain rate of 1.25 mm/min. The results obtained are tabulated in Table 1.


California Bearing Ratio test: A sample of nearly 4.5 to 5 kg was compacted in a mould of volume 2250 cc in 5 layers, with 56 blows given per layer. For the soaked CBR value, a different sample of identical size was prepared and kept soaked for 4 days under surcharge. This test was conducted as per IS: 2720 (Part XXXI). The test results are entered in Table 1.

V. COMPREHENSIVE UTILIZATION OF RED MUD IN CONSTRUCTION

A. Red mud in cement replacement

Dicalcium silicate in red mud is also one of the main phases in cement clinker, and red mud can play the role of a crystallization agent in the production of cement clinker. Fly ash is mainly composed of SiO2 and Al2O3 and can thus be used to absorb the water contained in the red mud and improve the reactive silica content of the cement. Scientists have conducted a series of studies into the production of cement using red mud, fly ash, lime and gypsum as raw materials. The use of red mud cement not only reduces the energy consumption of cement production, but also improves the early strength of cement and its resistance to sulfate attack [6].

B. Concrete industry

Red mud from the Birac Alumina Industry, Serbia, was tested as a pigment for standard concrete mixtures in the building material industry. Red mud was added as a pigment in various proportions and forms (dried, not ground, ground, calcined) to concrete mixes of standard test blocks (ground limestone, cement and water) [7]. The idea of using red mud as a pigment was based on its extremely fine particles (upon sieving: up to 4 wt% at 0.147 mm, up to 25 wt% at 0.058 mm, and the majority smaller than 10 microns) and its characteristic red colour. Compressive strengths from 14.83 to 27.77 MPa for blocks containing between 1 and 32% red mud were considered satisfactory. The reported tests have shown that neutralized, dried, calcined and ground red mud is usable as a pigment in the building materials industry. A red oxide pigment containing about 70% iron oxide was prepared from NALCO red mud in [8] after hot water leaching, filtration, drying and sieving.

C. Red mud in the brick industry

D. Dodoo-Arhin et al. [9] investigated bauxite red mud-Tetegbu clay composites for their applicability in the ceramic brick construction industry as a means of recycling the bauxite waste. The initial raw samples were characterized by X-ray diffraction (XRD) and thermogravimetric (TG) analysis. The red mud-clay composites were formulated as 80%-20%, 70%-30%, 60%-40% and 50%-50%, and fired at sintering temperatures of 800ºC, 900ºC and 1100ºC. Generally, mechanical strength (modulus of rupture) increased with higher sintering temperature. The results obtained from the various characterization analyses, such as bulk densities of 1.59 g/cm3 and 1.51 g/cm3, compare very well with the literature and hold potential for eco-friendly, low-cost, recyclable construction materials from bauxite residue. Considering the physical and mechanical properties of the fabricated brick samples, the batch formulation containing 50% each of red mud and Tetegbu clay is considered the best combination, with optimal properties for construction bricks, and it could be employed in lighter weight structural applications.

VI. UTILIZATION OF RED MUD AS FILLING MATERIAL

A. Road Base Material

High-grade road base material using red mud from the sintering process is promising and may lead to large-scale consumption of red mud. Qi [10] suggested using red mud as a road material. Based on this work, a 15 m wide and 4 km long highway using red mud as a base material was constructed in Zibo, Shandong. The relevant department tested the subgrade stability and the strength of the road and concluded that the red mud base road meets the strength requirements of a highway [11].

B. Mining

Yang et al. [12], from the Institute of Changsha Mining Research, have studied the properties,

preparation and pump pressure transmission process of red mud paste binder backfill material. Based


on this study, a new technology named “pumped red mud paste cemented filling mining” has been

developed by the Institute of Changsha Mining Research, in cooperation with the Shandong

Aluminum Company. They mixed red mud, fly ash, lime and water in a ratio of 2:1:0.5:2.43, and then

pumped the mixture into the mine to prevent ground subsidence during bauxite mining. The tested 28-

day strength can reach to 3.24 MPa. This technology is a new way not only for the use of red mud, but

also for non-cement cemented filling, successfully resolving the problem of mining methods in the

Hutian bauxite stop. Underground exploitation practice on the bauxite has proved that cemented

filling technology is reliable and can effectively reduce the filling costs, increase the safety factor of

the stop and increase the comprehensive benefits of mining [13].

VII. RECOVERY OF COMPONENTS FROM RED MUD

Red mud primarily contains Fe2O3, Al2O3, SiO2, CaO, Na2O and K2O. Besides, it also contains other constituents, such as Li2O, V2O5, TiO2 and ZrO2. For instance, the TiO2 content of red mud produced in India can be as much as 24%. Because of the huge amount of red mud, valuable elements like Ga, Sc, Nb, Li, V, Rb, Ti and Zr constitute abundant secondary resources. Therefore, it is of great significance to recover metals, especially rare earth elements, from red mud.

Due to its high iron content, extensive research into the recovery of iron from Bayer process red mud has been carried out by scientists all over the world. The processes for recycling iron from red mud can be divided into roasting-magnetic recovery, the reducing smelting method, the direct magnetic separation method and the leaching extraction method, according to the different ways of separating the iron. Researchers in Russia, Hungary, America and Japan have carried out iron production experiments with red mud. Researchers from Central South University have made steel directly with iron recovered from red mud [14]. The Chinese Metallurgical Research Institute has enhanced the iron recovery rate to 86% by producing sponge iron via red mud magnetic separation technology. Sun et al. [15] researched the magnetic separation of iron from Bayer red mud and determined the process parameters of the magnetic roasting-magnetic separation method to recover concentrated iron ore.

VIII. SUMMARY AND CONCLUSION

• The specific gravity of the red mud is 3.04, which is very high compared to ordinary soil solids; the density of red mud will therefore be higher, and so will its strength.
• The particle size distribution in Fig. 1 indicates that the grains are fine and well graded, so the material can be used as an embankment material, backfill material, etc.
• From the Atterberg limits, the plasticity index of the red mud is PI = LL − PL = 45.5 − 32.3 = 13.2. According to the IS classification based on the plasticity A-line, the soil falls under ML, i.e. silt of low compressibility.
• The maximum dry density and optimum moisture content of the red mud are 1.53 g/cc and 33.5% respectively.
• The coefficient of permeability of red mud is 5.786 × 10-7 cm/s, which is very low. Low-permeability materials can be used for the construction of earthen dams, road embankments, etc.
• The cohesive strength and angle of shearing resistance obtained from the triaxial test are 0.123 kg/cm2 and 26.8°. The strength of the red mud is higher than that of conventional clay material.
• The CBR value of the red mud in the soaked condition is 4.2%, which is greater than 3%, so red mud can be used as a road material for village roads.
• Considering all these properties, red mud can be utilized as a geotechnical material, e.g. as backfill, road sub-grade or embankment material. Red mud can be further stabilized with lime, gypsum, fly ash, etc. to enhance its strength.
• The utilization of red mud is established in brick manufacturing, partial cement replacement, the concrete industry and stabilization processes.


• A wide variety of potential uses of red mud have been reviewed, yet there is no economically viable and environmentally acceptable solution for the utilization of large volumes of red mud.
• There is an urgent need to undertake research and development on metal speciation and the changes associated with red mud reuse for construction purposes and during the wet storage of red mud in ponds.

IX. SCOPE FOR FUTURE WORK

In the present work, all tests were performed on unstabilized red mud. From these tests it is concluded that red mud can be used as a geotechnical material for various purposes. In future, one can also work with red mud stabilized by adding lime, fly ash or gypsum.

REFERENCES

[1]. R. K. Paramguru, P. C. Rath, and V. N. Misra, “Trends in red mud utilization - a review,” Mineral

Processing & Extractive Metallurgy Review, vol. 26, no. 1, pp. 1–29, 2005.

[2]. U. V. Parlikar, P. K. Saka, and S. A. Khadilkar, “Technological options for effective utilization of

bauxite residue (Red mud) — a review,” in International Seminar on Bauxite Residue (RED MUD),

Goa, India, October 2011.

[3]. Suchita Rai, K.L. Wasewar, J. Mukhopadhyay, Chang Kyoo Yoo, Hasan Uslu “Neutralization and

utilization of red mud for its better waste management” ARCH. ENVIRON. SCI. (2012), 6, 13-33

[4]. A. R. Hind, S. K. Bhargava, Stephen C. Grocott, “The surface chemistry of Bayer process solids: a

review”, Colloids and Surfaces A : Physicochem. Eng. Aspects, 146 (1999) 359–374

[5]. E. Balomenos, I. Gianopoulou, D. Panias, I. Paspaliaris “A Novel Red Mud Treatment Process :

Process design and preliminary results” TRAVAUX Vol. 36 (2011) No. 40.

[6]. Qiu XR, Qi YY. Reasonable utilization of red mud in the cement industry. Cem. Technol.

2011;(6):103–105.

[7]. Cablik V (2007). Characterization and applications of red mud from bauxite processing. Gospodarka

Surowcami Mineralnymi (Mineral Resource Management) 23 (4): 29-38.

[8]. Satapathy BK, Patnaik SC, Vidyasagar P (1991). Utilisation of red mud for making red oxide paint.

INCAL-91, International Conference and Exhibition on Aluminium at Bangalore, India 31st July-2nd

Aug. 1991 (1): 159-161.

[9]. D. Dodoo-Arhin, D. S. Konadu, E. Annan, F. P. Buabeng, A. Yaya, B. Agyei-Tuffour, “Fabrication and Characterisation of Ghanaian Bauxite Red Mud-Clay Composite Bricks for Construction Applications”, American Journal of Materials Science, 2013, 3(5): 110-119.

[10]. Qi JZ. Experimental Research on Road Materials of Red Mud; University of Huazhong Science and

Technology: Wuhan, China; 2005.

[11]. Yang JK, Chen F, Xiao B. Engineering application of basic level materials of red mud high level

pavement (In Chinese). China Munic. Eng. 2006;(5):7−9.

[12]. Yang LG, Yao ZL, Bao DS. Pumped and cemented red mud slurry filling mining method (In Chinese).

Mining Res. Develop. 1996;(16):18–22.

[13]. Wang HM. The comprehensive utilization of red mud (In Chinese). Shanxi Energy Conserv.

2011;(11):58–61.

[14]. Li, WD. New Separation Technology Research of Iron from Bayer Progress Red Mud; Central South

University Library: Changsha, China; 2006.

[15]. Sun YF, Dong FZ, Liu JT. Technology for recovering iron from red mud by Bayer process (In

Chinese). Met. Mine. 2009;(9):176–178.

AUTHOR’S BIOGRAPHY

Kusum Deelwal was born in Karnal, Haryana, India, in 1971. She received the Bachelor degree in Civil Engineering from Barkatullah University, Bhopal, in 1996 and the Master degree in Environmental Engineering from MANIT, Bhopal, in 2003, both in Civil Engineering. She is currently pursuing the Ph.D. degree in the Department of Civil Engineering, Bhopal. Her research interest is in environmental engineering.


Mukul Kulshreshtha was born in Uttar Pradesh in 1967. He received the Bachelor degree in Civil Engineering and the Master degree in Environmental Engineering from IIT Kanpur, and completed his doctoral degree in Environmental Engineering at IIT Delhi. Presently he is working as a professor at MANIT, Bhopal, Madhya Pradesh.

Kishan Dharawath was born in Hyderabad, Andhra Pradesh, India, in 1974. He received the Bachelor degree in Civil Engineering from JNTU Hyderabad, Andhra Pradesh, in 1998 and the Master degree in Geotechnical Engineering from IIT Madras in 2001. He received the Ph.D. from MANIT, Bhopal, in 2013. He is currently working as an assistant professor at MANIT, Bhopal. His areas of interest are geotechnical and geoenvironmental engineering.


PERFORMANCE ANALYSIS OF IEEE 802.11E EDCA WITH

QOS ENHANCEMENTS THROUGH ADAPTING AIFSN

PARAMETER

Vandita Grover and Vidusha Madan

Department of Computer Science, University of Delhi, New Delhi, 110007, India

ABSTRACT
Enhanced Distributed Channel Access (EDCA) is a priority mechanism which supports per-access-category (AC) QoS by differentiating the initial contention window (CW) size and the arbitration inter-frame space (AIFS) time. EDCA uses a fixed range of CW and AIFS values chosen initially; in the contemporary EDCA model the AIFS parameter of each AC remains static, irrespective of the number of stations in the network. We propose a scheme that varies the AIFS time of each access category depending on the current congestion in the network. The proposed model incorporates a mechanism where the AIFS wait is smaller when fewer stations are contending and grows when many stations are willing to transmit. Simulations are conducted to validate the suggested enhancements to the AIFSN parameter.

KEYWORDS: Enhanced Distributed Channel Access (EDCA), Access Category (AC), Arbitration Interframe

Spacing Number (AIFSN), Contention Window (CW)

I. INTRODUCTION

IEEE 802.11 [6] WLAN is a widely used, robust, scalable communication network which transmits

information over wireless links. It is a protocol for best effort service designed to work in two modes.

Distributed Coordination Function (DCF) [6] employs carrier sense multiple access with collision

avoidance (CSMA/CA) with binary back-off for asynchronous data transmission.

Point Coordination Function (PCF) [6] is an optional mechanism that uses centrally controlled channel access to support time-sensitive traffic flows. PCF has not been implemented in most current products.

With evolving customer needs such as video/audio streaming, wireless phone over IP and real-time applications, Quality of Service (QoS) is particularly important. DCF does not provide QoS, PCF is not widely implemented, and neither differentiates between data streams with different QoS requirements. To support such applications, the IEEE 802.11e [5] MAC employs the contention-based channel access EDCA and the centrally controlled channel access function, the Hybrid Coordination Function (HCF).

This paper is organised in six sections. We discuss the legacy DCF and EDCA models for contention and transmission in wireless LANs in Sections II and III. In Section IV we propose an adaptive model which reacts to the present load conditions in the network by attuning the AIFS parameter. Section V studies simulation results of the proposed scheme.

II. DISTRIBUTED COORDINATION FUNCTION

2.1. The Basic Access Mechanism [6]

A station (ST) with a new frame to transmit monitors the channel activity. When the channel is idle for a distributed interframe space (DIFS) period, it backs off for a random number of slots. The backoff counter is frozen if the medium is sensed busy during backoff. When the medium becomes free again, the backoff counter resumes, and the ST transmits when the counter reaches zero. Otherwise, if the channel is sensed busy (either immediately or during the DIFS), the station persistently monitors the channel


until it is sensed idle for a DIFS period. The ST generates a random backoff interval before transmitting, to avoid collision with frames transmitted by other stations. To avoid channel capture, the ST waits a random backoff time between two consecutive new frame transmissions, even if the medium is sensed idle during the DIFS time.

DCF employs a discrete-time backoff scale. The time immediately following an idle DIFS is slotted, and a station is allowed to transmit only at the beginning of each slot time (of size σ). The slot time is set to the time needed by any station to detect a transmission from any other station.
The backoff procedure is exponential. For each transmission, the backoff time is uniformly chosen in the range (0, w-1), where w is the contention window, which depends on the number of failed transmissions for the packet. At the first transmission attempt, w is initialized to CWmin (the minimum contention window). After each unsuccessful transmission, w is doubled, up to a maximum value CWmax = 2^m CWmin. When the channel is sensed idle, the backoff counter is decremented; it is frozen when a transmission is detected on the channel, and reactivated when the channel is sensed idle again for a DIFS period. The station transmits when the backoff counter reaches zero.
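A minimal Java sketch of this window-doubling rule follows; the numeric CWmin/CWmax bounds are illustrative assumptions, since the DCF text above does not fix them.

import java.util.Random;

// Minimal sketch of the DCF binary exponential backoff described above.
// The numeric bounds (w = 32 ... 1024) are illustrative assumptions.
public class DcfBackoff {
    static final int CW_MIN = 32;    // initial contention window w
    static final int CW_MAX = 1024;  // CWmax = 2^m * CWmin, here m = 5
    private int w = CW_MIN;
    private final Random rng = new Random();

    // Backoff time is uniformly chosen in the range (0, w-1).
    int drawBackoffSlots() {
        return rng.nextInt(w);
    }

    // After each unsuccessful transmission, w is doubled, up to CWmax.
    void onFailure() {
        w = Math.min(2 * w, CW_MAX);
    }

    // After a successful transmission, w returns to CWmin.
    void onSuccess() {
        w = CW_MIN;
    }
}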

2.2. RTS/CTS Mechanism [6]

This is an optional four-way handshake technique to avoid the hidden terminal problem. Hidden terminals are stations that sense the channel as idle because they are far away from the sending station; since they cannot sense the ongoing channel activity at a distance, they transmit, resulting in a collision at the receiver. To address this problem the following mechanism is employed.
A station (ST) willing to transmit a frame waits until the channel is sensed idle for a DIFS, follows the backoff conditions, and transmits a special short frame called request to send (RTS) before sending the data frame. When the receiving station (DST) detects an RTS frame, it responds, after a SIFS, with a clear to send (CTS) frame. The ST transmits its frame only if it receives the CTS frame correctly. RTS and CTS frames carry information about the length of the frame to be transmitted. This information can be read by any listening station, which can then update its network allocation vector (NAV); the NAV records the period of time for which the channel will remain busy. Therefore, when a terminal is hidden from either the transmitting or the receiving station, detecting just one frame among the RTS and CTS lets it delay further transmission accordingly, and thus avoid collision.
The RTS/CTS mechanism shortens the frames involved in the contention process and hence is very effective in terms of system performance. If stations wanting to transmit at the same time employ the RTS/CTS mechanism, collision occurs only on the RTS frames, and it is easily detected by the transmitting stations through the lack of CTS responses, allowing the system to adapt accordingly.
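A sketch of the NAV bookkeeping this implies is shown below; the frame parsing and clock source are simplified assumptions for illustration.

// Sketch of virtual carrier sensing via the NAV, as implied by the RTS/CTS
// description above. Frame layout and clock source are illustrative assumptions.
public class NavTracker {
    private long navExpiresAtMicros = 0;

    // Called for every overheard RTS or CTS; 'durationMicros' is the frame's
    // duration field covering the remainder of the exchange.
    public void onOverheardFrame(long nowMicros, long durationMicros) {
        navExpiresAtMicros = Math.max(navExpiresAtMicros, nowMicros + durationMicros);
    }

    // The medium is virtually busy until the NAV expires.
    public boolean mediumVirtuallyBusy(long nowMicros) {
        return nowMicros < navExpiresAtMicros;
    }
}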

III. ENHANCED DISTRIBUTED CHANNEL ACCESS [5]
DCF provides only best-effort service: real-time multimedia applications (like voice and video) are not differentiated from data applications. EDCA is designed to enhance the DCF mechanism to provide prioritized QoS and a distributed access method that supports service differentiation among traffic classes. Traffic is categorized into four Access Categories (ACs). Smaller CWs are assigned to ACs with higher priorities, so that the probability of successful transmission is biased towards high-priority ACs. The initial CW size can be set differently for different priority ACs, yielding higher-priority ACs with smaller windows.

Table 1. Contention Window Boundaries [5] for different ACs.

AC CWmin[AC] CWmax[AC] AIFSN

0 CWmin CWmax 7

1 CWmin CWmax 3

2 (CWmin+1)/2-1 CWmax 2

3 (CWmin+1)/4-1 (CWmin+1)/2-1 2
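With the default CWmin = 15 (see Table 2 below), the formulas in Table 1 evaluate to (15+1)/2 − 1 = 7 and (15+1)/4 − 1 = 3; this is exactly where the smaller CWmin values of 7 (AC_VI) and 3 (AC_VO) in Table 2 come from.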

Page 428: Ijaet volume 7 issue 3 july 2014

International Journal of Advances in Engineering & Technology, July, 2014.

©IJAET ISSN: 22311963

1062 Vol. 7, Issue 3, pp. 1060-1066

Differentiation is achieved by applying an AIFS wait instead of the fixed DIFS used in the DCF. The AIFS for a given AC is determined by the following equation:
AIFS = SIFS + AIFSN * slot time
where AIFSN (the AIFS number) is determined by the AC and the physical-layer settings, and slot time is the duration of a time slot.

Table 2. Default EDCA Parameters [5] for different ACs.

AC CWmin[AC] CWmax[AC] AIFSN

AC_BK 15 1023 7

AC_BE 15 1023 3

AC_VI 7 15 2

AC_VO 3 7 2
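For instance, assuming 802.11b DSSS timing (SIFS = 10 µs, slot time = 20 µs), these defaults give AIFS(AC_VO) = 10 + 2 × 20 = 50 µs while AIFS(AC_BK) = 10 + 7 × 20 = 150 µs, so background traffic must observe an idle medium 100 µs longer than voice before its backoff can even begin.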

The AC with the smallest AIFS has the highest priority. The physical carrier sensing and virtual sensing methods are similar to those in the DCF. The countdown procedure when the medium is sensed idle differs in EDCA: after the AIFS period, the backoff counter decreases by one at the beginning of the last slot of the AIFS, whereas in DCF this is done at the beginning of the first time slot following the DIFS period.
Within each station, different ACs have separate queues for buffering frames. Each AC within a station acts like a virtual station and contends for channel access: an AC independently starts its backoff after sensing the medium idle for at least its AIFS period. During a virtual collision, i.e. when different ACs finish their AIFS wait simultaneously, the AC with the higher priority gets the opportunity for physical transmission, while the lower-priority AC follows the backoff rules.

A transmission opportunity (TXOP) limit is also defined: the time interval during which a station may initiate transmissions. During a TXOP, a station may be allowed to transmit multiple data frames from the same AC, with a SIFS wait between an ACK and the next frame. This is also referred to as a contention free burst (CFB).

IV. ADAPTING AIFSN PARAMETER IN EDCA

Joe Naoum-Sawaya and Bissan Ghaddar in [1], and Joe Naoum-Sawaya, Bissan Ghaddar and others in [3], suggest schemes where the CW value is estimated from current load conditions and the system outputs the CW value at which a node will be able to transmit. However, the legacy EDCA, the schemes of [1] and [3], and many other proposed adaptive models do not take into account the time each AC has to wait before transmitting. In EDCA, each access category has a fixed AIFSN value from which it computes the AIFS time that it must wait. The drawback of this scheme is that the load conditions of the system are not taken into account. The load conditions include two main factors:
Number of stations in the system
Probability of collision

In this section we propose a scheme where the AIFS wait time varies according to the current congestion in the network. The system is assumed to be in a state where many stations are contending for the slot and each station always has a frame to transmit, so the number of stations is an important part of determining the load. The probability of collision determines the amount of congestion in the system at any time; it reflects the number of frames that have to be retransmitted.
With static AIFSN values, stations must wait a long time even when the load is light, and under heavy load they wait only a predefined time, leading to more collisions. For example, if there are only 2 stations in the system and the AIFSN for the access category with least priority is, say, 7, it has to wait unnecessarily long; this only increases the load as collisions increase. To reduce the delay or the number of collisions, depending on the system state, the AIFSN values may be adapted dynamically.

To obtain variable AIFSN values we define a range of values for each access category, from which the actual AIFSN value is computed. We denote this computed value by AIFSN'. The ranges for different access categories must be non-overlapping, so that an access category with higher priority never waits longer than one with lower priority. The selected value of AIFSN' lies at the lower end of the range if the load is light, thereby selecting a smaller


value. Under greater load the selected value lies at the far end of the predefined interval. This results in stations having a smaller waiting time when load conditions are light and a larger waiting time when the system is congested.

Let the range of values for each access category i be denoted by [Mi, Ni]. For example, the interval could be [2, 5] for some access category; the possible values from which that access category can choose its AIFSN' then lie in the range 2 to 5. The range for each access category is predetermined according to the optimal values obtained from simulation, and the AIFSN' values are chosen from among them. The load condition is determined by the probability of collision at that time. We use Pc since it captures both the number of stations in the system at any point in time and the amount of congestion in the system. We assume that each station always has a frame to transmit. Each time before transmitting a frame, the station must compute the collision probability Pc

transmit. Each time before transmitting a frame, the station must compute the collision probability Pc

and the AIFSN' value to be used. The computation is illustrated as follows Let the interval of values for access category i from which the AIFSNi' value will be chosen be [2, 5]. Let number of possible values to be chosen from be mi mi =5-2+1 = 4 The number of slots in which the probability of collision can be divided is thus 4.

The length of each slot is

ki = 100/4 =25 i.e. we have four slots from 0-25%, 25-50%, 50-75% and 75-100%

If the probability of collision lies in the first slot the AIFSN value 2 is chosen. Similarly if the

collision probability lies in the range 50-75% AIFSN value 4 is chosen and so on.

Say the value of Pc is 0.3, then its quantified as 30%. AIFSNi' = 2 + (0.3 X 100) / 25 AIFSi = SIFS + AIFSN'i * slot time Generalising the above illustration

AIFSN range: [Mi, Ni] The number of possible values AIFSN' can choose from is calculated as

mi = AIFSN[Ni-Mi+1] (1) Let ki denote slot length corresponding to mi ki = 100 / mi (2) We have a one to one correspondence between the possible Pc values and the AIFSN values to be

chosen from. We choose that value of AIFSN which corresponds to the percentage slot that contains

the current Pc value. This we can obtain by adding the minimum AIFSN value of the range of values

to the corresponding slot of the probability. AIFSNi' = minAIFSNi + (Pc X100)/ ki (3) Using this AIFSNi' we compute the AIFSi values.
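A compact Java sketch of equations (1)-(3) follows, in the spirit of the Java extension the authors added to Pamvotis; the class and method names here are illustrative, not Pamvotis APIs.

// Illustrative implementation of equations (1)-(3); names are not Pamvotis APIs.
public class AdaptiveAifsn {
    private final int min; // Mi, lower end of the AIFSN range for this AC
    private final int max; // Ni, upper end of the AIFSN range for this AC

    public AdaptiveAifsn(int min, int max) {
        this.min = min;
        this.max = max;
    }

    // Map the current collision probability Pc (in [0, 1]) to an AIFSN value.
    public int aifsnPrime(double pc) {
        int mi = max - min + 1;                 // (1) number of selectable values
        double ki = 100.0 / mi;                 // (2) percentage-slot length
        int slot = (int) ((pc * 100.0) / ki);   // index of the slot containing Pc
        return Math.min(min + slot, max);       // (3), clamped for Pc = 1.0
    }

    // AIFS = SIFS + AIFSN' * slot time (all values in microseconds).
    public double aifs(double pc, double sifsMicros, double slotTimeMicros) {
        return sifsMicros + aifsnPrime(pc) * slotTimeMicros;
    }
}

For AC1 with range [3, 6] and Pc = 0.3, aifsnPrime returns 3 + 1 = 4; the clamp only matters at Pc = 1.0, where the slot index would otherwise step past Ni.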

V. SIMULATION ANALYSIS OF SUGGESTED MODEL

5.1. Configuration
We used the Pamvotis Network Simulator [8] (a free and open source simulator) to test various scenarios of dynamically adjusting the AIFSN number. The source code of Pamvotis supports EDCA with static values of the AIFS number for each category, configurable at set-up through the GUI. We enhanced the Pamvotis code with additional Java code to support an AIFSN range that can be set up before the simulation run. Our suggested mathematical formulation, Equations 1, 2 and 3, was implemented to adapt the AIFSN number depending on network congestion when many stations contend for a slot to transmit.
To achieve load conditions, we varied the number of stations in the system from 10 to 35, with a data rate of 2 Mbps. Each station always has a frame ready to send, with a constant packet generation rate and a uniform packet length (8000 bits) for all packets. Each simulation run lasted 300 seconds. We ran simulations for various ranges of the AIFS parameter; here we discuss the scenario which reduced delay most significantly.


Scenario 1: Classic EDCA

AC0 : AIFS [7-7], CWmin : 15 CWmax: 1023

AC1 : AIFS[3-3], CWmin : 15 CWmax : 1023

AC2 : AIFS[2-2], CWmin : 7 CWmax : 15

AC3 : AIFS[2-2], CWmin : 3 CWmax : 7

Scenario 2: Adapted EDCA

AC0 : AIFS [6-9], CWmin : 15 CWmax: 1023

AC1 : AIFS[3-6], CWmin : 15 CWmax : 1023

AC2 : AIFS[2-2], CWmin : 7 CWmax : 15

AC3 : AIFS[1-1], CWmin : 3 CWmax : 7

5.2. Analysis of Delay and Throughput Graphs
The Pamvotis simulator randomly assigns an Access Category to each station. We calculated the mean delay for each AC with the number of stations varying from 10 to 35. The mean delay graphs for each Access Category follow.

Figure 1. Delay Comparison (AC0)

Figure 2. Delay Comparison (AC1)

Figure 3. Delay Comparison (AC2)



Figure 4. Delay Comparison (AC3)

In both models, in accordance with service differentiation, Delay(AC0) > Delay(AC1) > Delay(AC2) > Delay(AC3). As the number of stations increases, the average delay per access category increases due to system load and collisions. We observe a noticeable decrease in delay for Scenario 2, with its range of AIFS parameters, in comparison to the fixed AIFSN of Scenario 1. For the throughput analysis we consider the average throughput of each AC in both scenarios, since Pamvotis generates ACs randomly. We then compare the average throughput of the entire system from 10 to 35 nodes for the adapted and legacy EDCA models.

Figure 5. Throughput per Access Category: Scenario 1
Figure 6. Throughput per Access Category: Scenario 2

We also see that the suggested model maintains service differentiation, with Throughput(AC3) > Throughput(AC2) > Throughput(AC1) >= Throughput(AC0) wherever the access category is present.



Figure 7. Average System Throughput

A marginal increase in average system throughput is also observed for Scenario 2. The simulation results thus validate our model: service differentiation is maintained, the delay for each AC declines significantly, and the average system throughput increases.

VI. CONCLUSIONS AND FUTURE WORK

In this paper we adapted the AIFS parameter to the load conditions in the system. The simulation results show an improvement in system performance: a noticeable decrease in media access delay and an increase in throughput for each access category, in comparison to the static AIFS values used in the conventional EDCA algorithm.
Various scenarios were tested, and we found through simulation that the ranges AC0: 6-9, AC1: 3-6, AC2: 2-2 and AC3: 1-1 are best suited to the model we proposed. We also conclude that using this range of AIFS parameters continues to give higher priority to voice/video traffic, while low-priority ACs do not have to wait too long to transmit when there is little congestion in the system. In future work, it can be explored how adapting both CW [1] and AIFS to current load conditions improves the quality of service.

ACKNOWLEDGEMENTS

This work was a part of the M.Sc. (C.S.) curriculum of the Department of Computer Science, University of Delhi. The authors would like to thank Mr. Pradyot Kanti Hazra (Associate Professor, Dept. of Computer Science, University of Delhi) for his valuable guidance.

REFERENCES

[1]. Joe Naoum-Sawaya, Bissan Ghaddar, A Fuzzy Logic Approach for Adjusting the Contention Window Size in IEEE 802.11e Wireless Ad Hoc Networks.

[2]. Stefan Mangold, Sunghyun Choi, Peter May, Ole Klein, Guido Hiertz, Lothar Stibor, Performance Analysis and Enhancements for IEEE 802.11e Wireless Networks.

[3]. Joe Naoum-Sawaya, Bissan Ghaddar, Sami Khawam, Haidar Safa, Hassan Artail, and Zaher Dawy,

Adaptive Approach for QoS Support in IEEE 802.11e Wireless LAN.

[4]. Qiang Ni, IEEE 802.11e Wireless LAN for Quality of Service.

[5]. Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Std

802.11TM-2007-Revision of IEEE Std 802.11-1999

[6]. IEEE 802.11 WG, Reference number ISO/IEC 8802-11:1999 (E) IEEE STD 802.11, 1999 edition.

International Standard [for] Information Technology-Telecommunications and information exchange between

systems-Local and metropolitan area networks-Specific Requirements- “Part 11: Wireless LAN Medium Access

Control (MAC) and Physical Layer (PHY) specifications” 1999

[7]. Byron W. Putman, 802.11 WLAN Hands-On Analysis: Unleashing the Network Monitor for Troubleshooting and Optimization.

[8]. Pamvotis – IEEE 802.11 WLAN Simulator - http://pamvotis.org/

AUTHORS

Vandita Grover has been associated with University of Delhi as an Assistant Professor since 2011. Prior to that

she worked with Aricent Technologies for around three years. She did M.Sc.(CS) from Delhi University in 2008.

Vidusha Madan did M.Sc.(CS) from Delhi University in 2008 and is presently working at CSC India.



DATA DERIVATION INVESTIGATION

S. S. Kadam, P. B. Kumbharkar
Department of Computer Engineering, SCOE, Sudumbare, University of Pune, Pune, Maharashtra, India

ABSTRACT
Malicious software is a major issue in today's computer world. Such software can silently reside in a user's computer and easily interact with computing resources, so it is necessary to protect the integrity of the host and its system data. This work is introduced to improve the security and integrity of the host. The mechanism ensures the correct origin, or provenance, of critical system information and prevents the utilization of host resources by malware; using it, the source where a piece of data is generated can be identified. A cryptographic origin approach ensures system properties and system-data integrity at the kernel level. A framework is used for restricting outbound malware traffic: it identifies the network activities of malware and can serve as a powerful personal firewall for investigating the outgoing traffic of a host. Specifically, our derivation verification scheme requires outgoing network packets to flow through a checkpoint on a host, to obtain proper origin proofs for later verification.
INDEX TERMS: Authentication, malware, cryptography, derivation, networking.

I. INTRODUCTION
Compared to the first generation of malicious software in the late 1980s, modern attacks are more stealthy and pervasive. Kernel-level rootkits are a form of malicious software that compromises the integrity of the operating system. Such rootkits stealthily modify kernel data structures to achieve a variety of malicious goals, which may include hiding malicious user-space objects, installing backdoors and Trojan horses, logging keystrokes, disabling firewalls, and enrolling the system into a botnet [2]. Host-based signature-scanning approaches alone have therefore proven inadequate against new and emerging malware [6]. We view malicious software, or malware, in general as entities silently residing on a user's computer and interacting with the user's computing resources. For example, network calls may be issued by malware to send outbound traffic for denial-of-service attacks or spam.
The goal of our work is to improve the reliability of OS-level data flow; specifically, we provide mechanisms that ensure the correct origin or derivation of critical system data, which prevents adversaries from utilizing host resources [1]. We define a new security mechanism, data-derivation integrity, which verifies the source from which a piece of data is generated. For outbound network packets, we deploy special cryptographic kernel modules at strategic positions in the host's network stack, so that packets must be generated by user-level applications and cannot be injected in the middle of the network stack; this incurs low overhead. The implication of verified network-packet origin is that one can deploy a sophisticated packet monitor or firewall at the transport layer, such as [7], without it being bypassed by malware. An application of this system is distinguishing user inputs from malware inputs, which is useful in many scenarios.

Contribution: A new cryptographic derivation verification approach is presented here, and its applications to strong host-based traffic monitoring are demonstrated.
Key exchange between the two modules using asymmetric keys is expensive because of the storage and computation costs: it requires the RSA algorithm for public key generation and encryption, which has high time complexity. We therefore replace it with a general three-tier security framework for authentication and pairwise key establishment. This three-tier security architecture consists of three separate modules, i.e. a sign module, a verify module and an access module. Two polynomial


identifier pools of sizes M and S are created. The sign module and the access module are randomly given Km (Km > 1) and 1 identifiers from M, respectively; similarly, the verify module and the access module are randomly given Ks and Ks-1 identifiers from S, respectively. To establish a direct pairwise key between the sign module and the verify module, the sign module needs to find a stationary access module in its neighborhood such that the access module can establish pairwise keys with both the sign module and the verify module. In other words, a stationary access module needs to establish pairwise keys with both the sign module and the verify module: it has to find a common polynomial m (from M) with the sign module and a common polynomial s (from S) with the verify module.
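Such polynomial-pool schemes typically derive pairwise keys from a symmetric bivariate polynomial f(x, y) = f(y, x), as in Blundo-style key predistribution. The Java sketch below illustrates that derivation; the field size, polynomial degree and module identifiers are illustrative assumptions, not values from the paper.

import java.math.BigInteger;
import java.security.SecureRandom;

// Sketch of Blundo-style pairwise key derivation from a symmetric bivariate
// polynomial f(x, y) over GF(p). All parameters are illustrative assumptions.
public class BivariateKeyShare {
    static final BigInteger P = BigInteger.probablePrime(128, new SecureRandom());

    // coeff[i][j] is the coefficient of x^i * y^j; symmetry makes f(x,y) = f(y,x).
    static BigInteger[][] symmetricPolynomial(int degree, SecureRandom rng) {
        BigInteger[][] c = new BigInteger[degree + 1][degree + 1];
        for (int i = 0; i <= degree; i++)
            for (int j = 0; j <= i; j++) {
                BigInteger v = new BigInteger(P.bitLength() - 1, rng);
                c[i][j] = v;
                c[j][i] = v; // enforce symmetry
            }
        return c;
    }

    // Setup gives a module with identifier u its univariate share
    // g_u(y) = f(u, y), i.e. coefficients g[j] = sum_i c[i][j] * u^i mod p.
    static BigInteger[] share(BigInteger[][] c, BigInteger u) {
        BigInteger[] g = new BigInteger[c.length];
        for (int j = 0; j < c.length; j++) {
            g[j] = BigInteger.ZERO;
            for (int i = 0; i < c.length; i++)
                g[j] = g[j].add(c[i][j].multiply(u.modPow(BigInteger.valueOf(i), P))).mod(P);
        }
        return g;
    }

    // A module evaluates its share at the peer's identifier to get f(u, v).
    static BigInteger pairwiseKey(BigInteger[] g, BigInteger v) {
        BigInteger key = BigInteger.ZERO;
        for (int j = 0; j < g.length; j++)
            key = key.add(g[j].multiply(v.modPow(BigInteger.valueOf(j), P))).mod(P);
        return key;
    }

    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();
        BigInteger[][] f = symmetricPolynomial(2, rng);
        BigInteger sign = BigInteger.valueOf(17), verify = BigInteger.valueOf(42);
        // Symmetry guarantees both modules derive the same key: f(17,42) == f(42,17).
        System.out.println(pairwiseKey(share(f, sign), verify)
                .equals(pairwiseKey(share(f, verify), sign)));
    }
}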

II. LITERATURE REVIEW

Existing root-kit detection work includes identifying suspicious system call execution patterns,

discovering vulnerable kernel hooks, exploring kernel invariants, or using a virtual machine to enforce

correct system behaviors. For example, Christodorescu, Jha, and Kruegel collected malware behaviors

like system calls and compared execution traces of malware against benign programs [9]. They

proposed a language to specify malware behavior and an algorithm to mine malicious behaviors from

execution traces. A malware analysis technique was proposed and described based on hardware

virtualization that hides itself from malware. Although existing OS level detection methods are quite

effective, they typically require sophisticated and complex examination of kernel instruction

executions. To enforce the integrity of the detection systems, a virtual machine monitor (VMM) is

usually required in particular for root-kit detection. TPM is available on most commodity computers.

Information flow control has been an active research area in computer security. As early as in the 70s,

Denning et al [3][4] has proposed the lattice model for securing the information flow and applied it to

the automatic certification of information flow through a program. Data tainting, as an effective

tracking method, is widely used for the purposes of information leak prevention and malware

detection. Taint tracking can be performed at different levels.

Here, the use of the TPM as a signature generator may be viewed as a special type of data tainting. In contrast to conventional taint tracking solutions such as hardware memory bits or extended software data structures, the TPM-based solution uniquely supports the cryptographic operations needed to enforce data confidentiality and the integrity of taint information. The important feature of the TPM is its on-chip secret key, by which the client device can be uniquely authenticated by a remote server. Our paper focuses on a host-based approach for ensuring system-level data integrity and demonstrates its application to malware detection. In comparison, network trace analysis typically characterizes malware communication behaviors for detection; such solutions usually involve pattern-recognition and machine-learning techniques and have demonstrated effectiveness against today's malware. Our work provides a hardware-based integrity service to address that problem. In comparison to NAB, which is designed specifically for browser input verification, this work provides a more general system-level solution for keystroke integrity that is application-oblivious.

The element of human behavior has not been extensively studied in the context of malware detection, with a few notable exceptions, including solutions by Cui, Katz, and Tan, and by Gummadi [5], [8]. They investigated and enforced the temporal correlation between user inputs and observed traffic. BINDER describes the correlation of inputs and network traffic based on timestamps. It does not provide any security protection for the detection system itself, e.g., it cannot prevent malware from forging input events.

The work by Srivastava and Giffin on application-aware blocking of malware traffic may bear superficial similarity to our solution [10]. They used a virtual machine monitor (VMM) to monitor application information of a guest OS without using any cryptographic scheme. Existing rootkit-detection work includes identifying suspicious system call execution patterns, discovering vulnerable kernel hooks, exploring kernel invariants, or using a virtual machine to enforce correct system behaviors. The lattice model for securing information flow through a system, as an effective tracking method, is widely used for information leak prevention and malware detection and can be applied at different levels, for example within an application, within a system, or across distributed hosts. But these systems lack the cryptographic operations needed to enforce data confidentiality and integrity.


III. DESIGN

The introduced derivation verification mechanism differs fundamentally from traditional cryptographic signature schemes. In most signature schemes the signer is assumed to be a person who exercises judgment in signing documents and in protecting his or her signing keys. In the setting of malware detection, the signer and verifier are programs, and prevention against attacks on them is critical. For network-traffic monitoring, malware may attempt to send traffic by directly invoking functions at the network layer, but not at the lower data-link or physical layers. We assume that the Trusted Platform Module (TPM) is tamper-resistant, that the cryptographic operations are applied suitably, and that the remote server is trusted and secure. The TPM provides a guarantee of load-time code integrity; it does not provide any detection ability for run-time compromises such as buffer overflow attacks [11]. Advanced attacks [12], [13] may still be active under this assumption, underscoring the importance of our solutions.

We describe three actions for data-derivation investigation on a host: Setup, Sign, and Verify.

• Setup: the data producer sets up its signing key k and the data consumer sets up its verification key k0 in a secure fashion that prevents malware from accessing the secret keys.

• Sign(D, k): the data producer signs its data D with the secret key k and outputs D along with its proof sig.

• Verify(sig, D, k0): the data consumer uses the key k0 to verify the signature sig of the received data D to ensure its origin, and rejects the data if the verification fails.

Although simple, this cryptographic derivation-investigation method can be used to enforce correct system and network properties and proper workflow in a trusted computing environment.
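As a minimal illustration of these three actions, the following Python sketch uses an HMAC as the origin proof. This is our own simplification for exposition: the function names are hypothetical, and HMAC-SHA256 stands in for the TPM-backed signing primitive, whose key would never be exposed to the host as it is here.

import hmac, hashlib, os

def setup():
    # In the real design the secret stays inside the TPM; here it is a
    # process-local key for illustration only.
    k = os.urandom(32)
    return k, k   # signing key k and verification key k0 (symmetric case)

def sign(data: bytes, k: bytes) -> bytes:
    # Sign(D, k): produce the provenance proof sig for data D.
    return hmac.new(k, data, hashlib.sha256).digest()

def verify(sig: bytes, data: bytes, k0: bytes) -> bool:
    # Verify(sig, D, k0): recompute the proof and compare in constant time.
    return hmac.compare_digest(sig, hmac.new(k0, data, hashlib.sha256).digest())

k, k0 = setup()
d = b"outbound packet payload"
proof = sign(d, k)
print(verify(proof, d, k0))          # True: correct origin
print(verify(proof, d + b"x", k0))   # False: tampered data is rejected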

IV. INVESTIGATING DERIVATION OF OUTBOUND TRAFFIC

The cryptographic derivation-verification technique in a network setting, ensuring the provenance of outbound packets as they flow through the host's network stack, is illustrated here. Fig. 1 shows the network stack: genuine traffic originates from the application layer, whereas malware traffic may be injected at the lower layers.

Fig. 1. Network stack. Genuine traffic originates from the application layer, whereas malware traffic may be injected at the lower layers. Traffic checkpoints are placed at the Sign and Verify modules.

Traffic checkpoints are placed at the Sign and Verify modules. The design of a lightweight traffic-monitoring framework is described; it can be used as a building block for constructing powerful personal firewalls or traffic-based malware-detection tools. We demonstrate the effectiveness of our traffic-monitoring framework in identifying the network activities of stealthy malware.

The network stack is part of the host's operating system and consists of five layers: application, transport, network, data link, and physical. Outbound user traffic travels down all five layers of the stack before being sent out. System services are typically implemented as applications, so their network flow also traverses the entire Internet protocol stack. Specifically, our


derivation-investigation scheme requires outgoing network packets to flow through a checkpoint on the host to obtain proper origin proofs for later verification. Any traffic sent by disabling or bypassing the firewall can be detected, as such packets cannot provide their origin proofs. We can thus effectively prevent any traffic from being sent without passing through a checkpoint, appreciably improving the assurance of traffic-based malware detection on hosts. Such a simple yet powerful traffic-monitoring framework can support advanced detection on application-level traffic such as [14]. Genuine outbound network traffic passes through the entire network stack in the host's operating system, and we develop a strong cryptographic procedure for enforcing the proper origin of a packet on a host.

A. Architecture of Traffic Provenance Verification

Here a general approach is described for improving the assurance of system data and properties of a host, which has applications in preventing and identifying malware activities. The host-based system security solutions against malware complement network-traffic-based analysis. We demonstrate an application in identifying stealthy malware activities on a host, in particular how to distinguish malicious or unauthorized data flows from valid ones on a computer that may be compromised.

Fig. 2. System Architecture

Our design of the traffic-monitoring framework extends the host's network stack and deploys two kernel modules, the Sign and Verify modules, as illustrated in Figure 2. Both signing and verification of packets take place on the same host but at different layers of the network stack: the Sign module at the transport layer and the Verify module at the network layer. The two modules share a secret cryptographic key and monitor the integrity of outbound network packets. All legitimate outgoing packets first pass through the Sign module and then the Verify module. The Sign module signs every outbound packet and sends the signature to the Verify module on the same host, which later verifies the signature with the shared key. The signature proves the provenance of an outgoing packet; if a packet's signature cannot be verified or is missing, the packet is labeled as suspicious.

V. IMPLEMENTATION

The system follows a three-tier architecture. Our main checkpoints are created by polynomial identifier pools.

The three-tier security architecture consists of three separate modules: the Sign, Verify, and Access modules. Two polynomial identifier pools of size M and S are created. The Sign and Access modules are randomly given Km (Km > 1) and 1 identifiers from M, respectively; similarly, the Verify and Access modules are randomly given Ks and Ks−1 identifiers from S, respectively. To establish a direct pairwise key between the Sign module and the Verify module, the Sign module needs to find a stationary Access module in its neighborhood such that the Access module can establish pairwise keys with both the Sign and Verify modules. In other words, a stationary Access module needs to establish pairwise keys with both the Sign module and the Verify module: it has to find a common polynomial m (from M) with the Sign module and a common polynomial s (from S) with the Verify module, as sketched below.
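The following is a small sketch of this identifier-pool assignment, under our own simplifying assumptions: identifiers are plain integers, and sharing an identifier stands in for sharing a polynomial from which a pairwise key would be derived.

import random

M = set(range(100))        # polynomial identifier pool M
S = set(range(100, 200))   # polynomial identifier pool S
Km, Ks = 5, 5

sign_ids   = set(random.sample(sorted(M), Km))   # Km identifiers from M
verify_ids = set(random.sample(sorted(S), Ks))   # Ks identifiers from S
# The Access module draws 1 identifier from M and Ks-1 identifiers from S.
access_ids = (set(random.sample(sorted(M), 1))
              | set(random.sample(sorted(S), Ks - 1)))

# The Access module can bridge the Sign and Verify modules only if it shares
# an identifier (a common polynomial) with each of them.
m_common = sign_ids & access_ids     # common polynomial with the Sign module
s_common = verify_ids & access_ids   # common polynomial with the Verify module
print("bridge possible:", bool(m_common) and bool(s_common))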


Summary:

Input: designs the GUI and the database of the system and completes the input module.

Sign: creates the Sign module, transfers data from the input module to the Sign module, adds the private and public keys of the Sign module, and encrypts the data.

Verify: creates the Verify module; using this module and the Sign module, the key is exchanged and the data is decrypted. UMAC is used in this module.

Output: develops the output module, which is used to receive the data and offers options to save or reject it.

Fig. 3 Process flow.

The RSA algorithm is used here for key generation, encryption, and decryption.

RSA Key Generation Algorithm:
Begin
  Select two large prime numbers p, q
  Compute n = p * q
  Compute v = (p - 1)(q - 1)
  Select a small odd integer k relatively prime to v: gcd(k, v) = 1
  Compute d such that (d * k) mod v = 1
  Public key is (k, n)
  Private key is (d, n)
End

RSA Encryption Algorithm:
Begin
  Input: integers k, n, M
    (M is the integer representation of the plaintext message)
  Compute C = M^k mod n
End
Output: integer C (the ciphertext, i.e., the encrypted message)

RSA Decryption Algorithm:
Begin
  Input: integers d, n, C
    (C is the integer representation of the ciphertext message)
  Compute D = C^d mod n
End



Output: integer D (the decrypted message)
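A runnable toy version of the three RSA algorithms above is sketched below; the tiny primes and the absence of padding are for readability only, and real deployments require large primes and a padding scheme.

from math import gcd

def keygen(p: int, q: int, k: int = 17):
    # Key generation: n = p*q, v = (p-1)(q-1), gcd(k, v) = 1, (d*k) mod v = 1.
    n, v = p * q, (p - 1) * (q - 1)
    assert gcd(k, v) == 1
    d = pow(k, -1, v)                 # modular inverse (Python 3.8+)
    return (k, n), (d, n)             # public key (k, n), private key (d, n)

def encrypt(m: int, pub) -> int:
    k, n = pub
    return pow(m, k, n)               # C = M^k mod n

def decrypt(c: int, priv) -> int:
    d, n = priv
    return pow(c, d, n)               # D = C^d mod n

pub, priv = keygen(61, 53)            # n = 3233, d = 2753 for k = 17
c = encrypt(65, pub)
print(decrypt(c, priv))               # 65: the plaintext is recovered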

VI. STATISTICAL MODEL

Set Theory Analysis:

Identify the input user data: IN = {in1, in2, in3, ..., inn}, where IN is the set of input user data.

Identify the public keys: PK = {pk1, pk2, pk3, ..., pkn}, where PK is the set of user public keys.

Identify the private keys: PriK = {prik1, prik2, prik3, ..., prikn}, where PriK is the set of user private keys.

Identify the key exchanges: KE = {ke1, ke2, ke3, ..., ken}, where KE is the set of key exchanges.

Identify the key generations: KG = {kg1, kg2, kg3, ..., kgn}, where KG is the set of key generations.

Identify the symmetric keys: SK = {sk1, sk2, sk3, ..., skn}, where SK is the set of symmetric keys.

Identify the signing keys: SIK = {sik1, sik2, sik3, ..., sikn}, where SIK is the set of signing keys.

Process:

We define a security property, data-origin integrity, which states that the source from which a piece of data is generated can be verified. We give a concrete illustration of how data-provenance integrity can be realized for system-level data.

Identify the processes as P = {P1, P2, P3, P4, ..., Pn}, with

P1 = {e1, e2, e3, e4, e5, e6}

where
e1 = input data
e2 = key generation
e3 = key exchange
e4 = create signature
e5 = verify signature
e6 = display message

Create Signature:

We use the UMAC algorithm to generate a signature for each packet:

UMAC signature = H_K(S) XOR F(nonce)

where H is the hash algorithm, K the signing key, S the source data, and F a pseudorandom number generator.
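The sketch below mimics this structure with standard-library primitives. It is not the real UMAC (which is built on universal hashing); HMAC-SHA256 stands in for H_K and a keyed hash of the nonce stands in for F, purely to illustrate the H_K(S) XOR F(nonce) composition.

import hmac, hashlib, os

def f_prf(key: bytes, nonce: bytes) -> bytes:
    # Stand-in pseudorandom function F(nonce).
    return hashlib.sha256(key + nonce).digest()

def sign_packet(key: bytes, payload: bytes, nonce: bytes) -> bytes:
    h = hmac.new(key, payload, hashlib.sha256).digest()   # H_K(S)
    pad = f_prf(key, nonce)                               # F(nonce)
    return bytes(a ^ b for a, b in zip(h, pad))           # XOR combination

key, nonce = os.urandom(32), os.urandom(8)
tag = sign_packet(key, b"packet bytes", nonce)
# A verifier holding the same key and nonce recomputes and compares the tag.
print(tag == sign_packet(key, b"packet bytes", nonce))    # True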

Signature Encryption:

We use an advanced cryptographic algorithm for signature encryption:

CB1 = P XOR KB1
CB2 = CB1 >> 3
CB3 = CB2 XOR KB2
CB4 = CB3 XOR KB3


CB4 is the encrypted data, where P is the plaintext, CB a cipher block, and KB a key block.
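A minimal sketch of these four steps, assuming 32-bit blocks, is given below. Note that as written the right shift discards the low three bits, so the scheme is not invertible; a rotation would be needed to support decryption. This toy cipher is for illustration only and offers no real security.

MASK = 0xFFFFFFFF  # assumed 32-bit block width

def encrypt_block(p: int, kb1: int, kb2: int, kb3: int) -> int:
    cb1 = (p ^ kb1) & MASK     # CB1 = P XOR KB1
    cb2 = cb1 >> 3             # CB2 = CB1 >> 3
    cb3 = (cb2 ^ kb2) & MASK   # CB3 = CB2 XOR KB2
    cb4 = (cb3 ^ kb3) & MASK   # CB4 = CB3 XOR KB3
    return cb4                 # CB4 is the encrypted block

print(hex(encrypt_block(0xDEADBEEF, 0x0F0F0F0F, 0x12345678, 0x9ABCDEF0)))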

The Three-Tier Security Architecture:

Polynomial identifier pool M = {m1, m2, m3, ...} of size m.
Polynomial identifier pool S = {s1, s2, s3, ...} of size s.
The Sign and Access modules are randomly given Km (Km > 1) and 1 identifiers from M, respectively.
The Verify and Access modules are randomly given Ks and Ks−1 identifiers from S, respectively.

VII. EXPECTED RESULTS

Table 1. Expected Results

Input            Sign Module   Verify Module   Output
Any input file   Enabled       Enabled         Verifies the origin and generates result
Any input file   Disabled      Enabled         Fails

The table shows the expected results of the system. The given system improves the security of the data as well as of the kernel module.

VIII. CONCLUSION

A general approach for improving the assurance of system data and properties of a host is described, which has applications in preventing and identifying malware activities. The host-based security solutions defined here complement network-traffic-based analysis. The application of the derivation-investigation mechanism is demonstrated in identifying stealthy malware activities on a host, in order to distinguish malicious or unauthorized data flows from genuine ones on a computer.

The technical contributions are as follows. First, the model and operations of cryptographic derivation identification in a host-based security setting are proposed; their importance for achieving highly assured kernel data and application data of a host, and the associated technical challenges, are pointed out. Second, the origin-investigation approach is demonstrated by a framework for ensuring the integrity of the outbound packets of a host. This traffic-monitoring framework creates checkpoints that cannot be bypassed by malware traffic.

IX. FUTURE SCOPE

In future work, keystroke integrity can also be verified using the same mechanism.

REFERENCES

[1]. Kui Xu, Huijun Xiong, Chehai Wu, Deian Stefan, and Danfeng Yao, "Data-Provenance Verification for Secure Hosts," IEEE Transactions on Dependable and Secure Computing, vol. 9, no. 2, 2012.
[2]. A. Baliga, V. Ganapathy, and L. Iftode, "Automatic inference and enforcement of kernel data structure invariants," in 24th Annual Computer Security Applications Conference (ACSAC), 2008.
[3]. D. E. Denning, "A lattice model of secure information flow," Commun. ACM, 19:236–243, May 1976.
[4]. D. E. Denning and P. J. Denning, "Certification of programs for secure information flow," Commun. ACM, 20:504–513, July 1977.
[5]. W. Cui, R. H. Katz, and W.-T. Tan, "Design and implementation of an extrusion-based break-in detector for personal computers," in ACSAC, pages 361–370. IEEE Computer Society, 2005.



[6]. M. G. Jaatun, J. Jensen, H. Vegge, F. M. Halvorsen, and R. W. Nergard, "Fools download where angels fear to tread," IEEE Security & Privacy, 7(2):83–86, 2009.
[7]. H. Xiong, P. Malhotra, D. Stefan, C. Wu, and D. Yao, "User-assisted host-based detection of outbound malware traffic," in Proceedings of the International Conference on Information and Communications Security (ICICS), December 2009.
[8]. R. Gummadi, H. Balakrishnan, P. Maniatis, and S. Ratnasamy, "Not-a-Bot: Improving service availability in the face of botnet attacks," in Proceedings of the 6th USENIX Symposium on Networked Systems Design and Implementation (NSDI), 2009.
[9]. M. Christodorescu, S. Jha, and C. Kruegel, "Mining specifications of malicious behavior," in ESEC-FSE '07: Proceedings of the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, pages 5–14, New York, NY, USA, 2007. ACM.
[10]. A. Srivastava and J. Giffin, "Tamper-resistant, application-aware blocking of malicious network connections," in RAID '08: Proceedings of the 11th International Symposium on Recent Advances in Intrusion Detection, pages 39–58, Berlin, Heidelberg, 2008. Springer-Verlag.
[11]. S. Garriss, R. Cáceres, S. Berger, R. Sailer, L. van Doorn, and X. Zhang, "Trustworthy and personalized computing on public kiosks," in MobiSys '08: Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, pages 199–210, New York, NY, USA, 2008. ACM.
[12]. A. Baliga, P. Kamat, and L. Iftode, "Lurking in the shadows: Identifying systemic threats to kernel data," in IEEE Symposium on Security and Privacy, pages 246–251. IEEE Computer Society, 2007.
[13]. J. Wei, B. D. Payne, J. Giffin, and C. Pu, "Soft-timer driven transient kernel control flow attacks and defense," in ACSAC '08: Proceedings of the 2008 Annual Computer Security Applications Conference, pages 97–107, Washington, DC, USA, 2008. IEEE Computer Society.
[14]. Z. Wang, X. Jiang, W. Cui, and X. Wang, "Countering persistent kernel rootkits through systematic hook discovery," in RAID '08: Proceedings of the 11th International Symposium on Recent Advances in Intrusion Detection, pages 21–38, Berlin, Heidelberg, 2008. Springer-Verlag.

BIBLIOGRAPHY

Sharda Kadam is currently an M.E. student in the Department of Computer Engineering at Siddhant College of Engineering, Sudumbare, Pune, under Pune University. She received her B.E. degree from SVMEC, Nasik, under Pune University.

P. B. Kumbharkar is currently working as an H.O.D. He is a Ph.D. student at Pune University and received his M.E. degree in Computer Engineering from Pune University.


DESIGN AND IMPLEMENTATION OF ONLINE PATIENT

MONITORING SYSTEM

Harsha G S, Department of Electronics & Communication,
Channabasaveshwara Institute of Technology, Gubbi, 572216, India

ABSTRACT The patient's condition is continuously monitored, and web-access functionality is embedded in the device to enable low-cost, widely accessible, and enhanced user-interface functions. A web server in the device provides access to the user-interface functions through the device's web page, which is dynamically updated with the most recently monitored patient values from the module, so that any end user on the network can view live values on the webpage.

KEYWORDS - Embedded; Ethernet; TCP/IP Protocols; LPC1768; Cortex M3.

I. INTRODUCTION

Online data monitoring is one of the promising trends in the era of computing in today's system-automation industry. The proposed project is one such attempt at designing an online patient-condition monitoring system using the Cortex-M3 core [2]. In this project we develop Ethernet device drivers for the Cortex-M3 core to transmit the monitored sensor data (the patient's condition) to the Internet [1]. The system can complete remote monitoring and maintenance operations of equipment through the network using a web browser [7]. By introducing the Internet into the control network, it is possible to break through the spatial and temporal restrictions of a traditional control network and effectively achieve remote sensing, monitoring, and real-time control of equipment. The main essence of this project is to design and implement an online data-monitoring system using the ARM Cortex-M3 core and a TCP/IP Ethernet connection for data-monitoring applications [3].

II. ORGANIZATION OF THE PAPER

The Introduction states the problem, the existing and proposed systems, and the disadvantages of the old system as well as the advantages of the newly designed system. Related Work explains the terminology needed to understand the Ethernet protocol, the main communication protocol used in the system. Requirements lists the hardware and software specifications for the project; it also gives the overall description of the project, the product perspective, user characteristics, and specific requirements, and explains the design constraints and the interface and performance requirements. System Definition deals with the clear definition of the properties and characteristics of the embedded system prior to starting hardware and software development, which is essential to achieving a final result that matches its target specifications. System Design presents the overall flow of the project with data-flow and sequence diagrams. System Implementation explains the coding guidelines and system maintenance for the project. Results explains the outcome of the experiment in detail and compares it with the results obtained with the existing system. Conclusion and Future Work summarizes the work and describes future enhancements of the proposed system. The Reference section contains the papers and other documents referred to in the research, and the Author Profile gives a brief introduction to the author.


III. RELATED WORK

The traditional deployment of monitoring devices is based on simple circuitry or low-level controllers. This makes them unable to cope with the processing speed required and leaves them lacking functionality such as Ethernet support, SD-card support, UARTs, and timers and counters [6]. They therefore fail to communicate with hospital personnel or the doctor directly, which can result in serious situations. Automating this process is a modern approach to this problem and is already implemented in many hospitals; however, although the data is monitored automatically in those systems, it is not capable of reaching the doctor or concerned person or of being brought to their notice [8].

IV. REQUIREMENTS

These requirements must be met in order to carry out the project.

A. Hardware Requirements

The hardware requirements for this project are as follows:

LPC1768 H-Plus Ex Header Board: Cortex-M3 development board from Coinel Technologies Ltd., Bangalore.
Graphical LCD: 128×68-pixel display with white backlit LED.
Sensors: pulse rate (NS13), temperature (LM35), and gas sensors used to obtain the patient's body conditions.
Buzzer: used as an actuator to produce an alarm when a threshold is reached.
GSM module (SIM300): to send SMS alerts to the doctor or concerned person.
CooCox CoLinkEx J-Tag Debugger: to debug the program and to burn the program hex code into the processor.
Ethernet cable: to connect the board to the network.
USB power-supply adapter: to power the LPC1768 H-Plus Ex board.
D-Link Wi-Fi router: to create the wireless network.

B. Software Requirements

The software requirements for this project are as follows:

Keil IDE: the Keil development environment used to edit, compile, and debug code; it can also burn the code to the processor using the plugins required for the J-Tag code burner.
Flash Magic: to burn the hex file into the microcontroller.
CooCox CoLinkEx J-Tag Debugger: to debug code live on the hardware, step by step; software breakpoints can also be used.
WebConverter V1.0: to convert the webpage from HTML format into an array of strings.
MyPublicHotspot: to create a WLAN using the laptop's Wi-Fi modem.

V. SYSTEM DEFINITION

Defining the system based on the hardware and software is an important stage. There are four steps in defining a system with Ethernet connectivity for an embedded product, and throughout these four stages we consider the embedded system as a black box. A clear definition of the properties and characteristics of the embedded system prior to starting hardware and software development is essential to achieving a final result that matches its target specifications.

A. Specifying Required Functionality

The first step is to focus on the aim of the system being designed. This is application-specific and can encompass virtually anything that can be done using an MCU of considerably high speed. One such example can be seen in figure 1.


Figure 1. System Functionality Example.

B. Specifying Access Method

After specifying the required functionality, the next step is to specify the access method, i.e., how the output generated by the controller can be accessed.

Figure 2. User Interface Options.

Figure 2 shows the black-box representation of the embedded system and how the various access methods can be used to monitor and control it.

In our project we can opt for one or more of the commonly used access methods:

Using a web browser.
Using HyperTerminal.
Having the embedded system send e-mail.
Using a custom application.

C. Specifying Configuration Method

Every device connected to a network has both a MAC address and an IP address, which are unique to that system and are used to identify, locate, and communicate with each other. Embedded systems based on the LPC1768 IC need to obtain only the IP address, because the MAC address is preprogrammed in the flash memory of the IC.

There are four common configuration methods to choose from:

Automatic network configuration.
Automatic network configuration with Netfinder.
Static network configuration.
Static network configuration with Netfinder.

D. Specifying Field Re-Programmability Requirements

The final part of the system definition is determining the field re-programmability requirements of the

embedded system.


The options for field re-programmability are:

No support for field re-programmability.

Re-programmability using a 3 or 5 pin header.

Re-programmability using a 10-pin header.

Re-programmability using a bootloader.

VI. SYSTEM DESIGN

System design is a transition phase from a user-oriented documented system to a purely programmatic system for programmers and database personnel. The system design makes the high-level decisions about the overall architecture of the system and provides the understanding and procedural details necessary for implementing the system recommended by the study. The target system is arranged into subsystems based on the analysis structure and the proposed architecture.

A. System Architecture

Figure 3 shows the functional architecture of the system. The pulse-rate, temperature, and gas sensors provide the inputs from which the system obtains its parameters. Using the analog input from these sensors, the processor computes the required data and handles the situation based on the requirements of the system.

Figure 3. Block Diagram of the System.

After the sensor outputs are converted to digital values, they are checked against threshold levels based on the expected normal values; if a threshold is exceeded, an alert is sent to the doctor using the GSM modem. The web page, dynamically updated with the values, is then pushed to the network and can be accessed on any device within that network using a web browser pointed to the defined IP address.

B. System Flow Diagram

Figure 4. Flowchart of the Project.


The process starts with initialization of the inputs and outputs and configuration of system parameters such as the clock frequency. The peripheral devices are then powered and configured for their purpose. Variables are declared and initialized, and prototypes are included. After the initialization of peripherals such as the external interrupt, timer interrupts, Ethernet, and ADC, the sensor data is acquired in analog form, converted into digital values in the units of the measured variable, and checked against the threshold levels.

Figure 5. Flowchart of the Project (Cont.).

The values are updated on the local display. If a threshold level is reached, an SMS alert is sent to the doctor's number using the GSM modem, and the dynamically updated webpage with the most recent values is pushed to the Ethernet. It can then be accessed from the browsers of the devices in the same network, as sketched below.
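The control flow of figures 4 and 5 can be summarized by the following Python pseudologic. The firmware itself is C on the LPC1768; the sensor-read, SMS, and webpage functions here are hypothetical stubs, and the threshold values are assumptions for illustration.

import random, time

THRESHOLDS = {"pulse": 120, "temperature": 38.5, "gas": 300}  # assumed limits

def read_sensors():
    # Stub for the ADC conversions of the pulse, temperature, and gas sensors.
    return {"pulse": random.randint(60, 140),
            "temperature": round(random.uniform(36.0, 40.0), 1),
            "gas": random.randint(0, 500)}

def send_sms_alert(param, value):
    print(f"SMS alert via GSM modem: {param} = {value} exceeds threshold")

def update_webpage(values):
    # Stands in for pushing the dynamically updated page to the network.
    print("webpage updated:", values)

for _ in range(3):      # the firmware loops forever; three passes shown here
    values = read_sensors()
    for param, value in values.items():
        if value > THRESHOLDS[param]:
            send_sms_alert(param, value)
    update_webpage(values)
    time.sleep(0.1)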

VII. SYSTEM IMPLEMENTATION

Implementation is the realization of the application, i.e., the execution of the plan given in the design. This section gives a detailed description of how the project goal is achieved. This phase continues until the system is operating in production in accordance with the defined user requirements. Successful implementation may not guarantee improvement, but it will prevent improper installation.

The implementation of this project is divided into three levels:

Hardware Design
Software Generation
Application Development

A. Hardware Design

With the system definition in place, it is now time to start designing the hardware. The hardware design flow consists of 5 steps corresponding to the 5 sections of a schematic for an embedded system with Ethernet connectivity:

Custom application circuitry - sensors, indicators, and other application-specific circuitry.
MCU - the main system controller.
Ethernet controller - provides the MCU with the capability to send and receive data over a network.
Ethernet connector - the RJ-45 connector, magnetics, and link/activity LEDs.
Power circuit - provides the embedded system with regulated 3.3 V power.

B. Software Generation

In this step of the system design process, we generate the software that interacts with the LPC1768 Ethernet module to provide the embedded system with Ethernet connectivity. Using the TCP/IP Configuration Wizard, this is one of the easiest steps in the entire system design process.


C. Application Development

The application code that implements the required system functionality specified in the first step of the system definition must co-exist and share resources with the TCP/IP library. To develop this code, we need a good understanding of how the TCP/IP stack operates.

Figure 6. Adding Dynamic HTML Content.

Figure 6 shows how the HTML code can be added to the main loop and how the values can be updated dynamically on the webpage.
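One common way to achieve this, sketched here in Python for clarity, is to substitute placeholders in the stored page with the live sensor values each time the page is served. The actual implementation performs the equivalent substitution in C inside the TCP/IP library's page handler; the template and field names below are our own illustration.

PAGE_TEMPLATE = """<html><body>
<h1>Patient Monitor</h1>
<p>Pulse: {pulse} bpm</p>
<p>Temperature: {temperature} C</p>
<p>Gas level: {gas}</p>
</body></html>"""

def render_page(pulse: int, temperature: float, gas: int) -> str:
    # Substitute the most recently monitored values into the stored template.
    return PAGE_TEMPLATE.format(pulse=pulse, temperature=temperature, gas=gas)

print(render_page(pulse=72, temperature=36.8, gas=120))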

VIII. RESULTS

The output can be seen on any device within the same network by accessing the IP address assigned to the webpage from any web browser. A snapshot of the resulting webpage is shown in figure 7.

Figure 7. Snapshot of the Result Webpage.


IX. CONCLUSION & FUTURE WORK

A. Conclusion

Three sensors are interfaced with the controller and are observed to work correctly.
TCP/IP libraries are used to communicate over Ethernet.
The Cortex controller proves to be highly flexible and efficient.
Tasks are efficiently managed in the loop so that all functions are performed properly within the available CPU time.
Software sanity is obtained when interfacing with all the peripherals.
A WLAN is created and works correctly with all the clients in the network.
Outputs are checked on the GLCD, mobile phones, tablets, laptops, and desktop PCs.

B. Future Work

Actuators can be controlled automatically based on the measured body conditions.
Provisions can be made for the doctor to control these actuations from the webpage.

REFERENCES

[1] Dhruva R. Rinku and Mohd Arshad, "Design and implementation of online data acquisition and controlling system using Cortex-M3 core," IJESAT, Volume 3, Issue 5, pp. 259–263, © 2013.
[2] S. Navaneethakrishnan, T. Nivethitha, and T. Boobalan, "Remote patient monitoring system for rural population using ultra low power embedded system," ISSN: 2319-5967, ISO 9001:2008 Certified, © 2013.
[3] IEEE Standards for Local Area Networks: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications, ANSI/IEEE Std 802.3-1985, 1985.
[4] Jean J. Labrosse, "Embedded Systems Building Blocks," R&D Books, pp. 61–100.
[5] Jan Axelson, "Embedded Ethernet and Internet Complete," Lakeview Research LLC, 2003.
[6] Joseph Yiu, "The Definitive Guide to ARM Cortex-M3," Newnes, © 2007.
[7] Michael J. Pont, "Embedded C," Pearson Education, © 2007.
[8] "LPC17xx User Manual," NXP Semiconductors.
[9] Arul Prabahar A and Brahmanandha Prabhu, "Development of a Distributed Data Collection System based on Embedded Ethernet," © 2011 IEEE.
[10] "GSM basics and history," Tutorials Point, Wikipedia, and Private Line.

AUTHOR PROFILE

Harsha G S was born in Karnataka, India, in 1990. He received the B.E. and M.Tech. degrees in Electronics from Visvesvaraya Technological University, Belgaum, India, in 2012 and 2014, respectively. He is the founder of the technology blogs www.GoHarsh.com and www.TecLogs.com, and in early 2012 he founded a web-design company, www.WebsCrunch.com. His main areas of research interest are embedded systems, VHDL, image processing, and web design.


COMPARISON BETWEEN CLASSICAL AND MODERN

METHODS OF DIRECTION OF ARRIVAL (DOA) ESTIMATION

Mujahid F. Al-Azzo, Khalaf I. Al-Sabaawi
Department of Communications Engineering, Faculty of Electronics Engineering, Mosul University, Iraq

ABSTRACT In this paper, a comparison between the classical method, the Fast Fourier Transform (FFT), and the modern methods, the MUltiple SIgnal Classification (MUSIC) method and the EigenVector (EV) method, for direction-of-arrival estimation is investigated. Simulation results of single-source and two-source DOA estimation are presented for the three methods (FFT, MUSIC, and EV). An investigation is then made of the performance of the three methods in terms of the minimum difference between source angles versus SNR for a fixed number of elements. Experimental results are obtained for single-source DOA estimation using a set of ultrasonic transducers.

KEYWORDS: Array Signal Processing, DOA, MUSIC, EV.

I. INTRODUCTION

Increasing demand for wireless technology services has spread into many areas such as sensor networks, public security, environmental monitoring, smart antennas for mobile systems, and search and rescue. All of these applications are the main reason for determining the direction of arrival (DOA) of incoming signals in wireless systems. DOA estimation is also used in other applications such as radar, sonar, seismology, and defense operations. In smart-antenna technology, a DOA estimation algorithm is usually incorporated to develop systems that provide accurate location information for wireless services [1]. The DOA technique is one branch of array signal processing [2]. This paper is based on a uniform linear array (ULA) of multiple sensors, in which the receiving array antenna is used to extract useful information.

Many algorithms have been developed to solve the DOA problem [3]: beamforming, ESPRIT, the maximum-likelihood algorithm, subspace methods (Pisarenko Harmonic Decomposition (PHD) [4], MUltiple SIgnal Classification (MUSIC) [5], and the EigenVector method (EV) [6]), and others. In this paper we use the classical method (FFT) [7] and the modern methods (MUSIC and EV) and compare them. The paper includes the problem formulation of DOA estimation, after which the theoretical and mathematical expressions for the MUSIC and EV methods are introduced; the simulation and experimental results are then presented. Conclusions and suggestions for future work are the last sections of this paper.

II. PROBLEM FORMULATION

We assume a system consisting of a uniform linear array (ULA) with N elements and M sources, with inter-element distance d. The first element of the array is taken as the reference element. The sources are in the far field, and the incoming signals are plane waves. The system is shown in figure 1.


The data received at the array antenna is

X = A s + n            (1)

where A = [a(θ1), a(θ2), ..., a(θM)] is the matrix of array steering vectors, s is the signal source vector, and n is an additive noise term with zero mean and covariance σ²I.

The algorithms used in this paper are based on the autocorrelation matrix of the received data [8]. These algorithms are MUltiple SIgnal Classification (MUSIC) and the EigenVector method (EV), and they are compared with the classical Fourier Transform (FT) method.
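As an illustration of eq. (1), the following numpy sketch (our own helper, not the authors' code) generates synthetic ULA snapshots with half-wavelength spacing and forms the sample autocorrelation matrix used by the algorithms below.

import numpy as np

def steering(theta_deg, n_elem, d=0.5):
    # a(theta) for a ULA with element spacing d in wavelengths.
    k = np.arange(n_elem)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def ula_snapshots(angles, n_elem=10, n_snap=1000, snr_db=30.0):
    A = np.column_stack([steering(a, n_elem) for a in angles])   # N x M
    s = (np.random.randn(len(angles), n_snap)
         + 1j * np.random.randn(len(angles), n_snap)) / np.sqrt(2)
    sigma = 10 ** (-snr_db / 20)
    n = sigma * (np.random.randn(n_elem, n_snap)
                 + 1j * np.random.randn(n_elem, n_snap)) / np.sqrt(2)
    return A @ s + n                                             # X = A s + n

X = ula_snapshots([20.0])
R = (X @ X.conj().T) / X.shape[1]    # sample autocorrelation matrix R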

III. MULTIPLE SIGNAL CLASSIFICATION (MUSIC)

MUSIC is a high-resolution algorithm based on the eigen-decomposition of the autocorrelation matrix. The method decomposes the covariance matrix into two subspaces, a signal subspace and a noise subspace. The direction of arrival of the incoming signals is determined from the steering vectors that are orthogonal to the noise subspace, i.e., by finding the peaks in the spatial power spectrum.

Suppose there are M sources; the signal received by an N-element uniform linear antenna array is given by

X = Σ_{i=1}^{M} a(θi) si + n = A S + n            (2)

The signal auto covariance matrix can be written as the average of N array output samples:

𝑅 = 𝐸{𝑋𝑋𝐻} The eigen-decomposition is

𝑅 = 𝑄𝛬𝑄𝐻 = ∑ 𝜆𝑖𝑞𝑖𝑞𝑖𝐻𝑁

𝑖=1 (3)

Where 𝛬= 𝑑𝑖𝑎𝑔 (𝜆1, 𝜆2,…, 𝜆𝑁) it is eigenvalues and sorting in ascending sequence: 𝜆1 ≥ 𝜆2, … , ≥𝜆𝑀 > 𝜆𝑀+1=… = 𝜆𝑁. That is the first M eigenvalues are in connection with the signal and their numeric

value are all more than 𝜎2. The signal divided into two subspace signal subspace and noise subspace.

The signal subspace is the eigenvector (𝑞1, 𝑞2, … , 𝑞𝑀) corresponding to the largest eigenvalues (𝜆1

, 𝜆2,…, 𝜆𝑀), so the signal subspace is: 𝑄𝑠 = [𝑞1, 𝑞2, … , 𝑞𝑀]. 𝛬𝑠is the diagonal matrix consist of the m

larger eigenvalues. The later N-P eigenvalues are totally depended on the noise and their numeric value

are 𝜎2 . The noise subspace is the eigenvector corresponding to the remaining eigenvalues (𝜆𝑀+1

, 𝜆𝑀+2,…, 𝜆𝑁), so the noise subspace 𝑄𝑛 = [𝑞𝑀+1, 𝑞𝑀+2, … , 𝑞𝑁]. 𝛬𝑛is the diagonal matrix consist of

the m larger eigenvalues. So 𝑅 could be divided into:

𝑅 = 𝑄𝑠𝛬𝑠𝑄𝑠𝐻 + 𝑄𝑛𝛬𝑛𝑄𝑛

𝐻 (4)

Since each steering vector is orthogonal to the noise subspace, Qn^H a(θi) = 0 for i = 1, 2, ..., M, the MUSIC spectrum is derived as:

P_MUSIC(θ) = 1 / (a^H(θ) Qn Qn^H a(θ))            (5)

From eq. (5), we can estimate the DOA by searching for the peak values [9].
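A compact numpy sketch of eq. (5) is given below (again our own illustration, not the authors' MATLAB code); it scans a grid of candidate angles and searches for peaks.

import numpy as np

def steering(theta_deg, n_elem, d=0.5):
    k = np.arange(n_elem)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def music_spectrum(R, n_src, grid=np.arange(-90.0, 90.5, 0.5)):
    w, V = np.linalg.eigh(R)          # eigenvalues in ascending order
    Qn = V[:, :R.shape[0] - n_src]    # noise subspace: the N - M smallest
    p = []
    for th in grid:
        a = steering(th, R.shape[0])
        proj = Qn.conj().T @ a        # Qn^H a(theta)
        p.append(1.0 / np.real(np.vdot(proj, proj)))
    return grid, np.array(p)

# Example with an ideal single-source covariance at 20 degrees:
a = steering(20.0, 10)
R = np.outer(a, a.conj()) + 1e-3 * np.eye(10)   # signal + noise floor
grid, p = music_spectrum(R, 1)
print("estimated DOA:", grid[int(np.argmax(p))], "degrees")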

Fig. 1. Uniform Linear Array (ULA) and the Direction of Arrival (DOA).


IV. EIGENVECTOR METHOD (EV)

In addition to the MUSIC algorithm, a number of other eigenvector methods have been proposed for estimating the DOA. One of these, the EigenVector (EV) method, is closely related to the MUSIC algorithm. Specifically, the EV method estimates the exponential frequencies from the peaks of the eigenspectrum:

P_EV(θ) = 1 / ( Σ_{i=M+1}^{N} (1/λi) |a^H(θ) qi|² )            (6)

where λi is the eigenvalue associated with the noise-subspace eigenvector qi, and a(θ) is the array steering vector.

The only difference between the EV method and MUSIC is the use of inverse-eigenvalue weighting in EV (the λi are the noise-subspace eigenvalues of R) versus unity weighting in MUSIC, which causes EV to yield fewer spurious peaks than MUSIC. The EV method is also claimed to shape the noise spectrum better than MUSIC [9].
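The EV weighting of eq. (6) is a small change to the MUSIC computation: each noise-subspace projection is divided by its eigenvalue. A numpy sketch under the same assumptions as the MUSIC sketch above:

import numpy as np

def ev_spectrum(R, n_src, steering, grid=np.arange(-90.0, 90.5, 0.5)):
    # steering(theta, n_elem) is the ULA helper from the earlier sketches.
    w, V = np.linalg.eigh(R)                 # ascending eigenvalues
    n_noise = R.shape[0] - n_src
    lam, Qn = w[:n_noise], V[:, :n_noise]    # noise eigenvalues/eigenvectors
    p = []
    for th in grid:
        a = steering(th, R.shape[0])
        proj = np.abs(Qn.conj().T @ a) ** 2  # |qi^H a(theta)|^2 per eigenvector
        p.append(1.0 / np.sum(proj / lam))   # inverse-eigenvalue weighting
    return grid, np.array(p)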

V. SIMULATION RESULTS

The FFT, MUSIC, and EV methods are simulated for DOA estimation in MATLAB. We use a ULA of 10 elements, an inter-element distance of half a wavelength (0.5λ), an SNR of 30 dB, and 1000 snapshots. Figures 2, 3, and 4 show the estimated angle of arrival of a single source for the three methods.

Fig. 2 shows the result of the FFT algorithm, estimating the DOA of a single source at 20°. A high sidelobe is evident, which is one of the disadvantages of the FFT method. The main beam peaks at 20° but is very wide, extending from 5° to 38°. The wide beam causes ambiguity in the DOA estimate, which makes it difficult to locate the exact angle, especially in military applications that require accurate angles. Another disadvantage of the wide beam is power loss.

The result in fig. 3 is that of the MUSIC algorithm, which estimates the DOA of the single source at 20°. The sidelobes are very small (negligible), which is an advantage of the MUSIC method. The main beam of MUSIC is narrower than that of the FFT method, so it overcomes the ambiguity in the DOA estimate and the power loss.


Figure 2. DOA estimation for one source (20°) using the FFT method (SNR = 30 dB, N = 10, d = 0.5λ).


The result in fig. 4 is that of the EV algorithm, which is capable of estimating the DOA of the single source at 20°. The sidelobes are very small (negligible), which is an advantage of the EV method. The main beam of the EV method is very sharp and narrower than the beams of the FFT and MUSIC methods, so it overcomes the ambiguity in the DOA estimate and gives an accurate estimation.

Figures 5, 6, and 7 show the estimated angles of arrival of two sources for the three methods, using the same parameters as in the single-source case. Fig. 5 shows the result of the FFT algorithm. It is able to resolve two adjacent sources, but only when the difference between them is large, more than 20°. Additionally, a high sidelobe is evident, which is one of the disadvantages of the FFT method. The drop between the peaks and the valley of the two sources is insufficient, only about 5 dB, because of the wider beam of the FFT method.


Figure 3. DOA estimation for one source (20°) using the MUSIC method (SNR = 30 dB, N = 10, d = 0.5λ).

Figure 4. DOA estimation for one source (20°) using the EV method (SNR = 30 dB, N = 10, d = 0.5λ).


Fig. 6 shows the result of the MUSIC algorithm. It is able to resolve two adjacent sources (20°, 26°) with a separation of about 6°, much smaller than for the FFT. The sidelobes are very small (negligible), an advantage of the MUSIC method. The drop between the peaks and the valley of the two sources is better than for the FFT, about 10 dB.

Fig. 7 shows the result of the EV algorithm. It is able to resolve two adjacent sources (20°, 24°) with a separation of about 4°, smaller than for the FFT and MUSIC. The sidelobes are very small (negligible), an advantage of the EV method. The drop between the peaks and the valley of the two sources is the best of the three methods, about 18 dB, because of its very narrow beamwidth.


Fig. 5. DOA estimation for two sources (20°, 40°) using the FFT method (SNR = 30 dB, N = 10, d = 0.5λ).

Fig. 6. DOA estimation for two sources (20°, 26°) using the MUSIC method (SNR = 30 dB, N = 10, d = 0.5λ).


Fig. 8 shows the ability of the FFT, MUSIC, and EV methods to resolve sources as the SNR is changed, for a fixed number of elements (10). For the FFT method, at low SNR a large difference between the angles is needed to resolve the sources, because of the method's low resolution; at high SNR only a small difference is needed. For the same parameters, the MUSIC method performs better than the FFT method, and the EV method performs best of all.

Fig. 8. DOA estimation: minimum difference between angles versus SNR.

VI. EXPERIMENTAL RESULTS

Ultrasonic transducers are used in a single-source DOA estimation experiment. The FFT, MUSIC, and EV methods are used for DOA estimation, and a comparison is made between the high-resolution and classical methods for different values of the system parameters. The system parameters are N (the number of samples), Δx (the distance between samples, equal to d in eq. (1)), and Zo (the distance between transmitter and receiver).

In this experiment the parameters are N = 20 samples, Zo = 72 cm, Δx = 0.2 cm, f = 40 kHz, λ = 0.8 cm, and θ = 22° (the direction of arrival).

The results are shown in figures 9, 10, and 11.


Fig. 7. DOA estimation for two sources (20°, 24°) using the EV method (SNR = 30 dB, N = 10, d = 0.5λ).


Fig. 9. DOA estimation for a single source using the FFT method.

Figure 9 demonstrates the FFT method. It can estimate the DOA, but a high sidelobe level of about −8 dB is evident. In addition, the error in the estimate is high, equal to 45%. The beamwidth of the main beam is wide compared to the other methods, equal to 8°. The wide beamwidth introduces estimation error, and if there were two sources within this range it would be difficult to distinguish between them; they would be considered as one source.

Fig. 10. DOA estimation for a single source using the MUSIC method.



Fig. 11. DOA estimation for a single source using the EV method.

Figures 10 and 11 show the results of the MUSIC and EV methods. They are capable of estimating the DOA with small sidelobes, equal to −9.3 dB for MUSIC and −9.5 dB for EV, and there is no error in the estimate. These methods perform best. The performance of the methods is summarized in table 1.

Table 1. Comparison between the performances of the DOA estimation methods for a single source

Method   Estimated angle   Error   SLL
FFT      12°               45%     −8 dB
MUSIC    22°               0%      −9.3 dB
EV       22°               0%      −9.5 dB

VII. CONCLUSIONS

From the simulation and experimental results on the performance of the classical method (FFT) and the modern methods (MUSIC and EV), we can conclude that the classical method works properly at high SNR and with long data records, but its performance degrades as the data length or the SNR decreases. With short data the FFT method needs a large difference between the source angles to resolve them, because of its low resolution and high sidelobe level. The modern methods (MUSIC and EV) are better than the FFT method: for the same parameters used with the FFT method, MUSIC and EV need a much smaller difference between the source angles to resolve them and exhibit negligible sidelobes, because they are high-resolution algorithms whose main beam is very narrow, giving an accurate estimate of the DOA.

VIII. SUGGESTION FOR FUTURE WORK

For future work we suggest realizing experimental results for two-source DOA estimation, investigating the performance of the three methods for two-source DOA estimation, and comparing the simulation and experimental results.




REFERENCES

[1]. Z. Chen, G. Gokeda, and Y. Yu, "Introduction to Direction of Arrival Estimation," Artech House, 2010.
[2]. D. H. Johnson and D. E. Dudgeon, "Array Signal Processing: Concepts and Techniques," Prentice-Hall, Englewood Cliffs, NJ, 1993.
[3]. S. Chandran, "Advances in Direction-of-Arrival Estimation," Artech House, 2006.
[4]. V. F. Pisarenko, "The retrieval of harmonics from a covariance function," Geophys. J. Roy. Astron. Soc., vol. 33, pp. 347–366, 1973.
[5]. R. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE Transactions on Antennas and Propagation, vol. 34, pp. 276–280, 1986.
[6]. D. H. Johnson and S. R. DeGraaf, "Improving the resolution of bearing in passive sonar arrays by eigenvalue analysis," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-30, no. 4, August 1982.
[7]. W. Zhu, "DOA estimation for broadband signal that use FFT interpolation method," IEEE 4th International Conference on Software Engineering and Service Science (ICSESS), 2013; S. F. Cotter, "A two stage matching pursuit based algorithm for DOA estimation in fast time-varying environments," IEEE Proc. of the 15th Intl. Conf. on Digital Signal Processing, 2007.
[8]. Z. Xiaofei, L. Wen, S. Ying, Z. Ruina, and X. Dazhuan, "A novel DOA estimation algorithm based on eigen space," IEEE International Symposium on Microwave, Antenna, Propagation, and EMC Technologies for Wireless Communications, 2007.
[9]. M. H. Hayes, "Statistical Digital Signal Processing and Modeling," John Wiley & Sons, Inc., Georgia Institute of Technology, 2007.

Authors:

Mujahid F. Al-Azzo received the B.Sc. and M.Sc. degrees in electrical engineering (electronics and communication) in 1982 and 1985, respectively, and the Ph.D. in communication engineering in 1999, all from Mosul University, Iraq. His interests are in the fields of signal processing, spectral analysis, and direction-of-arrival estimation.

Khalaf I. Al-Sabaawi received the B.Sc. in electronics engineering (communication department) in 2011 and has been an M.Sc. student since 2012, all at Mosul University, Iraq. His interests are in the fields of signal processing, spectral analysis, and direction-of-arrival estimation.


MODELLING LEAN, AGILE, LEAGILE MANUFACTURING
STRATEGIES: A FUZZY ANALYTICAL HIERARCHY
PROCESS APPROACH FOR READY MADE WARE
(CLOTHING) INDUSTRY IN MOSUL, IRAQ

Thaeir Ahmed Saadoon Al Samman, Department of Management Information Systems, Mosul University, Iraq

ABSTRACT The aim of this research was to develop a methodology to test whether an existing strategy can perform as a lean, agile, or leagile manufacturing strategy. The relevant factors and characteristics were defined from the literature to build the model, which was sent to a company to acquire their responses; the AHP was developed to aid the decision maker in sorting information based on a number of criteria. Furthermore, questionnaires were distributed to internal and external experts in the clothing industry according to their qualifications. We identified the manufacturing features represented by the independent variables and, based on the conditions and characteristics that improve solutions to manufacturing strategies in the clothing manufacturing company in Mosul, examined the three strategies of lean, agile, and leagile by considering certain features. The paper provides evidence that the choice of manufacturing strategy should be based upon a careful analysis of market characteristics. The case study and empirical research reported in this paper are specific to the clothing manufacturing and fashion industries, and there would be benefit in extending the research into other sectors, given the increasing trend toward global sourcing and the high level of price competition. Clothing manufacturing has market characteristics such as short product life cycles, high volatility, low predictability, a high level of impulse purchase, and highly diverse and heterogeneous demand, making issues such as quick response of paramount importance. Whilst there is a growing recognition of the need to match the competitive advantage to the market, there is still limited research into what criteria should aid the choice of manufacturing strategies; this paper attempts to extend our understanding of these issues.

KEY WORDS: lean, agile, leagile, analytical hierarchy process, fuzzy logic

I. INTRODUCTION

With the changes occurring in the nature and level of international competition, many companies have been resorting to new ways of manufacturing. This phenomenon has been called "new wave manufacturing strategies" [74]. During the development of these various strategies, many names were given to them by the companies that started developing and implementing them. Among these strategies are Lean manufacturing, Agile manufacturing, and Leagile manufacturing [6].

1.1. Lean manufacturing conceptual ideology

The term Lean was first used in 1988 by Krafcik [45], a principal researcher in the International Motor Vehicle Program (IMVP) at the Massachusetts Institute of Technology (MIT), to describe what is today known as the Lean Manufacturing or Lean Production paradigm [66]. A major output of the IMVP research efforts was the publication of the book The Machine that Changed the World: The Story of Lean Production [82]. The book chronicled the operations found in the automotive industry, capturing the dramatic differences in approach and ensuing performance found among the world's leading


automakers. In particular, the book examined how the techniques employed by Japanese automakers, namely Toyota, outpaced the performance achieved by U.S. and European competitors. Much has been written in the academic and popular business press about Toyota's much-envied competitive weapon, the Toyota Production System (TPS) [27].

Every industrialized country has now fully recognized the importance and benefits of the lean ideology [52]. Lean manufacturing (LM) uses half of everything: human effort in the factory, manufacturing space, investment in tools, and engineering hours to develop a new product [30]. It makes use of its tools to strive for zero inventories, zero downtime, zero defects, and zero delays in the production process.

The implementation of lean principles in any organization begins with identification of the value stream. Value is all the aspects of a product that a customer is willing to spend his or her money on [27], i.e., all those activities required to deliver a product (goods or services) to a customer. The numerous activities performed in any organization can be categorized into the following three types [19]:

(i) Value Adding Activities (VAA) - all of the activities that the customer acknowledges as valuable.

(ii) Non Value Adding Activities (NVAA - Type II Muda) - all the activities that the customer considers non-valuable, whether in a manufacturing system or in a service system. Waste can be described as the opposite side of value on the Lean coin. These are pure wastes, involving unnecessary actions that should be eliminated completely.

(iii) Necessary but Non Value Adding Activities (NNVAA - Type I Muda) - the activities that are necessary under the current operating conditions but are considered non-valuable by the customer [30].

Lean manufacturing focuses on the elimination of waste ("muda" in Japanese). Taiichi Ohno believed that the elimination of waste was fundamental to any company's success. Ohno (1988) developed a list of seven basic forms of muda, to which an eighth is often added:

(i) Overproduction (production ahead of demand).
(ii) Transportation (unnecessary transport of products).
(iii) Waiting (waiting for the next production step).
(iv) Inventory (all components, work-in-progress, and finished product not being processed).
(v) Motion (unnecessary movement of operators or equipment).
(vi) Over-processing (unnecessary processing).
(vii) Defects in production (the effort involved in inspecting for and fixing defects).
(viii) Unused creativity [30].

Womack and Jones (1996) added to this list the muda of goods and services that fail to meet the needs of customers.

Lean stresses the need for an organization to develop a culture of continuous improvement in quality, cost, delivery, and design [6]. Production is agile if it efficiently changes operating states in response to the uncertain and changing demands placed upon it. Production is lean if it is accomplished with minimal waste due to unneeded operations, inefficient operations, or excessive buffering in operations. It can thus be concluded that while agility presumes leanness, leanness does not presume agility [30]. Lean production can be effectively utilized to remove waste and improve business performance. The emphasis on eliminating losses and the waste of resources is largely associated with lower inventory, which is clearly shown by the "rocks and ship" analogy: as the stock (the water level) is reduced, sources of waste (the rocks) appear, in the form of latency, poor quality conformance, prolonged setups, unreliable processes, etc. Removing these losses allows lower inventory without a negative effect on the flow of materials (the ships) [30][67]. Womack, Jones, and Roos add that, in comparison to a mass-production approach, a lean company calls for far less inventory and incurs fewer defects while providing greater variety in products [27][82][49].

1.2. Agile manufacturing conceptual ideology

The term 'agile manufacturing' refers specifically to the operational aspects of a manufacturing company, which translate into the ability to produce customized products at mass-production prices and with short lead times [63][72].


There are a number of research reports available in the literature that discuss the concept of agile manufacturing [7][8][9][15][19][22][27][54][66][83].

Agile manufacturing is a new term used to represent the ability of a producer of goods and services to survive and flourish in the face of continuous change. These changes can occur in markets, technologies, business relationships and all other facets of the business enterprise [13].

Agile manufacturing can be defined as the capability of surviving and prospering in a competitive environment of continuous and random change by reacting quickly and effectively to changing markets, driven by customer-designed products and services [42].

According to Naylor et al. (1999), "agility means applying market knowledge and a virtual corporation to exploit profitable opportunities in a rapidly changing market place". The relation between agility and flexibility is extensively discussed in the literature [13], and it has been proposed that the origins of agility lie in flexible manufacturing systems [21].

Agility can be obtained by systematically developing and acquiring capabilities that let the supply chain react rapidly and diversely to environmental and competitive changes [41]. Consequently, firms need a number of distinguishing attributes to deal promptly with the changes in their environment. Such attributes include four main elements [70]: responsiveness, competency, flexibility/adaptability and quickness/speed. The basis for agility is the integration of information technologies, staff, business process organization, innovation and facilities into main competitive attributes. The main points of the various authors' definitions may be summarized as follows:

- High quality and highly customized products.
- Products and services with high information and value-adding content.
- Recruitment of core competencies.
- Responsiveness to social and environmental issues.
- Combination of diverse technologies.
- Response to change and uncertainty in demand.
- Intra-enterprise and inter-enterprise integration [18][16].

The implementation of agile strategies has several benefits for firms, including quick and efficient reaction to changing market demands, the ability to customize products and services delivered to customers, the capability to manufacture and deliver new products in a cost-efficient mode [76], decreased production costs, enhanced customer satisfaction, elimination of non-value-added activities and increased competitiveness.

Most publications on agility strategies can be classified into four categories:
1. Conceptual models and frameworks for achieving agility; these mainly include dimensions [19], enablers [23], aspects [85] and theoretical models [70], as well as methodologies to support the implementation of agility through identifying drivers and providers [87].
2. Paths to agility, which consider flexibility practices in terms of volume flexibility, modification flexibility and delivery flexibility, and responsiveness from three facets (volume, product, process), as vital paths to agility [69][35][55][81].
3. Measuring and assessing the performance of agility; this includes the exploration of rules for assessment, the identification of criteria and the establishment of an agility index, proposed as a way of measuring the intensity level of agility attributes [85][46][84].
4. The development of agility in a supply chain context [64][34][71][5][86].

Therefore, agility has been advocated as the commerce paradigm of the century; in addition, agility is considered the winning strategy for becoming a global leader in an increasingly competitive market of quickly changing customer requirements [1][38][17]. Agile manufacturing aims to meet changing market requirements through suitable alliances based on core competencies, by organizing to manage change and uncertainty, and by leveraging people and information [22][54].

Agile manufacturing does not represent a series of techniques so much as it represents a fundamental change in management philosophy [48]. It is not about small-scale improvements but a completely different way of doing business [42], with a primary emphasis on flexibility and quick response to changing markets and customer needs.


1.3. LEAGILE manufacturing conceptual ideology

Agility as a concept comprises responding to change (predictable or unexpected) in the appropriate way and in due time, and exploiting and taking advantage of changes as opportunities. Harrison and Van Hoek (2005) argue that "where demand is volatile, and customer requirements for variety is high, the elimination of waste becomes a lower priority than responding rapidly to the turbulent marketplace". A similar view is shared by Desai et al. [12]: the lean production philosophy with its current set of tools will not be able to tackle the increasing demand for customer-specific products, prompting organizations to move towards more agile production philosophies considered better suited to handling customer-specific requirements with flexibility and responsiveness (Desai et al.; Sharifi and Zhang) [63][12].

With increasingly customized products, the 'mass markets' will be split into multiple niche markets in which the most significant requirements will tend to move towards service level. Reference [43] recognized cost as the market winner for systems operating on the lean manufacturing philosophy, while Mason-Jones et al. (in Kim et al. [13]) identified service level as the market winner for agile manufacturing philosophies. In all the cases named above, cost, quality and lead time are market qualifiers where they are not market winners. Rapid changes in the business environment and uncertainty have long been part of management studies and research, and managing uncertainties still remains one of the most important tasks for organizations.


Lean production is a broad concept and encompasses terms such as flexible manufacturing, mass customization and even agile manufacturing. When lean tools are effectively applied with agility taken into consideration, flexibility can be increased by introducing safety stocks or by operating with some free capacity; this will ensure that the supply chain is robust to changes in end consumers' requirements [57]. However, Naylor et al. [57] warn that leanness and agility are mutually exclusive and cannot be simultaneously applied at the same point in a supply chain: while leanness operates best when planning horizons are long and product variants are few, agility requires reactivity to customer orders over short, uncertain planning horizons and highly customized product variants. This has resulted in the coining of a new production philosophy, leagile [57].

According to Agarwal et al. [1], leagility is a philosophy best suited to an entire supply chain and not to a single point in the supply chain; leagile blends the lean and the agile. Christopher and Towill (Christopher, 2000; [10]) expanded the discussion in Naylor et al. (1999) and Mason-Jones et al. (2000a, 2000b), supported the concept of hybrid manufacturing strategies and identified three practical ways of combining the lean and agile paradigms:

The first is the Pareto curve approach: adopting lean for the 20% of high-volume products that account for 80% of demand, and agile for the 80% of products that account for 20% of demand.

The second is the use of a decoupling point or postponement principle. A decoupling point is a position in the supply chain where one production paradigm takes over from another [57][40][1]. Since the lean philosophy focuses on cost efficiency along the whole value chain, its tools can be used to run operations up to the decoupling point in a cost-efficient way [31][1], while the agile production principles are applied on the other side of the decoupling point. There remain challenges, however, such as determining the position of the decoupling point so that the burden is fairly divided across the participants in the supply chain. At the same time it is important to have the decoupling point closer to the customer, so that lean practices can be applied to a greater portion of the value chain, since its position depends on the end user, on lead-time sensitivity and, further, on where variability is greatest in the supply chain [57].

The third approach is to separate demand into base and surge demand, using lean for base demand and agile for surge demand. The terms agile and leagile should remind us that we must be responsive to change and uncertainty. We may need to come up with a number of other metrics required of our management systems and processes, all of which may already exist within companies working at the level of best practice and based on the same basic good manufacturing practices; but the lean tools still remain the foundation [11].


These three strategies are complementary rather than mutually exclusive, yet each is likely to work better than the others in certain contexts. In addition to decoupling-point identification, there are other approaches to achieving a leagile supply chain. One such method is transshipment, which naturally leads to coordinated replenishment policies across locations.

Another approach was proposed by Stratton and Warburton (2003). They argued that leanness and agility were in practice caught in a trade-off conflict; however, if it were possible to define the trade-offs explicitly, then a solution resolving this contradiction was possible. In this way, they interpreted the nature of the conflict in terms of dependency, fluctuation, inventory and capacity, and then systematically linked such trade-offs to develop another approach. This approach was combined with two others: the theory of inventive problem solving (TRIZ) and the theory of constraints (TOC). TRIZ, with its principle-based solution systems, was applied to solve the physical contradictions between leanness and agility, while TOC was used to resolve trade-offs, highly complementing TRIZ. However, although various ways to combine them may exist, it is significantly important that lean production is a necessary but not a sufficient condition for achieving agility (Kidd, 1994; Robertson and Jones, 1999; Mason-Jones et al., 2000b; [10]).

Furthermore, agility cannot be achieved without experiencing the relevant stages of leanness. Mason-Jones et al. (2000a) presented two reasons for this fact. First, lean and agile supply chains share many common features that help speed up the achievement of leagility; hence, agility may be initiated by building on the relevant features of leanness. Second, agility requires control of all processes in the supply chain, and it is difficult, if not impossible, to see how agility can be acquired without having first gone through the process-enhancement stage of lean production [36].

1.4. Analytical hierarchy process ideology

The Analytic Hierarchy Process (AHP) method was developed by Thomas Saaty at the beginning of the 1970s and represents a tool for decision-making analysis [12].

Thomas L. Saaty, the author of AHP, called it a process and not a method, probably because of the process character of its elements [38].

The Analytic Hierarchy Process was originally introduced by Saaty [4] as an MCDM (multi-criteria decision making) tool that tries to satisfy several conflicting criteria [59].

The AHP technique can evaluate qualitative, quantitative and intuitive criteria comprehensively, and the level of confidence in it can be raised by carrying out consistency testing. The technique resembles the structure of the human brain and obtains quantitative results by transforming the comparative weights between elements to a ratio scale. It is based on three principles: hierarchical structuring, weighting and logical consistency [43].

Analytic hierarchy process (AHP) is a methodological approach which implies structuring criteria of

multiple options into a system hierarchy, including relative values of all criteria, comparing

alternatives for each particular criterion and defining average importance of alternatives.
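To make the synthesis step concrete, the following minimal sketch (our illustration with hypothetical weights, not values taken from this study) shows how AHP combines criterion weights with per-criterion alternative weights into an overall score for each alternative:

# Illustrative AHP synthesis in Python (hypothetical numbers):
# overall score = sum over criteria of
#                 (criterion weight) x (alternative weight under that criterion)
criterion_weights = {"lead time": 0.30, "cost": 0.25, "quality": 0.20,
                     "productivity": 0.15, "service level": 0.10}
alternative_weights = {
    "lean":    {"lead time": 0.2, "cost": 0.5, "quality": 0.4,
                "productivity": 0.5, "service level": 0.2},
    "agile":   {"lead time": 0.5, "cost": 0.2, "quality": 0.3,
                "productivity": 0.2, "service level": 0.4},
    "leagile": {"lead time": 0.3, "cost": 0.3, "quality": 0.3,
                "productivity": 0.3, "service level": 0.4},
}
for alternative, weights in alternative_weights.items():
    score = sum(criterion_weights[c] * weights[c] for c in criterion_weights)
    print(alternative, round(score, 3))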

1.5. FUZZY LOGIC conceptual ideology

The philosophy of Fuzzy Logic (FL) may be traced back to the diagram of Taiji created by the Chinese before 4600 B.C., but the study of Fuzzy Logic Systems (FLS) began as early as the 1960s. In the 1970s, FL was combined with expert systems to form FLS, which mimic a human-like reasoning process over imprecise information. FLS make it possible to cope with uncertain and complex agile manufacturing systems that are difficult to model mathematically [23]. In the opinion of [2], in managerial practice there are often situations in which it is not enough for managers to rely on their own instincts; with specific fuzzy programs it is even possible to choose suppliers or service providers, or to buy necessary goods.
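As a small illustration of the fuzzy representation used later in this paper, the sketch below (our assumption of the standard triangular formulation, not code from the study) evaluates a triangular membership function:

# Triangular fuzzy number (l, m, u): membership rises linearly from l to the
# modal value m and falls linearly from m to u; it is 0 outside [l, u].
def triangular_membership(x, l, m, u):
    if x <= l or x >= u:
        return 0.0
    if x <= m:
        return (x - l) / (m - l)
    return (u - x) / (u - m)

# A rating represented by (3, 5, 7) fully fits 5 and half-fits 4:
print(triangular_membership(5, 3, 5, 7))  # -> 1.0
print(triangular_membership(4, 3, 5, 7))  # -> 0.5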

II. RESEARCH METHODOLOGY

2.1. Modeling the Manufacturing Strategies Using AHP

Since several strategies can structure a particular manufacturing system, which in turn follows a certain strategy (lean, agile, or leagile manufacturing), a value should be obtained from the measuring factors and characteristics factors of this particular manufacturing system in order to identify the strategy. Therefore, for a proper decision to be made, these factors are modeled using AHP as shown in Figure (1).

Figure 1. The proposed model. [Figure: an AHP hierarchy whose goal is the most appropriate manufacturing system; the measuring factors are lead time, cost, quality, service and productivity; the characteristics factors are elimination of waste (overproduction (OP), inventory/transportation/waiting (IT), knowledge misconnects (KM)), flexibility (source flexibility (SF), manufacturing flexibility (MF), delivery flexibility (DF)), information technology (electronic data interchange (EDI), means of information (MOI), data accuracy (DA), data and knowledge bases (DKB)) and market sensitiveness (delivery speed (DS), new product introduction (NPI), customer responsiveness (CR)); the alternatives are lean, agile and leagile manufacturing.]


The main measuring factors for the lean, agile and leagile strategies depend on five measures (lead time, cost, quality, productivity, service level). A characteristic can be defined as a feature of the property that is obtained by considering several parameters: elimination of waste (over-production; inventory, transportation and waiting; knowledge misconnects), flexibility (manufacturing flexibility, delivery flexibility, source flexibility), information technology (electronic data interchange; means of information and data accuracy; data and knowledge bases) and market sensitivity (delivery speed, new product introduction, customer responsiveness). Hence the manufacturing system in a described state performs under closely specified conditions that produce a metric value [41].

2.2. Data collection

The General Company for Readymade Wear Manufacturing in Mosul, Iraq, was chosen as the field of study. A committee composed of three directors of production departments in the company carried out the evaluation: the director of the first production section (the dishdasha line), the director of the second production section (the Aalghemsalh line) and the director of the third production section (the Altruakh line). They evaluated the manufacturing methods adopted in the company through the measuring factors (lead time, cost, quality, productivity, service level). A questionnaire was designed to seek expert opinion about the ratings required for implementing the lean, agile and leagile manufacturing strategies in industry; the opinions provide the necessary data, captured from internal and external experts according to their qualifications. For this purpose, the relative importance of each criterion was indicated on a corresponding standard scale of relative weights (1-9). The collected data were adjusted using Expert Choice software, a multi-objective decision-support tool based on the Analytic Hierarchy Process (AHP), together with a CGI program. The feedback data input of the five measuring factors filled out in the questionnaire is shown in Table (1).

Table (1): Feedback data input of the five measuring factors

Metric          Production line 1   Production line 2   Production line 3   Mean
Lead time       5,7,9               3,5,7               5,7,9               .833 .633 .433
Cost            3,5,7               1,3,5               3,5,7               .633 .43  .233
Quality         1,3,5               3,5,7               1,3,5               .566 .366 .166
Productivity    5,7,9               3,5,7               5,7,9               .633 .43  .233
Service level   3,5,7               1,3,5               3,5,7               .833 .633 .43

Then the feedback data input of the characteristics factors is shown in Tabs. (2)-(6), which demonstrate the manufacturing performance for lead time, cost, quality, productivity and service level with respect to the lean, agile and leagile manufacturing strategies. A consistency ratio was calculated by the software to check the applicability of the paired comparisons; its value should be 10 percent or less. All the consistency ratios of the tables below are less than 10%.

III. ANALYSIS AND DISCUSSION

The committee assessed the manufacturing methods adopted in the company through the measurement criteria (lead time, cost, quality, productivity, service level) according to fuzzy logic. Linguistic terms are used to assess the performance ratings and importance weights of the integration values, since it is difficult for experts to score vague values such as the training level of personnel [80]. Most of the time, linguistic scales comprise two scales [46]. The linguistic variables {Excellent [E], Very Good [VG], Good [G], Fair [F], Poor [P], Very Poor [VP], Worst [W]} were selected to assess the performance rating of the integration capability (these are the examined coefficients), and the linguistic variables {Very High [VH], High [H], Fairly High [FH], Medium [M], Fairly Low [FL], Low [L], Very Low [VL]} were selected to assess the importance weights of the integration capabilities. Using the previous studies [80][46], the table of fuzzy numbers for the linguistic variable values was created.
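A minimal sketch of how such linguistic ratings can be processed is given below; the mapping from terms to triangular numbers and the component-wise averaging of the three raters are our illustrative assumptions, following the (l, m, u) triples reported in Tabs. (2)-(6):

# Map linguistic ratings to triangular fuzzy numbers and aggregate several
# raters by the component-wise arithmetic mean (illustrative subset of a scale).
SCALE = {"Poor": (1, 3, 5), "Fair": (3, 5, 7), "Good": (5, 7, 9)}

def mean_tfn(tfns):
    n = len(tfns)
    return tuple(round(sum(t[i] for t in tfns) / n, 2) for i in range(3))

# Three decision makers rate one sub-characteristic, as in Tabs. (2)-(6):
ratings = [SCALE["Good"], SCALE["Fair"], SCALE["Good"]]  # (5,7,9) (3,5,7) (5,7,9)
print(mean_tfn(ratings))  # -> (4.33, 6.33, 8.33)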


The results were obtained as shown in Table (1), which represents the input data feedback. It is worth noting that the ready-made Minitab program (2013 release) was used to obtain the results of the matrix multiplications, while the CGI program was used to obtain the values of the membership functions according to fuzzy logic and to calculate the eigenvalue as well as the consistency index, as shown in Tables (2)-(7).

2.2.1. Lead time
Tab. (2) demonstrates the characteristics factors for the measuring factor of lead time.

Tab. (2) Evaluation model for lead time

Measuring factor: Lead time
Characteristics factor   Sub-characteristic                        D1     D2     D3     LEAN   AGILE  LEAGILE
Elimination of waste     Over-production                           1,3,5  3,5,7  1,3,5  .566   .366   .166
                         Inventory, transportation, waiting        1,3,5  1,3,5  1,3,5  .5     .3     .1
                         Knowledge misconnects                     1,3,5  3,5,7  5,7,9  .7     .5     .3
Flexibility              Manufacturing flexibility                 1,3,5  3,5,7  1,3,5  .566   .366   .166
                         Delivery flexibility                      1,3,5  1,3,5  3,5,7  .566   .366   .166
                         Source flexibility                        3,5,7  1,3,5  3,5,7  .633   .433   .433
Information technology   Electronic data interchange               5,7,9  3,5,7  5,7,9  .566   .633   .233
                         Means of information and data accuracy    3,5,7  1,3,5  3,5,7  .633   .366   .166
                         Data and knowledge bases                  1,3,5  3,5,7  1,3,5  .633   .366   .166
Market sensitivity       Delivery speed                            1,3,5  3,5,7  3,5,7  .633   .166   .233
                         New product introduction                  1,3,5  1,3,5  1,3,5  .166   .3     .1
                         Customer responsiveness                   1,3,5  1,3,5  1,3,5  .5     .3     .1
Mean                                                               2.33,4.33,6.33  2,4,6  1.58,3.41,5.25

2.2.2. Cost
Moreover, Tab. (3) demonstrates the characteristics factors for the measuring factor of cost.

Tab. (3) Evaluation model for cost

Measuring factor: Cost
Characteristics factor   Sub-characteristic                        D1     D2     D3     LEAN   AGILE  LEAGILE
Elimination of waste     Over-production                           5,7,9  1,3,5  5,7,9  .766   .566   .366
                         Inventory, transportation, waiting        5,7,9  3,5,7  5,7,9  .833   .633   .433
                         Knowledge misconnects                     5,7,9  3,5,7  5,7,9  .833   .633   .566
Flexibility              Manufacturing flexibility                 5,7,9  3,5,7  3,5,7  .766   .566   .366
                         Delivery flexibility                      3,5,7  1,3,5  3,5,7  .633   .433   .233
                         Source flexibility                        3,5,7  1,3,5  1,3,5  .566   .366   .166
Information technology   Electronic data interchange               3,5,7  3,5,7  1,3,5  .566   .433   .233
                         Means of information and data accuracy    3,5,7  1,3,5  3,5,7  .566   .433   .233
                         Data and knowledge bases                  3,5,7  1,3,5  1,3,5  .566   .366   .166
Market sensitivity       Delivery speed                            3,5,7  1,3,5  1,3,5  .566   .366   .166
                         New product introduction                  5,7,9  3,5,7  3,5,7  .766   .566   .366
                         Customer responsiveness                   3,5,7  1,3,5  1,3,5  .566   .366   .166
Mean                                                               3.83,5.83,7.83  1.83,3.83,5.83  2.66,4.66,6.66

2.2.3. Quality
Furthermore, Tab. (4) shows the characteristics factors for the measuring factor of quality.

Tab. (4) Evaluation model for quality

Measuring factor: Quality
Characteristics factor   Sub-characteristic                        D1     D2     D3     LEAN   AGILE  LEAGILE
Elimination of waste     Over-production                           3,5,7  1,3,5  3,5,7  .633   .433   .233
                         Inventory, transportation, waiting        1,3,5  1,3,5  1,3,5  .5     .3     .1
                         Knowledge misconnects                     3,5,7  3,5,7  1,3,5  .633   .433   .233
Flexibility              Manufacturing flexibility                 1,3,5  5,7,9  3,5,7  .7     .5     .3
                         Delivery flexibility                      3,5,7  5,7,9  1,3,5  .7     .5     .366
                         Source flexibility                        1,3,5  3,5,7  3,5,7  .633   .433   .3
Information technology   Electronic data interchange               3,5,7  5,7,9  3,5,7  .766   .566   .366
                         Means of information and data accuracy    3,5,7  5,7,9  1,3,5  .7     .5     .3
                         Data and knowledge bases                  1,3,5  1,3,5  5,7,9  .633   .433   .233
Market sensitivity       Delivery speed                            5,7,9  3,5,7  5,7,9  .833   .633   .433
                         New product introduction                  5,7,9  5,7,9  3,5,7  .833   .633   .433
                         Customer responsiveness                   3,5,7  5,7,9  3,5,7  .766   .566   .366
Mean                                                               2.66,4.66,6.66  3.5,5.5,7.5  2.66,4.66,6.66

2.2.4. Productivity
In addition, Tab. (5) demonstrates the characteristics factors for the measuring factor of productivity.

Tab. (5) Evaluation model for productivity

Measuring factor: Productivity
Characteristics factor   Sub-characteristic                        D1     D2     D3     LEAN   AGILE  LEAGILE
Elimination of waste     Over-production                           3,5,7  5,7,9  3,5,7  .766   .566   .366
                         Inventory, transportation, waiting        3,5,7  5,7,9  3,5,7  .766   .566   .366
                         Knowledge misconnects                     3,5,7  3,5,7  3,5,7  .7     .5     .3
Flexibility              Manufacturing flexibility                 1,3,5  5,7,9  5,7,9  .766   .566   .366
                         Delivery flexibility                      5,7,9  3,5,7  5,7,9  .833   .633   .433
                         Source flexibility                        3,5,7  5,7,9  3,5,7  .766   .566   .366
Information technology   Electronic data interchange               5,7,9  3,5,7  5,7,9  .833   .633   .433
                         Means of information and data accuracy    3,5,7  5,7,9  3,5,7  .766   .566   .366
                         Data and knowledge bases                  5,7,9  1,3,5  5,7,9  .766   .566   .366
Market sensitivity       Delivery speed                            3,5,7  5,7,9  3,5,7  .766   .566   .366
                         New product introduction                  5,7,9  3,5,7  3,5,7  .766   .566   .366
                         Customer responsiveness                   1,3,5  1,3,5  3,5,7  .566   .366   .166
Mean                                                               3.66,5.66,7.66  3.66,5.66,7.33  3.33,5.33,7.33

2.2.5. Service level
In addition, Tab. (6) demonstrates the characteristics factors for the measuring factor of service level.

Tab. (6) Evaluation model for service level

Measuring factor: Service level
Characteristics factor   Sub-characteristic                        D1     D2     D3     LEAN   AGILE  LEAGILE
Elimination of waste     Over-production                           1,3,5  3,5,7  1,3,5  .566   .366   .166
                         Inventory, transportation, waiting        1,3,5  1,3,5  3,5,7  .566   .366   .166
                         Knowledge misconnects                     3,5,7  5,7,9  3,5,7  .766   .566   .366
Flexibility              Manufacturing flexibility                 5,7,9  5,7,9  3,5,7  .833   .633   .433
                         Delivery flexibility                      3,5,7  1,3,5  3,5,7  .633   .433   .233
                         Source flexibility                        3,5,7  1,3,5  3,5,7  .633   .433   .233
Information technology   Electronic data interchange               5,7,9  3,5,7  5,7,9  .833   .633   .433
                         Means of information and data accuracy    5,7,9  5,7,9  3,5,7  .766   .633   .433
                         Data and knowledge bases                  1,3,5  1,3,5  3,5,7  .566   .366   .166
Market sensitivity       Delivery speed                            3,5,7  1,3,5  3,5,7  .633   .433   .233
                         New product introduction                  5,7,9  3,5,7  1,3,5  .7     .5     .3
                         Customer responsiveness                   1,3,5  3,5,7  1,3,5  .566   .366   .166
Mean                                                               2.66,4.66,6.66  2.66,4.66,6.66  3,5,7

After entering the input, the output of the Minitab program is shown in Tab. (7). All the above means of the characteristics factors are normalized by dividing by 10 (Saaty, 1980). After the data are processed by the methods adopted in the analytic hierarchy analysis, the outputs are converted to a standard format, as shown in Tab. (7).
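A one-step sketch of this normalization (our illustration) is:

# Divide each component of a triangular mean by the scale maximum (10)
# so that all values fall on a common [0, 1] scale (Saaty, 1980).
def normalize_tfn(tfn, scale=10.0):
    return tuple(round(v / scale, 3) for v in tfn)

print(normalize_tfn((2.33, 4.33, 6.33)))  # a mean from Tab. (2) -> (0.233, 0.433, 0.633)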

Tab. (7) Normalized means of the measuring factors for the three decision makers

     Service level      Productivity       Quality            Cost               Lead time
D1   .366 .566 .766     .266 .466 .666     .266 .466 .666     .233 .433 .633     .266 .466 .666
D2   .366 .566 .733     .35  .55  .75      .2   .4   .6       .200 .4   .6       .266 .466 .666
D3   .333 .533 .733     .266 .466 .666     .158 .341 .525     .158 .341 .525     .383 .583 .783


A consistency ratio below 10% was found for the pairwise comparisons described in Tables (8), (9) and (10), the pairwise comparison matrices that show the relative importance of each criterion for lean, agile and leagile manufacturing.

Tab. (8) Pairwise comparison matrix of the index factors for lean manufacturing

                Lead time   Cost       Quality    Productivity   Service level
Lead time       1           0.142857   0.25       0.2            3
Cost            7           1          1          9              7
Quality         4           1          1          3              5
Productivity    5           0.111111   0.333333   1              5
Service level   0.333333    0.142857   0.2        0.2            1

Tab. (9) Pairwise comparison matrix of the index factors for agile manufacturing

                Lead time   Cost       Quality    Productivity   Service level
Lead time       1           9          9          7              7
Cost            0.111111    1          7          5              3
Quality         0.111111    0.142857   1          5              5
Productivity    0.142857    0.2        0.2        1              0.142857
Service level   0.142857    0.333333   0.2        7              1

Tab. (10) Pairwise comparison matrix of the index factors for leagile manufacturing

                Lead time   Cost       Quality    Productivity   Service level
Lead time       1           5          7          5              3
Cost            0.2         1          9          7              5
Quality         0.142857    0.111111   1          5              7
Productivity    0.2         0.142857   0.2        1              0.142857
Service level   0.333333    0.2        0.142857   7              1


Then the standard (synthesized) pairwise comparison matrices for lean, agile and leagile manufacturing are calculated, as shown in Tables (11), (12) and (13).

Tab. (11) Synthesized matrix for lean manufacturing

                Lead time   Cost   Quality   Productivity   Service level   Priority weight (eigenvector)
Lead time       .057        .058   .089      .014           .142            .072
Cost            .403        .418   .359      .671           .333            .436
Quality         .230        .418   .359      .223           .238            .293
Productivity    .288        .046   .118      .074           .238            .152
Service level   .019        .058   .071      .014           .047            .041

The principal eigenvector is necessary for representing the priorities associated with a matrix, provided that the inconsistency is less than or equal to a desired value [62]. The values that appear in the last column represent the relative weights of lean manufacturing for each criterion (lead time, cost, quality, productivity, service level). The consistency ratio is then estimated; a sketch of this computation is given below.
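A sketch of that estimate follows, using the lean pairwise matrix of Tab. (8); NumPy is assumed here, and RI = 1.12 is Saaty's random index for a 5 x 5 matrix:

import numpy as np

# Tab. (8); rows/columns: lead time, cost, quality, productivity, service level
A = np.array([
    [1.0, 1/7, 1/4, 1/5, 3.0],
    [7.0, 1.0, 1.0, 9.0, 7.0],
    [4.0, 1.0, 1.0, 3.0, 5.0],
    [5.0, 1/9, 1/3, 1.0, 5.0],
    [1/3, 1/7, 1/5, 1/5, 1.0],
])
w = (A / A.sum(axis=0)).mean(axis=1)  # column-normalize, then average rows; cf. Tab. (11)
lambda_max = ((A @ w) / w).mean()     # approximate principal eigenvalue
n = A.shape[0]
CI = (lambda_max - n) / (n - 1)       # consistency index
CR = CI / 1.12                        # consistency ratio; the target is <= 0.10
print(np.round(w, 3), round(CR, 3))   # w approximates the .072, .436, ... column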

Tab. (12) Synthesized matrix for agile manufacturing

                Lead time   Cost   Quality   Productivity   Service level   Priority weight (eigenvector)
Lead time       .666        .843   .517      .28            .433            .547
Cost            .073        .093   .402      .2             .185            .190
Quality         .073        .013   .057      .28            .309            .146
Productivity    .093        .018   .011      .04            .008            .034
Service level   .093        .030   .011      .28            .061            .095

Tab. (13) Synthesized matrix for leagile manufacturing

                Lead time   Cost   Quality   Productivity   Service level   Priority weight (eigenvector)
Lead time       .534        .775   .403      .2             .185            .419
Cost            .106        .155   .519      .28            .309            .273
Quality         .074        .017   .057      .2             .433            .156
Productivity    .106        .021   .011      .04            .008            .037
Service level   .176        .031   .008      .028           .061            .111

The results in Tab. (14) summarize the relative weights of the different types of manufacturing strategies according to each measurement criterion (lead time, cost, quality, productivity, service level) for the lean, agile and leagile manufacturing strategies.

Tab. (14) Relative weight of the different types of manufacturing strategies

Measurement factor   Lean    Agile   Leagile
Lead time            .072    .547    .419
Cost                 .436    .190    .273
Quality              .293    .146    .156
Productivity         .152    .034    .037
Service level        .041    .095    .111
Sum                  .994    1.012   .996
Mean                 .1988   .2024   .1992

The normalized means of Tab. (7) are multiplied by the means of the feedback data input of the five measuring factors in Tab. (1); the result of the multiplication is Tab. (15).


Tab. (15) Result of multiplying Tab. (1) by Tab. (7)

Strategy                                                  Average
Lean      2.539   1.786   1.156   .376   .593             1.284
Agile     2.492   1.753   1.135   .369   .582             1.266
Leagile   2.391   1.682   1.089   .354   .558             1.210

Then the following equation is applied:

α = (1.253 - 0.18) / (3.52 - 0.18) = 0.321

where 1.253 is the result of the multiplication and 0.18 and 3.52 are constants. The resulting 0.321 is multiplied by the certainty constant (0.70) to obtain 0.224.
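A short sketch of this step (our illustration of the computation just described):

# Alpha-cut score from the multiplication result; 0.18 and 3.52 are the
# constants stated in the text and 0.70 is the certainty constant.
result = 1.253
alpha = (result - 0.18) / (3.52 - 0.18)
score = alpha * 0.70
print(round(alpha, 3), round(score, 3))  # the paper reports 0.321 and 0.224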

Fig. (2) is consulted to conclude that the readymade-wear company in Mosul falls below the lean baseline (.2888-.1088), below the agile baseline (.2924-.1124) and below the leagile baseline (.293-.109); the readymade-wear company in Mosul is therefore a traditional manufacturer. In order to facilitate its evolution to lean manufacturing, it should implement the following tools: cellular manufacturing, TQM, value stream mapping, 5S and kaizen.

[Figure: triangular fuzzy number with α-cut; the membership axis runs from 0 to 1 (α = .818 marked) and the horizontal axis shows the values 1.818, 2.67, 3.106, 4.159 and 7.332.]
Fig. (2) Fuzzy number used to conclude the manufacturing strategy

The results above summarize the measurement factors (lead time, cost, quality, productivity, service level), with all values converted to the standard form according to expert opinion. The experts' opinions appear in Tab. (16).

Tab. (16) Mean fuzzy number of the experts' opinions

Expert opinion   Lean          Agile         Leagile      Traditional
Total mean       .2888-.1088   .1124-.2924   .1092-.293   0

The consistency ratio, which should be less than 10%, can be calculated for comparison of the fuzzy expert-opinion numbers; Tab. (16) shows the mean fuzzy numbers of the experts' opinions.

IV. CONCLUSIONS

1- Enhanced manufacturing-strategy performance implies that a manufacturer responds quickly to volatile market and customer demand with effective cost reduction. Leanness in manufacturing maximizes profit through cost reduction, while agility maximizes profit by providing exactly what the customer requires. The leagile strategy enables the upstream part of the chain to be cost-effective and the downstream part to achieve high service levels in a volatile marketplace.

2- The AHP methodology adopted here arrives at a synthetic score, which may be quite useful for decision-makers. The purpose of the present work is to analyze the relative impact of different enablers on the three strategic paradigms considered for manufacturing.

3- It integrates various criteria, enablers and alternatives in a decision model. The approach also captures their relationships and interdependencies across and along the hierarchies. It is effective, as both quantitative and qualitative characteristics can be considered simultaneously without sacrificing their relationships.

4- The manufacturing system strategy of the readymade-wear company is traditional manufacturing.

5- To develop the manufacturing system so that it becomes lean, agile or leagile, many tools will help in becoming lean, such as cellular manufacturing, total quality management, poka-yoke, kaizen, value stream mapping and 5S, together with an increased focus on customer service and improved product quality. Some tools will also help in becoming agile, such as customer value focus and IT.

6- The paper provides evidence that the choice of manufacturing strategy should be based upon a careful analysis of the measuring factors and characteristics.

REFERENCES

[1]. Agarwal, A., Shankar, R. & Tiwari, M. K., 2006, Modeling the metrics of lean, agile and leagile supply

chain: An ANP-based approach. European Journal of Operational Research, 173, pp. 211-225.

[2]. Agbor, Tanyi Emmanuel, 2008, An evaluation of production operations at Supply Chain Based on

Lean Concept, A thesis Report submitted to Department of Production Engineering

and Management, partially fulfilling the requirements for the award of the degree of Master of Sciences

in Production Engineering and Management, Royal Institute at Technology Stockholm, Sweden.

[3]. Association for Manufacturing Excellence, 1994, Forward to the Future: The Transition to Agile,

Quick Response Manufacturing Textile/Clothing Technology Corporation [TC]2 shows how to step

sprightly, IJ Textile/Clothing Technology Corporation_ National Apparel Technology Center, Target

Vol. 10, Number 4, Illinois, pp. 41-44.

[4]. Baker, J., 1996, Less lean but considerably more agile. Special section mastering management.

Financial Times, 10 May, Systems 2, pp. 83–86.

[5]. Baramichai, M., Zimmer's, E. W. & Marangos, C. A., 2007, Agile supply chain transformation matrix:

an integrated tool for creating an agile enterprise, Supply Chain Management-an International Journal,

12, pp. 334-348.

[6]. Bhasin ,S. and P. Burcher, 2006, Lean Viewed as a Philosophy, Journal of Manufacturing Technology

Management, 17 (1), pp. 56-72.

[7]. Booth, R., 1996, Agile manufacturing, Engineering Management Journal, 6 (2), pp. 105-112.

[8]. Bunce, P. & Gould, P., 1996, From lean to agile manufacturing, IEE Colloquium (Digest), No. 278, pp.

3/1-3/5.

[9]. Cho, H., Jung, M. & Kim, M., 1996, Enabling technologies of agile manufacturing and its related

activities in Korea, Computers & Industrial Engineering, 30 (3), pp. 323-334.

[10]. Christopher, M. & Towill, D. R., 2001, An integrated model for the design of agile supply chains, An

International Journal of physical distribution and logistics management, 31(4), pp. 235 –

246.

[11]. Dean, Steven P. & Henderson, Ian, 2003, Lean, Agile and Simple, iomnet.org.uk.

[12]. Desai, S., Irani, Z. & Sharp, J. M., 1999, Working towards agile manufacturing in the UK industry, International Journal of Production and Economics, 62, pp. 155-169.

[13]. Devor, R., Graves, R.& Mills, J. J., 1997,Agile Manufacturing Research: accomplishments and

opportunities. IIE Transactions, 29 (10), pp. 813-823.

[14]. Esmail, K. & Saggu, J. A., 1996, Changing paradigm, Manufacturing Engineer December, pp. 285–

288.

[15]. Forsythe, C., Ashby, M. R., 1996, Human factors in agile manufacturing, Ergonomics in Design 4 (1),

pp. 15–21.


[16]. Gharakhani, Davood. Maghferati, Amid Pourghafar. Farahmandian, Arshad and Nasiri, Rasol.( 2013)

Agile manufacturing, Lean production, Just in Time systems and products quality improvement, Life

Science Journal;10(3s) . http://www.lifesciencesite.com

[17]. Ghatari,Ali,Rajabzadeh,GholamhosseinMehralian&ForouzandehZarenezhad, 2012, Developing a

model for Agile Pharmaceutical manufacturing: Evidence from Iran, Australian Journal of Basic and

Applied Sciences, 6(9), pp. 753-762.

[18]. Goldman, S.L. Nagel, R.N. (1993) Management, technology and agility: The emergence of a new era

in manufacturing, International Journal of Technology Management 8 (1/2) 18-38.

[19]. Goldman, S. L., Nage R. N. &Preiss K., 1995, Agile Competitors and Virtual Organizations: Strategies

for Enriching the Customer, Van Nostrand Reinhold Company, New York.

[20]. Goldsby, Thomas J., Griffis, Stanley E. & Roath, Anthony S., 2006, Modeling lean, agile, and leagile supply chain strategies, Journal of Business Logistics, Vol. 27, No. 1, pp. 57-80.

[21]. Gosling, J., Purvis, L., & Naim, M., 2010, Supply chain flexibility as a determinant of supplier

selection, International Journal of Production Economics, pp. 11-21.

[22]. Gould, P., 1997, What is agility, Manufacturing Engineer 76 (1), pp. 28–31.

[23]. Gunasekaran, A., 1998, Agile manufacturing: Enablers and an implementation framework,

International Journal of Production Research, 36 (5), pp. 1223-1247.

[24]. Gunasekaran, A., 1999a, Agile manufacturing: A framework for research and development,

International Journal of Production Economics, 62 (1-2), pp. 87-105.

[25]. Gunasekaran, A., 1999b, Design and implementation of agile manufacturing systems, International

Journal of Production Economics, 62 (1-2), pp. 1-6.

[26]. Gunasekaran, A., 2001, Agile manufacturing: The 21st Century Competitive Strategy, Elsevier Science

Ltd, Oxford, U.K.

[27]. Gunasekaran, A., Tirtiroglu, E. &Wolstencroft, V.,2002, An investigation into the application of agile

manufacturing in an aerospace company, Technovation, 22 (7), pp. 405-415.

[28]. Gunasekaran, A., Yusuf, Y. Y., 2002, Agile manufacturing: A taxonomy of strategic and technological

imperatives, International Journal of Production Research, 40 (6), pp. 1357-1385.

[29]. Gunasekaran, A., K., Lai & T. C. E., Cheng, 2008, Responsive supply chain: A competitive strategy in

networked company, Omega, 36(4): pp. 549.

[30]. Gupta, Anil, & Kundra, T. K., 2012, A review of designing machine tool for leanness, Sadhana, Vol.

37, Part 2, April, Indian Academy of Sciences, pp. 241–259.

[31]. Harrison, A. & Van Hoek, R., 2005, Logistics Management and strategy, London, Prentice Hall.

[32]. Hillman-Willis, T., 1998, Operational competitive requirements for the 21st century, Industrial

Management & Data.

[33]. Hitomi, K., 1997, Manufacturing strategy for future production moving toward manufacturing

excellence, International Journal of Technology Management 14 (6–8), pp. 701–711.

[34]. Hoek, R. I., Harrison, A. & Christopher, M., 2001, Measuring agile capabilities in the supply chain,

International Journal of Operations & Production Management, 21, pp. 126-147.

[35]. Holweg, M., 2005, The three dimensions of responsiveness, International Journal Operations &

Production Management, 25, pp. 603-622.

[36]. Huang, Yu-Ying &Shyh-Jane Lib, 2009, Tracking the Evolution of Research Issues on Agility, Asia

Pacific Management Review 14(1), pp. 107-129.

[37]. Ishizaka, Alessio & Labib Asharaf, 2011, Review of the main developments in the analytic hierarchy

process, Expert Systems with Applications, 38 (11), pp. 14336-14345.

[38]. Ismail, H., I. Raid, J. Mooney, J. Poolton & I. Arokiam, 2007, How small and medium enterprises effectively participate in the mass customization game, IEEE Transactions on Engineering Management, 54 (1), pp. 86-97.

[39]. Jones, D. T., 2003, The Beginner's Guide to Lean, Chairman Newsletter – Lean Enterprise Academe,

November.

[40]. Kay, J., M & Prince J., 2003, Combining lean and agile characteristics: Creation of virtual groups by

enhanced production flow analysis, International Journal of production economics 85, pp. 305-318.

[41]. Khan, Arif, K., Bakkappa, B., Bhimaraya, A.M., and Sahay, B.S. , 2009 .Impact of agile supply chains’

delivery practices on firms’ performance: cluster analysis and validation. Supply Chain Management:

An International Journal, 14(1): 41-48

[42]. Kidd, P. T., 1996, Agile manufacturing: a strategy for the 21st century, IEE Colloquium (Digest) 74, 6IEE, Stevenage, England.

[43]. Kim, Dong Jun & Chung, Sung Bong, 2005, Development of an assessment model using AHP

technique for rail road projects experience service conflicts in Korea, Proceedings of the Eastern Asia

Society for Transportation Studies, Vol. 5, pp. 2260 – 2274.


[44]. Kim, S. W., Narasimhan, R. &Swink, M., 2006, Disentangling Leaness and agility: An empirical

investigation. Journal of operations management, 24, pp. 440-457.

[45]. Krafcik, J., 1988, Triumph of the Lean Production System, Sloan Management Review, 30(1), pp. 41–52.

[46]. Lin, C. T., Chiu, H. &Chu, P. Y., 2006, Agility index in the supply chain. International Journal of

Production Economics, 100, pp. 285-299.

[47]. Maskell, B. H., 1994, Software and the Agile Manufacturer: Computer Systems and World Class

Manufacturing, Oregon Productivity Press, Portland.

[48]. Maskell, B. H.,1991, Performance Measurement for World Class Manufacturing, Productivity Press,

Cambridge, MA.

[49]. Mason-Jones, R., Naylor, B. &Towill, D. R., 2000. Lean, agile or leagile? Matching your supply chain

to the marketplace, International Journal of Production Research, 38, pp. 4061-4070.

[50]. Masoud, 2007, Decision Support System for Lean, Agile and Leagile Manufacturing, Master of

Science in the Industrial Engineering Department with the College of Engineering, King Saud

University, p. 27-30.

[51]. Meier, R. L. & Walker, H. F., 1994, Agile manufacturing, Journal of Industrial Technology, 10 (4), pp.

41–43.

[52]. Meyer A. De & Wittenberg-Cox A., 1992, Creating product value: Putting manufacturing on

the strategic agenda, London: Pitman.

[53]. Monden, Y., 1998, Toyota production system: An integrated approach to just-in-time, 3rd ed., Norcross,

Georgia: Engineering and Management Press.

[54]. Moore, James S. M. R., 1996, Agility is easy, but effective agile manufacturing is not, IEE Colloquium

(Digest), No. 179, 4.

[55]. Narasimhan, R. & Das, A., 1999, Manufacturing agility and supply chain management practices,

Production and Inventory Management Journal, No. 40, pp. 4-10.

[56]. Narasimhan, R. & Das, A., 2000, An empirical examination of sourcing's role in developing

manufacturing flexibilities, Intl Journal of Prod Research, 38, pp. 875-893.

[57]. Naylor, J. B., M. M. Naim& D. Berry, 1999, Leagility: integrating the lean and agile manufacturing,

International Journal of Production Economics, 62, pp. 155-169.

[58]. Papadopoulou, T. C. & M. Özbayrak, 2005, Leanness: Experiences from the Journey to Date, Journal

of Manufacturing Technology Management, 16 (7).

[59]. Pogarcic, Ivan, Francic, Miro, &Davidovic, Vlatka, 2008, Application of AHP method in traffic

planning, ISEP, Croatia.

[60]. Preiss, K., Goldman, S. & Nagel, R., 1996, Co-operate to Compete. Van Nostrand Reinhold, New

York.

[61]. ŘEZÁČ, J., 2006, Modern management, Computer Press, Brno: 2009. 397.

[62]. Saaty, T. L., L. Vargas, 1984, Inconsistency and rank preservation, Journal of Mathematical

Psychology 28 (2).

[63]. Schonberger, R. J., 1982, Japanese Manufacturing Techniques: Nine Hidden Lessons in Simplicity,

Free Press, New York.

[64]. Schonsleben, P., 2000, With agility and adequate partnership strategies towards effective logistics

networks, Computers in Industry, 42, pp. 33-42.

[65]. Sen, Rahul, Kumar, Saurabh& S., Chandrawat, 2011, The total supply chain: Transportation System in

mine planning, Faculty of mining and geology, International journal of production economics,62, pp.

107-118.

[66]. Sen, Rahul, Kumar, & Saurabh S. Chandrawat R. , 2012, The Framework for Manufacturing Methods

and Techniques, International Journal of Engineering Research and Applications (IJERA),

www.ijera.com Vol. 2, Issue 6, November- December, pp.697-702.

[67]. Seyedi, Seyed Nima, 2012, Supply Chain Management, Lean Manufacturing, Agile Production, and

their Combination, International Journal of Contemporary Research in business, Institute of

Interdisciplinary Business Research, Vol. 4, No. 8, ijcrb.webs.com, pp. 648-653.

[68]. Sharifi, H. & Zhang Z., 1999, A methodology for achieving agility in manufacturing organizations, An

introduction International Journal of production economics, No. 62, pp. 7-22.

[69]. Sharifi, H. & Zhang, Z., 2001, Agile manufacturing in practice - Application of a methodology,

International Journal of Operations & Prod Management, No. 21, pp. 772-794.

[70]. Sharp, J. M., Irani, Z. & Desai, S., 1999, Working towards agile manufacturing in the UK industry,

International Journal of Production Economics, No. 62, pp. 155-169.

[71]. Shaw, N. E., Burgess, T. F., De Mattos, C. &Stec, L. Z., 2005, Supply chain agility: the influence of

industry culture on asset capabilities within capital intensive industries, International Journal of

Production Research, No. 43, pp. 3497-3516.

[72]. Slack, N., 1991, The Manufacturing Advantage, Mercury Books, London.


[73]. Slack, N., Chambers, S., Harland, C., Harrison, A. & Johnston, R., 1995, Operations Management,

Pitman, London, England.

[74]. Story, John, 1994,Flexible manufacturing systems, New wave manufacturing strategies, Paul chapman

publishing Ltd., London.

[75]. Swafford, P. M., Ghosh, S. & Murthy, N., 2006, The antecedents of supply chain agility of a firm:

scale development and model testing, Journal of Operations Management, Vol. 24, pp. 170-188.

[76]. Swafford, P. M., S., Ghosh & N., N. Murthy, 2006, A framework for assessing value chain agility,

International Journal of Operations and Production Management, Vol. 26, No. 2, pp. 118-

140.

[77]. Swafford, P. M., S., Ghosh & N., N. Murthy, 2008, Achieving supply chain agility through IT

integration and flexibility, International Journal of Production.

[78]. Swafford, P., 2003, Theoretical development and empirical investigation of supply chain agility, Ph. D.

thesis, The Georgia Institute of Technology, Georgia, U.S.A.

[79]. Syamsuddin, Irfan&Junseok, Hwang, 2009, The Application of AHP Model to Guide Decision

Makers: A Case Study of E-Banking Security, Fourth International Conference on

Computer Sciences and Convergence Information Technology.

[80]. Ustyugova, Tatiana, Darja Noskievicova, CSc., 2013, fuzzy logic model for evaluation of lean and

agile manufacturing integration, 15. - 17. 5., Brno, Czech Republic, EU.

[81]. Van Hoek, R. I., Harrison, A. & Christopher, M., 2001, Measuring agile capabilities in the supply

chain, International Journal of Operations & Production Management, No. 21, pp. 126-147.

[82]. Womack, J. P., D. T. Jones & D. Roos, 1990, The Machine that Changed the World: The Story of Lean

Production, Rawson Associates, New York, NY.

[83]. Womack, J. P., Jones, D. T., 1996, Lean Thinking: Banish Waste and Create Wealth in Your

Corporation, Simon and Schuster, New York.

[84]. Yusuf, Y. Y., Gunasekaran, A., Adeleye, E. O. & Sivayoganathan, K., 2004, Agile supply chain

capabilities: determinants of competitive objectives, European Journal of Operational Research, No.

159, pp. 379-392.

[85]. Yusuf, Y. Y., Sarhadi, M. & Gunasekaran, A., 1999, Agile manufacturing: The drivers, concepts and

attributes, International Journal of Production Economics, No. 62, pp. 33-43.

[86]. Zhang, David Z. & Wang, Rundong , A Taxonomical Study of Agility Strategies and Supply Chain

Management, www.google.com.

[87]. Zhang, Z. & Sharifi, H., 2000, A methodology for achieving agility in manufacturing organizations,

International Journal of Operations & Prod Management, No. 20, pp. 496-512.

[88]. Zhang, D. Z. W. & Sharifi, H., 2007, Towards theory building in agile manufacturing strategy - A

taxonomical approach, IEEE Trans on Engineering Management, 54, pp. 351-370.

AUTHOR

THAEIR AHMED SAADOON AL SAMMAN was born in Mosul, Iraq, in 1963. He received the Bachelor degree in 1984 and the Master degree in 1987 from the University of Mosul, Iraq, both in business management, and the Ph.D. degree in 2008 from the University of Mosul, Iraq. He is currently Assistant Professor in management information systems in the College of Business Administration at the University of Mosul. His research interests include operations management and operations research.


MEMBERS OF IJAET FRATERNITY

Editorial Board Members from Academia

Dr. P. Singh,

Ethiopia.

Dr. A. K. Gupta,

India.

Dr. R. Saxena, India.

Dr. Natarajan Meghanathan,

Jackson State University, Jackson.

Dr. Syed M. Askari,

University of Texas, Dellas.

Prof. (Dr.) Mohd. Husain, A.I.E.T, Lucknow, India.

Dr. Vikas Tukaram Humbe,

S.R.T.M University, Latur, India.

Dr. Mallikarjun Hangarge,

Bidar, Karnataka, India.

Dr. B. H. Shekar,

Mangalore University, Karnataka, India.

Dr. A. Louise Perkins,

University of Southern Mississippi, MS.

Dr. Tang Aihong,

Wuhan University of Technology, P.R.China.

Dr. Rafiqul Zaman Khan, Aligarh Muslim University, Aligarh, India.

Dr. Abhay Bansal,

Amity University, Noida, India.

Dr. Sudhanshu Joshi,

School of Management, Doon University, Dehradun, India.

Dr. Su-Seng Pang, Louisiana State University, Baton Rouge, LA,U.S.A.

Dr. Avanish Bhadauria, CEERI, Pilani,India.

Dr. Dharma P. Agrawal

University of Cincinnati, Cincinnati.

Dr. Rajeev Singh

University of Delhi, New Delhi, India.

Dr. Smriti Agrawal

JB Institute of Engineering and Technology, Hyderabad, India

Prof. (Dr.) Anand K. Tripathi

College of Science and Engg.,Jhansi, UP, India.

Prof. N. Paramesh


University of New South Wales, Sydney, Australia.

Dr. Suresh Kumar

Manav Rachna International University, Faridabad, India.

Dr. Akram Gasmelseed Universiti Teknologi Malaysia (UTM), Johor, Malaysia.

Dr. Umesh Kumar Singh

Vikram University, Ujjain, India.

Dr. A. Arul Lawrence Selvakumar

Adhiparasakthi Engineering College,Melmaravathur, TN, India.

Dr. Sukumar Senthilkumar Universiti Sains Malaysia,Pulau Pinang,Malaysia.

Dr. Saurabh Pal VBS Purvanchal University, Jaunpur, India.

Dr. Jesus Vigo Aguiar University Salamanca, Spain.

Dr. Muhammad Sarfraz Kuwait University,Safat, Kuwait.

Dr. Xianbo Qui Xiamen University, P.R.China.

Dr. C. Y. Fong University of California, Davis.

Prof. Stefanos Gritzalis University of the Aegean, Karlovassi, Samos, Greece.

Dr. Hong Hu Hampton University, Hampton, VA, USA.

Dr. Donald H. Kraft Louisiana State University, Baton Rouge, LA.

Dr. Veeresh G. Kasabegoudar COEA,Maharashtra, India.

Dr. Nouby M. Ghazaly Anna University, Chennai, India.

Dr. Paresh V. Virparia Sardar Patel University, V V Nagar, India.

Dr.Vuda Srinivasarao St. Mary’s College of Engg. & Tech., Hyderabad, India.

Dr. Pouya Derakhshan-Barjoei Islamic Azad University, Naein Branch, Iran.

Dr. Sanjay B. Warkad

Priyadarshini College of Engg., Nagpur, Maharashtra, India.

Dr. Pratyoosh Shukla

Birla Institute of Technology, Mesra, Ranchi,Jharkhand, India.

Dr. Mohamed Hassan Abdel-Wahab El-Newehy King Saud University, Riyadh, Kingdom of Saudi Arabia.

Dr. K. Ramani K.S.Rangasamy College of Tech.,Tiruchengode, T.N., India.

Dr. J. M. Mallikarjuna


Indian Institute of Technology Madras, Chennai, India.

Dr. Chandrasekhar

Dr.Paul Raj Engg. College, Bhadrachalam, Andhra Pradesh, India.

Dr. V. Balamurugan

Einstein College of Engineering, Tirunelveli, Tamil Nadu, India.

Dr. Anitha Chennamaneni

Texas A&M University, Central Texas, U.S.

Dr. Sudhir Paraskar

S.S.G.M.C.E. Shegaon, Buldhana, M.S., India.

Dr. Hari Mohan Pandey

Middle East College of Information Technology, Muscat, Oman.

Dr. Youssef Said

Tunisie Telecom / Sys'Com Lab, ENIT, Tunisia.

Dr. Mohd Nazri Ismail

University of Kuala Lumpur (UniKL), Malaysia.

Dr. Gabriel Chavira Juárez

Autonomous University of Tamaulipas,Tamaulipas, Mexico.

Dr.Saurabh Mukherjee

Banasthali University, Banasthali,Rajasthan,India.

Prof. Smita Chaudhry

Kurukshetra University, Kurukshetra, Harayana, India.

Dr. Raj Kumar Arya

Jaypee University of Engg.& Tech., Guna, M. P., India.

Dr. Prashant M. Dolia

Bhavnagar University, Bhavnagar, Gujarat, India.

Dr. Dewan Muhammad Nuruzzaman

Dhaka University of Engg. and Tech., Gazipur, Bangladesh.

Dr. Hadj. Hamma Tadjine

IAV GmbH, Germany.

Dr. D. Sharmila

Bannari Amman Institute of Technology, Sathyamangalam, India

Dr. Jifeng Wang

University of Illinois, Illinois, USA.

Dr. G. V. Madhuri

GITAM University, Hyderabad, India.

Dr. T. S. Desmukh

MANIT, Bhopal, M.P., India.

Dr. Shaikh Abdul Hannan

Vivekanand College, Aurangabad, Maharashtra, India.

Dr. Zeeshan Ahmed

University of Wuerzburg, Germany.

Dr. Nitin S. Choubey

M.P.S.T.M.E.,N.M.I.M.S. (Shirpur Campus), Dhule, M.S., India.


Dr. S. Vijayaragavan

Christ College of Engg. and Technology, Pondicherry, India.

Dr. Ram Shanmugam

Texas State University - San Marcos, Texas, USA.

Dr. Hong-Hu Zhu

School of Earth Sciences and Engg. Nanjing University, China.

Dr. Mahdi Zowghi

Department of Sharif University of technology, Tehran, Iran.

Dr. Cettina Santagati

Università degli Studi di Catania, Catania, Italy.

Prof. Laura Inzerillo

University of Palermo, Palermo, Italy.

Dr. Moinuddin Sarker

University of Colorado, Boulder, U.S.A.

Dr. Mohammad Amin Hariri Ardebili

University of Colorado, Boulder, U.S.A.

Dr. S. Kishore Reddy

Swarna Bharathi College of Engineering, Khammam, A.P., India.

Dr. V.P.S. Naidu

CSIR - National Aerospace Laboratories, Bangalore, India.

Dr. S. Ravi

Nandha Engineering College, Erode, Tamilnadu, India.

Dr. R. Sathish Kumar

K L University, Andhra Pradesh, India.

Dr. Nawfal Jebbor

Moulay Ismail University, Meknes, Morocco.

Dr. Ali Jasim Mohammed Al-Jabiry

Al-Mustansiriyah University, Baghdad, Iraq.

Dr. Meyyappan Venkatesan

Mekelle University, Ethiopia.

Dr. A. S. N. Chakravarthy

University College of Engineering, JNTUK - Vizianagaram Campus, A. P., India.

Dr. A. Abdul Rasheed

Valliammai Engineering College, Kattankulathur, Tamil Nadu, India.

Dr. A. Ravi Shankar

Indian Institute of Technology, Guwahati, Assam, India.

Dr. John Kaiser S. Calautit

University of Leeds, Leeds, UK.

Dr. S. Manikandan

R.M.D. Engineering College, Kavaraipettai, Tamil Nadu, India.

Dr. Sandeep Gupta

Noida Institute of Engineering and Technology, Gr. Noida, India.

Dr. Kshitij Shinghal

Moradabad Institute of Technology, Moradabad, Uttar Pradesh, India.

Dr. Ankit Srivastava

Bundelkhand University, Jhansi, U.P., India.

Dr. Abbas M. Al-Bakry

Babylon University, Iraq.

Dr. K. Narasimhulu

National Institute of Technology Warangal, Warangal, Andhra Pradesh, India.

Dr. Messaouda AZZOUZI

University of Djelfa, Algeria.

Dr. Govindaraj Thangavel

Muthayammal Engineering College, Rasipuram, Tamil Nadu, India.

Dr. Deep Kamal Kaur Randhawa

Guru Nanak Dev University Regional Campus, Jalandhar, Punjab, India.

Dr. L. Mary Immaculate Sheela

Professor at R.M.D. Engineering College, Chennai, India.

Dr. Sanjeevi Chitikeshi

Murray State University, Murray, KY, USA.

Dr. Laith Ahmed Najam

Mosul University, College of Science, Iraq.

Dr. Brijesh Kumar

School of Engineering & Technology, Graphic Era University, Dehradun, Uttarakhand, India.

Editorial Board Members from Industry/Research Labs.

Tushar Pandey

STEricsson Pvt Ltd, India.

Ashish Mohan

R&D Lab, DRDO, India.

Amit Sinha

Honeywell, India.

Tushar Johri

Infosys Technologies Ltd, India.

Dr. Om Prakash Singh

Manager, R&D, TVS Motor Company, India.

Dr. B.K. Sharma

Northern India Textile Research Assoc., Ghaziabad, U.P., India.

Mr. Adis Medic

Infosys Ltd., Bosnia.

Mr. M. Muralidharan

Indian Oil Corporation Ltd., India.

Mr. Rohit Kumar Malik

Oracle India Pvt. Ltd., Bangalore, India.

Dr. Pinak Ranade

Centre for Development of Advanced Computing, Pune, India.

Dr. Jierui Xie

Oracle Corporation, Redwood City, CA, U.S.A.

Dr. R. Guruprasad

Scientist, CSIR-National Aerospace Laboratories, Bangalore, India.

Ms. Priyanka Gandhi

Goken America, Honda Research and Development North America, U.S.A.

Mr. Ang Boon Chong

eASIC, Malaysia.

Advisory Board Members from Academia & Industry/Research Labs.

Prof. Andres Iglesias,

University of Cantabria, Santander, Spain.

Dr. Arun Sharma,

K.I.E.T, Ghaziabad, India.

Prof. Ching-Hsien (Robert) Hsu,

Chung Hua University, Taiwan, R.O.C.

Dr. Himanshu Aggarwal,

Punjabi University, Patiala, India.

Prof. Munesh Chandra Trivedi,

CSEDIT School of Engg., Gr. Noida, India.

Dr. P. Balasubramanie,

K.E.C., Perundurai, Tamil Nadu, India.

Dr. Seema Verma,

Banasthali University, Rajasthan, India.

Dr. V. Sundarapandian,

Dr. RR & Dr. SR Technical University, Chennai, India.

Mayank Malik,

Keane Inc., US.

Prof. Fikret S. Gurgen,

Bogazici University, Istanbul, Turkey.

Dr. Jiman Hong

Soongsil University, Seoul, Korea.

Prof. Sanjay Misra,

Federal University of Technology, Minna, Nigeria.

Prof. Xing Zuo Cheng,

National University of Defence Technology, P.R.China.

Dr. Ashutosh Kumar Singh

Indian Institute of Information Technology Allahabad, India.

Dr. S. H. Femmam

University of Haute-Alsace, France.

Dr. Sumit Gandhi

Jaypee University of Engg. & Tech., Guna, M. P., India.

Dr. Hradyesh Kumar Mishra

JUET, Guna, M.P., India.

Dr. Vijay Harishchandra Mankar

Govt. Polytechnic, Nagpur, India.

Prof. Surendra Rahamatkar

Nagpur Institute of Technology, Nagpur, India.

Dr. B. Narasimhan

Sankara College of Science And Commerce, Coimbatore, India.

Dr. Abbas Karimi

Islamic Azad University, Arak Branch, Arak, Iran.

Dr. M. Munir Ahamed Rabbani

Qassim University, Saudi Arabia.

Dr. Prasanta K Sinha

Durgapur Institute of Advanced Technology & Management, Durgapur, W.B., India.

Dr. Tole H. Sutikno

Ahmad Dahlan University (UAD), Yogyakarta, Indonesia.

Dr. Anna Gina Perri

Politecnico di Bari, Bari, Italy.

Prof. Surendra Rahamatkar

RTM Nagpur University, India.

Dr. Sagar E. Shirsath

Vivekanand College, Aurangabad, MS, India.

Dr. Manoj K. Shukla

Harcourt Butler Technological Institute, Kanpur, India.

Dr. Fazal Noorbasha

KL University, Guntur, A.P., India.

Dr. Manjunath T.C.

HKBK College of Engg., Bangalore, Karnataka, India.

Dr. M. V. Raghavendra

Swathi Institute of Technology & Sciences, Ranga Reddy, A.P., India.

Dr. Muhammad Farooq

University of Peshawar, 25120, Khyber Pakhtunkhwa, Pakistan.

Prof. H. N. Panchal

L C Institute of Technology, Mehsana, Gujarat, India.

Dr. Jagdish Shivhare

ITM University, Gurgaon, India.

Prof.(Dr.) Bharat Raj Singh

SMS Institute of Technology, Lucknow, U.P., India.

Dr. B. Justus Rabi

Toc H Inst. of Sci. & Tech., Arakkunnam, Kerala, India.

Prof. (Dr.) S. N. Singh

National Institute of Technology, Jamshedpur, India.

Prof. (Dr.) Srinivas Prasad,

Gandhi Inst. for Technological Advancement, Bhubaneswar, India.

Dr. Pankaj Agarwal

Samrat Ashok Technological Institute, Vidisha (M.P.), India.

Dr. K. V. L. N. Acharyulu

Bapatla Engineering College, Bapatla, India.

Dr. Shafiqul Abidin

Kalka Inst. for Research and Advanced Studies, New Delhi, India.

Dr. M. Senthil Kumar

PRCET, Vallam, Thanjavur, T.N., India.

Dr. M. Sankar

East Point College of Engg. and Technology, Bangalore, India.

Dr. Gurjeet Singh

Desh Bhagat Inst. of Engg. & Management, Moga, Punjab, India.

Dr. C. Venkatesh

E. B. E. T. Group of Institutions, Tirupur District, T. N., India.

Dr. Ashu Gupta

Apeejay Institute of Management, Jalandhar, India.

Dr. Brijender Kahanwal

Galaxy Global Imperial Technical Campus, Ambala, India.

Dr. A. Kumaravel

K. S. Rangasamy College of Technology, Tiruchengode, India.

Dr. Norazmawati Md. Sani

Universiti Sains Malaysia, Pulau Pinang, Malaysia.

Dr. Mariateresa Galizia

University of Catania, Catania, Italy.

Dr. M. V. Raghavendra

Adama Science & Technology University, Ethiopia.

Dr. Mahdi Moharrampour

Islamic Azad University, Buin Zahra Branch, Iran.

Dr. S. Sasikumar

Jayaram College of Engg. and Tech., Trichy, T.N., India.

Dr. Jay Prakash Verma

Banaras Hindu University, Varanasi, India.

Dr. Bensafi Abd-El-Hamid

Abou Bekr Belkaid University of Tlemcen, Tlemcen, Algeria.

Dr. Tarit Roychowdhury

Supreme Knowledge Foundation Group of Institutes, Mankundu, Hooghly, India.

Dr. Shourabh Bhattacharya

Madhav Institute of Technology & Science (M.I.T.S.), Gwalior, India.

Dr. G. Dalin

SNMV College of Arts and Science, Coimbatore, India.

Dr. Syed Asif Ali

SMI University, Karachi, Pakistan.

Dr. Ushaa Eswaran

Velammal Engineering College, Chennai, India.

Dr. Ashok Dargar

Sir Padampat Singhania University, Udaipur, Raj., India.

Dr. Mohammad Hadi Dehghani

Tehran University of Medical Sciences, Tehran, I.R. Iran.

Dr. K. Nagamalleswara Rao

Bapatla Engineering College, Bapatla (PO), Andhra Pradesh, India.

Dr. Md Shafiq Alam

Punjab Agricultural University, Ludhiana, Punjab, India.

Dr. D. Raju

Vidya Jyothi Institute of Technology, Hyderabad, India.

Research Volunteers from Academia

Mr. Ashish Seth,

Ideal Institute of Technology, Ghaziabad, India.

Mr. Brajesh Kumar Singh,

RBS College, Agra, India.

Prof. Anilkumar Suthar,

Kadi Sarva Vishwavidyalaya, Gujarat, India.

Mr. Nikhil Raj,

National Institute of Technology, Kurukshetra, Haryana, India.

Mr. Shahnawaz Husain,

Graphic Era University, Dehradun, India.

Mr. Maniya Kalpesh Dudabhai

C.K.Pithawalla College of Engg. & Tech., Surat, India.

Dr. M. Shahid Zeb

Universiti Teknologi Malaysia (UTM), Malaysia.

Mr. Brijesh Kumar

Research Scholar, Indian Institute of Technology, Roorkee, India.

Mr. Nitish Gupta

Guru Gobind Singh Indraprastha University, India.

Mr. Bindeshwar Singh

Kamla Nehru Institute of Technology, Sultanpur, U. P., India.

Mr. Vikrant Bhateja

SRMGPC, Lucknow, India.

Mr. Ramchandra S. Mangrulkar

Bapurao Deshmukh College of Engineering, Sevagram, Wardha, India.

Mr. Nalin Galhaut

Vira College of Engineering, Bijnor, India.

Mr. Rahul Dev Gupta

M. M. University, Mullana, Ambala, India.

Mr. Navdeep Singh Arora

Dr B R Ambedkar National Institute of Technology, Jalandhar, Punjab, India.

Mr. Gagandeep Singh

Global Institute of Management and Emerging Tech., Amritsar, Punjab, India.

Ms. G. Loshma

Sri Vasavi Engg. College, Pedatadepalli, West Godavari, Andhra Pradesh, India.

Mr. Mohd Helmy Abd Wahab

Universiti Tun Hussein Onn Malaysia, Malaysia.

Mr. Md. Rajibul Islam

Universiti Teknologi Malaysia, Johor, Malaysia.

Mr. Dinesh Sathyamoorthy

Science & Technology Research Institute for Defence (STRIDE), Malaysia.

Ms. B. Neelima

NMAM Institute of Technology, Nitte, Karnataka, India.

Mr. Mamilla Ravi Sankar

IIT Kanpur, Kanpur, U.P., India.

Dr. Sunusi Sani Adamu

Bayero University, Kano, Nigeria.

Dr. Ahmed Abu-Siada

Curtin University, Australia.

Ms. Shumos Taha Hammadi

Al-Anbar University, Iraq.

Mr. Ankit R Patel

L C Institute of Technology, Mahesana, India.

Mr. Athar Ravish Khan Muzaffar Khan

Jawaharlal Darda Institute of Engineering & Technology Yavatmal, M.S., India.

Prof. Anand Nayyar

KCL Institute of Management and Technology, Jalandhar, Punjab, India.

Mr. Arshed Oudah

UTM University, Malaysia.

Mr. Piyush Mohan

Swami Vivekanand Subharti University, Meerut, U.P., India.

Mr. Mogaraju Jagadish Kumar

Rajampeta, India.

Mr. Deepak Sharma

Swami Vivekanand Subharti University, Meerut, U.P., India.

Mr. B. T. P. Madhav

K L University, Vaddeswaram, Guntur DT, AP, India.

Mr. Nirankar Sharma

Subharti Institute of Technology & Engineering, Meerut, U.P., India.

Mr. Prasenjit Chatterjee

MCKV Institute of Engineering, Howrah, WB, India.

Mr. Mohammad Yazdani-Asrami

Babol University of Technology, Babol, Iran.

Mr. Sailesh Samanta

PNG University of Technology, Papua New Guinea.

Mr. Rupsa Chatterjee

University College of Science and Technology, WB, India.

Er. Kirtesh Jailia

Independent Researcher, India.

Mr. Abhijeet Kumar

MMEC, MMU, Mullana, India.

Dr. Ehab Aziz Khalil Awad

Faculty of Electronic Engineering, Menouf, Egypt.

Ms. Sadia Riaz

NUST College of E&ME, Rawalpindi, Pakistan.

Mr. Sreenivasa Rao Basavala

Yodlee Infotech, Bangalore, India.

Mr. Dinesh V. Rojatkar

Govt. College of Engineering, Chandrapur, Maharashtra State, India.

Mr. Vivek Bhambri

Desh Bhagat Inst. of Management & Comp. Sciences, Mandi Gobindgarh, India.

Er. Zakir Ali

I.E.T. Bundelkhand University, Jhansi, U.P., India.

Mr. Himanshu Sharma

M. M. University, Mullana, Ambala, Haryana, India.

Mr. Pankaj Yadav

Senior Engineer, ROM Info Pvt. Ltd., India.

Mr. Fahimuddin Shaik

JNT University, Anantapur, A.P., India.

Mr. Vivek W. Khond

G.H.Raisoni College of Engineering, Nagpur, M.S., India.

Mr. B. Naresh Kumar Reddy

K. L. University, Vijayawada, Andhra Pradesh, India.

Mr. Mohsin Ali

APCOMS, Pakistan.

Mr. R. B. Durairaj

SRM University, Chennai, India.

Mr. Guru Jawahar J.

JNTUACE, Anantapur, India.

Mr. Muhammad Ishfaq Javed Army Public College of Management and Sciences, Rawalpindi, Pakistan.

Mr. M. Narasimhulu

Independent Researcher, India.

Mr. Prashant Singh Yadav

Vedant Institute of Management & Technology, Ghaziabad, India.

Prof. T. V. Narayana Rao

HITAM, Hyderabad, India.

Mr. Surya Suresh

Sri Vasavi Institute of Engg & Technology, Nandamuru, Andhra Pradesh, India.

Mr. Khashayar Teimoori

Science and Research Branch, IAU, Tehran, Iran.

Mr. Mohammad Faisal Integral University, Lucknow, India.

Prof. H. R. Sarode

Shri Sant Gadgebaba College of Engineering and Technology, Bhusawal, India.

Mr. Rajeev Kumar

KIRAS, GGSIP University, New Delhi, India.

Mr. Avadhesh Kumar Yadav

University of Lucknow, Lucknow, India.

Ms. Mina Asadi Independent Researcher, Adelaide, Australia.

Mr. Muhammad Naufal Bin Mansor Universiti Malaysia Perlis, Malaysia.

Mr. R. Suban Annamalai University, Annamalai Nagar, India.

Mr. Abhishek Shukla

R. D. Engineering College, Ghaziabad, India.

Mr. Sunil Kumar

Adama Science and Technology University, Adama, Ethiopia, Africa.

Mr. A. Abdul Rasheed

SRM Valliammai Engineering College, Chennai, T. N., India.

Mr. Hari Kishore Kakarla K L University, Guntur, Andhra Pradesh, India.

Mr. N. Vikram Jayaprakash Narayan College of Engineering, Dharmapur, A.P., India.

Mr. Samir Malakar

MCKV Institute of Engineering, Howrah, India.

Ms. Ketaki Solanki

Vishveshwarya Institute of Engineering and Technology, Ghaziabad, India.

Mr. Ashtashil V. Bhambulkar Rungta Group, Raipur, Chhattisgarh, India.

Mr. Nitin K. Mandavgade

Priyadarshini College of Engineering, Nagpur, Maharashtra, India.

Mr. Kamal Kant Sharma M.M. University, Mullana, Ambala, India.

Mr. Chandresh Kumar Chhatlani JRN Rajasthan Vidyapeeth University, Udaipur, Rajasthan, India.

Mr. A. Ravi Shankar Sarada Institute of Technology & Science, Khammam, A.P., India.

Mr. Balajee Maram GMRIT, Rajam, A.P., India.

Mr. Srinivas Vadali Kakinada Institute of Engineering and Technology KIET, Kakinada, A.P., India.

Mr. T. Krishna Kishore St. Ann’s College of Engineering And Technology, Chirala, A.P., India.

Ms. Ranjana Rajnish Amity University, Lucknow, India.

Mr. Padam Singh J. P. Institute of Engineering & Technology, Meerut, India.

Ms. Sakshi Rajput Maharaja Surajmal Institute of Technology, GGSIPU, Delhi, India.

Mr. Jaware Tushar H.

R.C. Patel Institute of Technology, Shirpur, MS, India.

Mr. R. D. Badgujar R C Patel Institute of Technology, Shirpur Distt., Dhule, MS, India.

Mr. Zairi Ismael Rizman Faculty of Electrical Engineering, Universiti Teknologi MARA, Malaysia.

Mr. Ashok Kumar Rajput Radha Govind Engineering College, Meerut, India.

Mr. Nitin H. Ambhore Vishwakarma Institute of Information Technology, Pune, India.

Mr. Vishwajit K. Barbudhe Agnihotri College of Engineering, Nagthana, Wardha, India.

Mr. Himanshu Chaurasiya

Amity University, Noida, U.P., India.

Ms. Rupali Shelke Marathwada Mitra Mandal's Polytechnic, Pune, M.S., India.

Mr. Pankaj Manohar Pandit JDIET, Sant Gadgebaba Amravati University (M.S.), Yavatmal, India.

Mr. Gurudatt Anil Kulkarni Marathwada Mitra Mandal's Polytechnic, Pune, M.S., India.

Ms. Pragati Chavan Marathwada Mitra Mandal's Polytechnic, Pune, M.S., India.

Mr. Mahendra Umare

Nagpur Institute of Technology, Nagpur, India.

Prof. Ajay Gadicha P.R. Pote (Patil) College of Engineering, Amravati, India.

Mr. M. Pradeep A.S.L.Pauls College of Engineering and Technology, Coimbatore, Tamil Nadu, India.

Mr. Susrutha Babu Sukhavasi K.L. University, Guntur, A.P, India.

Mr. Vivek Kumar Singh Sustainable Energy Systems, MIT Portugal Program, University of Coimbra, Portugal.

Dr. V. Bhoopathy Vivekanandha College of Technology for Women, Namakkal, Tamilnadu, India.

Mr. Akash Porwal Axis Institute of Technology and Management, Kanpur, U.P., India.

Mr. Ravindra Jogekar RTM Nagpur University, Nagpur, India.

Md. Risat Abedin Ahsanullah University of Science & Technology, Dhaka, Bangladesh.

OUR FAST REVIEWING PROCESS IS OUR STRENGTH.

URL : http://www.ijaet.org

E-mail : [email protected]

[email protected]