


An Intelligent Cloud for Optimization of Resource on Large-Scale Reliable Optical Datacenter

Shashidharan.M, Pradeepkumar Shapeti, Rakesh.M.R, Shabeen Taj G.A

Abstract- A cloud computing system provides infrastructure layer services to users by managing virtualized infrastructure resources. Intelligent Automation for Cloud is a self-service provisioning and orchestration software solution for cloud computing and data center automation. It helps enable secure, on-demand, and highly automated IT operations for both virtual and physical infrastructure across compute, network, storage, and applications. Cloud computing is a set of resources and services offered through the Internet and delivered from data centers located throughout the world. The infrastructure resources include CPU, hypervisor, storage, and networking, and each category of infrastructure resources is a subsystem in a cloud computing system. The cloud computing system coordinates these infrastructure subsystems to provide services to users, and it should have the flexibility to switch from one infrastructure subsystem to another, and from one decision algorithm to another, with ease. Survivability against disasters, both natural and deliberate attacks spanning large geographical areas, is becoming a major challenge in communication networks.

Keywords-- Cloud computing; labeled optical burst switching with home circuit (LOBS-HC); datacenter; wavelength division multiplexing (WDM); Large Synoptic Survey Telescope (LSST).

I. INTRODUCTION

A cloud computing system provides infrastructure layer services to users by managing virtualized infrastructure resources. The infrastructure resources in a data center include CPU, hypervisor, storage, and networking, and each category of infrastructure resources is a subsystem in a cloud computing system. The intervening decade has seen the rise of datacenter computing as the paradigm of choice for practically every application domain. The move to datacenters has been powered by two separate trends. On the one hand, application domains typically tied to monolithic ‘big-iron’ machines have moved to datacenters in pursuit of leaner operational budgets.

Shabeen Taj G.A is with VTU University, Computer Science and Engineering Dept., AMC Engineering College, Bangalore, Karnataka, India (e-mail: [email protected]).

Shashidharan.M is with VTU University, Computer Science and Engineering Dept., AMC Engineering College, Bangalore, Karnataka, India (e-mail: [email protected]).

Pradeepkumar Shapeti is with VTU University, Computer Science and Engineering Dept., AMC Engineering College, Bangalore, Karnataka, India (e-mail: [email protected]).

Rakesh.M.R is with VTU University, Computer Science and Engineering Dept., AMC Engineering College, Bangalore, Karnataka, India (e-mail: [email protected]).

In parallel, functionality and data usually associated with personal computing have moved into the datacenter; users continuously interact with remote sites while using their local computers, whether to run intrinsically online applications such as email, chat, games, and blogs, or to manipulate data traditionally stored locally, such as documents, spreadsheets, videos, and photos. In effect, modern architectures are converging towards cloud computing, a paradigm where all user activity is funneled into large datacenters via high-speed networks.

Efficient management of virtual machines depends on close cooperation among the infrastructure subsystem managers. An infrastructure subsystem manager needs to handle a large number of physical machines and communicate with the other subsystem managers and the cloud computing system. When virtual machines are requested, the cloud computing system instructs the hypervisor manager to create them on the target physical machines. At the same time, the cloud computing system instructs the storage subsystem to provide the virtual machine images needed to create the virtual machines.

The storage subsystem manages the storage of a data center and provides a distributed file system among physical machines. It is in charge of managing the virtual machine image files of the virtual machines the cloud computing system provides; these image files must be retrieved from disk in order to create virtual machines. The distributed file system also enables virtual machines to migrate among physical machines, and this migration mechanism helps balance the workloads among physical machines.
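The paper does not spell out how this balancing decision is made; the following is a minimal sketch of one possible policy, assuming each host reports its CPU utilization and the VMs it runs. All names and the 0.8 threshold are illustrative, not taken from the paper.

```python
# Minimal sketch of a migration-based load balancer: assume each host reports
# a CPU utilization in [0, 1] and the list of VMs it runs. All names and the
# 0.8 threshold are illustrative, not taken from the paper.

def pick_migration(hosts, threshold=0.8):
    """Return (vm, source, target) describing one useful migration, or None."""
    overloaded = [h for h in hosts if h["cpu"] > threshold]
    if not overloaded:
        return None
    src = max(overloaded, key=lambda h: h["cpu"])      # most loaded host
    dst = min(hosts, key=lambda h: h["cpu"])           # least loaded host
    if src is dst or not src["vms"]:
        return None
    # Move the smallest VM that the target can absorb without itself
    # crossing the threshold.
    for vm in sorted(src["vms"], key=lambda v: v["cpu"]):
        if dst["cpu"] + vm["cpu"] <= threshold:
            return vm["id"], src["name"], dst["name"]
    return None

hosts = [
    {"name": "pm1", "cpu": 0.92,
     "vms": [{"id": "vm1", "cpu": 0.30}, {"id": "vm2", "cpu": 0.62}]},
    {"name": "pm2", "cpu": 0.35,
     "vms": [{"id": "vm3", "cpu": 0.35}]},
]
print(pick_migration(hosts))   # -> ('vm1', 'pm1', 'pm2')
```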

II. LITERATURE REVIEW

A. Intelligent Cloud Computing Primary Functions

Intelligent Cloud supports a broad spectrum of cloud automation management activities, from setup and design, through ongoing service delivery and system operations, to control points, reporting, and analytics.

Self-service interface. End users can browse and search a web-based catalogue of options and make requests through a simple self-service process, as well as track and modify orders or view usage, consumption, and costs.

Service delivery. Cloud automation management divides requests into components and orchestrates underlying resources. Tracking and integration into metering, chargeback, and billing systems is supported, and CMDB tools are kept updated.

Operational process automation. Intelligent Automation for Cloud (CIAC) can control user identity, roles, and entitlements to keep users securely isolated. Users can detect and manage incidents, send




alerts and open support tickets. Advanced cloud automation management reporting allows processes, results, compliance and audits to be tracked with built-in ROI models.

Resource management integration. CIAC integrates with systems such as UCS Manager and vCenter to allow provisioning of individual resource components. Automated capacity reports provide proactive management and the ability to evacuate resources for maintenance.

Lifecycle management. Service definitions include design descriptions, selection parameters and pricing options, as well as business and technical processing flows using a cloud automation management GUI for all steps of the service lifecycle.

Deployment. CIAC can be deployed as a comprehensive cloud automation management solution for preparation, planning, design, implementation, and delivery optimization. The five essential characteristics of cloud computing are: on-demand self-service, where a consumer can unilaterally provision computing capabilities as needed, automatically and without requiring human interaction with each service's provider; broad network access, where capabilities are available over the network and accessed through standard mechanisms, promoting the use of heterogeneous thick or thin client platforms such as mobile phones, laptops, and PDAs; resource pooling, where the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumers' demands; rapid elasticity, where capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in; and measured service, where the cloud service provider automatically controls and optimizes resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service. Cloud computing delivered over the mobile Internet is also called mobile cloud computing. Virtualization enables the creation of a virtual (as opposed to actual) version of an IT resource (e.g., an operating system, a server, a storage device, or a network); this allows data center consolidation and provides separation and protection. Cloud computing provides highly scalable resources accessed via the Internet. Although cloud computing is growing quickly and is used by individuals and companies throughout the world, data protection problems in the cloud have not yet been fully addressed, and users of cloud services face a serious threat of losing confidential data. To address users' data privacy issues, a data protection framework has been proposed in the literature that addresses these challenges throughout the cloud services life cycle. The proposed framework comprises three key components: policy ranking, policy integration, and policy enforcement.

III. BASIC ARCHITECTURE OF CLOUD COMPUTING

Fig. 1. Computing Architecture

Fig. 2. Overview of the proposed data protection framework

Subscribers' requests are delivered to a cloud through the Internet. In the cloud, cloud controllers process the requests to provide mobile users with the corresponding cloud services. These services are developed with the concepts of utility computing, virtualization, and service-oriented architecture.

IV. DESIGN PRINCIPLES

There are three guiding principles in the design: pluggable component interfaces, a hierarchical architecture based on logical racks, and virtual machine based cloud services. Datacenters consist of thousands of inexpensive, fault-prone components running commodity operating systems and protocols ill-fitted for high-performance applications. Further, datacenter applications have unconventional scaling requirements and bursty workloads that frequently push systems into delays and downtime. Clouds' key characteristics include on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid elasticity, and pay-per-use. To implement these features, intelligent cloud computing systems offer:



• three service models: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS);

• public, private, or hybrid ownership;

• common features such as service-orientation, elasticity, and virtualization; and

• benefits including the ability to pay costs incrementally, increased storage, flexibility, and pervasive access.

A. Pluggable Component Interface

We design the cloud computing system with pluggable components, where pluggability is based on interfaces between components. 1) Pluggable Component: The component pluggability relies on specifically designed interfaces between the controlling system and the infrastructure subsystem components. The controlling system controls its subsystems through a set of interfaces, which serve as bridges between the cloud computing system and the subsystem components. We first carefully define every task a cloud computing system must perform. This clear definition of task functionality allows us to define the relation between cooperating infrastructure subsystems and the controlling system. Each infrastructure subsystem only needs to cooperate with the components it interacts with, not with the whole cloud computing system.

The system also provides a test bed for designing the decision algorithms used in subsystems. For example, we may design a deployment algorithm that deploys virtual machines onto physical machines, a power conservation algorithm that packs running virtual machines together, or image file management algorithms that speed up the retrieval of virtual machine images. These algorithms are crucial to the performance of the infrastructure subsystems.
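The paper does not specify these algorithms; as a hedged illustration, one possible power-conservation deployment heuristic is first-fit decreasing bin packing of VM CPU demands, sketched below with made-up demand figures.

```python
# Hedged sketch of one possible power-conservation deployment algorithm:
# first-fit decreasing bin packing of VM CPU demands onto physical machines,
# so that running VMs are packed together and lightly used hosts can sleep.
# This is a generic heuristic, not the algorithm used in the paper.

def pack_vms(vm_demands, host_capacity=1.0):
    """Return a list of hosts, each a list of (vm_id, demand)."""
    hosts = []          # each entry: [remaining_capacity, [(vm, demand), ...]]
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if host[0] >= demand:          # first existing host with enough room
                host[0] -= demand
                host[1].append((vm, demand))
                break
        else:                              # no existing host fits: open a new one
            hosts.append([host_capacity - demand, [(vm, demand)]])
    return [h[1] for h in hosts]

demands = {"vm1": 0.6, "vm2": 0.5, "vm3": 0.3, "vm4": 0.2, "vm5": 0.2}
for i, placement in enumerate(pack_vms(demands), 1):
    print(f"host {i}: {placement}")
# The five VMs fit on two hosts, leaving the rest of the pool free to power down.
```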

The pluggability of the system gives a system administrator the flexibility to choose the most suitable components so as to provide the most appropriate functionality under different circumstances. A good cloud computing system should allow administrators to use whichever infrastructure subsystem component they consider the best fit. There are various options for infrastructure subsystems, and different subsystems provide different functionality. For example, KVM is much easier to install than Xen, but Xen performs better in terms of para-virtualization speed.
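The following is a minimal sketch of how such interchangeable subsystems might sit behind a common interface, in the spirit of the pluggable design described here. The interface and method names are illustrative assumptions, not the paper's actual API, and the implementations only print what a real back end (e.g., libvirt for KVM, or the Xen toolstack) would do.

```python
# Hedged sketch of the pluggable-interface idea: an abstract interface that
# the controlling system codes against, with interchangeable hypervisor
# back ends. Method names are illustrative, not taken from the paper.

from abc import ABC, abstractmethod

class HypervisorManager(ABC):
    """Bridge between the cloud controller and one hypervisor subsystem."""

    @abstractmethod
    def create_vm(self, host: str, image: str, vcpus: int, mem_mb: int) -> str:
        """Create a VM on the given physical machine; return its identifier."""

    @abstractmethod
    def migrate_vm(self, vm_id: str, target_host: str) -> None:
        """Live-migrate a VM to another physical machine."""

class KvmManager(HypervisorManager):
    def create_vm(self, host, image, vcpus, mem_mb):
        print(f"[KVM] create {vcpus}-vCPU VM from {image} on {host}")
        return f"kvm-{host}-vm"

    def migrate_vm(self, vm_id, target_host):
        print(f"[KVM] migrate {vm_id} -> {target_host}")

class XenManager(HypervisorManager):
    def create_vm(self, host, image, vcpus, mem_mb):
        print(f"[Xen] create {vcpus}-vCPU VM from {image} on {host}")
        return f"xen-{host}-vm"

    def migrate_vm(self, vm_id, target_host):
        print(f"[Xen] migrate {vm_id} -> {target_host}")

# The controller never depends on a concrete class, so the administrator
# can plug in KVM or Xen without changing the controlling system.
def provision(manager: HypervisorManager, host: str) -> str:
    return manager.create_vm(host, image="ubuntu.img", vcpus=2, mem_mb=4096)

print(provision(KvmManager(), "pm-01"))
print(provision(XenManager(), "pm-02"))
```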

2) Component Interface: The interface is the key idea behind the pluggability of the system. To ease the replacement of components, the system uses interfaces to isolate the work of the infrastructure subsystems from the cloud computing system, so an interface works as the bridge between the cloud computing system and the user-specified components.

Each infrastructure subsystem does not need to communicate with every part of the cloud computing system; it only needs to communicate with the corresponding interfaces. For example, without interfaces a storage subsystem would need to communicate with the cloud computing system to manage virtual machine image files, with the monitoring subsystem to obtain information about available storage resources, and with the user to provide storage services. With interfaces, a specifically declared interface is responsible for each of these tasks: managing virtual machine image files, obtaining information about available storage resources, and interacting with users. As a result, the communication between the cloud computing system and its subsystems becomes much easier to handle.

We separate the work in the system into three major functional units: the Virtual Machine Manager, the Subsystem Manager, and the Coordinator. Each of them contains several pluggable components, as shown in Fig. 1; we describe these and other components in more detail later. The components communicate with one another through interfaces, and the interface of every pluggable component is precisely defined, so one can easily develop a subsystem component by simply following the interface.

Fig. 3. Lifecycle

B. Problem Statement

The hierarchical design of the system is based on logical racks. The concept of a logical rack is based on the network topology of the physical machines: physical machines connected to the same switch form a logical rack. There are two advantages of this logical rack based hierarchical design. First, the hierarchical architecture isolates heavy communication traffic within a logical rack; if we place a set of virtual machines that need to communicate with each other frequently into one logical rack, the traffic among them stays within the switch. Second, a logical rack is the basic unit for the monitoring system and for forming the shared file system, which allows a simple implementation and delivers good performance, as we explain in the rest of this section.
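A minimal sketch of this grouping step, assuming only that each physical machine knows which switch it is attached to (the topology data below is invented for illustration):

```python
# Hedged sketch of forming logical racks from network topology: physical
# machines attached to the same switch form one logical rack, and a group of
# VMs that communicate heavily is placed inside a single rack so its traffic
# stays within that switch. Data layout is illustrative, not from the paper.

from collections import defaultdict

def build_logical_racks(machine_to_switch):
    """Map switch id -> list of physical machines attached to it."""
    racks = defaultdict(list)
    for machine, switch in machine_to_switch.items():
        racks[switch].append(machine)
    return dict(racks)

def place_vm_group(racks, group_size):
    """Pick one logical rack with enough machines to host a chatty VM group."""
    for switch, machines in racks.items():
        if len(machines) >= group_size:
            return switch, machines[:group_size]
    return None

topology = {"pm1": "sw-A", "pm2": "sw-A", "pm3": "sw-A", "pm4": "sw-B", "pm5": "sw-B"}
racks = build_logical_racks(topology)
print(racks)                      # {'sw-A': ['pm1', 'pm2', 'pm3'], 'sw-B': ['pm4', 'pm5']}
print(place_vm_group(racks, 3))   # ('sw-A', ['pm1', 'pm2', 'pm3'])
```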



C. Hierarchical Scaling of Interconnect Bandwidth

TDM: To maximize off-chip bandwidth and power efficiency, the I/O data rate will increase with new CMOS technology nodes. Nevertheless, the aggregated data rate cannot be scaled arbitrarily high; it is limited by physical impairments in both the electrical and optical domains. As a result, TDM faces various scalability challenges.

SDM: To scale the bandwidth for intra-building interconnection, SDM uses multiple fibers in parallel to transfer multiple streams of data.

WDM: To overcome the limitations of TDM and SDM, wavelength division multiplexing, where multiple wavelengths of light run in a single fiber, appears to be a promising technique, since it fully exploits the vast bandwidth available in single-mode optical fibers. A low-power, low-cost photonic integrated circuit (PIC) using WDM holds the promise of further scaling the density, reach, and data rates needed for next-generation datacenter networks, and of enabling new network architectures and applications.
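As a back-of-the-envelope illustration of why WDM scales aggregate fiber capacity (the 40-channel and 100 Gb/s figures below are assumptions for the example, not numbers from the paper):

```python
# Hedged sketch of WDM scaling: aggregate capacity of one fiber is simply
# wavelengths per fiber x per-wavelength data rate. Numbers are illustrative.

def wdm_capacity_gbps(num_wavelengths: int, rate_per_wavelength_gbps: float) -> float:
    return num_wavelengths * rate_per_wavelength_gbps

single_channel = wdm_capacity_gbps(1, 100)    # one TDM stream on the fiber
dense_wdm = wdm_capacity_gbps(40, 100)        # 40 wavelengths on the same fiber
print(f"single channel: {single_channel} Gb/s, 40-channel WDM: {dense_wdm} Gb/s")
# i.e., 4 Tb/s over a single fiber instead of 100 Gb/s, with no extra fibers laid.
```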

V. INTELLIGENT AUTOMATION

Fig. 4: Intelligent Automation Cloud

Fig. 5: Intelligent Agent

Table 1: Alignment of the NIST-FISMA standard with the cloud computing model

An agent is anything that can be viewed as perceiving its

environment through sensors and acting upon that environment through effectors. A human agent has eyes, ears, and other organs for sensors, and hands, legs, a mouth, and other body parts for effectors. A robotic agent substitutes cameras and infrared range finders for the sensors and various motors for the effectors. A software agent has encoded bit strings as its percepts and actions. A generic agent is diagrammed in Fig. 5. Our aim here is to design agents that do a good job of acting on their environment. First, we will be a little more precise about what we mean by a good job. Then we will talk about different designs for successful agents. We discuss some of the general principles used in the design of agents, chief among which is the principle that agents should know things. Finally, we show how to couple an agent to an environment and describe several kinds of environments.

Intelligent Cloud. Intelligent Cloud enables you to lease hosted servers in a virtual environment without the financial outlay of hardware costs, whilst retaining the ability to access and administer your services as required. With two models to choose from, both with 512 Kbps of free Internet bandwidth, there is a solution to fit your needs: a highly flexible pay-as-you-go option with a minimum contract term of just one month, or a reserved service with guaranteed modular options available on 1, 2, or 3 year contract terms.
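To make the perceive/act cycle concrete in the cloud setting, here is a minimal sketch of a reflex agent whose percepts are load readings and whose single condition-action rule triggers a scale-out action; the rule and threshold are illustrative, not from the paper.

```python
# Hedged sketch of the generic agent loop described above: an agent maps a
# percept sequence gathered through sensors to an action issued via effectors.
# The thermostat-style scaling rule used here is purely illustrative.

class SimpleReflexAgent:
    def __init__(self):
        self.percepts = []            # remembered percept sequence

    def perceive(self, percept):
        self.percepts.append(percept)

    def act(self):
        # Condition-action rule: react to the most recent percept.
        latest = self.percepts[-1]
        return "scale_out" if latest["cpu_load"] > 0.8 else "hold"

agent = SimpleReflexAgent()
for load in (0.4, 0.7, 0.9):          # environment feeds percepts via sensors
    agent.perceive({"cpu_load": load})
    print(load, "->", agent.act())    # effector output: hold, hold, scale_out
```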

Reserved Cloud Hosting Configurations

                    Starter   Small    Medium   Large     Extra Large
CPU Reserved        2.5 GHz   5 GHz    10 GHz   22 GHz    36 GHz
Memory Reserved     5 GB      10 GB    20 GB    35 GB     48 GB
Storage             300 GB    600 GB   900 GB   1200 GB   2000 GB
Virtual Machines    2-5       6-12     15-25    25-50     50-100
Public Addresses    1         1        1        2         4



Both options have the ability to burst beyond their base configuration. Intelligent Cloud is built on fully resilient, fault-tolerant, enterprise-class server hardware and utilizes a leading virtual machine software platform.

SYSTEM MODEL AND ASSUMPTION

1. Cloud Computing Environment: The system model of the cloud computing environment consists of four main components, namely the cloud consumer, the virtual machine (VM) repository, cloud providers, and the cloud broker.

2. Provisioning Plans: A cloud provider can offer the consumer two provisioning plans, i.e., reservation and/or on-demand plans. The cloud broker treats the reservation plan as medium- to long-term planning, since the plan has to be subscribed in advance (e.g., for 1 or 3 years) and can significantly reduce the total provisioning cost (see the cost sketch after this list).

3. Provisioning Phases: The cloud broker considers both reservation and on-demand plans for provisioning resources. These resources are used in different time intervals, also called provisioning phases; the first of the three provisioning phases is the reservation phase.

4. Provisioning Stages: A provisioning stage is the time epoch at which the cloud broker decides to provision resources by purchasing reservation and/or on-demand plans, and allocates VMs to cloud providers to utilize the provisioned resources.

5. Reservation Contracts: A cloud provider can offer the consumer multiple reservation plans with different reservation contracts. Each reservation contract refers to an advance reservation of resources for a specific duration of usage.
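The cost trade-off the broker weighs between these plans can be illustrated with a small sketch. All prices and the demand profile below are invented for the example; they are not taken from the paper or from any provider's price list.

```python
# Hedged sketch of the reservation vs. on-demand trade-off the broker faces.
# All prices and the hourly demand profile are made-up illustrative numbers.

def provisioning_cost(demand_per_hour, reserved_instances,
                      reserve_upfront, reserve_hourly, on_demand_hourly):
    """Total cost when `reserved_instances` VMs are reserved in advance and
    any demand above that level is served with on-demand VMs."""
    cost = reserved_instances * reserve_upfront
    for demand in demand_per_hour:
        reserved_used = min(demand, reserved_instances)
        on_demand_used = demand - reserved_used
        cost += reserved_used * reserve_hourly + on_demand_used * on_demand_hourly
    return cost

demand = [4, 4, 6, 10, 6, 4]                 # VMs needed in each hour
for r in (0, 4, 10):                         # candidate reservation levels
    c = provisioning_cost(demand, r, reserve_upfront=0.5,
                          reserve_hourly=0.05, on_demand_hourly=0.20)
    print(f"reserve {r:2d} VMs -> total cost {c:.2f}")
# Reserving the base load (4 VMs) is cheapest with these made-up prices;
# reserving for the peak or using only on-demand both cost more, which is
# exactly the planning problem the cloud broker has to solve.
```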

The Intelligent Cloud

The Internet has had an enormous impact on people's lives around the world in the ten years since Google's founding. It has changed politics, entertainment, culture, business, health care, the environment, and just about every other topic you can think of. Which got us to thinking: what's going to happen in the next ten years? How will this phenomenal technology evolve, how will we adapt, and (more importantly) how will it adapt to us? We asked ten of our top experts this very question, and during September (our 10th anniversary month) we are presenting their responses. As computer scientist Alan Kay has famously observed, the best way to predict the future is to invent it, so we will be doing our best to make good on our experts' words every day. - Karen Wickre and Alan Eagle, series editors

In coming years, computer processing, storage, and networking capabilities will continue up the steeply exponential curve they have followed for the past few decades. By 2019, parallel-processing computer clusters will be 50 to 100 times more powerful in most respects. Computer programs, more of them web-based, will evolve to take advantage of this newfound power, and Internet usage will also grow: more people online, doing more things, using more advanced and responsive applications. By any metric, the "cloud" of computational resources and online data and content will grow very rapidly for a long time.

As we're already seeing, people will interact with the cloud using a plethora of devices: PCs, mobile phones and PDAs, and games. But we'll also see a rush of new devices customized to particular applications, and more environmental sensors and actuators, all sending and receiving data via the cloud. The increasing number and diversity of interactions will not only direct more information to the cloud, they will also provide valuable information on how people and systems think and react.

Thus, computer systems will have greater opportunity to learn from the collective behavior of billions of humans. They will get smarter, gleaning relationships between objects, nuances, intentions, meanings, and other deep conceptual information. Today's Google search uses an early form of this approach, but in the future many more systems will be able to benefit from it.

What does this mean to Google? For starters, even better search. We could train our systems to discern not only the characters or place names in a YouTube video or a book, for example, but also to recognize the plot or the symbolism. The potential result would be a kind of conceptual search: "Find me a story with an exciting chase scene and a happy ending." As systems are allowed to learn from interactions at an individual level, they can provide results customized to an individual's situational needs: where they are located, what time of day it is, what they are doing. And translation and multi-modal systems will also be feasible, so people speaking one language can seamlessly interact with people and information in other languages.

Optical Circuits for Data Centers

Optical Technologies. Optical links are today's standard for ultra-high-speed data transmission. Telecommunications and wide-area backbone networks commonly use 40 Gbps OC-768 lines, and 100 Gbps optical links have already been developed. With the increasing demand for high-speed transmission in data centers and storage area networks, optical fiber is a logical choice because of its low loss, ultra-high bandwidth, and low power consumption. Traditionally, however, optical components (e.g., transceivers and switches) have been significantly more expensive than their electrical counterparts. Recent advances in optical interconnect technology have precipitated cost reductions that might make using optical links in data centers viable.



Our design leverages MEMS-based optical switches. These devices, which offer a promising cost-performance point, provide switching by physically rotating mirror arrays that redirect carrier laser beams to create connections between input and output ports. Once such a connection is established, network link efficiency is extremely high; however, the reconfiguration time for such devices is long (a few milliseconds).
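The impact of that millisecond-scale reconfiguration time can be seen with a simple duty-cycle calculation; the hold times and the 3 ms switching figure below are assumptions for illustration, not measurements from the paper.

```python
# Hedged sketch of why MEMS reconfiguration time matters: if a circuit is
# reconfigured every `hold_ms` milliseconds and each reconfiguration takes
# `reconfig_ms` milliseconds, the link carries traffic only part of the time.

def duty_cycle(hold_ms: float, reconfig_ms: float) -> float:
    """Fraction of time the optical circuit can actually carry data."""
    return hold_ms / (hold_ms + reconfig_ms)

for hold in (10, 100, 1000):                      # how long each circuit is held
    eff = duty_cycle(hold, reconfig_ms=3.0)       # ~3 ms MEMS switching time
    print(f"hold {hold:4d} ms -> link efficiency {eff:.1%}")
# Short-lived circuits waste a large share of capacity on reconfiguration,
# so MEMS-based designs favor long-lived, high-volume flows.
```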

The impact of such systems will go well beyond Google. Researchers across medical and scientific fields can access massive data sets and run analysis and pattern detection algorithms that aren't possible today. The proposed Large Synoptic Survey Telescope (LSST), for example, may generate over 15 terabytes of new data per day! Virtually any research field will benefit from systems with the ability to gather, manipulate, and learn from datasets at that scale.

Traditionally, systems that solve complicated problems and queries have been called "intelligent", but compared to earlier approaches in the field of 'artificial intelligence', the path that we foresee has important new elements. First of all, this system will operate on an enormous scale with an unprecedented computational power of millions of computers. It will be used by billions of people and learn from an aggregate of potentially trillions of meaningful interactions per day. It will be engineered iteratively, based on a feedback loop of quick changes, evaluation, and adjustments. And it will be built based on the needs of solving and improving concrete and useful tasks such as finding information, answering questions, performing spoken dialogue, translating text and speech, understanding images and videos, and other tasks as yet undefined. When combined with the creativity, knowledge, and drive inherent in people, this "intelligent cloud" will generate many surprising and significant benefits to mankind.

The enormous bandwidth requirements faced by today's communication networks have stimulated the massive deployment of optical backbone networks. Wavelength Division Multiplexing (WDM) has emerged as the most popular technology for optical networks due to its flexibility and robustness. In an optical WDM network, an end-to-end connection (i.e., a circuit) is established through a wavelength channel, called a lightpath. Optical Cross-Connects (OXCs)

Fig. 6: Network of datacenters

are used to switch optical signals in a fiber optic network. In this work, we focus on opaque OXCs, where incoming optical signals (wavelengths) are first demultiplexed and then converted into electronic signals. The electronic signals are switched using an electronic switching module, converted back into optical signals, and then multiplexed onto the output optical fiber. One of the benefits of an opaque OXC is that we do not need to use the same wavelength over all the links in an end-to-end lightpath.
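A small sketch of what that benefit buys, contrasting per-link wavelength assignment (possible with opaque OXCs) against the wavelength-continuity constraint of a transparent network; the three-link topology and free-wavelength sets are invented for the example.

```python
# Hedged sketch contrasting wavelength assignment with and without the
# wavelength-continuity constraint. With opaque OXCs (O/E/O conversion at
# every node), each link along a lightpath can use whatever wavelength is
# free on that link. Topology and wavelength sets are illustrative only.

def assign_opaque(path_links, free_wavelengths):
    """Pick any free wavelength independently on each link of the path."""
    assignment = {}
    for link in path_links:
        if not free_wavelengths[link]:
            return None                      # link fully loaded: block the request
        assignment[link] = min(free_wavelengths[link])
    return assignment

def assign_transparent(path_links, free_wavelengths):
    """Transparent network: the same wavelength must be free on every link."""
    common = set.intersection(*(free_wavelengths[l] for l in path_links))
    return min(common) if common else None

free = {"A-B": {2, 3}, "B-C": {1}, "C-D": {1, 3}}
path = ["A-B", "B-C", "C-D"]
print(assign_opaque(path, free))        # {'A-B': 2, 'B-C': 1, 'C-D': 1}
print(assign_transparent(path, free))   # None: no single wavelength is free end-to-end
```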

VI. CONCLUSION AND FUTURE WORK

A cloud solution based on Intelligent Automation for Cloud includes the following. • Through a web-based self-service interface, users view service catalog options based on their roles, organization, and other access controls. Users can order services, provide configuration information through dynamic forms, and track and manage their services and usage on an ongoing basis. The catalog also helps IT associate costs with the various services, which can be integrated with billing and financial services for chargeback.

• Orders that have been placed and approved go through service delivery automation, which orchestrates the provisioning and configuration steps across all the elements. These elements include resources to be provisioned (compute, virtualization, network, and storage), configuration updates to be made, software to be provisioned, and supporting services to be set up (firewalls, load balancing, and disaster recovery).

• Operational process automation assists and coordinates the ongoing operational and support tasks for cloud management, including user management, performance management, alerting, service-level management, capacity planning, maintenance checks and procedures, and audit and compliance reporting.

• Resource management uses the resource pools to provision, manage, deprovision, and configure individual resources to complete resource-level operations. Requests are orchestrated to domain resource managers or handled internally.

• Lifecycle management involves creation and management of a service model, service definitions, and the underlying automation design for provisioning and managing each service.

The cloud computing system coordinates infrastructure subsystems to provide services to users. The service is usually provided as a lease of virtual machines, and efficient management of virtual machines depends on close cooperation among the infrastructure subsystem managers. Most current cloud computing systems lack pluggability of their infrastructure subsystems and decision algorithms, and this lack of pluggability makes it impossible for a cloud computing system administrator to choose an infrastructure subsystem manager that suits his needs.

The following results are achieved:

• Consolidation: So far, we have achieved a 6:1 consolidation ratio, but there is still room for growth. We plan to add more VMs to the existing server infrastructure to support business growth without the associated additional server hardware costs; we simply add extra storage to the Clariion SAN as needed and assign shared processor cores to the VMs.



• Capital Expenditure Savings: Thanks to the consolidation ratio, we have reduced the server footprint in our data centre by almost 50%, and with EMC MirrorView/A we have introduced near real-time disaster recovery without the usual overhead costs of duplicating each individual physical server.

• High Availability: We leverage VMware HA and Fault Tolerance to ensure that our services are available to our users and customers, even during hardware failures or network connectivity outages.

• Time Savings: In the past it used to take several weeks to procure and provision a Windows server. Now we can provision servers in a few hours on the fly, because we have sufficient available capacity in our existing hardware.

• Backup and Recovery: We combine traditional offsite remote tape backups using Symantec Backup Exec with SAN-based replication of entire VMs from our production site at Dalton Street to our disaster recovery site at Port Erin. Whenever necessary, we can quickly restore an entire file, SQL, or Exchange server, or provision a new virtual machine for testing.

• Increased Performance: We continue to see great performance with the VMware vSphere product, and when coupled with the latest AMD or Intel processors and EMC Clariion SANs, its scalability is perfect for our requirements.

In this study, different security- and privacy-related research papers were studied briefly. Cloud services are used by both large and small organizations, and the advantages of cloud computing are huge. However, everything has disadvantages as well as advantages: cloud computing suffers from severe security threats, and from the user's point of view the lack of security is its most significant disadvantage. Both the service providers and the clients must work together to ensure the safety and security of the cloud and of the data stored in it. Datacenter platforms have dominated the systems landscape over the last decade, offering applications the promise of scalability, availability, and responsiveness at very low cost.

REFERENCES

[1] Rongxing Lu et al., "Secure Provenance: The Essential Bread and Butter of Data Forensics in Cloud Computing," Proc. ASIACCS '10, Beijing, China, 2010.
[2] "Amazon Simple Storage Service," http://aws.amazon.com/s3/.
[3] "Amazon Elastic Compute Cloud," http://aws.amazon.com/ec2/.
[4] "Eucalyptus Project," http://open.eucalyptus.com/.
[5] R. La'Quata Sumter, "Cloud Computing: Security Risk."
[6] I. Foster, Y. Zhao, and S. Lu, "Cloud Computing and Grid Computing 360-Degree Compared," Proc. Grid Computing Environments Workshop (GCE '08), 2008.
[7] Amazon EC2, http://aws.amazon.com/ec2, 2012.
[8] GoGrid, http://www.gogrid.com, 2012.
[9] Amazon EC2 Reserved Instances, http://aws.amazon.com/ec2/reserved-instances, 2012.
[10] Y. Jie, Q. Jie, and L. Ying, "A Profile-Based Approach to Just-in-Time Scalability for Cloud Applications," Proc. IEEE Int'l Conf. Cloud Computing (CLOUD '09), 2009.
[11] Y. Kee and C. Kesselman, "Grid Resource Abstraction, Virtualization, and Provisioning for Time-Target Applications," Proc. IEEE Int'l Symp. Cluster Computing and the Grid, 2008.
[12] A. Filali, A.S. Hafid, and M. Gendreau, "Adaptive Resources Provisioning for Grid Applications and Services," Proc. IEEE Int'l Conf. Comm., 2008.
[13] D. Kusic and N. Kandasamy, "Risk-Aware Limited Lookahead Control for Dynamic Resource Provisioning in Enterprise Computing Systems," Proc. IEEE Int'l Conf. Autonomic Computing, 2006.
[14] K. Miyashita, K. Masuda, and F. Higashitani, "Coordinating Service Allocation through Flexible Reservation," IEEE Trans. Services Computing, vol. 1, no. 2, pp. 117-128, Apr.-June 2008.
[15] J. Chen, G. Soundararajan, and C. Amza, "Autonomic Provisioning of Backend Databases in Dynamic Content Web Servers," Proc. IEEE Int'l Conf. Autonomic Computing, 2006.



DEFENDING MECHANISM TO SECURE NODES FROM INTERNAL ATTACK IN WSN

Annapoorna Rao, Priyanka Singh, Shruthi R and Syeda S. Rubbani

Abstract—Wireless sensor networks (WSNs) have gained considerable attention in the past few years. They have found application domains in battlefield communication, homeland security, pollution sensing, and traffic monitoring. However, security threats to WSNs have become increasingly varied: due to the open nature of the wireless medium, an adversary can easily eavesdrop on, replay, or inject fabricated messages. Various cryptographic methods are used to defend against some of these attacks, but their effectiveness is limited. Node compromise, for example, is another major problem of WSN security, as it allows an adversary to enter the security perimeter of the network, which raises a serious challenge for WSNs. This paper focuses on a systematic investigation of internal attacks on wireless sensor networks, a novel security framework under some fixed parameters chosen by the network designer, and a reasonable model to predict the time at which the signal with the highest signal-to-noise ratio (S/N) is transmitted to the sink. A "timing control" method is used to protect against internal attacks in WSNs: the sink only opens during a special time period and otherwise remains in a sleeping state, ignoring any incoming signals, so that the network is protected from internal attacks during the sleeping period. The sending rate is manipulated to control the time at which the highest S/N occurs, protecting against "internal attacks." Keywords-- Wireless Sensor Networks; Sensor Node; Target Node; Network Security; Internal Attacks.
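As a rough sketch of the timing-control idea from the abstract (the period, window length, and packet format below are assumptions made for illustration, not parameters from the paper):

```python
# Hedged sketch of "timing control": the sink accepts packets only inside a
# scheduled listening window and sleeps (ignoring all traffic, including
# injected packets) the rest of the time. All numbers are illustrative.

def sink_accepts(t: float, period: float = 10.0, open_window: float = 2.0) -> bool:
    """Return True if the sink is awake at time t (seconds)."""
    return (t % period) < open_window          # awake for 2 s out of every 10 s

def filter_traffic(packets):
    """Keep only packets that arrive while the sink is open."""
    return [p for p in packets if sink_accepts(p["t"])]

traffic = [
    {"t": 0.5,  "src": "node-7",   "data": 21.3},   # legitimate, inside window
    {"t": 5.0,  "src": "attacker", "data": 0.0},    # injected while sink sleeps
    {"t": 11.2, "src": "node-3",   "data": 20.8},   # next window, accepted
]
print(filter_traffic(traffic))   # the attacker packet at t=5.0 is ignored
```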

I. INTRODUCTION

A Wireless Sensor Network (WSN) can be considered a collection of spatially deployed wireless sensors by which we monitor various changes in environmental conditions (e.g., military surveillance [4], forest fires [5], air pollutant concentration [3], volcano detection [1], object movement [2], and a computing platform for tomorrow's Internet [6]) in a collaborative manner, without relying on any underlying infrastructure support [7].

Annapoorna Rao is with VTU University, Computer Science and Engineering Dept., AMC Engineering College, Bangalore, Karnataka, India (e-mail: [email protected]).

Priyanka Singh is with VTU University, Computer Science and Engineering Dept., AMC Engineering College, Bangalore, Karnataka, India (e-mail: [email protected]).

Shruthi R is with VTU University, Computer Science and Engineering Dept., AMC Engineering College, Bangalore, Karnataka, India (e-mail: [email protected]).

Syeda S. Rubbani is with VTU University, Computer Science and Engineering Dept., AMC Engineering College, Bangalore, Karnataka, India (e-mail: [email protected]).

Recently, a number of research efforts have been made to develop sensor hardware and network architectures in order to effectively deploy WSNs for a variety of applications, which has led to a wide diversity of WSN application requirements. However, a general-purpose WSN design cannot fulfill the needs of all applications. Many network parameters, such as sensing range, transmission range, and node density, have to be considered at the network design stage according to the specific application. To achieve this, it is critical to capture the impact of network parameters on network performance with respect to application specifications. In WSN systems, a sensor node detects the information of interest, processes it with the help of an in-built microcontroller, and communicates the results to a sink or base station.

Fig. 1: Sink Architecture in WSN

Normally the base station is a more powerful node [8]; it can be linked to a central station via satellite or Internet communication, forming network deployments for wireless sensor networks that depend on the application. A typical WSN with one static sink is composed of a large number of sensor nodes responsible for sensing data and a sink node, as shown in Fig. 1; some nodes are responsible for collecting and processing data. Furthermore, because of the open nature of the wireless medium, an adversary can easily eavesdrop, which is called a "passive attacker" [9], or replay and inject fabricated messages, a so-called "active attacker" [10]. It is well known that various cryptographic methods can be used to protect against WSN attacks, and they are sometimes very efficient and effective [11]. Due to WSN deployments in open and possibly hostile environments, attackers can easily launch "denial-of-service" (DoS) attacks [9], causing physical damage to sensors or capturing them to

