
Integration of Grid and Cloud with Semantics based Integrator

Kailash Selvaraj
Centre for Development of Advanced Computing (CDAC), Chennai, India
e-mail: [email protected]

Dr. Saswati Mukherjee
College of Engineering, Anna University, Chennai, India
e-mail: [email protected]

Abstract— A grid is fully utilized only during certain periods; a job arriving at such an instant must wait until grid resources become free, while for most of the time the grid resources are only minimally utilized. On the other hand, although a private cloud is more secure than a public cloud, its elasticity is limited by the private cloud's maximum hardware capacity. We propose an architecture that integrates grid and private cloud so that grid resource requirements are fulfilled by fetching resources from the cloud when needed, and vice versa. A dedicated storage cluster is added to the integrated environment to enhance performance and improve job execution. We propose semantics based integration management of grid and cloud for improved resource management. This approach provides better service availability, quality of service and optimal utilization of resources.

Keywords-Grid cloud integration, virtual grid, storage cluster for grid, ontology based integration

I. INTRODUCTION

The rapid growth of software and hardware technology paved the way for grid computing and then for cloud computing. As the need for computational and storage resources increases, the technology must develop to fulfill those needs; conversely, as technology develops, usage increases. Thus the two drive each other forward. Grid computing resources, generally used for scientific applications, are not utilized optimally most of the time. However, at certain peak utilization times, the grid resources are fully occupied by jobs under execution. If this situation continues for a considerably long time, new grid jobs submitted during this period must wait in queue until the required grid resources become free.

Cloud computing is more widely adopted by the commercial industry than by the scientific community. Still, there are certain barriers to complete adoption of public clouds by industry, the most important being security. To overcome these barriers, the private cloud emerged, which is comparatively more secure while still upholding some of the typical characteristics of cloud. Elasticity is one of the major attractions of cloud. Although private clouds do offer elasticity, its extent is limited to the hardware capacity of the private cloud, and at peak time it is bound to fall short of the demand. This situation is known as cloud burst. Researchers have proposed architectures to fetch additional resources from the public cloud during cloud burst.

However, integrating a private cloud with a public cloud is fraught with security threats and hence is not widely acceptable. The grid, on the other hand, is proprietary in nature and is often underutilized in an organization. This paper therefore proposes to integrate the private cloud with the grid environment. The grid, in turn, can utilize the private cloud when it needs resources. Since the grid itself is highly secured, this integration poses no threat to the security of the private cloud, while still solving the problem of cloud burst.

Integration of grid and cloud is one of the critical requirements in the field of High Performance Computing; such integration leads to effective resource utilization and to Green ICT, which enterprise datacenters need. It also enhances the service availability of both the grid and the private cloud. Researchers have attempted this integration in various ways.

In the proposed architecture, the grid execution environment is deployed over the cloud as virtual machines onto which grid jobs are submitted. Grid jobs that wait in queue for a long period of time due to unavailability of execution nodes are directed to the cloud. Better performance and quality of service can be achieved for grid applications using the elasticity and on-demand provisioning properties of the cloud. As a result of this integration, resources locked under the grid environment can also be utilized to execute cloud instances when the private cloud needs additional resources, and vice versa.

Although the integration of grid with private cloud offers service availability over a secured environment, there are certain issues that need to be resolved. The meta scheduler of the grid finds only an exact match of resources; if an exact match of the user's requirement specification is not available among the available resource specifications, the grid job waits in queue until resources are made free. This is an existing problem in any grid environment, and it becomes more crucial in the integrated environment. We propose to use semantic resource matching in the integrated grid cloud environment, which matches not only the exact specification of resources but also alternate matches, and hence more jobs are executed.

Grid does not offer dynamic provisioning of software resources, and grid users normally have to adapt to the available software resources, including the required library files. In the integrated environment, when grid requests are diverted to the cloud and the cloud, though it has hardware resources available to execute the job, has no matching software stack available, the grid job still remains in queue until the software resources are deployed into the cloud. On the other hand, if a cloud user requests a platform service and the requested platform stack is not available, the request cannot be provisioned and hence the service quality of the provider may be lowered.

In a situation where each cloud user requests a platform with a different stack, a huge number of software stacks would need to be bundled and maintained in the cloud repository, and the size of the repository may grow very large. The service provider also needs to pay for licenses for these software packages, and the total license cost paid for a software stack may be high compared to the number of users requesting it. With the proposed semantic matching of resources, a minimum number of licensed software packages can be utilized by provisioning alternate matches of the requested platforms. For example, if a user requests the Red Hat operating system, Fedora Core, which is an alternate match, can be provisioned for the user.

Thus, this work proposes an integrated grid cloud architecture with semantic matching that focuses on the critical aspects of management of resources and execution of jobs in the integrated environment.

The paper is organized as follows: Section 2 provides the literature survey on grid and cloud integration and semantics based grid and cloud. Section 3 explains the proposed architecture of integrated grid and cloud environment and the components of the integrator and also the ontology based integration environment followed by the implementation in Section 4. Section 5 contains the discussion on the results while Section 6 draws the conclusion.

II. LITERATURE SURVEY

The work proposed in [2] explains an elastic site manager that interacts directly only with the cluster manager and extends the cluster resources with the cloud. There is a restriction that monitoring is only at the cluster level. Our approach focuses on extending a level above the clusters.

A private cloud provides flexible load balancing and energy efficiency, and the addition of hybrid clouds provides elasticity; thus integration provides flexibility. In a hybrid grid and cloud, the issues that arise are multi-domain communication, multi-domain deployment and support for dynamic resources; deployment of the middleware encompasses the configuration of channels interconnecting the ProActive runtimes [3]. In [4], a single system image across multiple cores is proposed, providing ease of administration, transparent sharing, informed optimizations, consistency and fault tolerance. In [1], grid over VMs allows grid users to define an environment in terms of its requirements, such as resource requirements and software configuration, control it, and then deploy the environment in the grid. A system architecture for a virtual machine grid is presented, explaining the layers for executing grid jobs and Globus in a VM. The authors create virtual machines for the execution of grid jobs and run them on grid resources; the number of virtual machines executing simultaneously depends on the capacity of the grid and the number of free resources. In [5], the authors explore the use of cloud computing for scientific workflows; their approach is to evaluate the tradeoffs between running tasks in a local environment, if one is available, and running in a virtual environment via remote, wide-area network resource access. The work in [6] describes a scalable and lightweight computational workflow system for clouds which can run workflow jobs composed of multiple Hadoop MapReduce or legacy programs. However, neither work offers support for any other services.

Building a virtual grid over a local area network by pooling resources has been suggested in [7], but the capacity of such a grid is limited. The system uses a single monitoring server to keep track of various parameters pertaining to the pooled resources and the tasks deployed on the system, which does not provide a solution to the single point of failure. In [8], a portable layer between different vendor clouds is proposed to avoid vendor lock-in, forming a meta-cloud or cloud-of-clouds; it lacks a framework for handling datasets. Our approach focuses on handling large-volume datasets in an integrated grid cloud environment.

Grid infrastructures do not isolate and partition the performance of resources: the execution of one grid user's application may affect the execution of others. This limits the quality of service and reliability of actual platforms, preventing wide adoption of the Grid paradigm. In [9], a straightforward deployment of virtual machines in a Grid infrastructure is presented. This strategy does not require additional middleware to be installed and is not bound to a particular virtualization technology, yet it presents attractive benefits, such as increasing software robustness and saving administration effort, thereby improving the quality of life in the Grid.

The work in [3] proposes a lean middleware that stands between the hybrid infrastructure and the application layer, enabling seamless, flexible and efficient use of any combination of heterogeneous computing resources in intensively communicating applications. It acts as a dynamically evolvable logical representation of the underlying infrastructure and supports application message routing. The middleware relies upon SSH tunneling and message forwarding to logically connect processes in multi-domain environments. Another possible solution to the connectivity problem is the use of VPNs; however, this solution is less flexible.

The use of an ontology and corresponding match-making algorithms in the grid is proposed in [13]. We have adopted the same algorithm in our request handler, extending it to the grid and cloud integrated environment.

III. ARCHITECTURE

A. Integrated Grid-Cloud Architecture

The architecture has four major components: Grid, Integrator, Cloud and Storage Cluster.

Grid: The grid environment consists of a conventional grid with a meta scheduler at the top layer. The layer below it is the middleware, and the layer below the middleware is the local scheduler.


When a grid job is submitted to the grid, the meta scheduler queries, through the grid middleware services, for the availability of suitable resources to execute the job. If resources are available to handle the request, the meta scheduler handles the job along with the middleware; the middleware in turn diverts the request to the local scheduler of the specific cluster, and the local scheduler schedules the request onto compute nodes for execution. If the query returns stating that no resources are available to handle the request, the request is passed to the integrator component for execution in the cloud.
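As a rough illustration only, the sketch below models this dispatch decision; the helpers `find_cluster`, `submit` and `execute_in_cloud` are hypothetical names, not part of the actual meta scheduler or integrator implementation.

```python
from dataclasses import dataclass

@dataclass
class JobRequest:
    job_id: str
    cpus: int
    memory_gb: int

def dispatch_grid_job(job: JobRequest, grid, integrator):
    """Minimal model of the meta scheduler's decision: run the job on the
    grid if a suitable cluster is free, otherwise hand it to the integrator
    so it can be executed on cloud-hosted virtual machines."""
    cluster = grid.find_cluster(cpus=job.cpus, memory_gb=job.memory_gb)  # middleware query
    if cluster is not None:
        # The local scheduler of the chosen cluster executes the job on compute nodes.
        return cluster.local_scheduler.submit(job)
    # No free grid resources: divert the request to the integrator component.
    return integrator.execute_in_cloud(job)
```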

Cloud: The cloud environment consists of a cloud controller to which user requests are submitted and passed on to the underlying components. The cloud controller controls the cluster controllers deployed at every cluster. These cluster controllers control and schedule resources at their respective clusters and in turn control the storage controller present in that cluster. The cluster controller controls the node controller running on each physical machine. The node controller acts as the communication interface between the hypervisor and the cluster controller. The hypervisor is responsible for virtualization: it interacts with the firmware and brings up the virtual machines.

Each time a new request for executing a virtual machine in the cloud is received, it is handled by the cloud controller, which checks the available cores, memory and storage needed to deploy the image. If the image can be deployed, the request is processed by the cloud middleware controllers. If the image cannot be deployed due to lack of resources, the request is diverted to the integrator for deploying the image in the grid.
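The symmetric, cloud-side decision can be sketched as below; `free_capacity`, `deploy` and `deploy_on_grid` are hypothetical helpers used only to illustrate the admission check, not the Eucalyptus API.

```python
def admit_or_divert(image_request, cloud, integrator):
    """Sketch of the cloud controller's admission decision: deploy the image
    if free cores, memory and storage suffice, otherwise divert the request
    to the integrator so it can run on grid nodes."""
    free = cloud.free_capacity()   # e.g. {"cores": 12, "memory_gb": 48, "disk_gb": 900}
    need = image_request.requirements
    if all(free[k] >= need[k] for k in ("cores", "memory_gb", "disk_gb")):
        return cloud.cluster_controller.deploy(image_request)
    return integrator.deploy_on_grid(image_request)
```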

The integrator: The integrator component has five modules, each of which plays a vital role in the integration of grid and cloud. The five modules are the Resource Manager, Execution Manager, Security Manager, Image Manager and Network Manager.

Resource Manager (RM) manages the grid resources when a cloud instance is running in the grid environment and manages the cloud resources when grid jobs are running in the cloud environment.

Execution Manager (EM) is responsible for managing the execution of grid jobs in the cloud environment and of cloud instances in the grid environment. The EM keeps track of the grid jobs submitted to the cloud environment. It periodically queries the status of the jobs, and if any intermediate input files are needed for processing, it fetches the input files from the grid and passes them to the cloud instances where the grid jobs are running. If the storage cluster is included, the EM also passes the raw data to the storage cluster and the processed data to the application. Since the grid cannot contact the cloud and the cloud cannot contact the grid directly, the EM acts as the communication interface between them when cross job execution takes place. When the grid jobs are completed in the cloud, the output data is fetched by the EM and passed on to the grid. Once a task is completed, the EM destroys the instances deployed at the grid and cloud sites.
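A minimal sketch of the EM's monitoring loop is given below; the handles `grid`, `cloud` and `storage`, their method names, and the polling interval are illustrative assumptions, not the actual implementation.

```python
import time

def monitor_grid_job_on_cloud(job_id, grid, cloud, storage=None, poll_s=30):
    """Track a grid job running on cloud VMs: stage inputs on demand,
    optionally offload bulk data to the storage cluster, return outputs
    to the grid and finally tear the borrowed instances down."""
    while True:
        status = cloud.job_status(job_id)              # periodic status query
        if status.needs_input:
            data = grid.fetch_input(job_id, status.missing_files)
            if storage is not None and status.bulk_data:
                storage.put(job_id, data)              # bulk volumes go to the storage cluster
            else:
                cloud.stage_input(job_id, data)
        if status.finished:
            grid.store_output(job_id, cloud.fetch_output(job_id))
            cloud.destroy_instances(job_id)            # free the cloud resources
            return
        time.sleep(poll_s)
```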

Imaging Component (IM) acts as a repository holding the cloud images bundled with their applications, and grid jobs bundled with the required operating system, binaries and libraries. When a request is raised by the RM to the cloud, the IM bundles the required components, stores them in its repository, and passes the image to the cloud when the instance is ready to be deployed. On the other hand, when a request is passed to the meta scheduler, the IM fetches the image from Walrus under the cloud controller and stores it in the repository.

The integration component coordinates and controls grid and cloud operating in different networking domains and in different subnets. Within the grid, a dynamic virtual organization is formed for better execution of jobs; when grid jobs are submitted to the cloud, the virtual organization is extended to the cloud instances. The grid and cloud can be integrated through a peer-to-peer network for enhanced data transfer and secured communication. This network has to be specifically engineered to provide high resilience while avoiding single points of control and failure, which would make a decentralized super-peer based control mechanism insufficient. The Network Manager (NM) is responsible for maintaining the network state across the domains and for providing ease of accessibility.

Storage Cluster: A dedicated storage cluster is provided which can be attached to instances dynamically, or to which bulk volume data can be submitted and results retrieved. This storage cluster runs a distributed file system with MapReduce. The storage cluster is integrated with the integrator component and controlled by the Resource Manager. When applications running in the grid environment need to process large volumes of data, the datasets are passed on to the cluster through the Execution Manager.

The storage cluster processes the data with MapReduce and stores it. Similarly, if grid jobs running over the cloud need to process huge volumes, the data can be processed through the cluster with the aid of the Execution Manager. In a conventional grid, jobs are executed on physical nodes, which gives higher performance than grid jobs executed on the cloud over virtual machines; to enhance the performance of the integrated environment, the storage processing is therefore done in a dedicated cluster.

B. Semantics based Integrator Component

The architecture of the semantic integrator is provided in Figure 1. A user request is usually processed by the cloud controller (CC). The CC checks for the available hardware and software resources; conventionally, if there is no match the request is not completed and is dropped. In our architecture, the user request is converted to an ontology specific format by the ontology converter component. The platform stacks available in the repository are likewise converted into the ontology specific format by the ontology converter and stored in the ontology store, a database that holds the semantic information of the platform stacks available in the cloud environment. The request handler handles the user request; we have included the match-making algorithm in the request handler component. The algorithm matches the user request with the available resources and produces an available match. The result is sent to the cloud controller, which provisions the request.
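Purely as an illustration of accepting an alternate match (for example Fedora in place of Red Hat, as discussed earlier) when no exact match exists, the sketch below uses a hypothetical similarity table; it is not the ontology or the algorithm of [13].

```python
# Hypothetical, simplified match-making: exact match first, then an
# ontology-like table of acceptable alternates with a similarity score.
ALTERNATES = {
    "redhat": {"fedora": 0.9, "centos": 0.95},
    "ubuntu": {"debian": 0.9},
}

def match_platform(requested_os, available_os, threshold=0.8):
    """Return the best available platform for the request, or None."""
    if requested_os in available_os:
        return requested_os                      # exact match
    best = None
    for os_name, score in ALTERNATES.get(requested_os, {}).items():
        if os_name in available_os and score >= threshold:
            if best is None or score > ALTERNATES[requested_os][best]:
                best = os_name                   # best-scoring alternate match
    return best

print(match_platform("redhat", {"fedora", "ubuntu"}))   # -> fedora
```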

Figure 1. Semantics based Integrator in the Grid-Cloud integrated environment

All user jobs, irrespective of whether they are grid jobs or cloud jobs, are submitted through the job submission portal, which stores them in the job pool queue. The grid middleware and the cloud controller periodically update the resource availability information in the resource pool. The ontology converter fetches the job requirement specification from the queue and converts it to the ontology specific format. The resource specification available in the resource pool is likewise converted to the ontology specific format and stored in the ontology repository managed by the ontology converter.

We have included a reservation handler that handles grid reservations. When a resource match is obtained for a job request, the request handler checks the match with the reservation handler to ensure that there is no advance reservation of that resource for that particular period of time. If the resource is reserved, this information is passed back to the request handler and another match is performed in search of other resources.
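A minimal sketch of the advance-reservation check follows, with a hypothetical in-memory table of reservation windows; the real handler would query the grid's reservation service instead.

```python
from datetime import datetime

# reservations: resource id -> list of (start, end) reservation windows
reservations = {
    "cluster-a/node-3": [(datetime(2011, 7, 26, 9), datetime(2011, 7, 26, 12))],
}

def is_free(resource_id, start, end):
    """True if the resource has no advance reservation overlapping [start, end)."""
    for r_start, r_end in reservations.get(resource_id, []):
        if start < r_end and r_start < end:      # interval overlap test
            return False
    return True

# If the matched resource is reserved, the request handler searches again.
print(is_free("cluster-a/node-3",
              datetime(2011, 7, 26, 10), datetime(2011, 7, 26, 11)))  # False
```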

IV. IMPLEMENTATION

The grid environment is implemented with the Debian Linux operating system, Torque as the local scheduler, the Globus Toolkit [10] as the middleware, and GridWay as the meta scheduler. The cloud environment is established with Eucalyptus [11] as the cloud middleware, Xen as the hypervisor, and libvirt for the hypervisor to communicate with the middleware. The storage cluster is implemented with Hadoop MapReduce [12].
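As an illustrative fragment only (assuming a local Xen host reachable at the default libvirt URI; the surrounding logic is ours, not part of the paper's implementation), the node controller's view of the running virtual machines can be obtained through the libvirt Python bindings:

```python
import libvirt  # Python bindings for the libvirt virtualization API

# Minimal sketch: connect to the local Xen hypervisor and list the virtual
# machines currently running so their resource usage can be reported.
conn = libvirt.open("xen:///")
if conn is None:
    raise RuntimeError("failed to connect to the hypervisor")

for dom_id in conn.listDomainsID():              # IDs of the running domains
    dom = conn.lookupByID(dom_id)
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(f"{dom.name()}: {vcpus} vCPUs, {mem_kib // 1024} MiB in use")

conn.close()
```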

Hadoop is an open source distributed processing framework that allows computation over large datasets by splitting a dataset into manageable chunks, spreading them across a fleet of machines, and managing the overall process: launching jobs, processing each job wherever the data is physically located and, at the end, aggregating the job output into a final result.
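For illustration, a MapReduce job of the kind run on the storage cluster can be expressed as a pair of Hadoop Streaming scripts; the word-count style mapper and reducer below are generic examples under that assumption, not the application used in our experiments.

```python
#!/usr/bin/env python
# mapper.py -- Hadoop Streaming mapper: emit one (word, 1) pair per token.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python
# reducer.py -- Hadoop Streaming reducer: sum the counts for each word.
# Streaming delivers keys grouped (sorted), so a running total per key suffices.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        count += int(value)
    else:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, int(value)
if current_word is not None:
    print(f"{current_word}\t{count}")
```

Such scripts would be launched through the Hadoop Streaming jar shipped with the Hadoop distribution, passing the two files as the -mapper and -reducer arguments.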

When implementing the grid environment with the Globus Toolkit over virtual machines on the cloud, we found considerable delay in executing the installation.

Eucalyptus uses Xen or KVM for virtualization, which provides scalability and isolation. Utilizing the SOAP interface provided by default offers better security because it implements the WS-Security standard with an asymmetric RSA key pair.

The ontology is described using Protégé, and SNOBASE is used for storing the ontology. The integrator component is implemented with Java and Linux shell scripts.

V. RESULTS AND DISCUSSION

The following observations in various scenarios show the performance of the proposed system.

TABLE I. NO OF VM AND LOAD

  No of VMs    Load in %
  2            62
  4            69
  6            76
  8            85
  10           96

TABLE II. NO OF QUERIES, VM AND LOAD

  No of Queries    No of VMs    Load in %
  1                3            65
  5                3            72
  10               3            79
  20               3            86
  40               3            95

TABLE III. NO OF VM, QUERY AND DELAY

  No of VMs in Grid    No of Queries    Delay in Access in Sec
  1                    2                2
  5                    4                2.75
  10                   8                4.5
  15                   16               8.5
  20                   32               13

It is observed that on the cloud head node the load increases gradually as the number of grid instances running on the cloud increases, as shown in Table I. Table II provides the statistics of the head node load for various numbers of queries to the grid instances over the cloud; the load increases only minimally as the number of queries to the grid instances grows, even for a constant number of VMs. Table III shows that as the number of cloud instances running as VMs in the grid increases along with the number of queries, the delay in accessing the VMs or the applications on the VMs rises rapidly. The increase in load is predicted to be due to memory leakage in the cloud controller or cluster controller component of Eucalyptus.

TABLE IV. LOAD AT CLOUD PHYSICAL NODE

  No of VMs    Total No. of Queries    Load
  1            1                       41
  2            10                      45
  3            20                      53
  4            40                      61
  6            80                      82
  8            115                     95

TABLE V. LOAD IN GRID VIRTUAL MACHINE OVER CLOUD

  No of Queries    Load
  0                55
  1                60
  5                68
  10               80
  15               97

However, it is observed that at the grid head node or at the grid execution node, the load does not rise rapidly when the number of VMs or of queries/transactions to the cloud instances running over the grid increases. This does not affect the overall performance of the grid environment.

A sample application is hosted on the cloud instance running over the grid environment. Table VI shows the test report and Figure 3 shows the performance of the application over the VM.

Figure 3. Performance of the application over a VM on the grid

TABLE VI. APPLICATION AND DATABASE ACCESS FROM GRID

In our implementation, migration of cloud instances to the grid and of grid jobs to the cloud is done manually when the resource manager raises an alert to contact the counterpart. However, this can be automated by scripting on the resource manager of the integrator.

Execution of datasets over the Hadoop cluster from the virtual machines in both the grid and the cloud environment provides considerably better performance. In our implementation, all three components, viz. the grid, the cloud and the Hadoop cluster, lie in the same network domain; the performance may vary when the three are geographically distributed and operating in different network domains.

It has also been observed that maximizing the load of the cloud controller in Eucalyptus adds overhead in the cloud, causing delay. A further execution delay occurs when executing jobs on a grid cluster over the cloud compared to executing the same jobs directly on the grid.

The cloud instances with a cluster controller executed on the grid can be accessed easily from the cloud controller of the Eucalyptus cloud, but the load of the cloud controller rises rapidly due to the memory leakage problem in the Eucalyptus cloud controller.

By fine-tuning the cloud middleware and tools, or by enhancing the integrator, these delays and overheads can be reduced, which is then expected to give better performance and improved quality of service.

VI. CONCLUSION AND FUTURE WORK

The integration of grid with private cloud offers effective utilization of resources over a secured environment. It raises the issue that in the grid environment jobs are executed on physical machines, whereas in the cloud the same jobs are executed on virtual machines, which may lead to performance degradation. This is addressed by integrating the storage cluster, so that when jobs need to process bulk volumes of data, such data are processed in the storage cluster, avoiding a serious compromise in performance. This also helps towards green ICT.

Future work includes enhancing the network with a P2P architecture for better performance, establishing a single sign-on security mechanism, and automated bundling and instant provisioning of grid images.



REFERENCES

[1] Song Wu, Wei Zhu, Wenbin Jiang, Hai Jin, "VMGrid: A Virtual Machine Supported Grid Middleware", The 2008 IEEE International Conference on Granular Computing (GrC 2008), China, 2008.

[2] Paul Marshall, Kate Keahey and Tim Freeman, “Elastic Site Using Clouds to Elastically Extend Site Resources”, 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing (CCGRID 2010), Australia, 2010.

[3] Elton Mathias and Françoise Baude, “Multi-domain Grid/Cloud Computing Through a Hierarchical Component-Based Middleware”, 8th International Workshop on Middleware for Grids, Clouds and e-Science (MGC 2010), India, 2010.

[4] David Wentzlaff, Charles Gruenwald III, Nathan Beckmann, “An Operating System for Multicore and Clouds: Mechanisms and Implementation”, ACM Symposium on Cloud Computing (SoCC’10), Indianapolis, 2010.

[5] C. Hoffa, G. Mehta, T. Freeman, E. Deelman, K. Keahey, B. Berriman, and J. Good, “On the use of cloud computing for scientific workflows”, 4th IEEE International Conference on e-Science (e-SCIENCE ’08), USA, 2008.

[6] C. Zhang, H. D. Sterck, “Cloudwf: A computational workflow system for clouds based on hadoop”, The 1st International Conference on Cloud Computing (CloudCom 2009), China, 2009.

[7] Alpana Rajan, Anil Rawat, Rajesh Kumar Verma, “Virtual Computing Grid using Resource Pooling”, 2008 International Conference on Information Technology, USA, 2008.

[8] Wajeeha Khalil, Erich Schikuta, “Towards a Virtual Organization for Computational Intelligence”, The Fourth International Conference on Digital Society (ICDS 2010), Netherlands, 2010.

[9] A.J. Rubio-Montero, E. Huedo, R.S. Montero, I.M. Llorente, "Management of Virtual Machines on Globus Grids Using GridWay", 21st IEEE International Parallel and Distributed Processing Symposium (IPDPS 2007), USA, 2007.

[10] Ian Foster, “Globus Toolkit Version 4: Software for Service-Oriented Systems”, J. Comput. Sci & Technol. July 2006.

[11] Daniel Nurmi, Rich Wolski, Chris Grzegorczyk, Graziano Obertelli, Sunil Soman, Lamia Youseff, Dmitrii Zagorodnov, "The Eucalyptus Open-source Cloud-computing System", 9th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGRID '09), USA, 2009.

[12] Garhan Attebury et al, “Hadoop Distributed File System for the Grid”, IEEE Nuclear Science Symposium and Medical Imaging Conference, USA, 2009.

[13] Kailash Selvaraj, Neela Narayanan Venkatraman, Dr.Saswati Mukherjee, “Semantics based Computational Resource Broker for Grid”, Second Asia International Conference on Modelling & Simulation (AMS 2008), Malaysia, 2008.

[14] Nair et al., "Towards Secure Cloud bursting and Aggregation", 8th European Conference on Web Services (ECOWS 2010), Cyprus, 2010.
