© 2016, IJARCSSE All Rights Reserved Page | 340

    Volume 6, Issue 6, June 2016 ISSN: 2277 128X

    International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com

Differential Evolution Based Optimal Task Scheduling in Cloud Computing

Anshul Ahluwalia*
Research Scholar, M.Tech Computer Science
JMIT, Kurukshetra University, Haryana, India

Vivek Sharma
Associate Professor, Department of Computer Engg.
JMIT, Kurukshetra University, Haryana, India

Abstract: Cloud computing has emerged as a new mode of service delivery in computer science. This paper starts from the definition of cloud computing and reviews its key technologies and characteristics. In both literature and practice, cloud computing must serve large groups of users, vast numbers of tasks, and massive data, so task scheduling is a significant concern. How to schedule tasks efficiently has become an important problem to be solved in the field of cloud computing. For the programming framework of cloud computing, a Differential Evolution (DE) algorithm can produce an efficiently optimized task schedule: owing to the fast convergence of DE, the average make-span (completion time) of the resulting schedules is far better than that of the existing Particle Swarm Optimization (PSO) algorithm. The simulation results of this algorithm are compared with other algorithms, and the analysis shows that DE outperforms the existing PSO algorithm and can be used as an efficient task scheduling algorithm in a cloud computing environment.

Keywords: Cloud Computing, Task Scheduling Algorithm, Differential Evolution Algorithm, DE, Optimization, Scheduling Types, Particle Swarm Optimization (PSO).

I. INTRODUCTION
Cloud computing is mainly the delivery of computing services over the Internet. Cloud services allow individuals and businesses to use software and hardware that are managed by third parties at remote locations. Examples of cloud services include online file storage, social networking sites, web-mail, and online business applications. [10] The cloud computing model provides open access to information and computer resources from anywhere a network connection is available. [1] Cloud computing provides a shared pool of resources, including data storage space, networks, computer processing power, and specialized corporate and user applications. Cloud computing architecture has two main components, generally known as the front end and the back end. The front end is the part of the cloud visible to the user of the cloud: the applications and the computer through which the user accesses it. The back end is the cloud itself, which mainly comprises computers and storage devices. [2]

    Fig.1 Layered View of Cloud Architecture


Ahluwalia et al., International Journal of Advanced Research in Computer Science and Software Engineering 6(6), June 2016, pp. 340-347

Cloud computing is Internet-based computing that delivers to users Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). These services are mostly made available to customers on a subscription, pay-per-demand basis, regardless of their location. IaaS is the lowest layer of a cloud computing system; it provides virtualized resources such as computation, communication, and storage on demand. PaaS makes the cloud easily programmable. SaaS can be described as a software delivery model. [8]

Cloud Scheduling: Traditional cloud scheduling algorithms typically aim to minimize the time and cost of processing all scheduled tasks. In a cloud computing environment, however, computing capability and resource-usage cost vary across resources, so it is important to take cost into consideration. A scheduling algorithm is implemented by programmers to schedule the task with the maximum estimated gain or profit and to execute the tasks in the queue.

Need for Cloud Scheduling: In cloud computing, users may utilize hundreds of thousands of virtualized resources, so it is impossible to allocate each task manually. Due to commercialization and virtualization, cloud computing leaves the complexity of task scheduling to the virtual machine layer, which utilizes resources virtually. Hence, to assign resources to each task efficiently and effectively, scheduling plays an important role in cloud computing. [7]

Types of Cloud Scheduling: The different types of cloud scheduling are:

User Level Scheduling: User level scheduling comprises market-based and auction-based scheduling. FIFO, priority-based, and non-pre-emptive scheduling are used at the user level.

Cloud Service Scheduling: Cloud service scheduling is classified at the user level and the system level. At the user level, it mainly considers service problems between the provider and the user. At the system level, scheduling and resource management are done; in addition to real-time satisfaction, fault tolerance, reliability, resource sharing, and QoS parameters are also taken into consideration. [12]

Static and Dynamic Scheduling: Static scheduling permits pre-fetching of required data and pipelining of distinct stages of task execution, and imposes minimal runtime overhead. In dynamic scheduling, information about the job components or tasks is not known beforehand; thus the execution time of a task may not be known, and tasks are allocated only as the application executes.

Heuristic Scheduling: In a cloud environment, heuristic-based scheduling can be done to optimize results; more accurate results can be obtained by heuristic methods. [4]

Workflow Scheduling: Workflow scheduling is done for the management of workflow execution.

Real-Time Scheduling: Real-time scheduling in a cloud environment is done to increase throughput and to decrease average response time, rather than to meet deadlines.

Performance Parameters of Cloud: Various cloud scheduling parameters are discussed below:

Makespan (completion time of a schedule): the time between the start and finish of a sequence of scheduled tasks.

Resource Cost: Resource cost in a cloud is determined by resource capacity and occupied time; more powerful resources lead to higher cost.

Scalability: the capability to cope and perform with an increased workload and the ability to add resources effectively.

Reliability: the ability of a system to continue working even in case of failure.

Resource Utilization: this parameter defines how effectively the system utilizes its resources.
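The makespan parameter above can be illustrated with a short sketch. The task lengths (in million instructions) and VM speeds (in MIPS) are hypothetical illustration values, not taken from this paper's simulation:

```python
# Makespan of a schedule: the finish time of the busiest virtual machine.
def makespan(assignment, task_lengths, vm_speeds):
    """assignment[i] = index of the VM that runs task i."""
    finish = [0.0] * len(vm_speeds)            # accumulated busy time per VM
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_speeds[vm]
    return max(finish)                         # schedule ends when the last VM does

tasks = [400, 600, 200, 800]                   # hypothetical lengths (MI)
vms = [100, 200]                               # hypothetical speeds (MIPS)
print(makespan([0, 1, 0, 1], tasks, vms))      # prints 7.0 (VM0 busy 6.0, VM1 busy 7.0)
```

A scheduler that minimizes this value is exactly what the DE optimizer described later searches for.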

II. RELATED WORK
A. I. Awad and N. A. El-Hefnawy [1] presented a task scheduler based on Load Balancing Mutation Particle Swarm Optimization (LBMPSO) that minimizes cost, round-trip time, execution time, and transmission time, achieves load balancing between tasks and virtual machines, and was shown to improve the reliability of cloud computing. D. I. George Amalarethinam [2] presented a CFCSC algorithm that balances the load and minimizes the total monetary cost as compared to the HEFT algorithm; CFCSC saves cost by using a simpler cost function. Antony Thomas [3] formulated the makespan factor and tested it upon three cloud scheduling scenarios, i.e. length of task, task priority, and both cloudlet priority and cloudlet length; as per the results, the makespan decreased after a certain number of tasks. Mala Kalra [4] surveyed the application of metaheuristic techniques to scheduling in cloud and grid environments; metaheuristic techniques are usually slower than deterministic algorithms and the generated solutions may not be optimal, so research is directed towards improving convergence speed and solution quality. Atul Vikas Lakra [5] proposed a multi-objective task scheduling algorithm for mapping tasks to VMs in order to improve the throughput of the data center and reduce cost without violating the Service Level Agreement (SLA) for an application in a cloud SaaS environment; the proposed algorithm provides an optimal scheduling method. Alexander Visheratin [6] investigated the applicability of a previously developed metaheuristic, the co-evolutional genetic algorithm (CGA), for scheduling series of workflows with hard deadline constraints. Brototi Mondal [7] formulated a soft-computing-based load balancing approach in which a local optimization method, stochastic hill climbing, is used to allocate incoming jobs to servers or virtual machines (VMs); the performance of the algorithm is analyzed both qualitatively and quantitatively using CloudAnalyst. Kook-Hyun Choi [8] proposed research on the calculation of SaaS hardware capacity, the foundation of cloud services, as basic research required to expand into various projects within the cloud. N. Ch. S. N. Iyengar [9] proposed a mechanism whose significance lies in detecting overload threats at earlier layers; constant monitoring and a stringent protocol setup for incoming traffic strengthen the mechanism against several kinds of overload threats, and based on the traffic pattern of incoming requestors, the vulnerability is observed and outwitted at various layers. Kook-Hyun Choi [10] researched a hardware capacity calculation technique for SaaS, the foundation of cloud services, as key research needed to expand into a variety of projects within the cloud in the future; in this research, the method and criteria for capacity calculation of SaaS-based hardware are presented.

III. PROPOSED APPROACH
Task scheduling in cloud computing is implemented using the Differential Evolution algorithm, whose defining property of fast convergence makes it faster than earlier algorithms: it decreases the makespan when working on a given number of tasks and virtual machines.

Differential Evolution Algorithm: Differential Evolution (DE) is an extremely powerful algorithm from evolutionary computation, owing to its excellent convergence characteristics and few control parameters; it is an efficient optimization approach.

Storn and Price invented the Differential Evolution algorithm in 1995. Differential Evolution is a simple yet very efficient optimization approach for solving a variety of task scheduling problems with many real-world applications.

Differential Evolution, together with evolution strategies (ESs) and evolutionary programming (EP), can be categorized into the class of population-based, derivative-free methods known as evolutionary algorithms.

All these approaches mimic Darwinian evolution and evolve a population of individuals from one generation to another through analogous evolutionary operators such as mutation, crossover, and selection.

IV. DIFFERENTIAL EVOLUTION OPTIMIZATION PROCESS
Differential Evolution is renowned for solving real-valued problems based on the principles of natural evolution, using a population P of Np floating-point-encoded individuals, Eq. (1), that evolve over G generations to finally reach an optimal solution. In the Differential Evolution algorithm, the size of the population remains constant throughout the optimization process. Each individual, or candidate solution, is a vector that contains as many parameters as the problem has decision variables D, Eq. (2). The basic idea is to use the difference of two randomly selected parameter vectors as the source of random variations for a third parameter vector. A more rigorous description of this optimization method follows.

P(G) = [ Y1(G), Y2(G), ..., YNp(G) ]    Eq.(1)

Yi(G) = [ X1,i(G), X2,i(G), ..., XD,i(G) ],  i = 1, 2, ..., Np    Eq.(2)

Extracting distance and direction information from the population to generate random deviations results in an adaptive scheme with excellent fast-convergence properties. Differential Evolution creates new offspring by generating a noisy replica of each individual of the population. If the replica (trial vector) performs better than the parent vector (target), it advances to the next generation.

This optimal task scheduling process is carried out with three basic operations:

Mutation
Cross-over
Selection

First, the mutation operation creates mutant vectors by perturbing each target vector with the weighted difference of two other randomly selected individuals. Then the crossover operation generates trial vectors by mixing the parameters of the mutant vectors with those of the target vectors, in accordance with a selected probability distribution. In the final step, the selection operator produces the next-generation population by choosing, between each pair of corresponding target and trial vectors, the one that better fits the objective function.

A. Initialization
The first step in the DE optimization process is to create an initial population of candidate solutions by assigning random values to each decision parameter of each individual of the population. These values must lie within the feasible bounds of the decision variables and can be generated by Eq. (3). When a preliminary solution is available, the initial population is often generated by adding normally distributed random deviations to that nominal solution.

Yi,j(0) = Yj,min + ηj ( Yj,max − Yj,min )    Eq.(3)

i = 1, 2, ..., Np ;  j = 1, 2, ..., D

where Yj,min and Yj,max are, respectively, the lower and upper bounds of the j-th decision parameter, and ηj is a uniformly distributed random number within [0, 1] generated anew for each value of j.
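Eq. (3) can be sketched as follows (a minimal illustration using Python's standard random module; the population size, dimension, and bounds are arbitrary example values, not from the paper):

```python
import random

def init_population(Np, D, lower, upper, seed=None):
    """Eq. (3): Y[i][j] = Yj_min + eta_j * (Yj_max - Yj_min),
    with eta_j drawn uniformly from [0, 1] anew for each j."""
    rng = random.Random(seed)
    return [[lower[j] + rng.random() * (upper[j] - lower[j]) for j in range(D)]
            for _ in range(Np)]

pop = init_population(Np=10, D=3, lower=[0, 0, 0], upper=[5, 5, 5], seed=1)
assert all(0 <= x <= 5 for ind in pop for x in ind)   # within feasible bounds
```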


B. Mutation
Once the population is initialized, it evolves through the operators of mutation, crossover, and selection. Distinct strategies for crossover and mutation are used by programmers; the basic scheme is explained here in detail. The mutation operator is in charge of introducing new parameters into the population. To achieve this, the mutation operator creates mutant vectors by perturbing a randomly selected vector (Ya) with the difference of two other randomly selected vectors (Yb and Yc), according to Eq. (4). All of these vectors must be different from each other, which requires the population to contain at least four individuals. To control the perturbation and further improve convergence, the difference vector is scaled by a user-defined constant in the range [0, 1.2], commonly known as the scaling constant (S).

Vi(G) = Ya(G) + S ( Yb(G) − Yc(G) )    Eq.(4)

i = 1, 2, ..., Np

where Ya, Yb, and Yc are vectors chosen randomly from the population, with indices a, b, c ∈ {1, 2, ..., Np} mutually distinct and different from i; they are generated anew for each parent vector, and S is the scaling constant. For some problems, a = i is used.
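Eq. (4) can be sketched as follows (a minimal illustration; the population is a plain list of float vectors, and the base index a is drawn randomly rather than set to i):

```python
import random

def mutate(pop, i, S=0.8, rng=random):
    """Eq. (4): mutant V_i = Y_a + S * (Y_b - Y_c), with a, b, c
    distinct indices different from i (needs at least 4 individuals)."""
    a, b, c = rng.sample([k for k in range(len(pop)) if k != i], 3)
    D = len(pop[i])
    return [pop[a][j] + S * (pop[b][j] - pop[c][j]) for j in range(D)]

# With a 4-individual population of 1-D vectors and S = 1, the mutant
# value is always a + b - c for some permutation of {1, 2, 3}:
pop = [[0.0], [1.0], [2.0], [3.0]]
v = mutate(pop, 0, S=1.0, rng=random.Random(7))
assert v[0] in (0.0, 2.0, 4.0)
```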

C. Crossover
The crossover operator creates the series of trial vectors, which are further used in the process of selection. A trial vector is a combination of a parent (target) vector and a mutant vector. For each parameter, a random number, drawn from a chosen distribution (such as the exponential, binomial, or uniform distribution) in the range [0, 1], is compared against a user-defined constant referred to as the crossover constant. If the random number is less than or equal to the crossover constant, the parameter is taken from the mutant vector; otherwise the parameter comes from the parent vector, as given in Eq. (5). The crossover operation maintains diversity in the population, preventing convergence to local minima. The crossover constant (CR) should be in the range [0, 1].

A crossover constant of one means that the trial vector will be composed entirely of mutant vector parameters. A crossover constant near zero results in a higher probability of having parameters from the target vector in the trial vector. Even if the crossover constant is set to zero, one randomly chosen parameter is always taken from the mutant vector, ensuring that the trial vector gets at least one parameter from it.

Ui,j(G) = { Vi,j(G)   if ηj ≤ CR or j = q
         { Xi,j(G)   otherwise              Eq.(5)

where i = 1, 2, ..., Np and j = 1, 2, ..., D
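The binomial crossover of Eq. (5) can be sketched as follows (a minimal illustration; q is the randomly chosen index that guarantees at least one mutant parameter):

```python
import random

def crossover_binomial(target, mutant, CR=0.5, rng=random):
    """Eq. (5): take the mutant's parameter when eta_j <= CR or j == q;
    index q guarantees at least one parameter comes from the mutant."""
    D = len(target)
    q = rng.randrange(D)
    return [mutant[j] if (rng.random() <= CR or j == q) else target[j]
            for j in range(D)]

# CR = 1 copies the whole mutant; CR = 0 still passes one mutant parameter.
assert crossover_binomial([1] * 4, [9] * 4, CR=1.0, rng=random.Random(1)) == [9] * 4
assert crossover_binomial([1] * 4, [9] * 4, CR=0.0, rng=random.Random(1)).count(9) >= 1
```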

Fig.2 Scheme for Generating two Trial Parameters


q is a randomly selected index from {1, 2, ..., D} that guarantees the trial vector gets at least one parameter from the mutant vector, and ηj is a uniformly distributed random number in [0, 1] generated anew for each value of j. Here Xi,j(G) is the parent (target) vector, Vi,j(G) the mutant vector, and Ui,j(G) the trial vector. Another type of crossover scheme is given in Eq. (6), where the angle brackets ⟨·⟩D denote the modulo function with modulus D. The starting index n is a randomly chosen integer from the interval [0, D−1]. The integer L is drawn from the interval [0, D−1] with probability Pr(L = v) = (CR)^v. CR ∈ [0, 1] denotes the crossover probability and constitutes a control variable for the DE optimization technique. The random decisions for both n and L are made anew for each trial vector.

Ui,j(G) = { Vi,j(G)   for j = ⟨n⟩D, ⟨n+1⟩D, ..., ⟨n+L−1⟩D
         { Xi,j(G)   otherwise                               Eq.(6)
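A sketch of the exponential scheme of Eq. (6) (an illustration of the modulo-D run described above, under the common assumption that the run length L grows while successive uniform draws stay below CR):

```python
import random

def crossover_exponential(target, mutant, CR=0.5, rng=random):
    """Eq. (6): copy a run of L consecutive mutant parameters, starting at a
    random index n and wrapping around modulo D."""
    D = len(target)
    trial = list(target)
    n = rng.randrange(D)
    L = 0
    while True:
        trial[(n + L) % D] = mutant[(n + L) % D]
        L += 1
        if L >= D or rng.random() >= CR:     # stop: full wrap, or draw >= CR
            break
    return trial

assert crossover_exponential([0] * 4, [1] * 4, CR=1.0) == [1] * 4   # full copy
assert sum(crossover_exponential([0] * 4, [1] * 4, CR=0.0)) == 1    # single copy
```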

D. Selection
The selection operator selects the vectors that will compose the population in the next generation. This operator compares the fitness of the trial vector with the fitness of the corresponding target vector, and selects the one that performs better, as given in Eq. (7).

Yi(G+1) = { Ui(G)   if f( Ui(G) ) ≤ f( Yi(G) )
          { Yi(G)   otherwise                   Eq.(7)

where i = 1, 2, ..., Np

The selection process is repeated for each pair of target/trial vectors until the population for the next generation is complete.
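Eq. (7) reduces to a one-line comparison (a toy fitness, the vector sum, stands in here for the actual makespan objective):

```python
def select(target, trial, fitness):
    """Eq. (7): the vector with the better (lower) fitness survives to G+1."""
    return trial if fitness(trial) <= fitness(target) else target

fitness = sum                                      # toy objective: minimise the sum
assert select([3, 3], [1, 1], fitness) == [1, 1]   # trial wins
assert select([1, 1], [3, 3], fitness) == [1, 1]   # target survives
```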

Basic Steps in the Differential Evolution Algorithm: The steps of the differential evolution algorithm, illustrated in Fig. 3, are, in procedural order:

1. Sample the search space at multiple, randomly chosen initial points, i.e. a population of individual vectors. Differential Evolution is a derivative-free continuous function optimizer: it encodes the parameters as floating-point numbers and manipulates them with simple arithmetic operations.
2. Mutate each (parent) vector in the population with a scaled difference of other randomly selected individual vectors.
3. Cross the resultant mutant vector over with the corresponding parent vector to generate a trial, or offspring, vector.
4. Finally, decide in a one-to-one selection process between each pair of offspring and parent vectors; the one with the better fitness value survives and enters the next generation.

Fig.3 Steps of Differential Evolution Algorithm

This procedure repeats for each parent vector, and the survivors of all parent-offspring pairs become the parents of a new generation in the evolutionary search cycle. The evolutionary search stops when the algorithm converges to the true optimum or when a certain termination criterion, such as a maximum number of generations, is reached.
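The four steps above can be combined into a compact DE task scheduler sketch. The task lengths, VM speeds, control parameters, and the decoding of a real-valued vector into VM indices (truncation modulo the number of VMs) are illustrative assumptions, not the paper's exact implementation:

```python
import random

def makespan(assignment, lengths, speeds):
    """Finish time of the busiest VM for a task-to-VM assignment."""
    finish = [0.0] * len(speeds)
    for t, vm in enumerate(assignment):
        finish[vm] += lengths[t] / speeds[vm]
    return max(finish)

def de_schedule(lengths, speeds, Np=20, S=0.8, CR=0.5, gens=100, seed=0):
    rng = random.Random(seed)
    D, M = len(lengths), len(speeds)
    decode = lambda y: [int(x) % M for x in y]       # real vector -> VM indices
    fitness = lambda y: makespan(decode(y), lengths, speeds)
    # Step 1: initialization (Eq. 3)
    pop = [[rng.uniform(0, M) for _ in range(D)] for _ in range(Np)]
    for _ in range(gens):
        for i in range(Np):
            # Step 2: mutation (Eq. 4)
            a, b, c = rng.sample([k for k in range(Np) if k != i], 3)
            mutant = [pop[a][j] + S * (pop[b][j] - pop[c][j]) for j in range(D)]
            # Step 3: binomial crossover (Eq. 5)
            q = rng.randrange(D)
            trial = [mutant[j] if (rng.random() <= CR or j == q) else pop[i][j]
                     for j in range(D)]
            # Step 4: selection (Eq. 7)
            if fitness(trial) <= fitness(pop[i]):
                pop[i] = trial
    best = min(pop, key=fitness)
    return decode(best), fitness(best)

sched, ms = de_schedule([400, 600, 200, 800], [100, 200])
print(sched, ms)
```

Because the selection step only ever replaces an individual with an equal-or-better trial vector, the best makespan found never worsens from one generation to the next.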


V. EXPERIMENTAL RESULTS
To demonstrate the power of DE for the task scheduling problem, the proposed algorithm is compared with the PSO algorithm. In a cloud environment there may be hundreds of thousands of tasks, whose numbers and parameters differ and which therefore have different execution times.

Table.1 Comparison of PSO and DE algorithm on the basis of makespan value with respect to task parameter range.

The simulation has been performed for different numbers of tasks with the same virtual machines, executing multiple times for each number of tasks. Table 1 describes the average makespan with respect to task parameter values, which continuously decreases for DE as compared with PSO.

    Fig.4 Comparison of makespan value with respect to Attempts.

The decrease in makespan is due to the decreasing nature of the task parameter values in the DE algorithm. The graph in Fig. 4 shows the makespan value over successive attempts, with a static number of virtual machines and tasks. Initially the makespan value changes rapidly for both PSO and DE, but as the iterations proceed, the makespan constantly decreases in the case of DE. The simulation has been performed multiple times for the same number of tasks in order to obtain more accuracy in terms of makespan.

Fig.5 Comparison of PSO and DE algorithm on the basis of average makespan value with respect to varied number of tasks on 10 attempts

[Charts: Fig.4 plots makespan (0-600) against attempts 1-11 with VM: 5 and tasks: 20; Fig.5 plots makespan (0-3000) against 30-250 tasks; each chart shows PSO and DE series.]


The simulation has been performed for different numbers of tasks. As shown in Fig. 5 above, the proposed DE algorithm is better than the PSO algorithm in terms of makespan. The graph shows that the DE algorithm gives optimal results when the VMs are the same and the tasks are randomly initialized. In all cases, the makespan calculated from DE lies below the makespan calculated from PSO.

    Fig.6 Comparison of PSO and DE algorithm on the basis of makespan value with respect to varied number of tasks

The graph in Fig. 6 shows the final makespan value achieved by the PSO and DE algorithms after the stopping criterion is satisfied. It is concluded that the results obtained from the DE algorithm have a better average makespan than those of the PSO algorithm. The statistical analysis in Table 1 above also shows that the makespan value calculated from the DE algorithm is better than that of the PSO algorithm.

VI. CONCLUSION
This paper concludes that in cloud computing the Differential Evolution (DE) algorithm, which deploys tasks based on millions of instructions, introduces a fast-convergence approach to task scheduling that results in an optimized schedule; the schedule generated by DE for cloud scheduling is optimal. The simulation demonstrates that the makespan calculated from DE is better than the makespan calculated from the Particle Swarm Optimization (PSO) algorithm. In cloud computing, a better makespan is possible only through proper allocation of resources, which is done by the DE scheduler. DE also increases the throughput for both the cloud provider and the client. It is also observed that increasing the number of tasks and virtual machines (VMs) decreases the complexity of DE, which is therefore faster and better. Hence, the issues of the PSO algorithm are resolved, as task scheduling with an efficient makespan can now be implemented in cloud computing by means of the DE algorithm. Due to DE's fast convergence, an optimal schedule within the time constraint makes it the better approach for cloud computing.

    REFERENCES

[1] A. I. Awad, N. A. El-Hefnawy, H. M. Abdel_kader, "Enhanced Particle Swarm Optimization for Task Scheduling in Cloud Computing Environment", Procedia Computer Science 65 (2015) 920-929, Elsevier.
[2] Antony Thomas, Krishnalal G, Jagathy Raj V P, "Credit Based Scheduling Algorithm in Cloud Computing Environment", Procedia Computer Science 46 (2015) 913-920, Elsevier.
[3] Mala Kalra, Sarbjeet Singh, "A review of metaheuristic scheduling techniques in cloud computing", Egyptian Informatics Journal (2015) 16, 275-295, Elsevier.
[4] Atul Vikas Lakra, Dharmendra Kumar Yadav, "Multi-Objective Tasks Scheduling Algorithm for Cloud Computing Throughput Optimization", Procedia Computer Science 48 (2015) 107-113, Elsevier.
[5] Nidhi Bansal, Amitab Maurya, Tarun Kumar, Manzeet Singh, Shruti Bansal, "Cost performance of QoS Driven task scheduling in cloud computing", Procedia Computer Science 57 (2015) 126-130, Elsevier.
[6] Alexander Visheratin, Mikhail Melnik, Nikolay Butakov, Denis Nasonov, "Hard-deadline constrained workflows scheduling using metaheuristic algorithms", Procedia Computer Science Vol. 66, 2015, Elsevier.
[7] Brototi Mondal, Kousik Dasgupta, Paramartha Dutta, "Load Balancing in Cloud Computing using Stochastic Hill Climbing - A Soft Computing Approach", Procedia Computer Science 4 (2012) 783-789, Elsevier.
[8] Huan Ma, Gaofeng Shen, Ming Chen, Jianwei Zhang, "Technologies based on Cloud Computing Technology", ISSN: 2287-1233, ASTL 2015.
[9] N. Ch. S. N. Iyengar, Gopinath Ganapathy, "An Effective Layered Load Balance Defensive Mechanism against DDoS Attacks in Cloud Computing Environment", ISSN: 1738-9976, IJSIA 2015.
[10] Kook-Hyun Choi, Yang-Ha Chun, Se-Jeong Park, Yongtae Shin, Jong-Bae Kim, "Method for Calculation of Cloud Computing Server Capacity", ISSN: 2287-1233, ASTL 2015.



[11] Kushang Parikh, Nagesh Hawanna, Haleema P. K., Jayasubalakshmi R., N. Ch. S. N. Iyengar, "Virtual Machine Allocation Policy in Cloud Computing Using CloudSim in Java", ISSN: 2005-4262, IJGDC 2015, SERSC.
[12] Ying Gao, Guang Yang, Yanglin Ma, Mu Lei, Jiajie Duan, "Research of Resource Scheduling Strategy in Cloud Computing", Vol. 8, ISSN: 2005-4262, IJGDC 2015.
[13] Mini Singh Ahuja, Randeep Kaur, Dinesh Kumar, "Trend Towards the Use of Complex Networks in Cloud Computing Environment", ISSN: 1738-9968, IJHIT 2015, SERSC.
[14] Hao Yuan, Changbing Li, Maokang Du, "Cellular Particle Swarm Scheduling Algorithm for Virtual Resource Scheduling of Cloud Computing", Vol. 8, No. 3 (2015), pp. 299-308, ISSN: 2005-4262, IJGDC.
[15] Ryan P. Taylor, Frank Berghaus, Franco Brasolin, Cristovao Jose Domingues Cordeiro, Ron Desmarais, Laurence Field, Ian Gable, "The Evolution of Cloud Computing in ATLAS", ATL-SOFT-PROC-2015-049.
[16] L. Sreenivasulu Reddy, V. Vasu, M. Usha Rani, "Scheduling Algorithm Applications to Solve Simple Problems in Diagnostic Related Health Care Centers", International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Volume-2, Issue-2, May 2012.
[17] Mohsin Nazir, Nitish Bhardwaj, Rahul Kumar Chawda, Raj Gaurav Mishra, "Cloud Computing: Current Research Challenges", in Cloud Computing: Reviews, Surveys, Tools, Techniques and Applications, an open-access eBook by HCTL Open.
[18] Qing Xie, Di Zhu, Yanzhi Wang, Massoud Pedram, Younghyun Kim, Naehyuck Chang, "An Efficient Scheduling Algorithm for Multiple Charge Migration Tasks in Hybrid Electrical Energy Storage Systems".
[19] Yogesh Sharma, Bahman Javadi, Weisheng Si, "On the Reliability and Energy Efficiency in Cloud Computing", Proceedings of the 13th Australasian Symposium on Parallel and Distributed Computing (AusPDC 2015), Sydney, Australia, 27-30 January 2015.
[20] Anju Bala, Anu, "Scrutinize: Fault Monitoring for Preventing System Failure in Cloud Computing", International Journal of Innovations & Advancement in Computer Science (IJIACS), ISSN: 2347-8616, Volume 4, Special Issue, May 2015.

