Monitoring And Load Balancing Inside Cloud Computing


To improve the efficiency of a cloud, the load of every computing node must be taken into account. A proper balance of the load among the nodes in a cloud leads to a better distribution of the leases addressed to the cloud, which in turn leads to faster execution of the leases and better performance of the cloud. To balance the load in a cloud, however, the resources of each node have to be monitored; monitoring is therefore essential to know the current state of the nodes, which is needed for load balancing. We propose a solution for monitoring the resources and balancing the load dynamically in a private cloud. Resources with higher priority are monitored more frequently. Monitoring is done at dynamic time intervals, based on a timer and on the change in the status of each node. The centralized and decentralized approaches are combined to produce efficient load balancing among the computing nodes inside a cloud and also among clouds.

Keywords: hybrid load balancing; monitoring; cloud computing; priority of resources

Biographical Notes: P. Varalakshmi is a Lecturer in the Department of Information Technology, MIT, Anna University. She has about 15 years of teaching experience and is pursuing research in the field of grid computing. She works in the areas of grid computing, cloud computing, compiler design and theory of computation.

1 Introduction

Though cloud computing was initially considered by many as an offshoot of grid computing, it has now captured the imagination of many and has overtaken its predecessor by leaps and bounds. A cloud is basically a cluster of nodes which provides services to users. To improve a cloud's performance, the load of every computing node must be considered. A proper balance of the load among the computing nodes leads to a better distribution of the leases addressed to the cloud, which in turn leads to faster execution of the jobs and better performance of the cloud.

Cloud computing is mainly used to provide services to the users as and when required. The users specify their needs as leases. In each lease they mention their requirements for memory, CPU, duration, etc. Based on these requirements, the head node of the cloud must allocate the required resources on time. To do this, the head node must have knowledge of the resource availability in each computing node present in its cloud. So the head node must monitor the computing nodes regularly and find out the resource availability in each computing node. Thus, monitoring plays a major role in cloud computing.

Generally, monitoring is done in one of two ways. The first is to monitor at regular intervals; this method is not suitable if the usage of the resources changes frequently. The second is to monitor the resources whenever a change occurs; its drawback is that monitoring is performed even for very small changes.

So we have proposed a hybrid solution that dynamically monitors the resources when the dynamic timer expires or when the change in the resources is above a dynamic threshold.
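
As a rough illustration of this hybrid trigger, the following sketch (in Python) shows how the two conditions could be combined. It is only a minimal sketch under our own naming assumptions (change, d_threshold, timer_expired), not the exact implementation.

    def should_announce(change, d_threshold, timer_expired):
        # Hybrid trigger: push the resource status either when the dynamic
        # timer has expired or when the observed change exceeds the dynamic
        # threshold. All names here are illustrative.
        return timer_expired or change > d_threshold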

Load balancing is done by the head node in each cloud based on the monitored resource availability in the computing nodes. The head node assigns the leases to the under loaded computing nodes in the cloud to balance the load. Thus load balancing within a cloud is essential for a better performance of the cloud in terms of resource utilization and faster response time of the users' requests.

Load balancing between clouds is also essential: when all the computing nodes in one cloud are fully loaded and there are idle computing nodes in a nearby cloud, the head node can transfer leases to the neighboring cloud in order to reduce the load. Migrated leases are executed faster than they would be in the already loaded cloud. So load balancing, both within a cloud and between clouds, is essential for the proper working of the cloud computing environment, and monitoring of the resources provides the basis for proper load balancing.

Generally, load balancing is done in two ways: static and dynamic. In static load balancing, the load is calculated and balanced based on prior information about the node and the jobs that are to be executed by that node. In dynamic load balancing, the load is calculated dynamically as jobs arrive at the node and is also balanced dynamically; no prior information is required. Dynamic load balancing is further divided into two approaches: centralized and decentralized. In the centralized approach, a central node decides whether the load has to be transferred. In the decentralized approach, there is no central node; all the nodes communicate and decide whether the load has to be transferred.

We have proposed a solution combining the centralized and decentralized approaches in dynamic load balancing. Our proposed hybrid load balancing algorithm deploys centralized load balancing within the cloud and decentralized load balancing between clouds.

2 Related Work

Cloud computing basically involves providing services to users as and when required. Load balancing in a cloud plays a vital role in its performance, and balancing the load efficiently requires accurate monitoring of the resources in each node. In grid computing, a monitoring protocol named "Announcing with Change and Time Consideration (ACTC)" is proposed in [15]; a push model is used to push updated information from the systems to the Mediator. A solution for selecting an optimal time interval for resource monitoring in the Grid is proposed in [11].

A solution for efficient distributed network monitoring, making use of a local manager in each cluster, is proposed in [16]. The local manager is responsible for monitoring all the nodes in the cluster. A priority-based solution for monitoring wireless grids is discussed in [7]: a priority is assigned to the resources in each node and, based on the priority assigned, the nodes are monitored at corresponding frequencies.

An HPC cluster monitoring system for obtaining status information from every node is proposed in [9]; the authors designed and implemented an architecture for effective monitoring. The WormTest status monitoring framework for the Grid is proposed in [6].

Two migration algorithms for decentralized load balancing in a Grid environment, MELISA (Modified ELISA) and LBA (Load Balancing on Arrival), are proposed in [10]. MELISA is used for large-scale systems and LBA for small-scale systems. In both algorithms, a buddy set is considered for each processor; the buddy set of a processor consists of the processors directly connected to it, and load balancing is carried out within the buddy set.

A load balancing model for the Grid environment that considers link utilization is described in [1]; the load is transferred to the destination node through the shortest route. A solution for balancing load in a tree-structured grid environment is suggested in [4], addressing the problems of balancing the load in a cluster by using a grid manager.

A comparative study of distributed load balancing algorithms in cloud computing is performed in [8], which concluded that the honey-bee algorithm was the best of the algorithms considered. A comparison of load balancing algorithms in a Grid environment is given in [5]. An efficient decentralized load balancing algorithm for the Grid, based on the proximity of the nodes, is proposed in [12].

In a virtualized cloud, virtual machines are created when leases arrive and deleted when their use is over. Dynamic scaling of web applications in a virtualized cloud computing environment is discussed in [14]. Virtual machines can be migrated live between the nodes in a cluster; an adaptive distributed load balancing algorithm based on live migration of virtual machines in the cloud is discussed in [17]. Relocatable virtual machine services on clouds are proposed in [13].

3 The Proposed Framework

Since the cloud is a vast collection of resources, monitoring the cloud is essential to improve its performance. In order to improve the performance of multiple nodes in a cloud, the workload is balanced among the nodes: some nodes may be heavily loaded while others may be idle. Workload distribution is therefore the problem of distributing the workload among physically dispersed nodes at run time, in such a way that a set of independent leases is spread uniformly across all the computing nodes of the cloud. A lease is basically a request given by a user specifying the CPU and memory requirements and the duration of the requirement.
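
To make the notion of a lease concrete, the following minimal sketch shows the information a lease would carry. The field names are our own assumptions and are not tied to any particular cloud toolkit's API.

    from dataclasses import dataclass

    @dataclass
    class Lease:
        # A user's request for resources, as described above; the field
        # names are illustrative only.
        cpu_cores: int     # number of CPU cores requested
        memory_mb: int     # amount of memory requested, in megabytes
        duration_s: int    # how long the resources are needed, in seconds

    # Example: a lease asking for 2 cores and 1 GB of memory for one hour.
    example_lease = Lease(cpu_cores=2, memory_mb=1024, duration_s=3600)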

This yields an improvement in system-side performance, such as improved throughput and minimized makespan, as well as an improvement in user satisfaction criteria, such as reduced waiting time and improved response time.

3.1 Cloud Computing Architecture

The block diagram for the cloud computing architecture is shown in Fig 1. A head node is present for each cluster. The head node monitors the processors in its cluster and stores the status information of each one of them. The status information of each processor consists of the load, the processing speed, the memory capacity and other resource availability.

Layer 1 consists of the compute nodes in each cluster and Layer 2 consists of the head nodes of the clusters. Virtual machines are created on the compute nodes, sized according to the capacity of the compute nodes, so that they satisfy the requirements of the users. For the hybrid load balancing algorithm, migration of virtual machines among the compute nodes within each cluster uses the centralized approach, while migration of virtual machines across clusters uses the decentralized approach.

Fig 1. Block Diagram for Cloud Computing (inter-cluster communication takes place among head nodes; intra-cluster communication takes place between the head node and its computing nodes)

Fig 2 explains the cloud computing architecture that is deployed in Open Nebula with our proposed system. The functionality of each of the blocks is explained below.

In the scheduler daemon, our algorithm for load balancing is embedded to balance the load among the computing nodes by scheduling the leases to the respective computing nodes.

The Information manager collects the status information necessary for monitoring the resources in each node.

The Information Manager Driver collects the information about the hosts from the Host Pool.

The Virtual Machine Manager (Xen) gives information about the current status of the virtual machines.

VMPool contains the list of available virtual machines in each computing node in the cloud. The Scheduler virtual machine is linked to the VMPool to manage the leases among the virtual machines present in the computing nodes.

The Transfer Manager is used to transfer the virtual machines between the hosts. Every computing node is a host. The virtual machines are migrated within the cloud and between clouds by the head node, to balance the load in the computing nodes.

mm_sched is used to schedule the leases to the computing nodes.

The Rank policy is used to prioritize the computing nodes based on resource availability.

The VMM Driver is used to create and manage virtual machine managers.

The VM Pool represents the collection of virtual machines.

The Host Pool represents the collection of hosts.

The Information Manager Driver is used to create and manage information managers.

Fig 2. Cloud Computing Architecture

4 Working Principle of the Architecture

The working principles of the proposed monitoring and load balancing mechanisms in the cloud environment are discussed in this section.

4.1 Monitoring In Cloud Computing

The head node monitors the resources of the computing nodes, such as the available memory and the load. An announcement refers to the pushing of the status information of the resources by a computing node to the head node. The status information of each processor is transferred at dynamic intervals based on two conditions.

One condition is based on the change in the resource value, compared against a dynamic threshold d_threshold, and the other is based on a timer. The dynamic threshold d_threshold is set for each resource dynamically on every change in the resource. If the change in the resource is greater than d_threshold, the updated status of that resource is pushed to the head node. Using Eq (1), the dynamic threshold is calculated for every resource in each computing node, and it is recalculated whenever the change in the resource exceeds the current threshold.

d_threshold = (1 / NA) Σ_{i=1}^{NA} ACi      (1)

where the ith announcement change, ACi = |Ai − Ai−1|, is the change in the resource value between successive announcements, NA is the number of announcements, and Ai denotes the resource value at the ith announcement.
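
Assuming Eq (1) takes the reconstructed form above (the average change between successive announcements), a small sketch of the calculation could look as follows; the function name and input are illustrative.

    def dynamic_threshold(announcements):
        # announcements: the resource values A_0, A_1, ..., A_NA in the
        # order in which they were pushed to the head node.
        changes = [abs(curr - prev)
                   for prev, curr in zip(announcements, announcements[1:])]
        if not changes:
            return 0.0
        # d_threshold taken as the mean announcement change AC_i (Eq (1)).
        return sum(changes) / len(changes)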

The second condition applies when the Timer expires: if the change is not greater than d_threshold, a minimum threshold min_threshold is considered, and if the change is more than min_threshold, the status information of that resource is pushed to the head node. The Timer is then reset to the Dynamic Time Interval, DTI. As in Eq (2), the DTI for a resource is calculated based on the number of changes in the resource (NC) and the time of occurrence Ti of each change.

DTI = (TNC − T0) / NC      (2)

where TNC is the time of occurrence of the NCth change, T0 is the time of occurrence of the zeroth change and Ti is the time of occurrence of the ith change in the resource.
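
Under the reconstructed form of Eq (2), the DTI is the average time between successive changes; a hedged sketch of this calculation, with our own function and variable names, is given below.

    def dynamic_time_interval(change_times):
        # change_times: [T_0, T_1, ..., T_NC], the times at which the
        # resource value changed; T_0 is the time of the initial announcement.
        nc = len(change_times) - 1          # number of changes, NC
        if nc <= 0:
            return float("inf")             # no changes observed yet
        # (T_NC - T_0) / NC: the mean interval between successive changes.
        return (change_times[-1] - change_times[0]) / nc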

If the Timer has not expired and the change is greater than min_threshold but less than d_threshold, then the priority of the resource is considered. The priority of each resource is set based on the amount of monitoring needed for that resource: if a change in a resource leads to a large change in the load of the node, that resource is given a high priority. Priority is classified as high or low. If the resource has a high priority, the status information is given to the head node even though the dynamic timer has not expired and the resource change is not greater than d_threshold. If the priority is low, the status information is given to the head node only if the dynamic timer expires or the resource change is greater than d_threshold. This is the Priority Based Dynamic Monitoring (PBDM) algorithm; a flowchart for the PBDM algorithm is given in Fig 4.
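
The decision logic described above can be summarised in the following sketch. It assumes the thresholds and timer of Eqs (1) and (2); the parameter names (timer_expired, high_priority, etc.) are our own and not part of the original algorithm listing.

    def pbdm_should_announce(change, d_threshold, min_threshold,
                             timer_expired, high_priority):
        # Priority Based Dynamic Monitoring: decide whether a computing node
        # should push the status of one resource to the head node.
        if change > d_threshold:
            return True                       # change above the dynamic threshold
        if timer_expired and change > min_threshold:
            return True                       # timer expired with a non-trivial change
        if high_priority and change > min_threshold:
            return True                       # high-priority resource announces early
        return False                          # otherwise wait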

4.2 Load Balancing In Cloud Computing

Based on the load of each computing node in the cluster, the average normalized load of the cloud (NLCavg) is calculated for each cloud by the head node using Eq (3).

-------(3)



where sk is the speed of computing node k in cluster A, Lk(T) is the load of computing node k at time T, and nA is the number of computing nodes in the cluster.
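
Assuming Eq (3) has the reconstructed per-node-normalised form above, the head node's computation of NLCavg could be sketched as follows; node_loads and node_speeds are illustrative inputs.

    def average_normalized_load(node_loads, node_speeds):
        # node_loads[k]  : L_k(T), the load of computing node k at time T
        # node_speeds[k] : s_k, the speed of computing node k
        normalized = [load / speed for load, speed in zip(node_loads, node_speeds)]
        return sum(normalized) / len(normalized)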

The leases for every cloud arrive at the head node, which assigns them to the computing nodes based on the current resource availability of each node. The head node finds a node in the cloud that can process the lease without getting overloaded. Thus centralized load balancing is done within the cloud; a sketch of this assignment step is given below.
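
A minimal sketch of this centralized step, reusing the hypothetical Lease fields introduced earlier and assuming a simple node status record (the keys 'free_cpu', 'free_mem' and 'load' are our own), might pick the least loaded node that can still accommodate the lease.

    def assign_lease(lease, nodes):
        # nodes: a list of status records, here dictionaries with illustrative
        # keys 'free_cpu', 'free_mem' and 'load' (number of hosted VMs).
        candidates = [n for n in nodes
                      if n["free_cpu"] >= lease.cpu_cores
                      and n["free_mem"] >= lease.memory_mb]
        if not candidates:
            return None                       # layer 1 is fully loaded
        # Prefer the node with the smallest current load.
        return min(candidates, key=lambda n: n["load"])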

The lease is sent by the head node to the computing node by encapsulating it in a virtual machine. Thus the load of each computing node basically refers to the number of virtual machines present in it (that is, the number of leases being executed at a time). The load of each computing node, and hence NLCavg, is calculated at the instants specified by the monitoring algorithm.

If no computing node in that cloud has the required resources for executing the lease, layer 1 is fully loaded and the lease is flooded to layer 2. The head node transfers the lease to its least loaded buddy cloud (that is, to one among the head nodes of the nearby clouds), and the virtual machine is then migrated to the least loaded computing node in that cloud. The migration cost is taken into account when selecting the buddy cloud. Thus decentralized load balancing is done across clouds; a sketch of this selection step is given below.
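
When layer 1 is full, the selection of a buddy cloud could be sketched as follows. The exact combination of NLCavg and migration cost shown here is an assumption, since the text only states that migration cost is taken into account; the keys 'nlc_avg' and 'migration_cost' are our own.

    def select_buddy_cloud(buddy_clouds):
        # buddy_clouds: a list of records for neighbouring clouds, here
        # dictionaries with illustrative keys 'nlc_avg' and 'migration_cost'.
        # One plausible rule: prefer the least loaded buddy and break ties by
        # the cost of migrating the virtual machine there.
        return min(buddy_clouds,
                   key=lambda c: (c["nlc_avg"], c["migration_cost"]))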

Transfer of status information from every node to its head node in a cluster is done based on the PBDM algorithm, as given below in Fig 3.

Let Ay(i) denote the yth announcement of the current computing node i and let Cx(i) denote the xth change of the resource in the current computing node i. When a change is announced to the head node, the current change becomes the current announcement.

Let min_thresholdi and d_thresholdi denote the minimum threshold and the dynamic threshold for the ith computing node.

Let Timeri denote the dynamic time interval between two consecutive monitoring events in computing node i.

At initialization, announce the status values of memory and CPU of each computing node to the head node; set Timer = ∞; let the number of changes in the resource for each computing node i be NCi = 0.

WHILE (TRUE)
DO {
    Let Ay(i) be the status value of the last announcement and Cx(i) the current status value of the resource;
    IF (|Cx(i) − Ay(i)| > d_thresholdi) OR (Timeri has expired AND |Cx(i) − Ay(i)| > min_thresholdi) OR (the resource has high priority AND |Cx(i) − Ay(i)| > min_thresholdi)
    THEN announce Cx(i) to the head node;
}
