Network Load Balancing


Load balancing is a method for distributing workloads across multiple computing resources, such as the computers in a cluster, central processing units, network links, or disk drives, in order to optimize resource use, maximize throughput, minimize response time, and avoid overloading any single resource. Using multiple load-balanced components instead of a single component increases reliability and speed through redundancy. Load balancing is achieved through dedicated software or hardware, such as a multilayer switch or a Domain Name System (DNS) server process. Server farms are just one of many deployments that benefit from load balancing, which also allows for a significantly higher level of fault tolerance.
When a router learns multiple routes to a specific network via different routing protocols, it installs the route with the lowest administrative distance in the routing table. Sometimes the router must select a route from several paths learned through the same routing protocol, all with the same administrative distance. In this case, the router chooses the path with the lowest metric to the destination. Each routing protocol calculates its metric differently, so the metrics may need to be manipulated to achieve the desired load-balancing behavior.
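The selection logic above can be sketched as follows. This is an illustrative simplification, not real router code; the protocol names, administrative distances, and next-hop addresses are hypothetical example values.

```python
# Sketch of route selection: lowest administrative distance wins,
# then lowest metric among ties; multiple survivors mean the router
# can load-balance across equal-cost paths.
# Hypothetical route records: (protocol, admin_distance, metric, next_hop).

routes = [
    ("OSPF", 110, 20, "10.0.0.1"),
    ("RIP",  120,  2, "10.0.1.1"),
    ("OSPF", 110, 20, "10.0.2.1"),  # equal-cost OSPF path
    ("OSPF", 110, 35, "10.0.3.1"),
]

def best_routes(candidates):
    """Return all routes tying on (lowest AD, lowest metric);
    more than one result allows equal-cost load balancing."""
    best = min(candidates, key=lambda r: (r[1], r[2]))
    return [r for r in candidates if (r[1], r[2]) == (best[1], best[2])]

installed = best_routes(routes)
# Both equal-cost OSPF paths survive, so traffic can be shared
# across next hops 10.0.0.1 and 10.0.2.1.
```

The RIP route has a better metric (2 vs. 20), but administrative distance is compared first, so the OSPF routes are installed.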
Network Load Balancing
Network Load Balancing distributes IP traffic to multiple instances of a TCP/IP service, such as a Web server, each running on a host within the cluster. It transparently partitions client requests among the hosts and lets clients access the cluster through one or more "virtual" IP addresses. From the client's point of view, the cluster appears to be a single server answering these requests. As enterprise traffic increas...

... middle of paper ...

...y in processing speed and memory, then a ratio or weighted method may be the best option. The default method is called round robin: each connection request is passed to the next server in line, eventually spreading requests evenly across the cluster. It works well in most configurations. In the Ratio method, connections are distributed according to ratios set by the administrator, allowing a distribution of requests tailored to each server's speed and memory. Two related methods, Dynamic Ratio Node and Dynamic Ratio Member, are similar to Ratio except that the ratios are system-driven and their values are not static. Weighted methods work best when the servers' capacities differ and call for a weighted distribution of connection requests (University of Tennessee, 2014).
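The round-robin and ratio policies described above can be sketched as follows. The server names and ratio values are hypothetical, and this is a simplified model of the scheduling idea, not an actual load balancer implementation.

```python
from itertools import cycle

# Hypothetical backend pool with administrator-defined ratios
# (a higher ratio means the server receives proportionally more connections).
ratios = {"server-a": 3, "server-b": 1}

# Plain round robin: every server appears once per cycle.
round_robin = cycle(ratios.keys())

# Ratio method: each server appears as many times per cycle as its ratio.
ratio_schedule = cycle(
    [name for name, weight in ratios.items() for _ in range(weight)]
)

# Dispatch eight connection requests under each policy.
rr_choices = [next(round_robin) for _ in range(8)]
ratio_choices = [next(ratio_schedule) for _ in range(8)]
# Round robin alternates evenly (4 requests each); the ratio schedule
# sends three requests to server-a for every one to server-b.
```

A dynamic-ratio variant would periodically recompute the `ratios` values from measured server load instead of keeping them static.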
