What is Load Balancing?

In a large computing system, a single server cannot always handle the volume of network traffic on its own. This is where load balancing comes into play. Load balancing keeps networks running smoothly by spreading network or application traffic across a group of servers. Load balancers typically sit between client devices and backend servers, distributing incoming requests to the available servers capable of handling them.

How Do Load Balancers Work?

Load balancers can be:

- Hardware appliances deployed on-premises.
- Software installed on standard servers.
- Virtual appliances running load balancing software on virtual machines (VMs).

Each of these types is covered in more detail below.

Load balancers work by:

- Receiving incoming requests from clients before they reach the backend servers.
- Checking which servers in the pool are healthy and able to respond.
- Applying a load-balancing algorithm to select the most appropriate server.
- Forwarding the request to the selected server and returning the response to the client.

What is Load Balancing – L4, L7 and GSLB Load Balancers

To maintain consistent performance and keep up with ever-changing user demand, server resources must be readily available and load balanced at Layer 4 and/or Layer 7 of the Open Systems Interconnection (OSI) model.

Layer 4 (L4) load balancers operate at the transport level. They decide how to route packets based on the source and destination IP addresses and the TCP or UDP ports in use. L4 load balancers perform network address translation (NAT), but they do not examine the data inside each packet.

Layer 7 (L7) load balancers operate at the application level, the highest layer in the OSI model. When deciding how to distribute requests across the server farm, they can draw on a wider range of data than their L4 counterparts, including HTTP headers and SSL session IDs.

Load balancing at L7 requires more work than at L4, but it can also be more effective because L7 load balancers have more context with which to understand and act on client requests. Beyond basic L4 and L7 load balancing, global server load balancing (GSLB) extends the capabilities of either type across multiple data centers, distributing enormous volumes of traffic efficiently without degrading the end-user experience.

Load Balancer Types Based on Function

Load balancers usually come in three flavors: network load balancers, HTTP(S) load balancers and classic load balancers.

1. Network load balancers

Network load balancers (NLBs), also called layer 4 load balancers, use the transport layer to decide which server receives an incoming client request. When an NLB receives a connection request, it selects the target server from a pool using a flow hash routing algorithm, which hashes attributes such as the IP addresses and port numbers to determine the appropriate server.

The NLB then opens a transmission control protocol (TCP) connection to the target server on the port specified by the flow hash algorithm, transmitting the connection request without modifying its headers. NLBs are faster than other types of load balancers, but because they see only connection-level data, they are less effective at distributing uneven workloads across servers.
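As a rough illustration, here is a minimal flow-hash selection sketch in Python. The server pool, addresses, and hashing details are purely illustrative assumptions, not any particular NLB's implementation:

```python
import hashlib

# Illustrative backend pool; the addresses are hypothetical.
SERVERS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

def pick_server(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
                proto: str = "tcp") -> str:
    """Hash the connection tuple so every packet of a flow maps to the same backend."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.sha256(flow).digest()
    index = int.from_bytes(digest[:4], "big") % len(SERVERS)
    return SERVERS[index]

print(pick_server("203.0.113.7", 51324, "198.51.100.10", 443))
```

Because the hash input is the connection tuple, all packets belonging to one flow land on the same server without the balancer inspecting any payload data.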

2. HTTP(S) load balancers

HTTP(S) load balancers (HLBs), also called layer 7 load balancers, make routing decisions based on HTTP/HTTPS headers. HLBs can also track responses between clients and servers, providing insight into how busy or idle a particular server is. Unlike NLBs, which route speculatively on connection data alone, HLBs make data-driven, flexible routing decisions.
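To illustrate the difference, here is a small sketch of header-based (L7) routing in Python; the routing table, hostnames, and backend pools are hypothetical:

```python
# Hypothetical routing table mapping Host headers to backend pools.
ROUTES = {
    "api.example.com": ["10.0.1.1", "10.0.1.2"],   # API pool
    "static.example.com": ["10.0.2.1"],            # static-content pool
}
DEFAULT_POOL = ["10.0.3.1", "10.0.3.2"]

def route_request(headers: dict) -> list:
    """Choose a backend pool from the Host header; fall back to a default pool."""
    host = headers.get("Host", "").lower()
    return ROUTES.get(host, DEFAULT_POOL)

print(route_request({"Host": "api.example.com"}))  # -> the API pool
```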

3. Classic load balancers

Classic load balancers (CLBs) operate at both layer 4 and layer 7. For seamless operation, CLBs require a fixed relationship between ports. For example, you can map load balancer port 80 to container instance port 3030, but you cannot map load balancer port 80 to port 3030 on one instance and port 4040 on another.
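The sketch below illustrates that constraint with a hypothetical mapping table; the port numbers mirror the example above:

```python
# A classic load balancer keeps one fixed listener->instance port mapping
# that applies uniformly to every registered instance (values illustrative).
PORT_MAP = {80: 3030}  # listener port 80 -> container port 3030 on ALL instances

def target_port(listener_port: int) -> int:
    # Every instance must expose the same container port; per-instance
    # overrides (e.g., 3030 on one instance, 4040 on another) are not allowed.
    return PORT_MAP[listener_port]

print(target_port(80))  # -> 3030 for every instance behind the balancer
```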

Load Balancer Types Based on Configurations

Besides the above classification, you can also group load balancers as hardware, software, or virtual load balancers.

Hardware load balancers

Hardware load balancers rely on physical, on-premises hardware to route traffic between clients and servers. They have the following properties:

- They run on dedicated, proprietary appliances with specialized firmware.
- Built around application-specific integrated circuits (ASICs), they deliver high throughput with minimal impact on the processor.
- They must be provisioned for peak demand, so capacity can sit idle during off-peak periods.

Software load balancers (SLBs)

SLBs are commercial or open-source software that you install to act as a load balancer. Software load balancers have the following characteristics:

- They run on standard x86 servers or virtual machines (VMs) rather than dedicated appliances.
- They typically operate as a cluster, with a primary server distributing workloads to secondary servers to minimize downtime.
- They scale elastically to meet growing demand, usually at a lower cost than hardware appliances.

Virtual load balancers (VLBs)

VLBs, on the other hand, deliver software load balancing capabilities by running load balancing software on virtual machines (VMs). You can use VLBs to provide short-term load balancing capacity. However, VLBs cannot overcome the inherent challenges of the underlying hardware, such as limited scalability and automation, and the lack of centralized management in data centers can limit their potential.

What is Load Balancing – Hardware vs Software-Based Load Balancers

Load balancers can be either hardware-based or software-based. A hardware-based load balancer, also called a Hardware Load Balancing Device (HLD), requires a proprietary rack-and-stack appliance with specialized firmware.

HLDs are built with application-specific integrated circuits (ASICs) to distribute traffic between clients and servers. A software-based load balancer, on the other hand, runs on standard x86 servers or virtual machines (VMs) as an Application Delivery Controller (ADC).

Because their specialized ASICs handle traffic in dedicated silicon, HLDs manage traffic between clients and servers with minimal impact on the processor. During peak times, organizations must provision enough HLDs to meet the increased demand, which means most HLDs can sit idle during off-peak periods.

In contrast, software-based load balancers run their services on clustered VMs. A typical software-based load balancer designates one primary cluster server to distribute client workloads to the secondary servers, which helps minimize downtime if one server fails. In this regard, software-based load balancers can scale more elastically to meet growing demand than their HLD counterparts.

Common Load Balancing Algorithms

Load-balancing algorithms select which backend server handles client traffic based on two factors: the servers' health status and pre-defined rules. The algorithm first identifies which servers in the pool can correctly respond to a client's request. It then uses pre-configured rules to choose an appropriate server among them to handle the traffic.
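A minimal sketch of this two-step process in Python follows; the Server fields and the example rule are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    healthy: bool
    active_connections: int

def select_server(pool: list, rule) -> Server:
    """Step 1: keep only healthy servers. Step 2: apply the configured rule."""
    candidates = [s for s in pool if s.healthy]
    if not candidates:
        raise RuntimeError("no healthy servers available")
    return rule(candidates)

pool = [Server("a", True, 12), Server("b", False, 3), Server("c", True, 7)]
# Example rule: least connections (the algorithms below plug in the same way).
print(select_server(pool, lambda c: min(c, key=lambda s: s.active_connections)).name)
```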

Typical load balancing algorithms include:

Round robin

The load balancer forwards requests to the servers in sequence: after sending a request to the last server, it starts over with the first. There are two variants of the round-robin algorithm: weighted round-robin and dynamic round-robin.

The weighted round-robin algorithm assigns each server a weight based on its capacity and characteristics, which is useful when the servers in a pool are not identical. In contrast, the dynamic round-robin algorithm computes each server's weight in real time to determine where to forward requests.
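Here is a minimal weighted round-robin sketch in Python, assuming a hypothetical pool where one server can handle three times the load of another:

```python
import itertools

# Hypothetical weights: "big-server" gets 3x the traffic of "small-server".
WEIGHTS = {"big-server": 3, "small-server": 1}

def weighted_round_robin(weights: dict):
    """Yield servers in proportion to their weights, cycling forever."""
    expanded = [name for name, w in weights.items() for _ in range(w)]
    return itertools.cycle(expanded)

rr = weighted_round_robin(WEIGHTS)
print([next(rr) for _ in range(8)])  # big-server appears three times as often
```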

Least connections method

As the name suggests, the least connections algorithm selects the server with the fewest active connections. It is appropriate where client workloads result in longer sessions.
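In Python, the selection reduces to a minimum over live connection counts (the counts below are hypothetical):

```python
# Hypothetical live connection counts per server.
connections = {"server-1": 42, "server-2": 17, "server-3": 30}

def least_connections(conns: dict) -> str:
    """Pick the server currently holding the fewest open connections."""
    return min(conns, key=conns.get)

print(least_connections(connections))  # -> server-2
```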

Least response time method

The load balancer selects the server that has the fewest active connections and the minimum response time. This method is appropriate in instances where clients demand a prompt response from the server.
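One common interpretation, sketched below with made-up statistics, ranks servers by connection count first and average response time second:

```python
# Hypothetical per-server stats: (active connections, avg response time in ms).
stats = {"server-1": (10, 120.0), "server-2": (10, 45.0), "server-3": (4, 200.0)}

def least_response_time(stats: dict) -> str:
    """Rank by fewest connections first, then fastest average response."""
    return min(stats, key=lambda s: (stats[s][0], stats[s][1]))

print(least_response_time(stats))  # -> server-3 (fewest active connections)
```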

Least bandwidth method

The load balancer measures the bandwidth (in Mbps) that each server is currently serving and sends the request to the server consuming the least bandwidth.
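A small sketch, assuming byte counters sampled one second apart (all values hypothetical), shows how current Mbps might be derived and compared:

```python
# Hypothetical byte counters sampled one second apart, per server.
prev = {"server-1": 4_000_000, "server-2": 9_500_000}
curr = {"server-1": 44_000_000, "server-2": 24_500_000}

def least_bandwidth(prev_bytes: dict, curr_bytes: dict) -> str:
    """Derive each server's current Mbps from byte counters, pick the lowest."""
    mbps = {s: (curr_bytes[s] - prev_bytes[s]) * 8 / 1_000_000 for s in curr_bytes}
    return min(mbps, key=mbps.get)

print(least_bandwidth(prev, curr))  # -> server-2 (120 Mbps vs. 320 Mbps)
```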

Hashing method

The load balancer computes a hash value from attributes of the client's packet, typically the source IP address, and the hash value determines which server receives the workload. Because the same client always produces the same hash, its requests consistently reach the same server.
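For example, a source-IP hash might look like the following sketch; the pool and hash choice are illustrative:

```python
import hashlib

SERVERS = ["server-1", "server-2", "server-3"]  # illustrative pool

def hash_route(client_ip: str) -> str:
    """Same client IP always hashes to the same server (simple stickiness)."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

print(hash_route("203.0.113.7"))
print(hash_route("203.0.113.7"))  # identical result on every call
```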

Custom load method

The load balancer queries each server's load, such as CPU and memory consumption, using the Simple Network Management Protocol (SNMP). It then forwards incoming requests to servers based on their reported load.
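The sketch below stubs out the SNMP poll with canned values so it runs as-is; a real implementation would query each server with an SNMP library (for example, pysnmp) against a load or utilization OID:

```python
# Illustrative only: the poll is stubbed with canned values. A real
# implementation would issue SNMP GETs against each server's agent.
CPU_LOAD_OID = "1.3.6.1.4.1.2021.10.1.3.1"  # UCD-SNMP 1-minute load average

FAKE_AGENT_DATA = {"10.0.0.1": 0.85, "10.0.0.2": 0.20, "10.0.0.3": 0.55}

def snmp_get(host: str, oid: str) -> float:
    return FAKE_AGENT_DATA[host]  # stand-in for a real SNMP GET

def custom_load_pick(hosts: list) -> str:
    """Forward the next request to the host reporting the lowest load."""
    return min(hosts, key=lambda h: snmp_get(h, CPU_LOAD_OID))

print(custom_load_pick(list(FAKE_AGENT_DATA)))  # -> 10.0.0.2
```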

Resource-based method

The load balancer determines which servers are idle based on existing sessions, CPU and memory consumption, and other performance counters. It then distributes client workloads to the servers consuming the fewest resources.
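A minimal sketch, assuming hypothetical utilization metrics normalized to fractions of capacity:

```python
# Hypothetical per-server metrics (fractions of capacity in use).
metrics = {
    "server-1": {"sessions": 0.70, "cpu": 0.55, "memory": 0.60},
    "server-2": {"sessions": 0.30, "cpu": 0.25, "memory": 0.40},
}

def most_idle(metrics: dict) -> str:
    """Score each server by combined resource use and pick the least loaded."""
    score = lambda m: m["sessions"] + m["cpu"] + m["memory"]
    return min(metrics, key=lambda s: score(metrics[s]))

print(most_idle(metrics))  # -> server-2
```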

What is Load Balancing – The Benefits

In addition to distributing network traffic, load balancing offers other capabilities. Predictive analytics in software load balancers can identify traffic bottlenecks before they occur, giving an organization actionable information that supports automation and informs business decisions.

In short, load balancing helps with:

- Scalability: servers can be added or removed as demand changes.
- Availability: traffic is routed away from failed or unhealthy servers, reducing downtime.
- Performance: workloads are spread out so no single server becomes a bottleneck.
- Insight: traffic data supports automation and informs capacity planning.

How Can Parallels RAS Help with Load Balancing?

Parallels RAS enables you to load balance an extensive IT infrastructure without complex configurations or expensive add-ons. It balances both RDSH servers and internal components.

Parallels RAS offers High Availability Load Balancing (HALB), reducing the possibility of downtime and disruption. HALB distributes data traffic among remote desktop servers and gateways with resource-based distribution (user sessions, memory and CPU). Third-party load balancers, such as AWS Elastic Load Balancer (ELB) and Azure Load Balancers, are also supported.

Administrators can also configure multiple HALB virtual servers for isolated access, for example, when using different Secure Client Gateways for internal and external access or for different branch offices.

Various HALB configuration settings enable advanced management.

Organizations can strengthen their existing load balancing infrastructure and deliver a better user experience, especially in wide area network (WAN) scenarios.

Download the 30-day trial of Parallels RAS to reap the benefits of load balancing!

Download the Trial