What Is High Availability Load Balancing?

High Availability Load Balancing (HALB) is crucial in preventing potentially catastrophic component failures. Central to this concept is the use of a system of primary and secondary load balancers to automatically distribute workloads across your data centers. This redundancy in both your load balancers and servers ensures near-continuous application delivery. In such a system, when either a load balancer or server fails, corresponding backup equipment takes its place.

Understanding How Load Balancing Works

Load balancers distribute workloads between servers so that no one server gets overloaded with requests. While routing traffic, load balancers also monitor the health of each server. When they detect signs of impending server failure, they reroute traffic to another server. This ensures that your system remains responsive and available no matter how many requests it receives.
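As a rough illustration, the sketch below shows how a load balancer might probe its backends before routing traffic. The backend addresses and the /health endpoint are assumptions for the example, not part of any particular product.

```python
import urllib.request

# Hypothetical backend pool; each server is assumed to expose a /health
# endpoint that returns HTTP 200 while the server can accept traffic.
BACKENDS = ["http://10.0.0.10:8080", "http://10.0.0.11:8080"]

def healthy_backends():
    """Return only the backends that currently pass the health probe."""
    alive = []
    for url in BACKENDS:
        try:
            with urllib.request.urlopen(url + "/health", timeout=2) as resp:
                if resp.status == 200:
                    alive.append(url)
        except OSError:
            # Unreachable or failing server: the load balancer stops
            # sending it traffic and reroutes requests elsewhere.
            pass
    return alive
```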

Load balancers use session persistence to prevent performance issues and transaction failures in applications such as shopping carts, where multiple session requests are normal. With session persistence, load balancers are able to send requests belonging to the same session to the same server.
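To make the idea concrete, here is a minimal sketch of session persistence based on a session identifier (such as a shopping-cart cookie). The server addresses and session IDs are illustrative only.

```python
# Hypothetical backend pool.
SERVERS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]

session_map = {}   # session ID -> server pinned to that session
next_server = 0    # round-robin cursor for brand-new sessions

def pick_server(session_id):
    """Send every request in a session to the server that first handled it."""
    global next_server
    if session_id not in session_map:
        session_map[session_id] = SERVERS[next_server % len(SERVERS)]
        next_server += 1
    return session_map[session_id]

# Every request carrying the same session ID lands on the same server.
assert pick_server("cart-42") == pick_server("cart-42")
```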

Load balancers can also decrypt SSL traffic prior to passing the request on to a server in a process known as SSL termination. However, this approach can expose the traffic between the load balancer and server to potential attacks. To prevent this, you can set the load balancer to send the encrypted request to the server instead of decrypting it first. This so-called SSL pass-through poses a potential performance issue but can be ideal if you require extra security.
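The sketch below illustrates SSL termination in the simplest possible form: the load balancer completes the TLS handshake, decrypts the request, and forwards it to a backend in plaintext. The certificate files, addresses, and single-request handling are assumptions made to keep the example short.

```python
import socket
import ssl

BACKEND = ("10.0.0.10", 8080)  # assumed plaintext backend

# TLS is terminated here, at the load balancer, using an assumed certificate.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="lb.crt", keyfile="lb.key")

listener = socket.create_server(("0.0.0.0", 443))
with context.wrap_socket(listener, server_side=True) as tls_listener:
    conn, _ = tls_listener.accept()        # TLS handshake with the client
    request = conn.recv(65536)             # request is already decrypted
    with socket.create_connection(BACKEND) as backend:
        backend.sendall(request)           # forwarded unencrypted (termination)
        conn.sendall(backend.recv(65536))  # relay the backend's response
    conn.close()
```

With SSL pass-through, the wrap_socket step would be skipped and the encrypted bytes relayed to the backend untouched, which keeps the traffic encrypted end to end at the cost of losing visibility into the request.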

In case of a distributed denial-of-service (DDoS) attack, load balancers can also shift the DDoS traffic to a cloud provider, easing the impact of the attack on your infrastructure.

To ensure high availability load balancing, you should have at least one additional load balancer that serves as a backup. This so-called N+1 model is the least costly high availability load balancing model.

Active-Active and Active-Passive are the other high availability load balancing models. In Active-Active, two or more load balancers operate at the same time. In Active-Passive, each load balancer has an assigned standby that takes over its load if it goes down.
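As a rough sketch of the Active-Passive model, the loop below monitors an active load balancer over a hypothetical /health endpoint and promotes the standby when heartbeats stop; real deployments typically rely on a protocol such as VRRP for this.

```python
import time
import urllib.request

ACTIVE = "http://192.0.2.10"   # assumed address of the active load balancer
STANDBY = "http://192.0.2.11"  # assumed address of the standby load balancer

def is_alive(url):
    """Heartbeat probe against an assumed /health endpoint."""
    try:
        with urllib.request.urlopen(url + "/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

current = ACTIVE
while True:
    if current == ACTIVE and not is_alive(ACTIVE):
        current = STANDBY  # failover: the backup now receives all traffic
        print("Active load balancer down, failing over to standby")
    time.sleep(5)          # heartbeat interval
```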

Algorithms for Load Balancers to Distribute Loads

To distribute workloads, load balancers rely on algorithms such as round robin, weighted round robin, least connections, least response time, and source IP hash.

The algorithm you select will ultimately depend on your requirements.
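For illustration, here is a minimal sketch of two of the most common algorithms, round robin and least connections, over an assumed backend pool.

```python
import itertools

SERVERS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]  # assumed backend pool

# Round robin: hand out servers in a fixed rotating order.
_rotation = itertools.cycle(SERVERS)
def round_robin():
    return next(_rotation)

# Least connections: pick the server currently handling the fewest sessions.
active_connections = {server: 0 for server in SERVERS}
def least_connections():
    server = min(active_connections, key=active_connections.get)
    active_connections[server] += 1
    return server
```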

Key Benefits of High Availability Load Balancing

Load balancers serve to distribute network traffic and application workloads across several servers so that no one server is overwhelmed. This redundant system is what delivers high availability, or continuous operation, of your IT infrastructure.

Ideally, this switchover from a failing piece of equipment to its backup should occur seamlessly so that users do not encounter any downtime. In the real world, minimal downtime is the more achievable objective.

Highly available load balancers help protect your organization from DDoS attacks through SYN cookies and delayed binding. They also conduct regular health checks to ensure that your applications and servers can still handle the volume of transaction requests. When they detect impending failure, they reroute traffic to application copies and backup servers.

In the case of web servers, highly available load balancers can offload TLS processing from the servers themselves, speeding up web server responses in the process.

Characteristics of Hardware and Software Load Balancers

There are two primary categories of load balancers: hardware and software load balancers. These are explained in the next two sections.

Hardware

Hardware load balancers are physical appliances. They direct traffic to servers based on criteria such as the number of existing connections to a server, processor utilization, and server performance, and they come with firmware that requires maintenance and software updates.

Hardware load balancers offer better performance and control with a fuller range of features, such as Kerberos authentication and Secure Sockets Layer (SSL) hardware acceleration, but require some level of proficiency to manage and maintain. Because they are hardware-based, they are less flexible and scalable, so organizations tend to over-provision them.

Software

Unlike hardware load balancers, which require proprietary appliances, software load balancers are applications that you can install on standard x86 servers or virtual machines (VMs) to achieve load balancing. Another critical difference between the two approaches lies in their ability to scale.

Differences between Hardware and Software Load Balancers

While software load balancers can scale elastically in real time to meet user demand, you must physically provision hardware load balancers to meet peak demand. Software load balancers are more straightforward to deploy than hardware versions. They are also more cost-effective, more flexible, and well suited to software development environments. The software approach gives you the flexibility to configure the load balancer to your environment's specific needs. Compared to hardware versions, which offer more of a closed-box approach, software load balancers grant you more freedom when it comes to changes and upgrades.

Software load balancers can come as prepackaged virtual machines (VMs) to spare you some of the configuration work, but they may not offer all of the features available with hardware versions.

Software load balancers are available as standalone solutions that require configuration and management or as a cloud service, known as Load Balancer as a Service (LBaaS). Choosing the cloud service frees you from maintaining, managing, and upgrading a locally installed load balancer; the cloud provider handles these tasks.

Important Features for High Availability Load Balancing

Besides ensuring high availability for the infrastructure itself, you should also ensure that it provides continuous service for the applications whose traffic it manages. If any server fails, a high availability infrastructure reroutes traffic quickly to another server within the same cluster.

Essential features of such a high availability infrastructure include automatic failover, continuous health monitoring, session persistence, SSL offloading, and protection against DDoS attacks.

How Parallels RAS Helps with High Availability Load Balancing

With Parallels® Remote Application Server (RAS), you get out-of-the-box High Availability Load Balancing that distributes data traffic among remote desktop servers and gateways using resource-based distribution (user sessions, memory, and CPU). Third-party load balancers, such as AWS Elastic Load Balancer (ELB) and Azure Load Balancer, are also supported.

Parallels RAS removes the restrictions of multi-gateway setups by dynamically moving traffic among healthy gateways and allocating incoming connections based on workload. HALB also lets you run multiple HALB appliances at the same time, reducing downtime and helping ensure that applications remain available.

Experience the benefits of high availability load balancing!

Download the Trial