
Load balancing services


What is load balancing?

Load balancing is a general term for various distribution techniques that help spread traffic and workload across different servers within a network. Put in human terms, the idea is simple: the more available hands working, the faster and more efficiently the job gets done, and the less work each person has to do.

When applied to computer networks, these principles of “community labor” become extremely valuable, helping to increase computing efficiency by minimizing downtime and raising overall throughput and performance.

As more and more computing is done online, load balancing has taken on a broader meaning. Global Server Load Balancing applies the same principle, but its implementation is not confined to a single local network. The workload is still distributed, but it is distributed planet-wide instead of just across a data center.

As a result, modern-day solutions face new challenges, as they are required to take into account not just an individual cluster of servers, but also communication parameters (e.g., link quality) and the geographical location of remote requesters.

Today, as more and more online businesses seek to leverage Content Delivery Networks (CDNs), load balancing has become a key component in most content distribution tasks.

Load Balancing Methods

How does a load balancer work?

As described above, load balancing enables network administrators to spread work across multiple machines, exploiting available resources more efficiently.

To implement such solutions, administrators generally define a single IP address and/or DNS name for a given application, task, or website, to which all requests will come. This IP address or DNS name is, of course, actually that of the load balancing server.

The administrator will then enter into the load balancing server the IP addresses of all the actual servers that will be sharing the workload for a given application or task. This pool of available servers is only accessible internally, via the load balancer.

Finally, your load balancer needs to be deployed – either as a proxy, which sits between your app servers and your users worldwide and accepts all traffic, or as a gateway, which assigns a user to a server once and leaves the interaction alone thereafter.

Once the load balancing system is in place, all requests to the application come to the load balancer, and are redirected according to the administrator’s preferred algorithm.
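
To make the proxy deployment model concrete, here is a minimal sketch in Python of a balancer that accepts every connection on one public address and relays each one to a backend chosen round-robin. The backend addresses and ports are hypothetical placeholders; a production proxy would add health checks, timeouts, and error handling.

```python
import itertools
import socket
import threading

# Hypothetical pool of internal application servers, reachable only
# through the balancer (placeholder addresses).
BACKENDS = [("10.0.0.11", 9000), ("10.0.0.12", 9000), ("10.0.0.13", 9000)]
rotation = itertools.cycle(BACKENDS)

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the sending side closes."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    finally:
        dst.close()

def handle(client: socket.socket) -> None:
    # Pick the next backend and stay in the data path for both directions.
    backend = socket.create_connection(next(rotation))
    threading.Thread(target=relay, args=(client, backend), daemon=True).start()
    relay(backend, client)

def serve(host: str = "0.0.0.0", port: int = 8080) -> None:
    # The single public address all requests are sent to.
    with socket.create_server((host, port)) as listener:
        while True:
            client, _ = listener.accept()
            threading.Thread(target=handle, args=(client,), daemon=True).start()

if __name__ == "__main__":
    serve()
```

A gateway deployment, by contrast, would take part only in assigning the client to a server; subsequent traffic would bypass the balancer.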

Load balancing algorithms

A load balancing algorithm controls the distribution of incoming requests across your cluster of servers. There are numerous methods for accomplishing this, depending on the complexity of the load balancing required, the type of task at hand, and the actual distribution of incoming requests. Some common methods, several of which are sketched in code after this list, include:

  • Round Robin – the most basic load distribution technique, and considered rather primitive by network administrators.
    In a round robin scenario the load balancer simply runs down the list of servers, sending one connection to each in turn, and starting at the top of the list when it reaches the end.
  • Weighted Round Robin – the same principle as Round Robin, but the number of connections that each machine receives over time is proportionate to a ratio weight predefined for each machine.
    For example, the administrator can define that Server X can handle twice the traffic of Servers Y and Z, and thus the load balancer should send two requests to Server X for each one request sent to Servers Y and Z.
    However, given that most enterprises use servers that are uniform in their processing power, Weighted Round Robin essentially attempts to address a nonexistent problem.
  • Least Connections – directs each new session to the server with the fewest active connections at the time of session initiation. To avoid latency, this method is advisable in environments where server capacity and resources are uniform. Least Connections is considered problematic, as most implementations struggle to measure actual server workload accurately.
  • Weighted Least Connections – identical to Least Connections, except that servers are selected based on capacity, not just availability. For each node, the administrator specifies a Connection Limit value, and the system creates a proportional algorithm on which load balancing is based.
    Similar to Weighted Round Robin, this method presumes that server resources are not uniform – which is not in line with most enterprise network topologies.
  • Least Pending Requests – the emerging industry standard, Least Pending Requests selects the server with the fewest active sessions, based on real-time monitoring. This method requires that both a Layer 7 profile and a TCP profile be assigned to the virtual server.
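
The sketches below, in Python, illustrate the selection logic behind three of these methods. The server names, weights, and connection counts are illustrative assumptions, not any product's actual API.

```python
import itertools

# Round Robin: hand out servers in a fixed rotation.
servers = ["server-a", "server-b", "server-c"]
round_robin = itertools.cycle(servers)

# Weighted Round Robin: expand each server by its weight, matching the
# 2:1:1 example above (Server X handles twice the traffic of Y and Z).
weights = {"server-x": 2, "server-y": 1, "server-z": 1}
weighted_round_robin = itertools.cycle(
    [name for name, weight in weights.items() for _ in range(weight)]
)

# Least Connections: route the new session to the server that currently
# has the fewest open connections.
active_connections = {"server-a": 12, "server-b": 7, "server-c": 9}

def least_connections(connections: dict[str, int]) -> str:
    return min(connections, key=connections.get)

print(next(round_robin))                      # server-a
print(next(weighted_round_robin))             # server-x
print(least_connections(active_connections))  # server-b
```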

To achieve optimal load distribution and maximize performance, administrators must carefully weigh the pros and cons of their load balancing algorithms of choice.

For example, performance in a Round Robin scenario is likely to be a product of luck, since the next server in line may or may not already be busy. A Least Connections algorithm, which is a Layer 4 solution, can be effective due to the partial correlation between load and number of connections, but not all connections are equal in terms of their load (one could be idle while another is pushing 50 requests per second, and so on).

The Least Pending Requests (LPR) algorithm, which is enabled by a Layer 7 solution, is currently considered a best practice, as it provides the best indication of the actual load delivered over each connection.
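
As a sketch of the difference, a Layer 7 balancer can count in-flight HTTP requests per server rather than open connections. The hook functions below are hypothetical points where a real proxy would update the counters.

```python
from collections import Counter

# In-flight (pending) requests per server, maintained at Layer 7.
pending_requests = Counter({"server-a": 0, "server-b": 0, "server-c": 0})

def pick_server() -> str:
    # An idle keep-alive connection adds nothing here, so this tracks
    # delivered load more closely than a raw connection count does.
    return min(pending_requests, key=pending_requests.get)

def on_request_start(server: str) -> None:
    pending_requests[server] += 1

def on_request_end(server: str) -> None:
    pending_requests[server] -= 1
```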


The compromise of DNS load balancing

DNS load balancing is considered one of the simplest load balancing approaches. In a DNS scenario, load balancing pools are established for various geographic regions, so the load balancer knows exactly which web servers are available for traffic and how often each should receive it. This enables the administrator to take advantage of geographically dispersed infrastructure and enhance performance by shortening the distance between requesters and data centers.
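
As a minimal sketch of that pooling idea, the dictionary below keeps one pool of server IPs per region and answers each query from the requester's pool. How the region is determined (usually from the resolver's IP) is the part a real GeoDNS product supplies; the addresses are reserved documentation ranges.

```python
# One pool of web server IPs per geographic region (illustrative).
GEO_POOLS = {
    "us-east": ["198.51.100.10", "198.51.100.11"],
    "eu-west": ["203.0.113.20", "203.0.113.21"],
}

def answer_query(requester_region: str) -> list[str]:
    # Serve the nearest pool; fall back to a default when the
    # requester's region is unknown.
    return GEO_POOLS.get(requester_region, GEO_POOLS["us-east"])

print(answer_query("eu-west"))  # ['203.0.113.20', '203.0.113.21']
```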

Although DNS load balancing can be effective in some specific scenarios for simpler applications or websites, it has notable limitations that lower its overall efficacy for mission-critical deployments.

DNS load balancing uses a simple Round Robin methodology (see above). Unfortunately, DNS records have no native failure detection. This means that if the next server in the rotation is down, requesters will be directed to it anyway – unless the organization adopts a third-party monitoring solution, which adds yet another source of implementation, configuration and maintenance complexity.
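
The contrast is easy to see in a sketch: plain DNS round robin hands out the next record no matter what, while the external monitor's job is essentially to filter the rotation through a health probe. The probe below is a simple TCP connect with a short timeout, which is one common approach; the addresses are documentation placeholders.

```python
import itertools
import socket

# A records in the DNS rotation (documentation addresses).
A_RECORDS = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
rotation = itertools.cycle(A_RECORDS)

def next_record_naive() -> str:
    # Plain DNS round robin: a down server still receives traffic.
    return next(rotation)

def is_healthy(ip: str, port: int = 80, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def next_record_monitored() -> str:
    # What third-party monitoring adds: skip records that fail the probe.
    for _ in range(len(A_RECORDS)):
        ip = next(rotation)
        if is_healthy(ip):
            return ip
    raise RuntimeError("no healthy servers in the pool")
```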

Moreover, a DNS solution cannot take into account the unknown percentage of users who have DNS data cached, with varying amounts of Time to Live (TTL) left.

And so, until TTL times out, visitors may still be redirected to the “wrong” server. Even if TTL is set to a low value, which can negatively impact performance, the possibility of some users “getting lost” still exists – which is unacceptable for business-critical applications.

[Figure] The DNS compromise: costly appliances, split architecture, upstream caching issues

High overhead of hardware load balancers

Until recently, most load balancing was based on a hardware load-balancing device (HLD). Also known as a Layer 4-7 router, an HLD is an actual physical unit in the network, which works by directing users to individual servers based on various usage parameters, such as server processor utilization, number of connections to a server, and overall server performance.
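
The routing decision an HLD makes can be thought of as minimizing a composite score over those usage parameters. The metrics and weights below are illustrative assumptions, not any vendor's actual formula.

```python
# Live usage parameters per server (illustrative numbers).
servers = {
    "server-a": {"cpu_util": 0.62, "connections": 120, "latency_ms": 40},
    "server-b": {"cpu_util": 0.35, "connections": 200, "latency_ms": 25},
}

def load_score(m: dict[str, float]) -> float:
    # Lower is better: high CPU use, many connections, and slow
    # responses all push the score up.
    return m["cpu_util"] * 100 + m["connections"] * 0.1 + m["latency_ms"]

def direct_user() -> str:
    return min(servers, key=lambda name: load_score(servers[name]))

print(direct_user())  # server-b (score 80.0 vs 114.0)
```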

Today, single-function HLDs are being replaced by multi-function ADCs (application delivery controllers). An ADC delivers a full range of functions that optimize enterprise application environments, including load balancing, reliability, data center resource use, end-user performance, security, and more.

ADC-based hardware is server-based (as opposed to content-switch-based). Server-based load balancing leverages standard PC-class servers on which special load-balancing software has been installed. Content-switch-based load balancers, by contrast, are actual network switches that carry load-balancing software on board and act as intelligent switching devices.

One of the primary problems with load-balancing ADCs is that they can represent a single point of failure, and they can bottleneck traffic if not configured or maintained properly. In addition, ADC setup is complex, requiring dedicated, expert staff.

Hardware solutions can be effective for organizations with the resources to manage the complex installation process, the high maintenance overhead, and ongoing capital outlays associated with hardware.

However, HLDs and ADCs alike are aging technology, considered unnecessarily costly and resource-intensive by forward-thinking network admins. These legacy solutions are today in the process of being replaced by solutions that actually reduce costs, while at the same time effectively addressing the single point of failure issue discussed above.

Compatibility issues of software load balancing

Software load balancing, as the name implies, is based on software solutions and as such is mostly independent of the platform on which the load balancing utility is installed. Such software can be implemented as an add-on application bundled with a DNS solution, as part of an operating system, or – more commonly today – as part of a virtual service and application delivery solution.

Service and application delivery solutions are designed to optimize, secure and control the delivery of all enterprise and cloud services, maximizing end user experience for all users, including mobile users. Most of these packages include an integral virtualized load balancing solution.

However, no matter how they are implemented, software-based solutions ultimately require hardware to run on, as well as intensive setup and maintenance. This forces organizations to work with multiple vendors, often leading to compatibility issues that are impractical for larger organizations.

Thus, although software-based load balancing solutions may appear less expensive than hardware-based solutions, Total Cost of Ownership (TCO) for software solutions is still high – and setup and maintenance tasks are no less demanding. For this reason, software-based solutions – virtual or not – are often beyond the means of many SMEs and are generally not the first choice for other organizations, as well.

See how Imperva Load Balancer can help you with high availability.

Reliability issues of open source solutions

The advantages of open source software are clear, and they are mostly cost-related. However, load balancing is a uniquely mission-critical function, and open source software often lacks effective support and service, requiring in-house expertise that many SMEs can't afford. Moreover, as with proprietary software solutions, open source solutions require hardware to run on, as well as updates across cluster servers.