What is a Hardware Load Balancer (HLD) | Box vs Cloud | Imperva

Load Balancer Hardware


What is a hardware load balancer (HLD)?

A hardware load balancer device (HLD) is a physical appliance used to distribute web traffic across multiple network servers. Routing is either randomized (e.g., round-robin) or based on factors such as available server connections, server processing power, and resource utilization.

Scalability is the primary goal of load balancing. In addition, optimal load distribution reduces site inaccessibility caused by the failure of a single server, while assuring even performance for all users. Different routing techniques and algorithms ensure optimal performance in varying load balancing scenarios.

Hardware vs. cloud: use case comparisons

Cloud load balancing, also referred to as LBaaS (load balancing as a service), is an updated alternative to hardware load balancers. Among several other advantages, it offers global server load balancing and is suitable for a highly distributed environment.

The following use case scenarios compare hardware load balancer to a cloud-based solution.

Single data center load balancing

This refers to traffic distribution through a local data center containing a minimum of two servers and one load balancer. Here, both hardware and cloud load balancers are equally effective in load distribution and server utilization.


The main difference is the higher cost of purchasing hardware compared to an LBaaS subscription fee. In addition, limited HLD scalability may hinder performance, forcing you to purchase additional hardware—either out of the gate or down the road. Neither issue exists with cloud-based solutions, which can scale on demand at no extra cost.

Cross data center load balancing

Cross data center load balancing, also known as global server load balancing (GSLB), distributes traffic across global data centers typically located in different regions. The cost of purchasing and maintaining the requisite hardware for GSLB is considerable—at least one appliance has to be located in each of your data centers, with another central box to manage load distribution between them.


To minimize costs, the central appliance can be replaced by a DNS-based solution. But that comes with its own problems; DNS-based load balancing is widely considered slow and inflexible because of its reliance on TTL (time to live) caching.

Lastly, scalability becomes an even bigger problem in GSLB appliances and DNS cross data center configurations, due to an increase in possible bottlenecks.


Contrast these issues with cloud-based solutions. Cloud GSLB scales on demand, potentially saving your organization tens of thousands of dollars in setup and maintenance costs. Additionally, the service manages all routing, so users never experience DNS-related delays. The latter also extends to failover and disaster recovery scenarios, in which responsive rerouting is even more crucial and can make the difference between instant recovery and prolonged downtime.

Hardware vs. cloud: feature comparison

The following table contrasts cloud and hardware-based load balancers:

Feature                               Hardware Load Balancer         Cloud Load Balancer
CAPEX                                 High                           Low
Maintenance/OPEX                      Low to high                    Low
Distribution algorithms               Random/data-driven             Random/data-driven
Network layer load distribution       Yes                            Yes
Application layer load distribution   Yes                            Yes
TTL reliance                          Yes (in DNS/GSLB scenarios)    No
Scalability                           Low                            High
Compatibility                         Server                         Server, cloud, hybrid

Capex costs

CAPEX costs for hardware load balancers are considerably higher than for cloud-based alternatives. A single appliance is typically more expensive than a cloud service subscription. Costs multiply when you factor in the additional hardware needed for extra scalability and/or cross data center load balancing.

Maintenance (opex) costs

Maintenance overhead for a single appliance is considered minimal and doesn’t significantly differ from managing a cloud-based service. However, maintenance dramatically increases when multiple data centers and hardware devices are involved. Among other matters, this opens the door to integration issues, while also hindering your ability to effectively control and monitor load distribution.

Contrast that with cloud-based services, which offer centralized traffic flow control and are often provided as managed services. They require little maintenance, even when used for cross data center load management.

Distribution algorithms

Both solution types support basic and advanced algorithms used to manage load distribution. These typically include:

  • Round robin – Distributes incoming connections to each server in succession. Round-robin is simplistic, however, in that it doesn’t consider external factors such as server capacity and the number of existing connections.
  • Least connections – Sends traffic to the server with the fewest open connections when a session is initiated. This is a more intelligent algorithm than round-robin. Still, it doesn’t account for all available data, such as server capacity or the number of active sessions.
  • Least pending requests – This data-driven algorithm monitors actual server loads in real time and routes traffic to the server with the fewest active sessions, making it the most effective load distribution method of the three.
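To make the first two algorithms concrete, here is a minimal Python sketch (class and server names are illustrative, not any vendor's API):

```python
from itertools import cycle

class RoundRobin:
    """Cycle through servers in fixed order, ignoring their load."""
    def __init__(self, servers):
        self._it = cycle(servers)

    def pick(self):
        return next(self._it)

class LeastConnections:
    """Pick the server with the fewest open connections."""
    def __init__(self, servers):
        self.conns = {s: 0 for s in servers}

    def pick(self):
        server = min(self.conns, key=self.conns.get)
        self.conns[server] += 1      # a session opens on this server
        return server

    def release(self, server):
        self.conns[server] -= 1      # a session on this server closes

rr = RoundRobin(["a", "b", "c"])
print([rr.pick() for _ in range(4)])   # ['a', 'b', 'c', 'a']

lc = LeastConnections(["a", "b"])
lc.pick(); lc.pick()                   # "a" and "b" each get one session
lc.release("a")                        # the session on "a" ends
print(lc.pick())                       # 'a' — it now has the fewest connections
```

Note how round-robin ignores the `release` signal entirely; least connections improves on it only because it tracks session lifecycle.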

Network layer load distribution

Network layer load distribution is available on both HLDs and cloud load balancers. A basic inspection of incoming traffic (e.g., IP addresses and ports) routes it to the correct server. This allows for basic and semi-advanced distribution methods, such as round-robin and least connections.

Application layer load distribution

Application layer load distribution provides more detailed information about incoming traffic. Leveraging it allows for data-driven distribution, e.g., the least pending requests algorithm.
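Because an application-layer balancer sees each request begin and each response complete, it can track in-flight work per backend. A hedged Python sketch of least pending requests on that basis (all names hypothetical):

```python
class LeastPendingRequests:
    """Route each request to the backend with the fewest in-flight requests.

    An application-layer proxy can maintain this counter because it
    observes both the start of a request and the end of its response —
    information a purely network-layer device doesn't have.
    """
    def __init__(self, backends):
        self.pending = {b: 0 for b in backends}

    def on_request(self):
        backend = min(self.pending, key=self.pending.get)
        self.pending[backend] += 1   # request is now in flight
        return backend

    def on_response(self, backend):
        self.pending[backend] -= 1   # response completed

lb = LeastPendingRequests(["app1", "app2"])
first = lb.on_request()    # "app1" (ties broken by declaration order)
second = lb.on_request()   # "app2"
lb.on_response("app2")     # app2 finished its request quickly
print(lb.on_request())     # 'app2' — it now has the fewest pending requests
```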

TTL reliance

Cross data center setups that don’t use centralized GSLB appliances suffer from TTL reliance. The result is uneven, non-agile distribution that introduces minutes-long delays for all routing changes. Users continue to be sent to a struggling server until ISP resolver caches refresh.
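The delay mechanism can be illustrated with a toy model of a resolver cache: a record served with a TTL keeps being returned from cache until it expires, so a failover is invisible to cached clients for up to TTL seconds. (IP addresses below are reserved documentation addresses; the model is a simplification.)

```python
class ResolverCache:
    """Toy DNS resolver cache: serves a stale answer until the TTL expires."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.cached_ip = None
        self.cached_at = None

    def resolve(self, now, authoritative_ip):
        # Serve from cache while the record is still fresh.
        if self.cached_ip is not None and now - self.cached_at < self.ttl:
            return self.cached_ip
        # Otherwise, fetch the current authoritative answer and cache it.
        self.cached_ip = authoritative_ip
        self.cached_at = now
        return self.cached_ip

cache = ResolverCache(ttl=300)                 # a typical 5-minute TTL
print(cache.resolve(0, "203.0.113.10"))        # primary data center
# At t=60 the primary fails and DNS now points at the secondary...
print(cache.resolve(60, "203.0.113.20"))       # ...but clients still get the primary
print(cache.resolve(301, "203.0.113.20"))      # only after TTL expiry do users move
```

A cloud balancer avoids this entirely by making the routing decision itself on every request, rather than delegating it to cached DNS answers.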

Scalability

Cloud load balancers scale on demand since they don’t rely on physical hardware. Conversely, HLDs have predefined capacity limits they cannot exceed. To increase capacity, more appliances need to be purchased, shipped, installed and maintained—costing your organization both money spent and time lost.

Compatibility

Hardware load balancers are only compatible with other appliances—they’re unable to distribute load to cloud servers. Conversely, cloud load balancers are compatible with load balancing hardware as well as other cloud servers.

See how Imperva Load Balancer can help you with high availability.

Choosing a load balancer

Both hardware and cloud-based solutions are viable for load distribution. Each can provide the tools to tackle different configurations, enabling load distribution at both the network and application layers. That said, cloud-based services are generally easier to set up and manage and—for those who require it—offer more in the way of scalability.

But economic value is the biggest benefit of a cloud-based solution—an advantage that is far more significant in cross-data center scenarios, where a single service replaces multiple appliances.

As cloud computing becomes increasingly predominant, the need for LBaaS expands—since only LBaaS offers full compatibility with cloud and hybrid environments. Many organizations currently use cloud-based load balancers as an HLD supplement. This trend is likely to grow, making cloud-based solutions the natural next standard for load distribution.