The load balancer market is evolving from feature-heavy hardware appliances to lightweight software solutions and flexible cloud-native implementations. But if you’re evaluating load balancer options, how do you bridge the gap between old-world systems and the new? Do you prioritize the product features you’ve come to rely on or the advantages of flexibility? What are the pros and cons?
The most “feature-rich” load balancers are the traditional, monolithic products that have been on the market for a long time and have accumulated features over years of product updates and solutions to niche customer scenarios.
These are typically hardware products deployed in physical locations in front of your servers to manage North-South traffic flows coming into and out of your system.
Buyers who have traditionally purchased monolithic, feature-rich load balancers often need a range of niche features to maintain services built around specific vendors.
The benefit of feature-rich hardware load balancers is that one product can meet the feature requirements of many different businesses and can support legacy requirements to maintain continuity.
The downsides of most feature-rich hardware load balancers are:
- they are usually very complex appliances, which require specialist engineers to operate and maintain
- they cannot be moved easily, and require local resources to host (rack space, power, network connections)
- they are expensive to purchase, scale out and upgrade, with capacity tied to a specific hardware instance
- they are not suited to managing application delivery in the cloud (let alone multi-cloud scenarios)
In other words, the most feature-rich products are typically not flexible, in terms of who can use them, how to host them, how a business can grow with them, and what kinds of applications and deployments they support.
The most flexible load balancers are those relatively modern, cloud-native products designed to be platform-agnostic and with open APIs and management GUIs. In other words, they can run anywhere, can connect to anything, and can be used by IT generalists.
These are typically software products deployed in virtual machines (VMs) or in cloud/container environments, which can be managed and automated by orchestration platforms, and can manage East-West traffic flows between components in your network as easily as North-South.
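To make the automation point concrete, here is a minimal sketch of what an orchestration hook might look like when it registers a newly launched instance with a load balancer over a management API. The endpoint, URL, and payload shape are all hypothetical placeholders; real products define their own APIs.

```python
import json
import urllib.request

# Hypothetical management API base URL -- a placeholder, not a real product endpoint.
API = "http://lb.example.internal/api/v1"

def register_backend(pool: str, address: str) -> urllib.request.Request:
    """Build the request an orchestrator might send when a new VM or
    container comes up, so the load balancer starts routing to it."""
    body = json.dumps({"pool": pool, "address": address}).encode()
    return urllib.request.Request(
        f"{API}/pools/{pool}/backends",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )

req = register_backend("web", "10.0.2.14:8080")
print(req.get_full_url())
```

The key property is that no specialist engineer is in the loop: the platform that creates the instance is the same platform that wires it into the load balancer.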
Buyers are typically modern IT teams (a.k.a. DevOps) that manage entire platforms, not just a single piece of infrastructure. DevOps teams relate differently to load balancer suppliers and have different requirements compared with specialist engineers. They require flexibility to run on multiple platforms and in any cloud environment, as well as the ability to scale from very small (i.e., during development and testing) to very large (i.e., in production).
The benefits of flexible, cloud-native load balancers are that they do not require specialist engineers, they can be deployed in a wide range of platforms, clouds and locations, they can scale easily and automatically, and they are suitable for application delivery in modern development and deployment environments, including containers and microservices.
The downside of most flexible, cloud-native load balancers is that they lack some of the features developed for traditional load balancers to meet the needs of niche scenarios.
With the migration to cloud computing, networks are more dynamic. Applications that were deployed on static, physical servers in data centers can now be implemented on VMs or containers in public, private or hybrid clouds. Dynamic networks are inherently flexible, agile and cost efficient. You can dynamically scale server deployments up/down (vertically) or out/in (horizontally) according to demand to minimize resource usage and capacity costs, for example. Load balancers must be flexible enough to keep pace with these changes.
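As a rough illustration of demand-based scale-out/scale-in, the core decision can be as simple as the sketch below. The thresholds and instance limits are hypothetical, not taken from any specific product.

```python
def desired_instances(current: int, avg_cpu: float,
                      scale_out_at: float = 0.75, scale_in_at: float = 0.30,
                      min_instances: int = 2, max_instances: int = 20) -> int:
    """Return the target instance count for a dynamic server pool.

    Hypothetical policy: add an instance when average CPU utilization
    exceeds scale_out_at; remove one when it drops below scale_in_at.
    """
    if avg_cpu > scale_out_at:
        return min(current + 1, max_instances)
    if avg_cpu < scale_in_at:
        return max(current - 1, min_instances)
    return current

# 5 instances at 80% CPU -> scale out to 6
print(desired_instances(5, 0.80))  # 6
# 5 instances at 20% CPU -> scale in to 4
print(desired_instances(5, 0.20))  # 4
```

Every time this policy changes the pool, the load balancer's backend set changes with it, which is exactly why static, hand-configured load balancing struggles in dynamic networks.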
With the adoption of microservices architectures, there is more East-West traffic between components within a network, which must be load balanced. Load balancing is no longer confined to managing North-South traffic.
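A minimal sketch of what East-West load balancing means in practice: one service spreading its calls across the replicas of another service. The service names and addresses below are placeholders, and a simple round-robin policy stands in for whatever algorithm a real product would use.

```python
import itertools

class RoundRobinBalancer:
    """Minimal client-side balancer for East-West (service-to-service) traffic."""

    def __init__(self, endpoints):
        # Cycle through replicas so requests are spread evenly.
        self._cycle = itertools.cycle(endpoints)

    def next_endpoint(self):
        return next(self._cycle)

# An "orders" service picking an "inventory" replica for each internal call;
# addresses are illustrative placeholders.
inventory = RoundRobinBalancer(["10.0.1.5:8080", "10.0.1.6:8080", "10.0.1.7:8080"])
print([inventory.next_endpoint() for _ in range(4)])
```

In a microservices deployment this decision happens for every internal hop, which is why East-West traffic now needs load balancing just as much as the North-South traffic at the edge.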
In the past, a large deployment might have contained up to a dozen ADCs. Today, enterprises might use thousands of ADCs across various DevOps teams and in multiple clouds and geographic locations. Large scale demands flexibility.
Virtualization doesn’t overcome inflexibility
When a traditional hardware load balancer is converted into a virtualized software version – the same software simply stripped of its packaged hardware – you fundamentally have the same system. You still have a very resource-intensive, high-volume, high-capacity, single instance that is not designed to scale or operate in an agile way.
Furthermore, software versions of traditional hardware appliances can be bloated with legacy features, such as outdated protocol support, unnecessary rules, access control list (ACL) options, or SSL VPNs. These features might have been added ten or more years ago when they were appropriate, but now they just bog the system down.
There is no substitute for designing for flexibility in the first place.
The best of both worlds
You don’t have to sacrifice rich features for flexibility – they are not mutually exclusive.
In a modern cloud-native ADC like Snapt Nova, you get a rich set of features and the flexibility you require. Nova includes all the capabilities you expect from an ADC (load balancing, web acceleration, WAF, GSLB) in an architecture designed for flexibility: centralized management, platform-agnostic, multi-cloud and multi-location, hyperscale, container-native, service-discovery, automation, security powered by AI and machine learning, observability, and a UI that’s friendly to new users.
Nova supports the types of deployments that DevOps teams, developers and application owners are implementing today with the feature set that makes it a dependable solution.
And, significantly, a software-as-a-service (SaaS) licensing model means you only pay for what you use.
To see Nova’s features and flexibility for yourself, get started with the Nova Community Edition and spin up some ADCs in up to five nodes for free.