Designing for Infrastructure Agility: Lessons for Economic Opportunity

Bethany Hill
August 13, 2020 at 3:00 PM

The pandemic forced rapid scale-ups and showed that design decisions are critical to building an adaptive business.

With the global economy suffering the consequences of COVID-19 and national lockdowns, there has been a strong focus on cutting costs, reducing staff and stripping back to the bare necessities. In the land of application delivery and cloud computing, the reverse has happened: companies rushed to provision more capacity in response to sustained increases in traffic and usage, driven by the move from offline to online for so many activities.

The COVID-19 spike underscores the critical importance of designing your technology infrastructure for agility and unforeseen changes.

The ability to rapidly scale business operations – without out-of-control costs – is something that organizations need to consider when designing their operations in an age of uncertainty. Without flexibility in systems and operations, organizations cannot act quickly to take advantage of new opportunities or react to events outside their control, whether that is a pandemic or a massively popular game release that outstrips expected demand.

As we discussed in the introductory blog, the key lesson here is that underlying systems must be designed to scale up or down quickly, on demand and within very short time-frames. Correspondingly, the system must avoid costly long-term investment in things that cannot be changed easily. How can such a design be achieved? It requires investing in systems that can react quickly to unforeseen events and provide immediate solutions, rather than in pre-emptive solutions that could fail.

The Concept of Just-In-Time Application Delivery

A concept originating in manufacturing but becoming more relevant in application delivery is the Just-in-Time (JIT) methodology. In application delivery, a JIT methodology is enabled by practices such as:

  • Distributed cloud-based systems
  • Multi-cloud architectures
  • Microservices architectures and API-driven applications 
  • DevOps and continuous deployment systems for rapid testing and deployment
  • System-wide observability driven by ubiquitous analytics and real-time updating

Distributed Cloud-Based Systems 

For deployments across different geographies, it is generally easier to deploy in the cloud than to manage physical hardware in multiple locations. Cloud costs can add up over time, however. Running a scale-out enterprise on cloud computing alone can be significantly more expensive than creating a hybrid infrastructure that uses some dedicated hardware (or heavily discounted long-term cloud contracts) to cover baseline usage, with the capability to add incremental or JIT capacity in public clouds. Failover in either direction is possible, creating a more resilient system.
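
To make the hybrid pattern concrete, here is a minimal sketch of the capacity decision in Python. The pool names, capacities and traffic figures are invented for illustration, not taken from any real deployment: traffic fills the cheaper baseline pool first, and only the overflow bursts to just-in-time public cloud capacity.

    # Minimal sketch of hybrid baseline-plus-burst capacity planning.
    # Pool names, capacities and traffic figures are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Pool:
        name: str
        capacity_rps: int  # requests per second the pool can absorb

    baseline = Pool("dedicated-baseline", capacity_rps=10_000)  # fixed hardware or long-term contract
    burst = Pool("public-cloud-burst", capacity_rps=50_000)     # JIT capacity, pay per use

    def split_load(offered_rps: int) -> dict[str, int]:
        """Fill the cheap baseline pool first; overflow bursts to the cloud."""
        to_baseline = min(offered_rps, baseline.capacity_rps)
        return {baseline.name: to_baseline, burst.name: offered_rps - to_baseline}

    # A spike of 14,000 rps: 10,000 stay on dedicated hardware, 4,000 burst to cloud.
    print(split_load(14_000))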

Some larger businesses, like Netflix, have chosen to run entirely in Amazon or other clouds, but they are willing to commit to creating and maintaining complicated tooling to keep their costs low. For example, Netflix runs its own load balancing systems on EC2 instances rather than using far more expensive options like Amazon ELB or ALB. For smaller and even mid-sized businesses, running entirely in the cloud makes perfect sense because of the flexibility and the savings from not having to maintain fixed infrastructure. However, small and medium-sized businesses face real business risk when they scale without design: a load balancing bill can balloon to tens or even hundreds of thousands of dollars a month in outlier events.

Multi-Cloud Deployments for Agile Design

There are many advantages to distributed, multi-cloud strategies, such as the freedom to use the best parts of each cloud, the avoidance of vendor lock-in, and better disaster recovery and resilience. To a lesser degree, multi-cloud can also afford better geographic coverage, allowing companies to move their cloud servers to the locations closest to their users. This is one of the most cost-effective and simple ways to improve performance.

Distributing web servers across multiple clouds and multiple locations helps online businesses of all sizes. But it also creates some new challenges. 

  • Performance. Disparate geographic locations can hurt performance by increasing website response times, especially if traffic is sent to servers at random. To reduce latency and optimize performance, requests must be routed to the server nearest each user, and content must be delivered from that server. This is where geographically aware load balancing and traffic routing rules become important, particularly across different clouds.
  • Redundancy. Multi-site and multi-cloud server infrastructure is inherently redundant. The ability to run workloads on more than one cloud or shift from cloud to cloud provides the equivalent of backup sites and avoids the risks associated with having all servers located in a single data center. But you also need to have robust redundancy mechanisms in place so that traffic is re-routed intelligently and efficiently at the first signs of failure at one site. This is crucial to maintaining a seamless high-quality user experience.
  • Regulatory compliance. Rules for storing content and user data vary from country to country, and complying with local laws and regulations adds a layer of complexity to managing traffic across international infrastructure. For this, you need the capability to write policy rules governing what data is moved, where it lands, and where and how it is stored. In the European Union, for example, most user data must be stored inside the EU; the United States has no such requirement.

These challenges require intelligent, automatic traffic routing decisions, and this is where Global Server Load Balancing (GSLB) comes in. GSLB runs load balancing across geographically distributed servers, using both network-level and user-level intelligence to decide where and how to route traffic. GSLB is the most cost-effective way to deliver content by pushing it closer to users, while also ensuring the performance and high availability of applications.
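
As an illustration of the routing decision at the heart of GSLB, here is a minimal sketch in Python. The site list, coordinates and health flags are invented for illustration; a production GSLB would also weigh load, cost and compliance policy. Each client is sent to the nearest site that is passing health checks, which touches both the performance and the redundancy challenges above.

    # Sketch of the core GSLB decision: route each client to the nearest healthy site.
    # Site names, coordinates and health states are illustrative assumptions.
    from math import radians, sin, cos, asin, sqrt

    SITES = [
        {"name": "us-east", "lat": 39.0, "lon": -77.5, "healthy": True},
        {"name": "eu-west", "lat": 53.3, "lon": -6.3, "healthy": True},
        {"name": "ap-south", "lat": 19.1, "lon": 72.9, "healthy": False},  # failing health checks
    ]

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def pick_site(client_lat, client_lon):
        """Return the nearest site that is passing health checks."""
        healthy = [s for s in SITES if s["healthy"]]
        return min(healthy, key=lambda s: haversine_km(client_lat, client_lon, s["lat"], s["lon"]))

    # A client in Mumbai fails over to eu-west while the nearby ap-south site is down.
    print(pick_site(19.1, 72.9)["name"])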

Microservices Architecture

Microservices architecture has become an essential part of developing agile infrastructure for resilient and responsive software applications. Microservices means that applications are constructed from loosely coupled services, each of which functions as its own small application. This is an extension of the cloud-native methodology for software development and infrastructure deployment. Most startup teams now adopt variants of microservices, building applications from multiple services connected via APIs. For agility, microservices make it easier to scale individual services, or groups of services, up or down to meet particular demands while controlling costs. For example, you can scale up all the services associated with shopping carts and transactions when a highly popular new item drops in Fortnite (or another popular video game), and you can roll that scale-up around the globe to follow the sun, matching capacity as needed.
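
To show what selective scaling looks like, here is a minimal sketch in Python. The service names, per-replica capacities and traffic figures are invented for illustration: only the cart and checkout services are scaled up for the spike, while the rest of the application stays small.

    # Sketch of selective, per-service scaling under a demand spike.
    # Service names, capacities and traffic figures are illustrative assumptions.
    PER_REPLICA_RPS = {"cart": 200, "checkout": 100, "catalog": 500, "reviews": 300}

    def desired_replicas(observed_rps: dict[str, int]) -> dict[str, int]:
        """Size each service independently for its own load, plus one spare replica."""
        return {
            svc: max(1, -(-rps // PER_REPLICA_RPS[svc]) + 1)  # ceiling division + headroom
            for svc, rps in observed_rps.items()
        }

    # An in-game item drop spikes cart and checkout traffic; catalog and reviews barely move.
    spike = {"cart": 4_000, "checkout": 2_500, "catalog": 900, "reviews": 150}
    print(desired_replicas(spike))
    # {'cart': 21, 'checkout': 26, 'catalog': 3, 'reviews': 2}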

That said, applications decomposed into microservices generate more service-to-service communication, which increases overall traffic inside the data center. Legacy hardware and monolithic load balancers and Application Delivery Controllers (ADCs) were not designed to cope with microservices, so they cannot quickly scale up or down to adapt to these demands. The surge in data center traffic from the rapid adoption of microservices has driven growth in service meshes and internal load balancing tools like Envoy. It is also driving a new approach to ADCs: ones optimized for “East-West” load balancing within the data center.
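
To make the “East-West” idea concrete, here is a minimal sketch in Python of service-to-service load balancing inside the data center. The service names and instance addresses are invented; a proxy such as Envoy maintains an endpoint table like this per service and balances internal calls across it.

    # Sketch of East-West (service-to-service) load balancing.
    # Service names and instance addresses are illustrative assumptions.
    from itertools import cycle

    ENDPOINTS = {
        "cart": cycle(["10.0.1.10:8080", "10.0.1.11:8080", "10.0.1.12:8080"]),
        "checkout": cycle(["10.0.2.10:8080", "10.0.2.11:8080"]),
    }

    def upstream_for(service: str) -> str:
        """Round-robin across the instances of an internal service."""
        return next(ENDPOINTS[service])

    # Calls from cart to checkout are spread across checkout's instances.
    for _ in range(3):
        print("cart -> checkout via", upstream_for("checkout"))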

Rapid Testing and Deployment

Part of the shift to microservices and cloud-native is designing for rapid testing and deployment. Many large applications built on these loosely coupled designs deploy far more frequently than traditional monolithic applications. Leading-edge cloud applications like Uber may push code changes multiple times per day, and according to the 2019 DevSecOps survey by security tool company Sonatype, 47% of companies push new code multiple times per week. This means testing and deployment architectures must be included in agile design discussions. For example, a company may want to test all of its new applications in a less costly cloud but run them in production in a cloud that is more reliable or has better geographic coverage. To do this, the company must run the same infrastructure in both clouds, using containers and multi-cloud load balancers and ADCs that operate in a cloud-agnostic fashion. It is critical, then, to design for rapid testing and deployment without sacrificing testing fidelity.
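
One way to preserve testing fidelity while testing in a cheaper cloud is to keep the deployment spec cloud-agnostic and vary only the target per stage. Here is a minimal sketch in Python; the cloud names, image reference and deploy() hook are invented placeholders rather than any real provider API.

    # Sketch: the identical container spec is deployed to a cheap cloud for
    # testing and a more reliable cloud for production. Cloud names and the
    # deploy() hook are illustrative placeholders.
    APP_SPEC = {
        "image": "registry.example.com/shop:2.4.1",  # the same artifact in every stage
        "replicas": 3,
        "env": {"FEATURE_FLAGS": "checkout_v2"},
    }

    STAGE_TARGETS = {
        "test": "cheap-cloud-eu",        # lower cost, same container runtime
        "production": "reliable-cloud",  # better SLA and geographic coverage
    }

    def deploy(stage: str) -> None:
        target = STAGE_TARGETS[stage]
        # A real pipeline would call the orchestrator for `target` here;
        # the point is that APP_SPEC itself never changes between stages.
        print(f"deploying {APP_SPEC['image']} x{APP_SPEC['replicas']} to {target}")

    deploy("test")
    deploy("production")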

System-Wide Observability Is Crucial for Agility

The flip side of the agility to scale infrastructure up or down quickly is scaling observability to match agile application designs. Observability is the critical intelligence that lets DevOps teams understand what is really happening with their applications, regardless of location. Since agile design requires that most applications have delivery planes that gracefully handle user state (specific data and status), being able to observe the performance of those delivery planes is crucial to maintaining top-notch user experiences. Similarly, observing the back-end processes and functions in the same pane of glass across clouds and hybrid infrastructure is crucial for understanding the context of performance and for visualizing complex topologies. This presents some challenges for agile design:

  • Multi-Cloud Observability. Creating a single source of observations that allows apples-to-apples comparison across cloud infrastructures is challenging at best. This capability is best delivered at the ADC layer, as part of the cross-cloud overlay inherent in multi-cloud services.
  • Multi-Cloud Context. Related to observability is context. To create intelligent rules for cross-cloud and multi-cloud deployment approaches, you need smart context that can only be built on top of clear and reliable observability. 
  • Data Plane / Control Plane Separation. By deploying ADCs that separate the data plane from the control plane, observability scales more easily, with minimal performance impact and reliable capture of user state. Lightweight observability probes sit in the data plane, while orchestration and management run through the centralized ADC orchestration in the control plane. This lets observability scale in parallel with capacity, in real time; a minimal sketch of the split follows this list.
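
Here is the promised sketch of the data plane / control plane split, in Python. The metric names and the in-process "transport" are invented for illustration; in a real deployment the probes would report over the network. Lightweight probes count cheaply in the data plane, and the control plane aggregates them into one view across clouds.

    # Sketch of lightweight data-plane probes feeding a central control plane.
    # Metric names and the in-process transport are illustrative assumptions.
    from collections import defaultdict

    class ControlPlane:
        """Central aggregation point: one view across all probes and clouds."""
        def __init__(self):
            self.totals = defaultdict(lambda: {"requests": 0, "errors": 0})

        def ingest(self, probe_id, requests, errors):
            self.totals[probe_id]["requests"] += requests
            self.totals[probe_id]["errors"] += errors

        def error_rate(self):
            reqs = sum(t["requests"] for t in self.totals.values())
            errs = sum(t["errors"] for t in self.totals.values())
            return errs / reqs if reqs else 0.0

    class DataPlaneProbe:
        """Cheap counters living next to the traffic they observe."""
        def __init__(self, probe_id, control_plane):
            self.probe_id, self.cp = probe_id, control_plane
            self.requests = self.errors = 0

        def observe(self, status):
            self.requests += 1
            self.errors += int(status >= 500)

        def flush(self):
            """Ship the counters upstream and reset."""
            self.cp.ingest(self.probe_id, self.requests, self.errors)
            self.requests = self.errors = 0

    cp = ControlPlane()
    probe = DataPlaneProbe("aws-us-east/adc-1", cp)
    for status in (200, 200, 503, 200):
        probe.observe(status)
    probe.flush()
    print(f"global error rate: {cp.error_rate():.0%}")  # 25%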

Conclusion: Designing for Agility Is A Paradigm Shift

Truth be told, the term “agile” is now rather old in technology. Designing for agility, however, is a bit like nuclear fusion: everyone wants it, but few achieve it. Fortunately, the infrastructure pieces and software capabilities that enable true “design for agility” are finally becoming commonplace and mainstream, driven by multiple trends: cloud adoption, demand for real-time observability, widespread use of containers, and insistence on multi-cloud deployments to reduce lock-in risk. It has never been easier to design for agility, and it will only become more accessible as companies align with the basic goal of agile design: giving end users the best experience at the most affordable price point, both for the company and for the users themselves. For COOs, DevOps teams, developers and customers, the end result will be better applications, better user experiences, lower costs - and an enhanced ability to respond quickly to unforeseen events, like a pandemic or a runaway game launch, that only a few years ago would have caused major outages and problems. Designing for agility means handling those problems well before they become problems, in the design of the applications and of the architecture behind application delivery.

For DevOps, developers and infrastructure IT at companies embracing digital transformation and migrating workloads from legacy load balancers to a more modern app delivery fabric, Snapt Nova is cloud-native, hyperscale and intelligent. Let us know what you think of Snapt Nova, our centrally managed, container-friendly ADC platform providing Layer 7 load balancing, GSLB, WAF and web acceleration. Try out Snapt Nova's community edition, free for up to 5 nodes.
