
High-Capacity, Scalable Kubernetes (K8S) Load Balancing on DigitalOcean

Bethany Hill
December 22, 2020 at 3:00 PM

Many of our customers are exploring the use of Kubernetes. Some are looking to use it as part of a multi-cloud strategy to reduce business risk. Others are looking at dividing monolithic apps into microservices and leveraging Kubernetes as a management and orchestration plane for all the separate services created from the decoupled application.

A number of them have come to us in recent weeks asking how to deploy production-grade, high-capacity load balancing on Kubernetes with DigitalOcean. They are looking at DigitalOcean’s Managed Kubernetes service because managing Kubernetes is challenging. The problem they have encountered is that DigitalOcean Managed Kubernetes strongly encourages them to spin up a DigitalOcean load balancer. While the DigitalOcean load balancer is a good start, it has a few issues:

  • One service per load balancer – this is not economically viable
  • Only 10k RPS per load balancer – this capacity is too low for many high-usage apps
  • Missing security features like WAF – a WAF is commonly offered as a feature of many load balancers like HAProxy or Snapt’s own products
  • No scaling capability in the DigitalOcean load balancers – the only option is to use a load balancer to its full capacity and then spin up another one, splitting the service traffic
  • Only a basic rules and policy engine for the DigitalOcean load balancer – for more complex applications, SREs and DevOps teams will likely want more granular control on ingress

There is a nifty way to deploy Snapt Nova ADCs as load balancers in front of DigitalOcean managed K8S clusters that results in better performance, lower cost, and higher capacity. This step-by-step guide (which we mirrored from our support pages) explains how to use Nova ADCs instead of DigitalOcean load balancers on K8S clusters within DO.

K8S Configuration and NodePort

For this demonstration let’s use the Kubernetes Guestbook application. (You can use any service you prefer, though.) Note that below we are deploying a two-node cluster:
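As a rough sketch of that step (assuming doctl is installed and authenticated; the cluster name, region and droplet size here are placeholders to change for your environment):

❯ doctl kubernetes cluster create nova-demo \
    --region nyc1 \
    --size s-2vcpu-4gb \
    --count 2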

On that Kubernetes cluster we deployed the Guestbook app. The Guestbook app deploys a service called "frontend". If we describe that service we can see the NodePort that has been allocated (note: we use a NodePort service instead of a LoadBalancer for this demo).
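If your copy of the Guestbook manifests creates the frontend service as a ClusterIP or LoadBalancer, you can switch it to NodePort with a one-line patch (a sketch using standard kubectl; the service name matches the Guestbook example):

❯ kubectl patch svc frontend -p '{"spec": {"type": "NodePort"}}'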

❯ kubectl describe svc frontend
Name:                     frontend
Namespace:                default
Labels:                   app=guestbook
                          tier=frontend
Annotations:              <none>
Selector:                 app=guestbook,tier=frontend
Type:                     NodePort
IP:                       10.245.236.95
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31971/TCP
Endpoints:                10.244.0.155:80,10.244.0.175:80,10.244.0.45:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Take note that this service is exposed on NodePort 31971 on every node in the cluster - that's the port we need from this step.
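If you want just that number without the full describe output, a one-liner like the following works (standard kubectl, assuming the same frontend service):

❯ kubectl get svc frontend -o jsonpath='{.spec.ports[0].nodePort}'
31971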


Node IPs

Now that we have the NodePort (31971 in our case) we need to know the IP addresses to send traffic to. These are the actual IPs of the droplets in our Kubernetes cluster, not the Endpoints within Kubernetes. Go to Droplets and you can see them, as shown below:
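If you prefer the command line to the Droplets page, the node external IPs can also be read with kubectl (the EXTERNAL-IP column), or with doctl if you have it set up - a sketch, noting that the doctl column names may differ slightly by version:

❯ kubectl get nodes -o wide
❯ doctl compute droplet list --format Name,PublicIPv4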

We see the IPs 159.65.104.35 and 159.65.69.31. Ensure your firewall setup at DO allows inbound traffic on the NodePort, then connect to your NodePort from above (31971 for us) on those IPs to verify:

❯ curl 159.65.69.31:31971
<html ng-app="redis">
  <head>
    <title>Guestbook</title>
    <link rel="stylesheet" ref="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.12/angular.min.js"></script>
    <script src="controllers.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/0.13.0/ui-bootstrap-tpls.js"></script>
  </head>
  <body ng-controller="RedisCtrl">
    <div style="width: 50%; margin-left: 20px">
      <h2>Guestbook</h2>
      <form>
        <fieldset>
          <input ng-model="msg" placeholder="Messages" class="form-control" type="text" name="input">
          <br>
          <button type="button" class="btn btn-primary" ng-click="controller.onRedis()">Submit</button>
        </fieldset>
      </form>
      <div>
        <div ng-repeat="msg in messages track by $index">
          {{ msg }}
        </div>
      </div>
    </div>
  </body>
</html>

That means we can load balance to this service from Nova Nodes in DigitalOcean.

Deploying Nova

You have multiple options for how to deploy Nova into DigitalOcean. We recommend adding DigitalOcean as a Connected Cloud and deploying directly into it: either a fixed number of droplets, or an AutoScaler, which will automatically provision however many droplets are needed.

If for some reason you need a custom install, you can also run Nova on any stock Ubuntu system, so just launch your own Ubuntu droplets.

You can follow the cloud guide here or the manual install guide here.

You need at least one Nova droplet deployed into the environment to eventually load the ADC onto. These are standard droplets outside of K8S.


Configuring Nova Backend

There are two things to configure on Nova - a backend, and an ADC.

For the Backend you have options as well. The backend defines where we send the traffic - in this case, your K8S node IPs and the NodePort.

You can use either a Simple Backend where you specify the two IPs and Ports like so (remember to use your IPs and Ports discovered above):

159.65.104.35:31971
159.65.69.31:31971
The simple backend looks like this on Nova:

 

Or, you can use DigitalOcean's tags to do it: add a Cloud API backend, enter port 31971 (in our case), and choose the tag "k8s" if you only have one managed Kubernetes installation.

The Cloud API backend looks like this on Nova:


Configuring Nova ADC

Now that we have the backend, the ADC part is easy! Add an ADC type, typically HTTP or SSL Termination, and set it to listen on port 80 or port 443 (and so on).

Under the Backends section you set it to send traffic to the Backend you just added, your Kubernetes service. See below: 

Then configure any other options you want, and save. At this point you will attach it to your new Droplet(s) in DigitalOcean and you'll be online!

 

Please contact us if you need any assistance with the deployment.


Notes

  1. You can define a static NodePort in your Kubernetes services so this behavior is more predictable (see the sketch after these notes).
  2. You can also manually publish any local ingress services on Kubernetes and use this functionality with them.
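For note 1, here is a minimal sketch of a Service manifest that pins the NodePort; the value 31971 simply mirrors the port used in this guide, and any free port in Kubernetes' default 30000-32767 NodePort range would do:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: NodePort
  selector:
    app: guestbook
    tier: frontend
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31971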
