NGINXaaS for Azure: Load Balancing

NGINXaaS for Azure dramatically simplifies deploying NGINX Plus instances into your Azure environment. Gone are the days of managing your own compute or container instances; now you can deploy NGINX Plus right from the Azure console or CLI and upload your nginx.conf directly. In this article, we’ll walk through one of the most common use cases: load balancing traffic to a set of backend resources.

NGINX for Azure Overview

Getting Started

If you haven’t read Jeff Giroux’s excellent article on getting started with NGINXaaS for Azure (which I highly recommend), here’s a quick run through the process of getting a new instance deployed and running. I won’t go through every step in detail; feel free to check out Jeff’s article or the official documentation.

Searching for “NGINXaaS for Azure” in the Azure Marketplace brings up the NGINXaaS offering:

NGINX for Azure Marketplace Listing

Once we subscribe to this offer, there’s not much to configure. We can create a new virtual network or select an existing one, decide whether we want to assign a public IP, and assign any tags. Keep in mind that NGINX Plus doesn't actually live inside this subnet. NGINXaaS uses new Azure networking capabilities to keep end-user traffic private. Each NGINX Plus instance passes traffic to downstream services via a network interface card (NIC) that exists inside your subscription. These NICs are injected into a delegated subnet, and a network security group controls traffic to your NGINX Plus instances.
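
If you’d rather prepare the network from the Azure CLI, the rough shape is sketched below. The resource names and address ranges are placeholders, and the delegation string is worth confirming against the current NGINXaaS docs.

# Create a virtual network and a subnet for the NGINXaaS deployment
# (names and address prefixes are illustrative placeholders)
az network vnet create \
  --resource-group my-rg \
  --name nginx-vnet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name nginx-subnet \
  --subnet-prefixes 10.0.1.0/24

# Delegate the subnet so NGINXaaS can inject its NICs into it
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name nginx-vnet \
  --name nginx-subnet \
  --delegations NGINX.NGINXPLUS/nginxDeployments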

NGINX Deployment Basics
NGINX Deployment Networking

We review, click “Create”, and with that our instance is available! A Network Security Group was created as part of the deployment, so we just need to allow ports 80 and 443, and we’re ready to start passing traffic. Browsing to the public IP of the NGINX instance should bring up a Hello World page.
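
If you prefer the CLI to the portal for this step, a single rule like the one below does the job; the resource group and NSG names are placeholders, so substitute whatever the deployment created for you.

# Allow inbound HTTP and HTTPS traffic to the NGINX Plus instances
# (resource group and NSG names are illustrative placeholders)
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name nginx-nsg \
  --name allow-http-https \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80 443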

NGINX for Azure Hello World Page

Configuration

Configuration for your NGINX instance is available from the left-hand menu. One of the big benefits of using NGINXaaS for Azure is that it takes the same nginx.conf file format that you’re already used to. You can paste your config right into a new file here like I’ve done, or you can upload a gzipped archive with your full configuration set. Again, more details are available in the docs.
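
If you go the archive route, a plain gzipped tarball of your configuration files is what gets uploaded; the file and directory names below are just examples, and the expected layout is described in the docs.

# Package nginx.conf and any included files into a gzipped archive
tar -czf nginx-config.tar.gz nginx.conf conf.d/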

NGINX for Azure Configuration Editor

If you’ve ever front-ended web or app servers with NGINX before, this part should look pretty familiar; we’re configuring an upstream with a few backend servers (running as Azure Container Instances) and a proxy_pass pointing to that upstream. Click “Submit”, and there we have it: a working NGINX Plus load balancer. No TLS yet; I’ll cover that in the next article in this series, which covers Azure Key Vault.
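
For reference, the relevant part of such a configuration looks something like the sketch below. The upstream name and backend addresses are placeholders standing in for the Azure Container Instances, not the exact values from the screenshot.

http {
    # Backend pool: the Azure Container Instances we're load balancing to
    upstream backend {
        server 10.0.2.4:80;
        server 10.0.2.5:80;
        server 10.0.2.6:80;
    }

    server {
        listen 80;

        location / {
            # Forward all requests to the upstream pool
            proxy_pass http://backend;
        }
    }
}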

NGINX for Azure Diagram

We can also configure additional NGINX features, like health checks, basic caching, and rate limiting, by adding the appropriate configuration (a sketch follows the screenshot below):

NGINX for Azure Advanced Configuration
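
As a rough sketch of what that might look like, the http context of the configuration could gain directives along these lines; the zone names, sizes, and rate are arbitrary examples rather than recommendations.

http {
    # Cache storage on disk; the path may need to match what NGINXaaS allows
    proxy_cache_path /var/cache/nginx keys_zone=backend_cache:10m;

    # Rate limiting: track clients by IP, allow 10 requests per second each
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

    upstream backend {
        server 10.0.2.4:80;
        server 10.0.2.5:80;
    }

    server {
        listen 80;

        location / {
            proxy_cache backend_cache;
            limit_req zone=per_ip burst=20;
            proxy_pass http://backend;

            # NGINX Plus active health checks against the upstream servers
            health_check;
        }
    }
}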

Note that although NGINXaaS for Azure scales instances as necessary, they aren’t currently configured as a cluster, so features like caching and rate limiting apply per instance rather than being shared across the deployment. More details are available in the official docs.

Summary

With just a few clicks, and no VMs or containers to manage (not counting the backends), we have a full NGINX Plus instance up and running and passing traffic. Start to finish, the process only took a few minutes. If you want to see how we can add more functionality to this instance, stay tuned for the next article in this series, where we’ll show you how to use Azure Key Vault to manage TLS certificates with NGINXaaS for Azure.
