Introduction

NGINX started out as a high-performance web server and quickly expanded, adding more functionality in an integrated manner.

Put simply, NGINX is an open source web server, reverse proxy server, cache server, load balancer, media server and much more.

The enterprise version of NGINX (NGINX Plus) has exclusive production-ready features on top of the open source offering, including status monitoring, active health checks, a configuration API, and a live dashboard for metrics.

Think of this article as a quick introduction to each product but more importantly, as our placeholder for NGINX articles on DevCentral.

If you're interested in NGINX, you can use this article as the place to find DevCentral articles broken down by functionality in the near future.

By the way, this article also has links to a bunch of interesting articles published on AskF5, as well as some introductory NGINX videos.

NGINX as a Webserver

The most basic use case of NGINX.

It can handle hundreds of thousands of requests simultaneously by using an event-driven architecture (as opposed to a process-driven one) to handle multiple requests within one thread.
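As a minimal sketch of NGINX in its web server role, a basic nginx.conf might look like the following (server name and file paths are illustrative):

```nginx
# One worker process per CPU core; each worker's event loop
# multiplexes many simultaneous connections in a single thread.
worker_processes auto;

events {
    worker_connections 1024;   # max concurrent connections per worker
}

http {
    server {
        listen 80;
        server_name example.com;

        location / {
            root /var/www/html;   # serve static files from this directory
            index index.html;
        }
    }
}
```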


NGINX as a Reverse Proxy and Load Balancer

Both NGINX and NGINX Plus provide load-balancing functionality and work as a reverse proxy by sitting in front of back-end servers.


Similar to F5 BIG-IP, traffic comes in and NGINX load balances the requests across different back-end servers.

The NGINX Plus version can even do session persistence and health-check monitoring.
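A sketch of a reverse-proxy/load-balancer configuration (upstream host names are illustrative; the commented-out directives are NGINX Plus exclusives):

```nginx
upstream backend {
    least_conn;                     # send each request to the server with fewest active connections
    server app1.example.com:8080;
    server app2.example.com:8080;

    # NGINX Plus only: cookie-based session persistence
    # sticky cookie srv_id expires=1h;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;              # forward to the upstream group
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # NGINX Plus only: active health checks against the upstream servers
        # health_check interval=5s fails=3 passes=2;
    }
}
```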

Published Content:

Server monitoring - some differences between BIG-IP and NGINX

NGINX as Caching Server

NGINX content caching improves the efficiency, availability, and capacity of back-end servers.

When caching is on, NGINX checks whether the content exists in its cache; if it does, the content is served to the client without contacting the back-end server.

Otherwise, NGINX reaches out to the back-end server to retrieve the content.


A content cache sits between a client and back-end server and saves copies of pre-defined cacheable content.

Caching improves performance because, strategically, the content cache is placed closer to the client.

It also has the benefit of offloading requests from back-end servers.
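The behaviour above can be sketched in configuration like this (cache zone name, paths, and back-end host are illustrative):

```nginx
# Define a cache on disk: 10 MB of keys in shared memory, up to 1 GB of content,
# entries evicted after 60 minutes without being requested.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache my_cache;               # serve from the cache when possible
        proxy_cache_valid 200 302 10m;      # cache successful responses for 10 minutes
        proxy_cache_valid 404 1m;           # cache "not found" briefly
        add_header X-Cache-Status $upstream_cache_status;  # HIT, MISS, EXPIRED, ...
        proxy_pass http://backend.example.com;  # on a miss, fetch from the back end
    }
}
```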

NGINX Controller

NGINX Controller is a piece of software that centralises and simplifies the configuration, deployment, and monitoring of NGINX Plus instances acting as load balancers, API gateways, and even web servers.


By the way, NGINX Controller 3.0 has just been released.

Published Content:

Introducing NGINX Controller 3.0

Setting up NGINX Controller

Use of NGINX Controller to Authenticate API Calls

Publishing an API using NGINX Controller

NGINX as Kubernetes Ingress Controller

NGINX Kubernetes Ingress Controller is software that manages all Kubernetes ingress resources within a Kubernetes cluster.

It monitors and retrieves all ingress resources running in the cluster and configures the corresponding L7 proxy accordingly.

There are two versions of the NGINX Ingress Controller: one is maintained by the community and the other by NGINX itself.
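For reference, a minimal Ingress resource that an NGINX Ingress Controller would pick up and turn into L7 proxy configuration might look like this (host, Service name, and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  ingressClassName: nginx          # hand this resource to the NGINX controller
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-svc       # cluster Service fronting the application pods
            port:
              number: 80
```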


Published Content:

Lightboard Lesson: NGINX Kubernetes Ingress Controller Overview

NGINX as API Gateway

An API gateway abstracts application service interactions from the client by providing a single entry point into the system.

A client may issue a simple request to the application, for example, asking for information about a specific product.

In the background, the API gateway may contact several different services to bundle up the requested information and fulfil the client's request.

The API management module for NGINX Controller can perform request routing and composition, apply rate limiting to prevent overloading, offload TLS traffic to improve performance, and provide authentication as well as real-time monitoring and alerting.
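Two of those capabilities, rate limiting and TLS offload, can be sketched directly in NGINX configuration (zone name, certificate paths, and back-end address are illustrative):

```nginx
# Track clients by IP; allow roughly 10 requests per second each.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

upstream api_backend {
    server 10.0.0.10:8080;       # back-end API service
}

server {
    listen 443 ssl;                               # terminate TLS at the gateway
    ssl_certificate     /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;

    location /api/ {
        limit_req zone=api_limit burst=20;        # queue short bursts, reject excess
        proxy_pass http://api_backend;            # route to the API services over plain HTTP
    }
}
```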


NGINX as Application Server (Unit)

NGINX Unit provides all sorts of functionality to integrate applications and even to migrate and split services out of older monolithic applications.

A key feature of Unit is that processes don't need to be reloaded once they're reconfigured. Unit only changes the part of memory associated with the changes we made.

In later versions, NGINX Unit can also serve as an intermediate node within a web stack, accepting all kinds of traffic, maintaining dynamic configuration, and acting as a reverse proxy for back-end servers.
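Unit is configured entirely at runtime through its REST control API, which is how it avoids process reloads. A sketch of a configuration object for a Python application (application name, paths, and module are illustrative) looks like this:

```json
{
  "listeners": {
    "*:8080": {
      "pass": "applications/my_app"
    }
  },
  "applications": {
    "my_app": {
      "type": "python",
      "path": "/srv/my_app",
      "module": "wsgi"
    }
  }
}
```

This would be applied with a PUT request to Unit's control socket, e.g. `curl -X PUT -d @config.json --unix-socket /var/run/control.unit.sock http://localhost/config/` (the socket path varies by installation), and the change takes effect without restarting the running processes.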


NGINX as WAF

NGINX uses the ModSecurity module to protect applications from L7 attacks.
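Enabling ModSecurity via its NGINX connector module can be sketched as follows (module path, rules file, and back-end host are illustrative; rules typically come from a set such as the OWASP Core Rule Set):

```nginx
# Load the ModSecurity-nginx connector (main context, outside http {}).
load_module modules/ngx_http_modsecurity_module.so;

http {
    server {
        listen 80;

        location / {
            modsecurity on;                                      # enable the WAF engine
            modsecurity_rules_file /etc/nginx/modsec/main.conf;  # rule set to apply
            proxy_pass http://backend.example.com;               # inspect, then proxy
        }
    }
}
```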


NGINX as Sidecar Proxy Container

We can also use NGINX as a sidecar proxy container in a service mesh deployment (e.g. using Istio with NGINX as the sidecar proxy container).

A service mesh is a configurable, fast infrastructure layer for network-based interprocess communication between services, typically via APIs.

NGINX can be configured as a Sidecar proxy to handle inter-service communication, monitoring and security-related features.

This is a way of ensuring developers handle only development, support, and maintenance of the application, while platform engineers (the ops team) handle the service mesh.


Last update: 24-Feb-2020