Secure and Harden Forward Proxies in NGINX Plus
In 2025, in NGINX Plus R36, we introduced support for forward proxying through the HTTP CONNECT method. Customers can now set up egress traffic flows and use multiple NGINX features to route, secure, and limit users, ports, hosts, or IPs.
See the reference documentation at http://nginx.org/en/docs/http/ngx_http_tunnel_module.html
Full documentation is available at https://docs.nginx.com/nginx/admin-guide/web-server/http-connect-proxy/
This blog post provides additional information, including useful tips and tricks and a hardening guide.
Securing and Hardening
When you create a simple config with an HTTP forward proxy, NGINX allows literally any traffic through the tunnel. In the minimal configuration, we create a server block that listens on port 80, accepts HTTP CONNECT requests from anywhere, and establishes connections to any port on any destination server. Requests other than CONNECT receive a "405 Method Not Allowed" response.
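A minimal configuration of this kind might look like the sketch below. It is an illustration only: the tunnel_pass directive is taken from the description above, and the resolver address is a placeholder.

```nginx
# Minimal, wide-open forward proxy sketch -- do NOT use in production.
# Assumes the tunnel_pass directive from NGINX Plus R36 as described above.
http {
    # A resolver is needed so CONNECT targets can be resolved by name
    resolver 10.0.0.53;  # placeholder: your internal DNS server

    server {
        listen 80;       # accepts CONNECT requests from anywhere
        tunnel_pass;     # tunnels to any host and port the client requests
    }
}
```

The rest of this post is about locking this wide-open setup down.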
For production use, this configuration should be properly secured. Regular security features include the use of TLS and authentication.
For TLS, the configuration is no different from a regular NGINX configuration. Use a listen directive with "ssl" parameter, add certificates and keys through ssl_certificate and ssl_certificate_key directives, and use any other optional TLS settings. See http://nginx.org/en/docs/http/configuring_https_servers.html for more details.
For authentication, you can directly use TLS client certificates. Same as above, refer to the regular NGINX documentation.
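Put together, a sketch of TLS plus client-certificate authentication for the proxy server might look like the following. The certificate paths are placeholders, and the tunnel_pass directive is assumed to behave as described above.

```nginx
server {
    listen 443 ssl;                                  # TLS for the proxy itself
    ssl_certificate     /etc/nginx/certs/proxy.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/certs/proxy.key;  # placeholder path

    # Client-certificate authentication
    ssl_client_certificate /etc/nginx/certs/ca.crt;  # CA that issued the client certs
    ssl_verify_client on;                            # reject clients without a valid cert

    tunnel_pass;  # forward proxy directive (NGINX Plus R36+, see above)
}
```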
As of now, HTTP Basic authentication for forward proxy is not supported by ngx_http_auth_basic_module. You might create your own authentication flow through scripting with NJS, but the details fall out of the scope of this article.
There are also use cases and security patterns that are specific to forward proxies. Let's discuss them in more detail.
Use a known and controlled DNS server
NGINX requires a DNS resolver in order to connect to backend servers by name; it does not use the system resolver by default. If no DNS server is configured, the client will see a "502 Bad Gateway" error, and the missing resolver will be recorded in the NGINX error log.
It is tempting to use a well-known resolver such as 1.1.1.1 or 8.8.8.8; however, we don't recommend this approach for any production use. Public DNS servers can become an issue for secure internal traffic: they are outside your control and are unable to resolve internal corporate DNS names.
Always use a controlled, properly configured and secure DNS server that is able to resolve the required server names.
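For example, pointing NGINX at an internal DNS server might look like the snippet below; the server address is a placeholder for your own infrastructure.

```nginx
# Use a controlled internal DNS server.
# valid= caps how long NGINX caches answers regardless of record TTLs.
resolver 10.0.0.53 valid=30s;  # 10.0.0.53 is a placeholder address
resolver_timeout 5s;           # fail fast if the DNS server is unreachable
```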
Limit the client side (users connecting to the proxy)
In our first example, we use the well-known and simple "allow" and "deny" directives. They work very well when we know the client network lists:
allow 192.168.1.0/24;
allow 10.1.0.0/16;
allow 2001:0db8::/32;
deny all;
See nginx.org/r/allow and nginx.org/r/deny for the reference documentation.
If a client hits the "deny" directive, it will see a "403 Forbidden" error. The same code will be written to the access log, and the error log will record an "access forbidden by rule" error such as the following:
2025/11/25 20:29:42 [error] 16927#16927: *25 access forbidden by rule, client: 10.0.2.2, server: , request: "CONNECT nginx.org:80 HTTP/1.1", host: "nginx.org:80"
In our second example, we use the "geo" directive together with the new "tunnel_allow_upstream" directive.
This directive can look tricky, but it is actually very simple in design: if the parameters of the directive resolve to "0" or an empty string (""), the proxy tunnel is disabled.
...
geo $geo {
    default        0;
    192.168.1.0/24 1;
    10.1.0.0/16    1;
    2001:0db8::/32 1;
}

server {
    tunnel_allow_upstream $geo;
    ...
If a client hits this rule, it will receive a 502 error. The same code appears in the access log; there is no special error log record for this case.
NOTE: Don’t confuse "tunnel_allow_upstream" and "proxy_allow_upstream". For forward proxying (tunnel) you must use "tunnel_allow_upstream" in the same context as "tunnel_pass".
Alternatively, you can have the proxy server listen on a special network interface or a single IP address only. Then you can lock that address down through networking equipment and firewalls. For example, instead of "listen 80;" (which listens on all addresses of the NGINX host), use "listen 10.2.3.4:80;", where 10.2.3.4 is a specific address with proper network-level security.
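Combining both ideas, a sketch might bind the proxy to one address and still apply address-based rules; the addresses below are placeholders.

```nginx
server {
    listen 10.2.3.4:80;   # bind to one interface address only, not all addresses

    allow 192.168.1.0/24; # known client network (placeholder)
    deny  all;            # everyone else gets 403 Forbidden

    tunnel_pass;          # forward proxy directive (see above)
}
```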
Limit the upstreams or backends (target servers)
To limit backends, you can use the same new directive, tunnel_allow_upstream.
We need some logic that resolves the directive parameters (variables) to "0" for disallowed destinations. Let's look at a few examples:
...
map $host $allowed_host {
    hostnames;      # Special map parameter for hostname matching
    default   0;
    nginx.org 1;
    f5.com    1;
}

server {
    tunnel_allow_upstream $allowed_host;
...
In the next example, we limit the allowed backend servers by their network addresses. Specifically, we lock the proxy down to disallow the local loopback.
geo $upstream_last_addr $allowed_backends {
    default     1;
    127.0.0.0/8 0;
    ::1         0;
}

server {
    tunnel_allow_upstream $allowed_backends;
...
You can combine the examples into one tunnel_allow_upstream directive. Simply pass both variables to it:
tunnel_allow_upstream $allowed_host $allowed_backends;
See https://nginx.org/en/docs/http/ngx_http_map_module.html and https://nginx.org/en/docs/http/ngx_http_geo_module.html for the reference documentation.
Special attention to localhost and 127.0.0.1
By default, NGINX connects to any server requested by the client, and 127.0.0.1 or localhost is no exception. For some use cases, you might deliberately use the proxy to expose internal servers and applications this way. However, in many environments, access to the local interface must be disabled.
Make sure you use the "geo" directive correctly: either set the default to return "0", or set the networks 127.0.0.0/8 and "::1" to return "0".
NOTE: Don't forget about IPv6.
See a "geo" example below that allows all networks but explicitly disables the local network interface:
...
geo $allowed_net {
    127.0.0.0/8 0;   # the whole loopback network, not just 127.0.0.1
    ::1         0;
    default     1;
}

server {
    tunnel_allow_upstream $allowed_net;
...
When securing a local interface, note that it can be accessed through multiple names or addresses:
- 127.0.0.1
- Any address in the 127.0.0.0/8 network, such as 127.0.0.2
- The IPv6 address ::1
- The localhost name
- The localhost.localdomain domain name
- Other DNS names
You need to know your network and system setup well.
See https://nginx.org/en/docs/http/ngx_http_geo_module.html for the reference documentation.
Limiting the request methods, URLs, and content inside the tunnel
You cannot set any rules on the traffic inside the proxy tunnel.
Once the tunnel is established, the content inside is not decrypted or otherwise processed by NGINX. If you need granular control over the content, use a reverse proxy traffic flow instead.
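As an illustration of the reverse-proxy alternative, the sketch below filters request methods and paths before proxying. The upstream name and certificate paths are placeholders.

```nginx
# Reverse proxy flow: NGINX terminates each request, so it can inspect
# methods, URLs, and headers before passing traffic to the backend.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/site.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/certs/site.key;  # placeholder path

    location /api/ {
        limit_except GET POST { deny all; }  # restrict request methods
        proxy_pass http://internal-backend;  # placeholder upstream
    }
}
```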
Common configuration mistakes
Testing with curl without --proxytunnel
There are many tools available to initiate HTTP requests with a proxy.
If you prefer curl, use the "-p -x" parameters.
For example, this command will show the content together with useful details on connection parameters, headers, and protocols:
curl -v -p -x http://your-nginx-proxy-hostname:proxy-port http://nginx.org
Omitting the "-p" ("--proxytunnel") parameter will result in a "405 Method Not Allowed" response in curl, a "405" status in the NGINX access log, and no errors in the NGINX error log.
You can use environment variables such as "HTTPS_PROXY", "http_proxy", or "ALL_PROXY" to establish proxy connections. However, different HTTP clients read and evaluate these variables in their own way. Always refer to the documentation for the tools you use.
For curl, see this link: https://everything.curl.dev/usingcurl/proxies/env.html
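For illustration, setting the variable for a single curl invocation might look like the command below; the proxy hostname and port are placeholders. For an https:// URL, curl then issues a CONNECT request to the proxy automatically.

```shell
# Proxy applies to this command only; curl reads https_proxy for https:// URLs
https_proxy=http://your-nginx-proxy-hostname:proxy-port curl -v https://nginx.org/
```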
Missing DNS resolver
When you configure a forward proxy with NGINX Plus R36 and above, you must use the new directive "tunnel_pass". However, your setup might need a few more directives.
First of all, provide a "resolver" in the "http { }" block or in the "server { }" block. Otherwise, you will only be able to access target servers by their IP addresses, not by their names.
Omitting the resolver and trying to access a server by name will result in a "502 Bad Gateway" error to the client, a "502" status in the NGINX access log, and the following errors in the NGINX error log:
2025/11/21 18:52:07 [error] 14927#14927: *11 no resolver defined to resolve your-target-server-hostname, client: 10.0.2.2, server: , request: "CONNECT your-target-server-hostname:443 HTTP/1.1", host: "your-target-server-hostname:443"
2025/11/21 18:52:07 [info] 14927#14927: *11 client 10.0.2.2 closed keepalive connection
Confusing directive prefix: "proxy_" and "tunnel_"
Make sure that you know the rest of your configuration. As your production config grows, it becomes extremely easy to confuse tunnel_pass with other primary directives such as proxy_pass, fastcgi_pass, etc.
It might become even harder to manage a longer configuration when you use additional directives, such as:
tunnel_buffer_size
tunnel_connect_timeout
tunnel_next_upstream
tunnel_next_upstream_timeout
tunnel_next_upstream_tries
...
These can be easily confused with the following counterparts from the reverse proxy traffic flow:
proxy_buffer_size
proxy_connect_timeout
proxy_next_upstream
proxy_next_upstream_timeout
proxy_next_upstream_tries
...
Overcomplicating configuration
In the next example, the NGINX configuration will work for regular HTTP requests to the server (reverse proxy), but it will respond with a 405 status code if you try to use it through an HTTP CONNECT tunnel:
http {
    server {
        location / {
            tunnel_pass;                        # Will not work
            proxy_pass http://upstream-servers; # Will work as usual
        }
    }
}
The following configuration will work as a reverse proxy for the prefix location and as a forward proxy for everything else. While this config works, don't use it: it invites future confusion, especially as you grow it into a larger setup.
http {
    server {
        location / {
            tunnel_pass;
        }
        location /api {
            proxy_pass http://upstream-servers;
        }
    }
}
How do you avoid future issues? Put the tunnel_pass directive in its own server block, where you can apply appropriately granular access and security controls:
http {
    server {
        location / {
            proxy_pass http://upstream-servers;
        }
    }

    server {
        listen 10.2.34.4:8081;  # Separate forward proxy IP and port
        tunnel_pass;
        ...                     # Other required security and hardening directives
    }
}
Conclusion
The new ngx_http_tunnel_module provides flexible configuration methods for your secure forward proxy servers. However, a wide-open proxy can become a security issue, so pay special attention to securing the configuration of this new feature.
Talk to our Professional Services team. Our engineers can help with the configuration, security, and hardening of your new forward proxy.