NGINX
NGINX Plus Request Body Rate Limit with the NJS Module and JavaScript
The NGINX njs module allows JavaScript to process requests inside NGINX, similar to what Node.js does on the backend. The module is available as a dynamic module for NGINX Plus (https://docs.nginx.com/nginx/admin-guide/dynamic-modules/dynamic-modules/), while for community NGINX it needs to be compiled in. The code and NGINX configuration are also available at: https://github.com/Nikoolayy1/nginx_njs_request_body_limit/tree/main

I used the example rate limiter from https://github.com/nginx/njs-examples and https://clouddocs.f5.com/training/community/nginx/html/class3/class3.html and modified it so that the rate limit is based on the request body. It works as expected. The internal redirect "r.internalRedirect('@app-backend');" is needed because by default NGINX does not populate or save the request body, which is why the request needs to pass through the NGINX proxy twice for the body variable to be properly populated!

The NGINX Plus rootless container is a great option for F5 XC RE, where root containers are not accepted. For NGINX on XC RE I have written another article: F5 XC vk8s open source nginx deployment on RE | DevCentral

NJS "main" file code:

```javascript
const defaultResponse = "0";   // unused in this excerpt
const user = 'username';
const pass = 'username';

function ratelimit(r) {
    switch (r.method) {
    case 'POST':
        var body = r.requestText;
        r.log(`body: ${body}`);

        // Only rate limit form posts that actually carry a body.
        if (r.headersIn['Content-Type'] != 'application/x-www-form-urlencoded' || !body.length) {
            r.internalRedirect('@app-backend');
            return;
        }

        var result_user = body.includes(user);
        var result_pass = body.includes(pass);   // computed but not used below
        if (!result_user) {
            r.internalRedirect('@app-backend');
            return;
        }

        // Look up the shared dict zone that keeps the per-key counters.
        const zone = r.variables['rl_zone_name'];
        const kv = zone && ngx.shared && ngx.shared[zone];
        if (!kv) {
            r.log(`ratelimit: ${zone} js_shared_dict_zone not found`);
            r.internalRedirect('@app-backend');
            return;
        }

        const key = r.variables['rl_key'] || r.variables['remote_addr'];
        const window = Number(r.variables['rl_windows_ms']) || 60000;
        const limit = Number(r.variables['rl_limit']) || 10;
        const now = Date.now();

        let requestData = kv.get(key);
        if (requestData === undefined || requestData.length === 0) {
            requestData = { timestamp: now, count: 1 };
            kv.set(key, JSON.stringify(requestData));
            r.internalRedirect('@app-backend');
            return;
        }
        try {
            requestData = JSON.parse(requestData);
        } catch (e) {
            requestData = { timestamp: now, count: 1 };
            kv.set(key, JSON.stringify(requestData));
            r.internalRedirect('@app-backend');
            return;
        }
        if (!requestData) {
            requestData = { timestamp: now, count: 1 };
            kv.set(key, JSON.stringify(requestData));
            r.internalRedirect('@app-backend');
            return;
        }

        // Fixed window counter: reset when the window has elapsed.
        if (now - requestData.timestamp >= window) {
            requestData.timestamp = now;
            requestData.count = 1;
        } else {
            requestData.count++;
        }

        const elapsed = now - requestData.timestamp;
        r.log(`limit: ${limit} window: ${window} elapsed: ${elapsed} count: ${requestData.count} timestamp: ${requestData.timestamp}`);

        let retryAfter = 0;
        if (requestData.count > limit) {
            retryAfter = 1;
        }
        kv.set(key, JSON.stringify(requestData));

        if (retryAfter) {
            r.return(401, "Unauthorized\n");
            return;
        }
        // Intentional fall-through: POSTs under the limit go to the backend.
    default:
        r.internalRedirect('@app-backend');
        return;
    }
}

// The full file in the GitHub repo also defines and exports sub, header,
// parseRequestBody and log; only ratelimit is shown here.
export default {ratelimit};
```

NGINX nginx.conf file:

```nginx
# Note: the http context (see the GitHub repo) also loads the njs module,
# imports main.js with js_import and defines the shared dict zone, e.g.
# js_shared_dict_zone zone=kv:1m; the "backend" upstream is defined there too.
server {
    listen 80 default_server;
    server_name localhost;

    access_log /var/log/nginx/host.access.log main;
    error_log /var/log/nginx/host.error_log debug;

    js_var $rl_zone_name kv;        # shared dict zone name; required variable
    js_var $rl_windows_ms 30000;    # optional window in milliseconds; default 1 minute window if not set
    js_var $rl_limit 3;             # optional limit for the window; default 10 requests if not set
    js_var $rl_key $remote_addr;    # rate limit key; default remote_addr if not set

    # Leftover from the original njs-examples rate limiter, where js_set
    # returned a retry-after value; here the 401 is returned directly from
    # js_content, and $target is never defined, so this stays disabled:
    # js_set $rl_result main.ratelimit;
    # if ($target) { return 401; }

    root /var/www/html;
    index index.html;
    include /etc/nginx/mime.types;

    location / {
        js_content main.ratelimit;
    }

    location @app-backend {
        internal;
        proxy_pass http://backend;
    }

    location /backend {
        internal;
        proxy_set_header Host httpforever.com;
        proxy_pass http://backend/;
    }
}
```

With the values above, more than 3 POST requests containing the username in the body within a 30-second window from the same client IP are answered with 401.

Summary:

There is another example of how to populate the internal request body variable needed by the njs module using the "mirror" option, shown at https://www.f5.com/company/blog/nginx/deploying-nginx-plus-as-an-api-gateway-part-2-protecting-backend-services, but it did not work for me, so I used the "internal" option with "r.internalRedirect(uri)" (https://nginx.org/en/docs/njs/reference.html).

The njs feature r.subrequest can be used to populate response headers and body, but it is mainly for logging, not rate limiting. I think making a real HTTP subrequest from JavaScript is not optimal and will not scale well, so I do not recommend this option; rate limiters are best left request based. I also saw a strange bug where the subrequest changed the Content-Type header of the response, and I had to use "js_header_filter" to change the response header back.

NGINX App Protect has the BD process from F5 BIG-IP AWAF/ASM, which has DoS protections that can monitor the server's response latency dynamically and set automatic thresholds!
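For comparison, the "mirror" technique from that F5 blog post looks roughly like the sketch below. This is a minimal sketch under my own assumptions, not the blog's exact config: the location name /_read_body is illustrative, and the point is simply that the mirror subrequest forces nginx to read the request body before js_content runs.

```nginx
location / {
    mirror /_read_body;          # fire-and-forget subrequest; with
    mirror_request_body on;      # mirror_request_body on (the default), it
                                 # makes nginx buffer the client body
    js_content main.ratelimit;
}

location = /_read_body {
    internal;
    return 204;                  # no-op target; exists only so the body is read
}
```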
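For the "js_header_filter" workaround mentioned in the summary, a hypothetical sketch of what such a filter can look like is below; the function name and the Content-Type value are placeholders, not the exact code from the repo.

```javascript
// Hypothetical header filter restoring a Content-Type that a subrequest
// overwrote; in main.js it would be added to the export default object and
// attached in nginx.conf with: js_header_filter main.fixContentType;
function fixContentType(r) {
    r.headersOut['Content-Type'] = 'text/html; charset=utf-8';
}
```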
F5 XC vk8s workload with Open Source Nginx

I have shared the code in the link below under DevCentral code share: F5 XC vk8s open source nginx deployment on RE | DevCentral

Here I will describe the basic steps for creating a workload object, the F5 XC custom kubernetes object that creates kubernetes deployments, pods and ClusterIP-type services in the background. The free unprivileged nginx image is nginxinc/docker-nginx-unprivileged: Unprivileged NGINX Dockerfiles (github.com).

1. Create a virtual site that groups your Regional Edges and Customer Edges. After that, create the vk8s virtual kubernetes object and relate it to the virtual site. Note: keep in mind the limitations of kubernetes deployments on Regional Edges mentioned in Create Virtual K8s (vK8s) Object | F5 Distributed Cloud Tech Docs.
2. Create the workload object and select type "service", which can be related to a Regional Edge virtual site or a Customer Edge virtual site. Then select the container image that will be loaded from a public repository like GitHub or a private repo.
3. Configure an advertise policy that will expose the pod/container with a kubernetes ClusterIP service. If you are deploying test containers, you will not need to advertise the container. To trigger commands at container start, you may need to use /bin/bash -c -- and an argument. Note: this is not needed for this workload deployment; it is just an example.
4. Select to overwrite the default config file of the open source unprivileged nginx with a file mount (see the config sketch below). Note: the volume name shouldn't contain a dot, as that will cause issues.
5. For the image options, select a repository with no rate limit, as otherwise you will see an error under the events for the pod. You can also configure a command and parameters to push to the container that will run on boot-up.
6. You can use an empty dir volume on the virtual kubernetes on the Regional Edges for volume mounts like the log directory or the nginx cache zone, but the unprivileged nginx by default exports the logs to the XC GUI, so there is no need. Note: this is not needed for this workload deployment; it is just an example.
7. The logs and events can be seen under the pod dashboard, and the container/pod can even be accessed from there. Note: for some workloads you will need to direct the output to stderr to see the logs in the XC GUI, but not for nginx.
8. After that, you can reference the auto-created kubernetes ClusterIP service in an origin pool, using the workload name and the XC namespace (for example niki-nginx.default). Note: use the same virtual site where the workload was attached and the same port as in the advertise cluster config.

Deployments and ClusterIP services can also be created directly without a workload, but it is better to use the workload option. When you modify the nginx config, you are actually modifying a configmap that the XC workload created in the background and mounted as a volume in the deployment, but you will need to trigger a deployment recreation, which as of now is not supported in the XC GUI. From the GUI you can scale the workload to 0 pod instances and then back to 1, but a better solution is to use kubectl. You can log into the virtual kubernetes like any other k8s environment using a cert and then run "kubectl rollout restart deployment/niki-nginx"; just download the SSL/TLS cert first. You can automate the entire process using the XC API and then use normal kubernetes automation to run the restart command: F5 Distributed Cloud Services API for ves.io.schema.views.workload | F5 Distributed Cloud API Docs
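Regarding the default config override from step 4 above, here is a minimal sketch of a default.conf that could be mounted over /etc/nginx/conf.d/default.conf in the unprivileged image. The paths and log targets are my assumptions, not the exact file from the code share; the image does listen on 8080 by default.

```nginx
# Minimal sketch of a default.conf override for nginxinc/nginx-unprivileged.
server {
    listen 8080;                      # the unprivileged image uses a non-privileged port
    server_name localhost;

    access_log /dev/stdout;           # keep logs on stdout/stderr so the XC GUI can show them
    error_log /dev/stderr;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
}
```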
F5 XC has added PROXY protocol support, so the nginx container can now work directly with the real client IP addresses, without XFF HTTP headers, and also with non-HTTP services that nginx supports like SMTP; this way XC can now act as a layer 7 proxy for email/SMTP traffic 😉. You just need to add the "proxy_protocol" directive and log the variable "$proxy_protocol_addr" (see the sketch at the end of this article).

Related resources:

For NGINX Plus deployments with advanced functions like SAML or OpenID Connect (OIDC), or the advanced functions of the NGINX Plus dynamic modules like njs that allow JavaScript scripting (similar to F5 BIG-IP or BIG-IP Next TCL-based iRules), see:

Enable SAML SP on F5 XC Application
Bolt-on Auth with NGINX Plus and F5 Distributed Cloud
Dynamic Modules | NGINX Documentation
njs scripting language (nginx.org)
Accepting the PROXY Protocol | NGINX Documentation
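A minimal sketch of the PROXY protocol setup mentioned above, assuming XC is configured to send the PROXY protocol header to the origin; the port and log format name are illustrative.

```nginx
http {
    # $proxy_protocol_addr carries the real client IP from the PROXY protocol header
    log_format proxied '$proxy_protocol_addr - [$time_local] "$request" $status';

    server {
        listen 8080 proxy_protocol;    # accept the PROXY protocol header on new connections
        access_log /dev/stdout proxied;

        location / {
            root /usr/share/nginx/html;
        }
    }
}
```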