NGINX
NGINX App Protect v5 Signature Notifications
When working with NAP (NGINX App Protect) you don't have an easy way of knowing when any of the signatures are updated. As an old BigIP guy I find that rather strange - there you have built-in automatic updates and notifications. Unfortunately there aren't any APIs you can probe, which would have been the best way of doing it. Hopefully that will come one day. However, "friction" and "hard" will not keep me from finding a solution 😆

I previously made a solution for NAPv4 and have been trying to get myself going on a NAPv5 version. The reason for the delay lies in how differently NAPv4 and NAPv5 are designed. Where NAPv4 is a module loaded into NGINX, NAPv5 is almost completely detached from NGINX (you still need to load a small module to get the traffic from NGINX to NAP) and only works with containers. NAPv5 has also moved the signature "storage" from the host it runs on (e.g. an installed package) into the policy itself. The consequence is that finding a valid "source of truth" for the latest signature versions is not as simple as building a new image and seeing which versions got installed. There are very good reasons for this design that I will come back to later.

When you fire up NAPv5 you get three containers for the data plane (NGINX, waf-enforcer and waf-config-mgr) and one for the "control plane" (waf-compiler). For this solution the "control plane" is the useful one. It isn't really a control plane, but it gives a nice picture of how it is detached from the actual processing of traffic. When you update your signatures you are actually doing it through the waf-compiler. The waf-compiler is a container hosting the actual signature databases, and every time a new version is released you need to rebuild this container, compile your policies into a new version and reload NGINX. This is what I take advantage of when I look for signature updates, and it has the upside that you only need the waf-compiler to get the information you need. My solution takes care of the entire process and makes sure that you are always running with the latest signatures.

Back to the reason why the split of functions is a very good thing. When you build a new version of the NGINX image and deploy it into production, NAP needs to compile the policies as they load - and during the compilation NGINX is not moving any traffic! This becomes an annoying problem even with a low number of policies. I have installations where it takes 5 to 10 minutes from deployment of the new image until it starts moving traffic. That is a crazy long time when you are used to working with micro-services and expect everything to flip within seconds. If you have your NAPv4 hooked up to NGINX Instance Manager (NIM) the problem is somewhat mitigated, as NIM compiles the policies before sending them to the gateways. But NIM is not a nimble piece of software, so it doesn't always fit into the environment.

And now here is my hack to the notification problem: the solution consists of two bash scripts and one HTML template. The template is used when sending a notification mail - I wanted it to be pretty, and that was easiest with HTML; strictly speaking you could do with just a simple text-based mail. Save all three in the same directory. The main script is called "waf_policy_auto_compile.sh" and is the one you put into crontab. It will build a new waf-compiler image and compile a test policy; the outcome of that is information about which versions are the newest.
It will then extract the versions from an old policy and simply see if any of them differ. For this to work you need to have an uncompiled policy (you can just use the default one) and a compiled version of it ready beforehand. When a diff has been identified, the notification logic is executed and a second script is called: "compile_waf_policies.sh". It basically just trawls through the directory of your policies and logging profiles and compiles a new version of them all. It is not actually necessary to recompile the logging profiles, so this will probably change in the next version. As the compilation completes, the main script nudges NGINX to reload, thus implementing all the new versions.

You can run "waf_policy_auto_compile.sh" with a verbose flag (-v) and a debug flag (-d). The verbose flag is intended for when you run the script in a terminal and want the information displayed there. Debug is, well, for debug 😝

The construction of the scripts is based on my own needs, but they should be easy to adjust for any need. I will be happy for any feedback, so please don't hold back 😄
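To make the flow above a bit more concrete, here is a rough, hypothetical outline of the version-check logic - not the actual waf_policy_auto_compile.sh. The paths, image tag, JSON field name and container name are all assumptions you would adapt to your own setup, the compiler invocation itself is only indicated in a comment (follow F5's NAPv5 documentation for the real arguments), and the mail step is left as a placeholder.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the signature-check flow described above.
# Paths, image tag, JSON field name and container name are placeholders.
set -euo pipefail

WORKDIR=/opt/napv5
OLD_META="$WORKDIR/compiled/policy_meta.json"      # metadata kept from the previous compile
NEW_META="$WORKDIR/compiled/policy_meta.new.json"  # metadata from the fresh test compile

# 1) Rebuild the waf-compiler image so it contains the newest signature packages.
docker build --no-cache -t waf-compiler:custom "$WORKDIR/waf-compiler"

# 2) Compile a test policy with the fresh compiler and capture its version metadata
#    into $NEW_META. The exact docker run arguments depend on your NAPv5 setup,
#    so they are deliberately not spelled out here.

# 3) Diff the signature versions between the previous compile and the new one.
old_ver=$(jq -r '.attack_signatures_version // empty' "$OLD_META")
new_ver=$(jq -r '.attack_signatures_version // empty' "$NEW_META")

if [[ -n "$new_ver" && "$old_ver" != "$new_ver" ]]; then
    echo "Attack signatures updated: ${old_ver:-unknown} -> $new_ver"
    # send the HTML notification mail here (sendmail/msmtp/whatever fits your setup)
    ./compile_waf_policies.sh            # recompile every policy with the new compiler
    docker exec nginx nginx -s reload    # assumes the NGINX container is named "nginx"
    cp "$NEW_META" "$OLD_META"           # remember the versions for the next run
fi
```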
URL Redirect ? URL ReWrite ?

I'm still on my journey learning nginx, so forgive the stupid question. My goal is as follows: I want my clients to be able to browse to https://www.john.com/Greenlight, and I don't want that URL to change in the client's browser, but I want the page to actually be loaded from here: https://dev-assets.john.net/cdn/html2canvas/1.4.1/license.html

I tried this, but it's not working. I think I'm close, but maybe not:

```nginx
############################################################
# Greenlight redirect
location /Greenlight {
    rewrite ^/Greenlight(/.*)$ $1 break;
    rewrite ^/Greenlight$ / break;
    proxy_pass https://dev-assets.john.net/cdn/html2canvas/1.4.1;
    proxy_set_header Host john-assets.alkami.net;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    try_files $uri $uri/ /license.html;
}
```

I'm thinking maybe I need a rewrite statement... any guidance would be appreciated.
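For reference, here is a hedged sketch of one way this is commonly approached, not a verified answer for this exact site: when proxy_pass is given a URI part, NGINX replaces the matched location prefix with that URI, so the rewrite and try_files lines are usually unnecessary. The hostnames below are taken from the question; whether the upstream actually expects a different Host header (e.g. the alkami.net one) is something you would have to confirm.

```nginx
# Sketch only: serve the remote content under /Greenlight while the browser URL stays the same.
location = /Greenlight {
    # bare /Greenlight lands on the license page without changing the visible path
    proxy_pass https://dev-assets.john.net/cdn/html2canvas/1.4.1/license.html;
    proxy_set_header Host dev-assets.john.net;
    proxy_ssl_server_name on;            # send SNI so the upstream TLS handshake succeeds
}

location /Greenlight/ {
    # the URI part of proxy_pass replaces the matched /Greenlight/ prefix
    proxy_pass https://dev-assets.john.net/cdn/html2canvas/1.4.1/;
    proxy_set_header Host dev-assets.john.net;
    proxy_ssl_server_name on;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```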
Issue with worker_connections limits in Nginx+

Hello Nginx Community,

We are using Nginx+ for our load balancer and have encountered a problem where the current worker_connections limit is insufficient. I need our monitoring system to check the current value of worker_connections for each Nginx worker process, to ensure that the active worker_connections are below the maximum allowed. The main issue is that I cannot determine the current number of connections for each Nginx worker process.

In my test configuration, I set worker_connections to 28 (a small value used only to reproduce the issue easily). With 32 worker processes, the total capacity should be 32 * 28 = 896 connections. Using the /api/9/connections endpoint, we can see the total number of active connections:

```json
{
  "accepted": 2062055,
  "dropped": 4568,
  "active": 9,
  "idle": 28
}
```

Despite the relatively low number of active connections, the log file continually reports that worker_connections are insufficient. Additionally, as of Nginx+ R30, there is an endpoint providing per-worker connection statistics (accepted, dropped, active, and idle connections, total and current requests). However, the reported values for active connections are much lower than 28:

```
$ curl -s http://<some_ip>/api/9/workers | jq | grep active
"active": 2,
"active": 0,
"active": 1,
"active": 2,
"active": 1,
"active": 1,
"active": 0,
"active": 0,
"active": 3,
"active": 0,
"active": 0,
"active": 0,
"active": 2,
"active": 2,
"active": 0,
"active": 1,
"active": 0,
"active": 0,
"active": 0,
"active": 0,
"active": 0,
"active": 0,
"active": 0,
"active": 2,
"active": 1,
"active": 2,
"active": 1,
"active": 0,
"active": 1,
"active": 0,
"active": 0,
"active": 1,
```

Could you please help us understand why the active connections are reported as lower than the limit, yet we receive logs indicating that worker_connections are not enough? Thank you for your assistance.
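Not an explanation of the counter mismatch, but for context a hedged sketch of the knobs involved: per the NGINX documentation, worker_connections is a per-worker limit and counts every connection a worker holds, including idle keepalive client connections and connections to upstreams, so a single proxied request can occupy two slots and one busy worker can hit the limit while instance-wide totals still look low. The values below are illustrative, not recommendations.

```nginx
# Illustrative values only - size these for your own traffic.
worker_processes      auto;
worker_rlimit_nofile  4096;          # the file-descriptor limit must also allow worker_connections

events {
    worker_connections  1024;        # per worker: clients + idle keepalive + upstream connections
}

http {
    server {
        listen 8080;

        location /api {
            api write=off;           # NGINX Plus API; /api/<version>/workers is available from R30
        }
    }
}
```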
Nginx Reverse Proxy issue for port other than 81

I have a backend Tomcat application which runs on port 8080 with IP 192.168.29.141. I am trying to reverse proxy it with Nginx, for which I have created the configuration file below:

```nginx
upstream tomcat {
    server 192.168.29.141:8080;
}

server {
    #listen 192.168.122.28:80;
    listen 192.168.122.28:81;
    server_name tomcat;

    location / {
        proxy_pass http://tomcat;
    }
}
```

When I load the page in the browser, the page is distorted and I get the error below in the browser console:

"Unsafe attempt to load URL http://tomcat/o/classic-theme/images/clay/icons.svg from frame with URL http://tomcat:81/. Domains, protocols and ports must match."

But when I run nginx on port 80 instead of port 81, everything works fine. Is there anything I am missing in the configuration for ports other than 80?

My Nginx server IP: 192.168.122.28. Browser screenshot when hitting the URL http://tomcat:81
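A hedged guess at the cause, based on the error message: with proxy_pass http://tomcat; the backend sees Host: tomcat (no port), so it generates absolute URLs without :81, and the browser then refuses to mix http://tomcat and http://tomcat:81. One common approach, sketched below and not verified against this particular application, is to forward the host header the client actually used, port included; whether the backend honors these headers depends on the application.

```nginx
upstream tomcat {
    server 192.168.29.141:8080;
}

server {
    listen 192.168.122.28:81;
    server_name tomcat;

    location / {
        proxy_pass http://tomcat;
        # Forward the Host header the client sent, including :81, so the backend
        # builds links for http://tomcat:81 rather than http://tomcat.
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```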
Convert Nginx rule to F5 irule

Hi Team,

I need some help converting the NGINX rule below to an F5 iRule:

```nginx
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

# WebSocket specific
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```
Config NGINX to F5

Hi everyone, I have a VS (virtual server). There is an NGINX configuration I need to implement in an F5 profile, but I don't know where to configure it on the F5. Here is the NGINX requirement:

```nginx
client_max_body_size 5000M;
client_body_buffer_size 5000M;
client_body_timeout 4024;
client_header_timeout 3024;
```

Where should I configure these NGINX settings for the VS on the F5? Using a profile or iRules? How do I set this up? Thanks
Nginx is only redirecting to port 8080

I have a .NET 8 solution with multiple APIs, and I'm using Docker and Nginx to host the application. Please find the full details below.

Dockerfile

```dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
WORKDIR /app
EXPOSE 8080

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
...

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "xxx.Api/xxx.Api.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "xxx.Api.dll"]
```

launchsettings.json

```json
"Docker": {
  "commandName": "Docker",
  "launchBrowser": true,
  "launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}/swagger",
  "publishAllPorts": true,
  "useSSL": true,
  "sslPort": 4430,
  "httpPort": 8080
}
```

nginx.conf

```nginx
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name domain;
        port_in_redirect off;

        location /api1 {
            rewrite /api1(.*) $1 break;
            proxy_pass http://api1:8080;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        location /api2 {
            rewrite /api2(.*) $1 break;
            proxy_pass http://api2:8081;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
```

docker-compose

```yaml
version: '3.4'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - api1
      - api2
  api1:
    image: ${DOCKER_REGISTRY-}api1:latest
    container_name: api1
    build:
      context: .
      dockerfile: api1.Api/Dockerfile
    ports:
      - "8080:8080"
  api2:
    image: ${DOCKER_REGISTRY-}api2:latest
    container_name: api2
    build:
      context: .
      dockerfile: api2.API/Dockerfile
    ports:
      - "8081:8081"
```

API1, which uses port 8080, loads normally, but API2, which uses 8081, gets a 502 Bad Gateway error. If I switch the ports on those same projects, then API2 loads normally and API1 stops loading. I've been trying all kinds of things over the last two days and nothing seems to work. These same projects with the same configuration were working perfectly when I was using .NET 6 with the same nginx version, but when I upgraded the project to .NET 8 it broke. I need your help and suggestions. Anything will be helpful.
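A hedged observation rather than a confirmed fix: the ASP.NET 8 base images changed the default container port to 8080, and on the compose network nginx connects to the container port, not the published host port, so the "8081:8081" mapping does not make api2 listen on 8081 inside its container. One likely adjustment is the /api2 location below (pointing at container port 8080); the alternative, also an assumption to verify, is to set ASPNETCORE_HTTP_PORTS=8081 on the api2 service and keep proxy_pass as it is.

```nginx
# The /api2 location from the nginx.conf above, with proxy_pass pointed at the
# port the container actually listens on (8080 by default in .NET 8 images).
location /api2 {
    rewrite /api2(.*) $1 break;
    proxy_pass http://api2:8080;     # container port, not the published host port
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
```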
Amplify fpm connections metric is misleading or incorrect

Hi, I have nginx and php-fpm monitoring in Amplify. Recently I created a dashboard to analyze incoming requests, FPM connections and their correlation. I noticed that the "fpm.conn.accepted" chart without specifying a pool (meaning all pools) displays bigger, sometimes 10x bigger, numbers than the same chart with every pool specified separately and their values summed. I'd expect the sum of all pools to equal the first chart, but it does not. Screenshot: https://prnt.sc/I_lWhkSDye3I As you can see, the blue lines are far from equal or even close to each other.