NGINX
Automating F5 NGINX Instance Manager Deployments on VMware
With F5 NGINX One, customers can leverage the F5 NGINX One SaaS console to manage inventory, stage and push configs to cluster groups, and take advantage of our Flexible Consumption Plans (FCPs). However, the NGINX One console may not be feasible for customers with isolated environments that have no connectivity outside the organization. In these cases, customers can run self-managed builds with the same NGINX management capabilities inside their isolated environments. In this article, I step through how to automate F5 NGINX Instance Manager deployments with Packer and Terraform.

Prerequisites

I will need a few prerequisites before getting started with the tutorial:

- vCenter installed on my ESXi host, so I can log in and access my vSphere console.
- A client instance with Packer and Terraform installed to run my build. I use a virtual machine on my ESXi host.
- NGINX license keys: I will need to pull my NGINX license keys from MyF5 and store them on the client VM instance where I will run the build.

Deploying NGINX Instance Manager

Deploying F5 NGINX Instance Manager in your environment involves two steps:

1. Running a Packer build that outputs a VM template to my datastore
2. Applying the Terraform build, using the VM template from step 1, to deploy and install NGINX Instance Manager

Running the Packer Build

Before running the Packer build, I will need to SSH into my client VM and install Packer-compatible ISO tools and plugins:

```shell
$ sudo apt-get install mkisofs
$ packer plugins install github.com/hashicorp/vsphere
$ packer plugins install github.com/hashicorp/ansible
```

Second, I clone the GitHub repository and set the parameters for my Packer build in the Packer variables file (nms.pkrvars.hcl):

```shell
$ git clone https://github.com/nginxinc/nginx-management-suite-iac.git
$ cd nginx-management-suite-iac/packer/nms/vsphere
$ cp nms.pkrvars.hcl.example nms.pkrvars.hcl
```

The table below lists the variables that need to be updated (a filled-in example appears at the end of this section).

| Variable | Description |
| --- | --- |
| nginx_repo_crt | Path to the license certificate required to install NGINX Instance Manager (/etc/ssl/nginx/nginx-repo.crt) |
| nginx_repo_key | Path to the license key required to install NGINX Instance Manager (/etc/ssl/nginx/nginx-repo.key) |
| iso_path | Path of the ISO the VM template will boot from. The ISO must be stored in my vSphere datastore. |
| cluster_name | The vSphere cluster |
| datacenter | The vSphere datacenter |
| datastore | The vSphere datastore |
| network | The vSphere network where the Packer build will run. I can use static IPs if DHCP is not available. |

Now I can run my Packer build:

```shell
$ export VSPHERE_URL="my-vcenter-url"
$ export VSPHERE_PASSWORD="my-password"
$ export VSPHERE_USER="my-username"
$ ./packer-build.sh -var-file="nms.pkrvars.hcl"
```

**Note: If DHCP is not available in my vSphere network, I need to assign static IPs before running the Packer build script.

Running the Packer Build with Static IPs

To assign static IPs, I modified the cloud-init template in my Packer build script (packer-build.sh). Under the autoinstall field, I can add my Ubuntu Netplan configuration and manually assign my Ethernet IP address, name servers, and default gateway:

```yaml
#cloud-config
autoinstall:
  version: 1
  network:
    version: 2
    ethernets:
      ens192:   # interface name added for valid YAML; adjust to your environment
        addresses:
          - 10.144.xx.xx/20
        nameservers:
          addresses:
            - 172.27.x.x
            - 8.8.x.x
          search: []
        routes:
          - to: default
            via: 10.144.xx.xx
  identity:
    hostname: localhost
    username: ubuntu
    password: ${saltedPassword}
```
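Before moving on to Terraform, here is a sketch of what a filled-in nms.pkrvars.hcl might look like. All values below are placeholders for my environment; the authoritative variable names are in the nms.pkrvars.hcl.example file shipped with the repository.

```hcl
# Illustrative values only -- adjust to your vSphere environment and
# confirm variable names against nms.pkrvars.hcl.example in the repo.
nginx_repo_crt = "/etc/ssl/nginx/nginx-repo.crt"
nginx_repo_key = "/etc/ssl/nginx/nginx-repo.key"
iso_path       = "iso/ubuntu-22.04-live-server-amd64.iso"  # hypothetical ISO in the datastore
cluster_name   = "my-cluster"
datacenter     = "my-datacenter"
datastore      = "my-datastore"
network        = "VM Network"
```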
Running the Terraform Build

As mentioned in the previous section, the Packer build outputs a VM template to my vSphere datastore. I should be able to see the template at nms-yyyy-mm-dd/nms-yyyy-mm-dd.vmtx in my datastore. Before running the Terraform build, I set parameters in the Terraform variables file (terraform.tfvars):

```shell
$ cp terraform.tfvars.example terraform.tfvars
$ vi terraform.tfvars
```

The table below lists the variables that need to be updated (a filled-in example appears at the end of this section).

| Variable | Description |
| --- | --- |
| cluster_name | The vSphere cluster |
| datacenter | The vSphere datacenter |
| datastore | The vSphere datastore |
| network | The vSphere network where NGINX Instance Manager is deployed and installed |
| template_name | The VM template generated by the Packer build (nms-yyyy-mm-dd) |
| ssh_pub_key | The public SSH key (~/.ssh/id_rsa.pub) |
| ssh_user | The SSH user (ubuntu) |

Once the parameters are set, I will need to set my environment variables:

```shell
$ export TF_VAR_vsphere_url="my-vcenter-url.com"
$ export TF_VAR_vsphere_password="my-password"
$ export TF_VAR_vsphere_user="my-username"
# Set the admin password for the NGINX Instance Manager user
$ export TF_VAR_admin_password="my-admin-password"
```

And initialize and apply my Terraform build:

```shell
$ terraform init
$ terraform apply
```

**Note: If DHCP is not available in my vSphere network, I need to assign static IPs once again, this time in my Terraform script, before running the build.

Assigning Static IPs in Terraform Build (optional)

To assign static IPs, I will need to modify the main Terraform file (main.tf). I add the following clone context inside my vsphere_virtual_machine VM resource and set the options to the appropriate IPs and netmask:

```hcl
clone {
  template_uuid = data.vsphere_virtual_machine.template.id
  customize {
    linux_options {
      host_name = "foo"
      domain    = "example.com"
    }
    network_interface {
      ipv4_address    = "10.144.xx.xxx"
      ipv4_netmask    = 20
      dns_server_list = ["172.27.x.x", "8.8.x.x"]
    }
    ipv4_gateway = "10.144.xx.xxx"
  }
}
```
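For reference, a filled-in terraform.tfvars for this build might look like the sketch below. Values are placeholders; the template name should match whatever the Packer build actually produced.

```hcl
# Illustrative values only -- confirm variable names against
# terraform.tfvars.example in the repo.
cluster_name  = "my-cluster"
datacenter    = "my-datacenter"
datastore     = "my-datastore"
network       = "VM Network"
template_name = "nms-2024-08-01"   # hypothetical template from the Packer build
ssh_pub_key   = "~/.ssh/id_rsa.pub"
ssh_user      = "ubuntu"
```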
Connecting to NGINX Instance Manager

Once the Terraform build is complete, I will see the NGINX Instance Manager VM running in the vSphere console. I can open a new tab in my browser, enter the VM's IP address to connect, and log in with the admin/$TF_VAR_admin_password credentials.

Conclusion

Installing NGINX Instance Manager in your environment is now easier than ever. Following this tutorial, I can install NGINX Instance Manager in under 5 minutes and manage NGINX inventory inside my isolated environment.

Enhance your GenAI chatbot with the power of Agentic RAG and F5 platform

Agentic RAG (Retrieval-Augmented Generation) enhances the capabilities of a GenAI chatbot by integrating dynamic knowledge retrieval into its conversational abilities, making it more context-aware and accurate. In this demo, I showcase an autonomous, decision-making GenAI chatbot built with Agentic RAG. I explore what Agentic RAG is and why it is crucial in today's AI landscape, and I also discuss how organizations can leverage GPUaaS (GPU as a Service) or AI Factory providers to accelerate their AI strategy. The F5 platform provides robust security features that protect sensitive data while ensuring high availability and performance. It also optimizes the chatbot by streamlining traffic management and reducing latency, ensuring smooth interactions even during high demand. This integration ensures the GenAI chatbot is not only smart but also reliable and secure for enterprise use.
Upcoming Action Required: F5 NGINX Plus R33 Release and Licensing Update

Hello community! The upcoming release of NGINX Plus R33 is scheduled for this quarter. This release brings changes to our licensing process, aligning it with industry best practices and the rest of the F5 licensing programs. These updates are designed to better serve our commercial customers by providing improved visibility into usage, streamlined license tracking, and enhanced customer service.

Key Changes in NGINX Plus R33

Release: Q4, 2024

- New Requirement: All commercial NGINX Plus instances will now require the placement of a JSON Web Token (JWT). This JWT file can be downloaded from your MyF5 account.
- License Validation: NGINX Plus instances will regularly validate their license status with the F5 licensing endpoint for connected customers. Offline environments can manage this through NGINX Instance Manager.
- Usage Reporting: NGINX Plus R33 introduces a new requirement for commercial product usage reporting. NGINX's adoption of F5's standardized approach ensures easier and more precise license and usage tracking. Once our customers are utilizing R33 together with NGINX's management options, tasks such as usage reporting and renewals will be much more streamlined and straightforward. Additionally, NGINX instance visibility and management will be much easier.

Action Required

To ensure a smooth transition and uninterrupted service, please take the following steps:

- Install the JWT: Make sure to install the JWT on all your commercial NGINX Plus instances. This is crucial to avoid any interruptions.
- Additional Steps: Refer to our detailed guide for any additional required next steps.

IMPORTANT: Failure to follow these steps will result in NGINX Plus R33 and subsequent release instances not functioning.
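As an illustration of what the JWT installation looks like on a typical Linux instance, the steps below sketch the process. The file path shown is the default documented for R33; verify it, and the exact download location in MyF5, against the official documentation for your platform.

```shell
# Download license.jwt from your MyF5 account, then copy it to the
# location NGINX Plus R33 reads by default:
$ sudo cp ~/Downloads/license.jwt /etc/nginx/license.jwt

# Validate the configuration and reload so the license takes effect:
$ sudo nginx -t
$ sudo nginx -s reload
```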
Critical Notes

- JWT Requirement: JWT files are essential for the startup of NGINX Plus R33.
- NGINX Ingress Controller: Users of NGINX Ingress Controller should not upgrade to NGINX Plus R33 until the next version of the Ingress Controller is released.
- No Changes for Earlier Versions: If you are using a version of NGINX Plus prior to R33, no action is required.

Resources

We are preparing a range of resources to help you through this transition:

- Support Documentation: Comprehensive support documentation will be available upon the release of NGINX Plus R33.
- Demonstration Videos: We will also provide demonstration videos to guide you through the new processes upon the release of NGINX Plus R33.
- NGINX Documentation: For more detailed information, visit our NGINX documentation.

Need Assistance?

If you have any questions or concerns, please do not hesitate to reach out:

- F5 Representative: Contact your dedicated representative for personalized support.
- MyF5 Account: Support is readily available through your MyF5 account.

Stay tuned for more updates. Thank you for your continued partnership.

Mitigate OWASP LLM Security Risk: Sensitive Information Disclosure Using F5 NGINX App Protect

This short WAF security article covered the critical security gaps present in current generative AI applications, emphasizing the urgent need for robust protection measures in LLM deployments. It also demonstrated how F5 NGINX App Protect v5 offers an effective solution to mitigate the OWASP LLM Top 10 risks.
Experience the power of F5 NGINX One with feature demos

Introduction

Introducing F5 NGINX One, a comprehensive solution designed to enhance business operations significantly through improved reliability and performance. At the core of NGINX One is our data plane, which is built on our world-class, lightweight, and high-performance NGINX software. This foundation provides robust traffic management solutions that are essential for modern digital businesses. These solutions include API Gateway, Content Caching, Load Balancing, and Policy Enforcement.

NGINX One includes a user-friendly, SaaS-based NGINX One Console that provides essential telemetry and oversees operations without requiring custom development or infrastructure changes. This visibility empowers teams to promptly address customer experience, security vulnerabilities, network performance, and compliance concerns.

NGINX One's deployment across various environments empowers businesses to enhance their operations with improved reliability and performance. It is a versatile tool for strengthening operational efficiency, security posture, and overall digital experience.

NGINX One has several promising features on the horizon. Let's highlight three key features: monitoring certificates and CVEs, editing and updating configurations, and Config Sync Groups. Let's delve into these in detail.

Monitor Certificates and CVEs

One of NGINX One's standout features is its ability to monitor Common Vulnerabilities and Exposures (CVEs) and certificate status. This functionality is crucial for maintaining application security integrity in a continually evolving threat landscape. The CVE and certificate monitoring capability of NGINX One enables teams to:

- Prioritize Remediation Efforts: With an accurate and up-to-date database of CVEs and a comprehensive certificate monitoring system, NGINX One assists teams in prioritizing vulnerabilities and certificate issues according to their severity, guaranteeing that essential security concerns are addressed without delay.
- Maintain Compliance: Continuous monitoring for CVEs and certificates ensures that applications comply with security standards and regulations, crucial for industries subject to stringent compliance mandates.

Edit and Update Configurations

This feature empowers users to efficiently edit configurations and perform updates directly within the NGINX One Console interface. With configuration editing, you can:

- Make Configuration Changes: Quickly adapt to changing application demands by modifying configurations, ensuring optimal performance and security.
- Simplify Management: Eliminate the need to SSH directly into each instance to edit or update configurations.
- Reduce Errors: The intuitive interface minimizes potential errors in configuration changes, enhancing reliability by offering helpful recommendations.
- Enhance Automation: The NGINX One SaaS console integrates seamlessly into CI/CD and GitOps workflows, including GitHub, through a comprehensive set of APIs.

Config Sync Groups

The Config Sync Group feature is invaluable for environments running multiple NGINX instances. This feature ensures consistent configurations across all instances, enhancing application reliability and reducing administrative overhead. The Config Sync Group capability offers:

- Automated Synchronization: Configurations are seamlessly synchronized across NGINX instances, guaranteeing that all applications operate with the most current and secure settings.
  When a config sync group already has a defined configuration, it is automatically pushed to instances as they join.
- Scalability Support: Organizations can easily incorporate new NGINX instances without compromising configuration integrity as their infrastructure expands.
- Minimized Configuration Drift: This feature is crucial for maintaining consistency across environments and preventing potential application errors or vulnerabilities from configuration discrepancies.

Conclusion

NGINX One Cloud Console redefines digital monitoring and management by combining all the NGINX core capabilities and use cases. This all-encompassing platform is equipped with sophisticated features to simplify user interaction, drastically cut operational overhead and expenses, bolster security protocols, and broaden operational adaptability. Read our announcement blog for more details on the launch. To explore the platform's capabilities and see it in action, we invite you to tune in to our webinar on September 25th. This is a great opportunity to witness firsthand how NGINX One can revolutionize your digital monitoring and management strategies.
VIPTest: Rapid Application Testing for F5 Environments

VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.
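VIPTest's actual implementation lives in its own repository; purely as an illustration of the concept described above (concurrent URL checks reporting HTTP status and negotiated TLS version), a minimal stand-alone sketch might look like this:

```python
# Conceptual sketch only -- not VIPTest source code.
# Checks URLs concurrently and reports HTTP status, TLS version, and errors.
import concurrent.futures
import socket
import ssl
import urllib.request
from urllib.parse import urlparse

def check_url(url: str) -> dict:
    result = {"url": url, "status": None, "tls": None, "error": None}
    try:
        parsed = urlparse(url)
        if parsed.scheme == "https":
            # Handshake once just to record the negotiated TLS version.
            ctx = ssl.create_default_context()
            with socket.create_connection((parsed.hostname, parsed.port or 443), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=parsed.hostname) as tls:
                    result["tls"] = tls.version()  # e.g. "TLSv1.3"
        with urllib.request.urlopen(url, timeout=5) as resp:
            result["status"] = resp.status
    except Exception as exc:
        result["error"] = str(exc)
    return result

if __name__ == "__main__":
    urls = ["https://example.com", "https://example.org"]  # placeholder targets
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        for outcome in pool.map(check_url, urls):
            print(outcome)
```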
NGINX Virtual Machine Building with cloud-init

Traditionally, building new servers was a manual process. A system administrator had a run book with all the steps required and would perform each task one by one. If the admin had multiple servers to build, the same steps were repeated over and over. All public cloud compute platforms provide an automation tool called cloud-init that makes it easy to automate configuration tasks while a new VM instance is launching. In this article, you will learn how to automate the process of building out a new NGINX Plus server using cloud-init.
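To give a flavor of the approach, a minimal cloud-init user-data file that installs open source NGINX on first boot might look like the sketch below; the article itself covers the additional NGINX Plus repository and certificate setup.

```yaml
#cloud-config
# Minimal illustrative user-data: installs open source NGINX on first boot.
# NGINX Plus additionally requires the repo certificate/key and Plus repo.
package_update: true
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
```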
Securing and Scaling Hybrid Apps with F5 NGINX Part 4

In previous parts of our series, we learned that NGINX is superior to cloud load balancers for two reasons:

- Breaking free from vendor lock-in: NGINX is a solution applicable to any infrastructure or environment.
- Cloud providers offer basic load balancers that route and encrypt traffic to endpoints. They lack in:
  - Visibility: logs, traces, and statistics
  - Functionality: advanced traffic management and security use cases (see parts 2 and 3)

Functionality is especially important when scaling the environment to multiple cluster groups. The bulk of this section addresses best practices for scaling the architecture from parts 2 and 3. Below I depict a reference architecture that replicates my Kubernetes cluster with an NGINX Ingress Controller deployment and an NGINX load balancer with HA (high availability).

If you recall from parts 2 and 3 of our series, I configured many ZT (Zero Trust) use cases on my NGINX Plus external load balancer. I replicated my NGINX Plus external load balancer to an active-active HA setup with NGINX Plus based on keepalived and VRRP. The method of fully rolling out HA in production will vary slightly depending on my environment.

Public Cloud

If I am scaling the architecture in a public cloud environment, I can replicate the NGINX Plus load balancers with cloud auto-scaling groups and front them with F5 BIG-IP. I can also enable health monitors on my BIG-IP so that unresponsive connections fail over to healthy NGINX Plus load balancers.

On-Premises

If I am scaling my architecture on-prem, I can replicate NGINX Plus load balancers with additional bare metal machines or use a virtualization software of my choosing. The HA solution with NGINX Plus on-prem can be set up in three different modes:

- Active-Passive: One instance is active, and the other is a standby. The VIP (Virtual IP) switches over to the standby node when the master node fails.
- Active-Active: Both instances are active and serving traffic. Two VIPs are required, where each VIP is assigned to an instance. If one instance becomes unavailable, its assigned VIP switches over to the other instance, and vice versa.
- Active-Active-Passive: Adds a redundancy node to the active-active HA pair, resulting in a three-node cluster group. The redundancy node switches on when both active nodes are down.

Choosing between these modes will depend on my current priorities. Going with the active-passive model, I compromise efficiency for lower cost: the active node is prone to overloading while the redundant node sits idle, mostly not serving traffic. Going with the active-active or active-active-passive model, I compromise cost for better efficiency. However, I will need two VIPs and a DNS load balancer (F5 GTM) fronting my NGINX HA cluster. The table below compares the three models by cost, efficiency, and scale.

| | Cost | Efficiency | Scale |
| --- | --- | --- | --- |
| Active-Passive | Low | Low | Low |
| Active-Active | Medium | High | Medium |
| Active-Active-Passive | High | Medium | High |

If I have the money to spend and choose both efficiency and scale, then active-active or active-active-passive is the right choice.

Synchronizing Data Across NGINX Plus Cluster Nodes

Recalling parts 2 and 3, we went through several ZT use cases with the NGINX Plus load balancer. Many of these ZT use cases require shared memory zones to store data and authenticate/authorize users. When scaling out the Zero Trust architecture with HA, the key-value shared memory zone should be synchronized between NGINX Plus instances to ensure consistency.

Take for example a popular ZT use case: OIDC authentication. Tokens are stored in the key-value storage to examine users attempting access to protected back-end applications. We can extend our configuration and enable key-value zone sync with two additional steps (see the sketch after this list):

1. Open a TCP medium where key-value data is exchanged. You can also enable SSL on this TCP medium for extra security.
2. Append the optional sync directive to enable synchronization of the key-value shared memory zones defined in openid_connect_configuration.conf from part 2.
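As a hedged sketch of those two steps (the node hostnames and sync port are placeholders for my environment), the NGINX Plus zone synchronization configuration might look like:

```nginx
# Step 1: stream-level TCP medium for zone synchronization between nodes
stream {
    server {
        listen 9000;                                # sync port (placeholder)
        zone_sync;
        # One entry per HA cluster node (placeholder hostnames):
        zone_sync_server nginx-node1.internal:9000;
        zone_sync_server nginx-node2.internal:9000;
        # Optional: add ssl_certificate/ssl_certificate_key and the
        # zone_sync_ssl settings to encrypt this medium.
    }
}

# Step 2: in openid_connect_configuration.conf, append "sync" to the
# key-value zones, e.g.:
# keyval_zone zone=oidc_access_tokens:1M timeout=1h sync;
```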
Testing the Synchronization

You can test and validate the synchronization by leveraging the NGINX Plus API to pull data from individual cluster nodes and comparing the results. The data pulled from each cluster node should be identical. You can connect to the NGINX cluster nodes via SSH and enter:

```shell
$ curl http://localhost:8010/api/7/http/keyvals/oidc_access_tokens
```

The response data from each NGINX cluster node will match when zone sync is enabled.

Data Telemetry with NGINX Management Interfaces

As my IT organization grows, so will my NGINX cluster groups. Inevitably, I will need a solution that manages the complexities arising from expanding NGINX cluster groups to alternative regions and cloud environments. With NGINX management tools, you can:

- Track your NGINX inventory for common CVEs and expired certificates
- Stage/push config templates to NGINX cluster groups
- Collect and aggregate metrics from NGINX cluster groups
- Use our Flexible Consumption Plan (FCP) model

Installation and Deployment Methods

There are two ways to get started with NGINX management capabilities:

- F5 Distributed Cloud (XC): No installation or deployment is required. Simply log into XC and access the NGINX One SaaS console. Get started with NGINX One Early Access.
- Self-Managed Installation: You can deploy NGINX Instance Manager to comply with policies and regulations that make the use of a SaaS console not feasible, for example, for air-gapped environments inaccessible from the public internet. You can install and manage your own NGINX Instance Manager deployments by following our documentation.

Once signed into my NGINX SaaS console, I can install agents on my NGINX HA cluster pair to discover the nodes:

```shell
$ curl https://agent.connect.nginx.com/nginx-agent/install | DATA_PLANE_KEY="z" sh -s -- -y
$ sudo systemctl start nginx-agent
```

I can track my overall NGINX usage and telemetry from either the UI console or the APIs. Under FCPs (Flexible Consumption Plans), consumption is measured yearly based on the number of managed instances. This model is becoming increasingly popular as customers opt for flexible pay-as-you-go licensing.

Setting up F5 BIG-IP

I touched on two options to configure F5 BIG-IP, depending on my cloud environment. In on-prem environments with active-active HA targets, I need to configure F5 DNS load balancing. I will now step through how to configure DNS load balancing on BIG-IP.

The first step is to create a VS (Virtual Server) listening on UDP with service port 53 for DNS. Then I create a wide IP with the name 'nginxdemo.f5lab.com'. This will be the domain I use to connect to my BIG-IP DNS load balancer. If I click on the 'Pools' tab, I can see my gslbPool members, where each member corresponds to a VIP assigned to my NGINX Plus HA cluster nodes.

I also need to create a datacenter with a Server List containing the NGINX HA cluster nodes and the BIG-IP DNS system. Under DNS >> GSLB : Servers : Server List, I can start adding my NGINX members and BIG-IP DNS system.
Note: In a public cloud environment, I typically will not need to configure GSLB on the BIG-IP. I can simply create a Virtual Server with HTTPS and service port 443, add NGINX Plus pool members, and attach a health monitor for redundancy failover.

Conclusion

As we progressed through this series, I expanded my architecture to address scalability concerns that inevitably surface in any business. The IT architecture of a business needs to be flexible and agile if it wants to thrive, especially in this modern competitive landscape. The solutions I presented in this series are fundamental building blocks that can technically be implemented anywhere. They enable organizations to quickly maneuver and seek out alternative options when current ones are no longer viable, which brings me to the topic of AI. How will enterprises adopt AI in the present and future? Ultimately it will come down to extending reference architectures (like the ones discussed in this series) with AI components (LLMs, vector DBs, RAG, etc.). These components plug into the overall architecture to improve automation and efficiency in the overall business model. In the next series, we will discuss AI reference architectures with F5 and NGINX.
What is Message Queue Telemetry Transport (MQTT)? How to secure MQTT?

MQTT is a messaging protocol broadly used in IoT and connected services. It is designed to be lightweight so it can run on constrained devices, and it remains reliable even over poor-quality networks. Even in its latest version, MQTTv5, however, the attack surface is very large.
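The teaser above doesn't include configuration, but a common first step in securing MQTT with NGINX is terminating TLS in front of a plaintext broker using the stream module. A minimal sketch, where the broker hostname, port, and certificate paths are placeholders:

```nginx
stream {
    server {
        listen 8883 ssl;                                # standard MQTT-over-TLS port
        ssl_certificate     /etc/ssl/mqtt/server.crt;   # placeholder paths
        ssl_certificate_key /etc/ssl/mqtt/server.key;
        # Forward decrypted traffic to the plaintext broker behind NGINX:
        proxy_pass mqtt-broker.internal:1883;
    }
}
```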