Installing and Locking a Specific Version of F5 NGINX Plus
A guide for installing and locking a specific version of NGINX Plus to ensure stability, meet internal policies, and prepare for controlled upgrades.

Introduction

The most common way to install F5 NGINX Plus is with the package manager native to your Linux host (e.g., yum, apt-get, etc.). By default, the package manager installs the latest available version of NGINX Plus. However, there may be scenarios where you need to install an earlier version. To help you modify your automation scripts, we’ve provided example commands for selecting a specific version.

Common Scenarios for Installing an Earlier Version of NGINX Plus

- Your internal policy requires sticking to internally tested versions before deploying the latest release.
- You prefer to maintain consistency by using the same version across your entire fleet for simplicity.
- You’d like to verify and meet additional requirements introduced in a newer release (e.g., NGINX Plus Release 33) before upgrading.

Commands for Installing and Holding a Specific Version of NGINX Plus

Use the following commands based on your Linux distribution to install and lock a prior version of NGINX Plus:

Ubuntu 20.04, 22.04, 24.04 LTS
sudo apt-get update
sudo apt-get install -y nginx-plus=<VERSION>
sudo apt-mark hold nginx-plus

Debian 11, 12
sudo apt-get update
sudo apt-get install -y nginx-plus=<VERSION>
sudo apt-mark hold nginx-plus

AlmaLinux 8, 9 / Rocky Linux 8, 9 / Oracle Linux 8.1+, 9 / RHEL 8.1+, 9
sudo yum install -y nginx-plus-<VERSION>
sudo yum versionlock nginx-plus

Amazon Linux 2 LTS, 2023
sudo yum install -y nginx-plus-<VERSION>
sudo yum versionlock nginx-plus

SUSE Linux Enterprise Server 12, 15 SP5+
sudo zypper install nginx-plus=<VERSION>
sudo zypper addlock nginx-plus

Alpine Linux 3.17, 3.18, 3.19, 3.20
apk add nginx-plus=<VERSION>
echo "nginx-plus hold" | sudo tee -a /etc/apk/world

FreeBSD 13, 14
pkg install nginx-plus-<VERSION>
pkg lock nginx-plus

Notes

- Replace <VERSION> with the desired version (e.g., 32-2*). A worked Ubuntu example appears at the end of this article.
- After installation, verify the installed version with the command: nginx -v.
- Holding or locking the package ensures it won’t be inadvertently upgraded during routine updates.

Conclusion

Installing and locking a specific version of NGINX Plus ensures stability, compliance with internal policies, and proper validation of new features before deployment. By following the provided commands tailored to your Linux distribution, you can confidently maintain control over your infrastructure while minimizing the risk of unintended upgrades. Regularly verifying the installed version and holding updates will help ensure consistency and reliability across your environments.
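To make the Ubuntu/Debian flow concrete, here is a minimal sketch. The version string below is illustrative (NGINX Plus deb packages generally follow a <release>-<build>~<codename> pattern); also note that on RHEL-family systems, yum versionlock requires the versionlock plugin package to be installed first.

# List the versions the NGINX Plus repo offers (Ubuntu/Debian):
apt-cache madison nginx-plus

# Install and hold a specific version (version string is illustrative):
sudo apt-get update
sudo apt-get install -y nginx-plus=32-2~jammy
sudo apt-mark hold nginx-plus

# Confirm what is now installed and that the hold is in place:
nginx -v
apt-mark showhold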
Announcing F5 NGINX Gateway Fabric 1.5.0 with NGINX Code Snippets

Today we are announcing the release of F5 NGINX Gateway Fabric 1.5.0, which comes with one of the most prominent features from F5 NGINX Ingress Controller: code snippets. As always, we have made this feature available through an extension of the Gateway API, a new resource called a SnippetsFilter. With it, you can define custom NGINX configuration that can be applied to any Route rule!

In addition to SnippetsFilters, some other highlights of this release include:

- Ability to retain client IP information via Proxy Protocol or the X-Forwarded-For header
- Configurable NGINX error log level and reduced “info” log verbosity
- A new Getting Started guide for installing NGF for the first time
- F5 NGINX Plus R33 support
- Bug fixes!

You can see the full changelog with details here!

NGINX Code Snippets via SnippetsFilters

With a 20-year legacy, there is a massive range of features that F5 NGINX provides that are not yet explicitly exposed through NGINX Gateway Fabric via the Gateway API. If you were familiar with NGINX directives and configuration and just wanted to use a native NGINX feature, you were unable to, because our control plane would overwrite any configuration you wrote. We needed a way for your directives to merge with the configuration we pushed.

With SnippetsFilters, we have not only enabled you to append directives to whatever context you need them in, but also built them as an extension to the Gateway API. Once you define a SnippetsFilter, any route rule can use it to apply NGINX configuration for the context you supplied. You can even change higher-level contexts, such as main, to do things like importing modules – just make sure you coordinate with everyone when making these changes! A sketch of a SnippetsFilter appears at the end of this post.

Because SnippetsFilters open the opportunity for misconfiguration in a way that may impact other application teams using the same Gateway, they have to be explicitly enabled. But if you find yourself needing a feature from NGINX that is not yet available in NGINX Gateway Fabric, or you want to include your own NGINX logic, the SnippetsFilter will fill the gap between our current first-class Gateway API features and the full power of NGINX.

What’s Next

Next release, we will be focusing on a big architectural shift in NGINX Gateway Fabric that we have been planning for a while: separating our data and control planes. Currently, both our control plane (what configures NGINX for us) and our data plane (NGINX itself) are contained within the same pod. Ideally, these should be separate for security, performance, and scalability reasons. As we are also looking at supporting multiple Gateway resources for a single NGINX Gateway Fabric installation, we thought this was a good time to make that change, prompting our next release to be 2.0!

In 2.0, you’ll see the control plane start as its own pod. As you create Gateways, or as you scale existing ones, the control plane will spin up new instances of NGINX in their own pods. This translates into less overhead, better RBAC security, and the ability to provision multiple Gateway resources. For more information, check out our design on the separation!

Resources

For the complete changelog for NGINX Gateway Fabric 1.5.0, see the Release Notes. To try NGINX Gateway Fabric for Kubernetes with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases. If you would like to get involved, see what is coming next, or see the source code for NGINX Gateway Fabric, check out our repository on GitHub!
We have weekly community meetings on Tuesdays at 9:30AM Pacific/12:30PM Eastern/5:30PM GMT. The meeting link, updates, agenda, and notes are on the NGINX Gateway Fabric Meeting Calendar. Links are also always available from our GitHub README.
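Here is the promised sketch of the snippets feature, assuming the v1alpha1 SnippetsFilter schema that shipped with this release (verify field names and contexts against the 1.5.0 docs, and remember the feature must be explicitly enabled). The rate-limiting directives are just illustrative NGINX configuration, not part of the release itself.

# Hypothetical example: add NGINX rate limiting via a SnippetsFilter,
# then reference it from an HTTPRoute rule with an ExtensionRef filter.
kubectl apply -f - <<'EOF'
apiVersion: gateway.nginx.org/v1alpha1
kind: SnippetsFilter
metadata:
  name: rate-limit
spec:
  snippets:
  - context: http
    value: limit_req_zone $binary_remote_addr zone=rl:10m rate=10r/s;
  - context: http.server.location
    value: limit_req zone=rl burst=20 nodelay;
EOF

# Then, in the HTTPRoute rule that should be rate limited:
#   filters:
#   - type: ExtensionRef
#     extensionRef:
#       group: gateway.nginx.org
#       kind: SnippetsFilter
#       name: rate-limit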
Introducing the Single Installation Script for F5 NGINX Instance Manager

F5 NGINX Instance Manager (NIM) is a centralised management tool designed to simplify the administration and monitoring of F5 NGINX instances across various environments, including on-premises, cloud, and hybrid infrastructures. It provides a single interface to efficiently oversee multiple NGINX instances, making it particularly useful for organisations using NGINX at scale.

In the past, installing NGINX Instance Manager and its dependencies could be a tedious and error-prone process. Whether you’re dealing with databases, NGINX proxies, or multiple components that require manual configuration, the time and effort involved can quickly add up. This is where our new single-installation script comes into play. Designed to streamline and simplify the installation of NGINX Instance Manager, this script allows you to set up everything you need on a supported operating system with minimal effort. Gone are the days of hunting for specific installation commands, configuring dependencies manually, or worrying about compatibility issues between different systems. With this script, you can install NGINX Instance Manager, the Clickhouse database, and the NGINX proxy components in one seamless operation.

Key Features:

- One-command installation: The script automates the installation of NGINX Instance Manager, the Clickhouse database, and the NGINX proxy. Just run the script, and it takes care of everything.
- Cross-platform compatibility: The script supports all the operating systems specified in our technical specification document, making it versatile for different environments.
- Offline installation support: For users working in environments without internet access, the script can pull all necessary binaries from our software repository, enabling installation on a disconnected host. Simply gather the required binaries beforehand, and you’re good to go!
- License key integration: The only things you’ll need to run the script in a fresh operating system environment are your license keys. This makes the process even easier, as there’s no need for additional setup steps.

What you need

- A fresh machine with one of the supported operating systems installed.
- Download the JSON web token, SSL certificate, and private key for NGINX Instance Manager from MyF5 (including trials). You can use the same files as F5 NGINX Plus in your MyF5 portal.
- Download the script here.

Step 1 - Download and run the installation script

By default, the script:

- Assumes you’re connected to the internet for installations and upgrades
- Reads SSL files from the /etc/ssl/nginx directory
- Installs the latest version of NGINX Open Source
- Installs Clickhouse
- Installs the latest version of NGINX Instance Manager

If you want to use the script with non-default options (e.g., to use NGINX Plus instead of Open Source), use these switches:

- To point to a repository key stored in a directory other than /etc/ssl/nginx: -k /path/to/your/<nginx-repo.key> file
- To point to a repository certificate stored in a directory other than /etc/ssl/nginx: -c /path/to/your/<nginx-repo.crt> file
- To install NGINX Plus (instead of NGINX OSS): -p <nginx_plus_version> -j /path/to/licence.jwt

You also need to specify the current operating system.
To get the latest list supported by the script, run the following command:

grep '\-d distribution' install-nim-bundle.sh

To see other options in the script, run:

sudo bash install-nim-bundle.sh -h

For example, to use the script to install NGINX Instance Manager on Ubuntu 24.04, with repository keys in the default /etc/ssl/nginx directory, and with the latest version of NGINX Plus, run the following command:

sudo bash install-nim-bundle.sh -n latest -d ubuntu24.04 -j /path/to/license.jwt

In most cases, the script completes the installation of NGINX Instance Manager and associated packages. At the end of the process, you’ll see an auto-generated password. Save that password. You’ll need it when you sign in to NGINX Instance Manager.

Step 2 – Access NGINX Instance Manager

To access the NGINX Instance Manager web interface, open a web browser and go to https://<NIM_FQDN>, replacing <NIM_FQDN> with the Fully Qualified Domain Name of your NGINX Instance Manager host. The default administrator username is admin, and the generated password is saved, in encrypted format, to the /etc/nms/nginx/.htpasswd file. The password was displayed in the terminal during installation. If you’d like to change this password, refer to the “Set or Change User Passwords” section in the Basic Authentication topic.

This single-installation solution significantly reduces the complexity of deploying Instance Manager, whether you’re operating in a connected or disconnected environment. Now, all it takes is a simple command to get everything up and running. For more information and alternative options when using the script for NGINX Instance Manager, please see the instructions here.
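As a sketch that combines the non-default switches listed above: the distribution, license path, and repo key/cert locations below are placeholders you would replace with your own values, and <nginx_plus_version> must be substituted with a version string the script accepts.

# Hypothetical invocation: NGINX Plus data plane, with repo keys outside /etc/ssl/nginx.
sudo bash install-nim-bundle.sh \
  -d ubuntu24.04 \
  -p <nginx_plus_version> \
  -j /path/to/license.jwt \
  -k /opt/nginx/nginx-repo.key \
  -c /opt/nginx/nginx-repo.crt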
5 Technical Sessions That Should Be Great: F5 AppWorld 2025

These F5 Academy sessions explore modern app delivery, security, and operations. The full list of sessions is on the F5 AppWorld 2025 Academy page - if you haven't yet registered, you can do so here: Register for F5 AppWorld 2025

LAB - F5 Distributed Cloud: Discovering & Securing APIs
API security has never been more critical, and this lab dives straight into the tough stuff. Learn how to find hidden endpoints, detect sensitive data and authentication states, and apply integrated API security measures to keep your environment locked tight.

TECHNICAL BRIEFING - LLM Security and Delivery with F5’s Distributed Cloud Security Ecosystem
AI is fueling the next wave of applications—but it’s also introducing new security blind spots. This briefing explores how to secure LLMs and integrate the right solutions to ensure your AI-driven workloads remain fast, cost-effective, and protected.

LAB - F5 NGINX Plus Ingress as an API Gateway for Kubernetes
Containerized environments and microservices are here to stay, and this lab helps you navigate the complexity. Configure NGINX Plus Ingress as a powerful API gateway for your Kubernetes workloads, enabling schema enforcement, authorization, and rate-limiting all in one streamlined solution.

LAB - Zero Trust at Scale With F5 NGINX
Zero trust principles become a whole lot more meaningful when you can scale them. Get hands-on with NGINX Plus and BIG-IP GTM to build a robust, scalable zero trust architecture, ensuring secure and seamless app access across enterprises and multi-cluster Kubernetes environments.

LAB - F5 Distributed Cloud: Security Automation & Zero Day Mitigation
In this lab, you’ll learn how to leverage advanced matching criteria and custom rules to quickly respond to emerging threats. Shore up your defenses with automated policies that deliver frictionless security and agile zero-day mitigation.

Session Updates Coming in January 🚨

AppWorld's Breakout Sessions officially drop in January 2025, but here is a sneak preview! Check back in January to add these to your agenda.

- Global App Delivery With a Global Network
- How Generative AI Breaks Traditional Application Security and What You Can Do About It
- The New Wave of Bots: A Deep Dive into Residential IP Proxy Networks
- From ZTNA to Universal ZTNA: Expanding Your App Security Strategy

See you at F5 AppWorld 2025! #AppWorld25
Updating SSL Certificates on BIG-IP using REST API

Simple cURL REST API commands to seamlessly update SSL certificates on a BIG-IP system. This method is ideal for those who prefer automation and want to integrate the process into their workflows. By following this guide, you will be able to:

- Upload a certificate and private key.
- Install them on the BIG-IP system.
- Update an SSL profile with the new certificate and key.
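A minimal sketch of those three steps with cURL, assuming an iControl REST-enabled BIG-IP. The host, credentials, and object names are placeholders, and newer client-ssl profiles may expect a certKeyChain stanza instead of the flat cert/key fields, so verify against your version's API reference.

HOST=https://bigip.example.com
CRED='admin:secret'

# 1) Upload the certificate and key (single-chunk upload;
#    files land in /var/config/rest/downloads/ on the BIG-IP).
LEN=$(wc -c < mycert.crt)
curl -sku "$CRED" "$HOST/mgmt/shared/file-transfer/uploads/mycert.crt" \
  -H 'Content-Type: application/octet-stream' \
  -H "Content-Range: 0-$((LEN-1))/$LEN" --data-binary @mycert.crt

LEN=$(wc -c < mycert.key)
curl -sku "$CRED" "$HOST/mgmt/shared/file-transfer/uploads/mycert.key" \
  -H 'Content-Type: application/octet-stream' \
  -H "Content-Range: 0-$((LEN-1))/$LEN" --data-binary @mycert.key

# 2) Install the uploaded files as crypto objects.
curl -sku "$CRED" -X POST "$HOST/mgmt/tm/sys/crypto/cert" \
  -H 'Content-Type: application/json' \
  -d '{"command":"install","name":"mycert.crt","from-local-file":"/var/config/rest/downloads/mycert.crt"}'
curl -sku "$CRED" -X POST "$HOST/mgmt/tm/sys/crypto/key" \
  -H 'Content-Type: application/json' \
  -d '{"command":"install","name":"mycert.key","from-local-file":"/var/config/rest/downloads/mycert.key"}'

# 3) Point the client-ssl profile at the new pair.
curl -sku "$CRED" -X PATCH "$HOST/mgmt/tm/ltm/profile/client-ssl/my_clientssl" \
  -H 'Content-Type: application/json' \
  -d '{"cert":"/Common/mycert.crt","key":"/Common/mycert.key"}'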
Introducing the New Docker Compose Installation Option for F5 NGINX Instance Manager

F5 NGINX Instance Manager (NIM) is a centralised management solution designed to simplify the administration and monitoring of F5 NGINX instances across various environments, including on-premises, cloud, and hybrid infrastructures. It provides a single interface to efficiently oversee multiple NGINX instances, making it particularly useful for organizations using NGINX at scale.

We’re excited to introduce a new Docker Compose installation option for NGINX Instance Manager, designed to help you get up and running faster than ever before, in just a couple of steps.

Key Features:

- Quick and Easy Installation: With just a couple of steps, you can pull and deploy NGINX Instance Manager on any Docker host, without having to manually configure multiple components. The image is available in our container registry, so once you have a valid license to access it, getting up and running is as simple as pulling the container.
- Fault-Tolerant and Resilient: This installation option is designed with fault tolerance in mind. Persistent storage ensures your data is safe even in the event of container restarts or crashes. Additionally, with a separate database container, your product’s data is isolated, adding an extra layer of resilience and making it easier to manage backups and restores.
- Seamless Upgrades: Upgrades are a breeze. You can update to the latest version of NGINX Instance Manager by simply updating the image tag in your Docker Compose file. This makes it easy to stay up to date with the latest features and improvements without worrying about downtime or complex upgrade processes.
- Backup and Restore Options: To ensure your data is protected, this installation option comes with built-in backup and restore capabilities. Easily back up your data to a safe location and restore it in case of any issues.
- Environment Configuration Flexibility: The Docker Compose setup allows you to define custom environment variables, giving you full control over configuration settings such as log levels, timeout values, and more.
- Production-Ready: Designed for scalability and reliability, this installation method is ready for production environments. With proper resource allocation and tuning, you can deploy NGINX Instance Manager to handle heavy workloads while maintaining performance.

The following steps walk you through how to deploy and manage NGINX Instance Manager using Docker Compose.

What you need

- A working version of Docker
- Your NGINX subscription’s JSON Web Token from MyF5
- This pre-configured docker-compose.yaml file: Download docker-compose.yaml file.

Step 1 - Set up Docker for the NGINX container registry

Log in to the Docker registry using the contents of the JSON Web Token file you downloaded from MyF5:

docker login private-registry.nginx.com --username=<JWT_CONTENTS> --password=none

Step 2 - Run “docker login” and then “docker compose up” in the directory where you downloaded docker-compose.yaml

Note: You can optionally set the Administrator password for NGINX Instance Manager prior to running Docker Compose.

~$ docker login private-registry.nginx.com --username=<JWT_CONTENTS> --password=none
~$ echo "admin" > admin_password.txt
~$ docker compose up -d
[+] Running 6/6
 ✔ Network nim_clickhouse        Created   0.1s
 ✔ Network nim_external_network  Created   0.2s
 ✔ Network nim_default           Created   0.2s
 ✔ Container nim-precheck-1      Started   0.8s
 ✔ Container nim-clickhouse-1    Healthy   6.7s
 ✔ Container nim-nim-1           Started   7.4s
Step 3 – Access NGINX Instance Manager

Go to the NGINX Instance Manager UI at https://<<DOCKER_HOST>>:443 and license the product using the same JSON Web Token you downloaded from MyF5 earlier.

Conclusion

With this new setup, you can install and run NGINX Instance Manager on any Docker host in just three steps, dramatically reducing setup time and simplifying deployment. Whether you are working in a development environment or deploying to production, the Docker Compose-based solution ensures a seamless and reliable experience. For more information on using the Docker Compose option with NGINX Instance Manager, such as running a backup and restore, using secrets, and many more, please see the instructions here.
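As a sketch of the upgrade path described above: edit docker-compose.yaml to point at the new image tag, then let Compose recreate the containers while the persistent volumes preserve the product data. These are standard Docker Compose commands; the exact tag name depends on the release you are moving to.

# After editing docker-compose.yaml to reference the new NIM image tag:
docker compose pull      # fetch the updated image from the registry
docker compose up -d     # recreate only the containers whose definition changed
docker compose ps        # confirm the containers report healthy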
How I did it - "Securing NVIDIA’s Morpheus AI Framework with NGINX Plus Ingress Controller”

In this installment of "How I Did It," we continue our journey into AI security. I have documented how I deployed an NVIDIA Morpheus AI infrastructure along with F5's NGINX Plus Ingress Controller to provide secure and scalable external access.
GitOps: Declarative Infrastructure and Application Delivery with NGINX App Protect

First things first. What is GitOps?

In a nutshell, GitOps is a practice (Git Operations) that allows you to use Git and a code repository as your configuration source of truth (declarative infrastructure as code and application delivery as code), coupled with various supporting tools. The state of your Git repository syncs with your infrastructure and application states. As the operations team runs daily operations (CRUD - "Create, Read, Update and Delete") leveraging the goodness and philosophy of DevOps, they are no longer required to store configuration manifests in various configuration systems. The Git repo is the source of truth. Typically, the target systems or infrastructures run on a Kubernetes-based platform. For further details and a better explanation of GitOps, please refer to the links below or a Google search.

https://codefresh.io/learn/gitops/
https://www.atlassian.com/git/tutorials/gitops

Why practice GitOps?

I have been managing my lab environment for many years. I use my lab for research into technologies, customer demos/proofs-of-concept, application testing, and code development. Due to the nature of constant changes to my environment (agile and dynamic), especially with my multiple versions of the Kubernetes platform, I have been spending too much time updating, changing, building, deploying and testing various cloud native apps. Commands like docker build, kubectl, istioctl and git have been used constantly and repetitively to operate environments. Hence, I practice GitOps for my Kubernetes infrastructure. Of course, tasks and operations can be automated and orchestrated with tools such as Ansible, Terraform, Chef and Puppet. You may not necessarily need GitOps to achieve a similar outcome. I adopted the GitOps practice partly to learn the new "language" and to experience first-hand the full benefits of GitOps.

Here are some of my learnings and operational experiences from managing F5's NGINX App Protect and the many demo apps protected by it, which may benefit you and give you some insight into how you can run your own GitOps. You can use this as a starting point for your own GitOps workflow. For details and a description of F5's NGINX App Protect, please refer to https://www.nginx.com/products/nginx-app-protect/

Key architecture decisions of my GitOps workflow:

- Modular architecture - allows me to swap technologies in and out without rework ("Lego blocks").
- Must reduce my operational work - saves time, no repetitive tasks, write once and deploy many.
- Centralise all my configuration manifests - a single source of truth. Previously, my configuration existed everywhere - jump hosts, local laptop, cloud storage and so on. I had lost track of which configuration was the latest.
- Must be simple, modern, easy to understand and as native as possible.

Use case and desirable outcome

- Build and keep up to date NGINX Plus Ingress Controller with NGINX App Protect in my Kubernetes environment.
- Build and keep up to date NGINX App Protect's attack signatures and Threat Campaign signatures.
- Zero downtime/impact to apps protected by NGINX App Protect across NGINX App Protect's frequent release, update and patch cycles.

My problem statement

I need to ensure that my infrastructure (Kubernetes ingress controller) and web application firewall (NGINX App Protect) are kept up to date with ease.
For example, when there are new NGINX-ingress and NGINX App Protect updates (e.g., a new version, attack signature or threat campaign signature), I would like to seamlessly push changes out to NGINX-ingress and NGINX App Protect (as it protects my backend apps) without impacting the applications protected by NGINX App Protect.

GitOps Workflow

Start small, start with a clear workflow. Below is a depiction of the overall GitOps workflow. NGINX App Protect is the target application. Hence, before describing the full GitOps workflow, let us understand the deployment options for NGINX App Protect.

NGINX App Protect deployment model

There are four deployment models for NGINX App Protect. The common deployment models are:

- Edge - external load balancer and proxies (global enforcement)
- Dynamic module inside the Ingress Controller (per service/URI/ingress resource enforcement)
- Per-service proxy model - Kubernetes service tier (per service enforcement)
- Per-pod proxy model - proxy embedded in pod (per endpoint enforcement)

The pipeline demonstrated in this article works with NGINX App Protect deployed either as an Ingress Controller (2) or in the per-service proxy model (3). For the purposes of this article, NGINX App Protect is deployed at the Ingress controller at the entry point to Kubernetes (Kubernetes edge proxy).

For those who prefer video:

Part 1/3 – GitOps with NGINX Plus Ingress and NGINX App Protect - Overview
Part 2/3 – GitOps with NGINX Plus Ingress and NGINX App Protect – Demo in Action
Part 3/3 - GitOps with NGINX Plus Ingress and NGINX App Protect - WAF Security Policy Management

Description of GitOps workflow

1. Operation (myself) updates the nginx-ingress + NAP image repo (e.g. .gitlab-ci.yml in nginx-plus-ingress) via VSCODE, performs a code merge and commits changes to the repo stored in Gitlab.
2. The Gitlab CI/CD pipeline is triggered. Build, test and deploy jobs start.
3. Git clone the kubernetes-ingress repo from https://github.com/nginxinc/kubernetes-ingress/. Check out the latest kubernetes-ingress version and build a new image with the DockerfilewithAppProtectForPlus dockerfile. [Ensure you have an appropriate NGINX App Protect license in place.]
4. As part of the build process, the pipeline triggers Trivy container security scanning for vulnerabilities (the open source version of AquaSec's scanner). Upon completion of static binary scanning, the pipeline uploads the scan report back to the repo for continuous security improvement.
5. The pipeline pushes the image to the private repository. The private image repo has been configured to perform a nightly container security scan (Clair Scanner). Leverage multiple scanning tools - check and balance.
6. The pipeline clones the nginx-ingress deployment repo (my-kubernetes-apps) and updates the nginx-ingress deployment manifest with the latest build image tag (refer to the manifest excerpt below, and the tag-bump sketch at the end of this article).
7. Gitlab triggers a webhook to ArgoCD to refresh/sync the desired application state on ArgoCD with the deployment state in Kubernetes. By default, ArgoCD syncs with Kubernetes every 3 minutes; the webhook triggers an instant sync.
8. ArgoCD (deployed on an independent K3S cluster) fetches the new code from the repo and detects the code changes.
9. ArgoCD automates the deployment of the desired application states in the specified target environment. It tracks updates to git branches, tags, or a manifest pinned to a specific version at a git commit.
10. Kubernetes triggers an image pull from the private repo, performs a rolling update, and ensures zero interruption to existing traffic. New pods (nginx-ingress) spin up, and traffic moves to the new pods before the old pod is terminated.
11. Depending on the environment and organisational maturity, a successful build, test and deployment onto the DEV environment can be pushed to the production environment. Note: DAST scanning (ZAP Scanner) is not shown in this demo; it currently runs as offline, non-automated scanning.
12. ArgoCD and Gitlab are integrated with Slack notifications. Events are reported into a Slack channel via webhook.
13. NGINX App Protect events are sent to an ELK stack for visibility and analytics.

Snippet showing where Gitlab CI updates the nginx-plus-ingress deployment manifest (flow #6). Each new image build is tagged with <branch>-<hash>-<version>:

...
spec:
  imagePullSecrets:
  - name: regcred
  serviceAccountName: nginx-ingress
  containers:
  - image: reg.foobz.com.au/apps/nginx-plus-ingress:master-f660306d-1.9.1
    imagePullPolicy: IfNotPresent
    name: nginx-plus-ingress
...

Gitlab CI/CD Pipeline

A successful run of the CI/CD pipeline builds, tests, scans and pushes the container image to the private repository, and executes a code commit to the nginx-plus-ingress repo.

Note: The Trivy scanning report is uploaded and committed back to the same repo. To prevent Gitlab CI triggering another build process (a "pipeline loop"), the code commit is tagged with [skip ci].

ArgoCD continuous deployment

ArgoCD constantly (by default every 3 minutes) syncs the desired application state with my Kubernetes cluster. It ensures the configuration manifest stored in the Git repository is always synchronised with the target environment.

My train-dev apps are protected by nginx-ingress + NGINX App Protect with specific (per service/URI) enforcement - an NGINX App Protect policy is applied to this service. nginx-ingress + NGINX App Protect is deployed as an Ingress Controller in Kubernetes. pod-template-hash=xxxx is labeled and tracked by ArgoCD.

$ kubectl -n nginx-ingress get pod --show-labels
NAME                             READY   STATUS    RESTARTS   AGE   LABELS
nginx-ingress-776b64dc89-pdtv7   1/1     Running   0          8h    app=nginx-ingress,pod-template-hash=776b64dc89
nginx-ingress-776b64dc89-rwk8w   1/1     Running   0          8h    app=nginx-ingress,pod-template-hash=776b64dc89

Please refer to the attached video links above for the full demo in action.

References

Tools involved:

- NGINX Plus Ingress Controller - https://www.nginx.com/products/nginx-ingress-controller/
- NGINX App Protect - https://www.nginx.com/products/nginx-app-protect/
- Gitlab - https://gitlab.com
- Trivy Scanner - https://github.com/aquasecurity/trivy
- ArgoCD - https://argoproj.github.io/argo-cd/
- Harbor Private Repository - https://goharbor.io/
- Clair Scanner - https://github.com/quay/clair
- Slack - https://slack.com
- DAST Scanner - https://www.zaproxy.org/
- Elasticsearch, Logstash and Kibana (ELK) - https://www.elastic.co/what-is/elk-stack, https://github.com/464d41/f5-waf-elk-dashboards
- K3S - https://k3s.io/

Source repos used for this demonstration:

- Repo for building the nginx-ingress + NGINX App Protect image: https://github.com/fbchan/nginx-plus-ingress.git
- Repo for the deployment manifests of the nginx-ingress controller with the NGINX App Protect policy: https://github.com/fbchan/my-kubernetes-apps.git

Summary

GitOps is perhaps a new buzzword. It may or may not make sense in your environment. It definitely makes sense for me. It integrated well with NGINX App Protect and allows me to constantly update and push new code changes into my environment with ease. A few months down the road, when I need to update nginx-ingress and NGINX App Protect, I just need to trigger a CI job, and everything works like magic. Your mileage may vary.
Experience leads me to think along the lines of: start small, start simple by "GitOps-ing" one of your apps that requires frequent changes. Learn, revise and continuously improve from there. The outcome that GitOps provides will ease your operational burden and let you "do more with less". The ease of integrating nginx-ingress and NGINX App Protect into your declarative infrastructure and application delivery with GitOps, together with F5's industry-leading web application firewall protection, will definitely reduce your organisation's risk exposure to threats against external and internal applications.
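To make the flow #6 tag bump concrete, here is a minimal sketch of what such a CI job step could look like. The GitLab host, manifest path and version suffix are hypothetical; CI_COMMIT_REF_NAME and CI_COMMIT_SHORT_SHA are standard Gitlab CI variables, and [skip ci] mirrors the loop-prevention trick described above.

# Hypothetical CI step: bump the image tag in the deployment repo (flow #6).
NEW_TAG="${CI_COMMIT_REF_NAME}-${CI_COMMIT_SHORT_SHA}-1.9.1"
git clone https://gitlab.example.com/apps/my-kubernetes-apps.git
cd my-kubernetes-apps
# Point the deployment manifest at the freshly built image (path is illustrative).
sed -i "s|nginx-plus-ingress:.*|nginx-plus-ingress:${NEW_TAG}|" nginx-ingress/deployment.yaml
git commit -am "Bump nginx-plus-ingress image to ${NEW_TAG} [skip ci]"
git push origin master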
Service Discovery in F5 Distributed Cloud - Architecture Options

The F5 Distributed Cloud (XC) Virtual Edition (VE) Customer Edge (CE) platform can be deployed within your data center or cloud environment. It can perform service discovery for services in your Kubernetes (K8s) clusters.

Why do service discovery?

Service discovery is important in systems that change and move around, like microservices architectures. It helps find and connect services automatically. Instead of hard-coding network locations, service discovery makes sure that services can easily find and communicate with each other, even when they scale or change locations. This improves scalability and resilience, and simplifies managing services in complex environments like Kubernetes or cloud infrastructures. By reducing manual intervention, service discovery enhances the overall efficiency and reliability of application deployments.

The F5 Distributed Cloud (XC) CE can use the native kube-apiserver, or Consul, to query for services as they come online, enabling admins to reference these discovered services. These services become XC origin pool definitions and can then be published locally through a proxy (HTTP load balancer) on the CE itself or via our Global Application Delivery Network (ADN) (Regional Edge deployment). The F5 XC Load Balancer does more than just balance packets. It offers a set of SaaS security services that are easy to use. Customers can have a globally redundant layer of security while serving content from private K8s clusters.

This write-up covers two distinct service discovery architecture options available with XC:

1. Secure K8s Gateway (VE CE)
2. Kubernetes Sitetype Customer Edge (K8s sitetype CE)

Depending on your service discovery use case, you may end up with one of these two options or both, as they are not mutually exclusive. The first option, using the CE as a Secure K8s Gateway, may be the easier option for folks not particularly versed in the nuances of Kubernetes.

Architecture 1: Virtual Edition Customer Edge (VE CE)

If a picture is worth a thousand words, then a working lab environment is worth a million. This repo walks through an entire Secure K8s GW setup and will leave you with a config that could easily be expanded upon. You can quickly build a PoC and start getting familiar with these modern app capabilities by using these tools. The readme includes details on how to use everything and what functions the various tools provide. It's all shell script and YAML, so it's very easy to read through and understand what's going on.

https://github.com/dober-man/ve-ce-secure-k8s-gw

This repo is designed to automate the deployment and configuration of a secure Kubernetes Gateway in the F5 Distributed Cloud (F5 XC) environment. It provides scripts and YAML configurations to set up the secure communication, networking policies, and infrastructure components required to establish a secure gateway in a Kubernetes cluster. The readme file also documents the prerequisites, but you will at a minimum need an XC tenant, an XC Virtual Edition CE and an Ubuntu 22.04 server. If you do not have an XC tenant or VE CE, reach out to your local F5 account team. Please use the issues feature of GitHub to report any discrepancies with the builds or documentation.

Architecture 2: Kubernetes Sitetype Customer Edge (K8s sitetype CE)

In this architecture, the entire CE runs as a service within the cluster. This model requires a bit more fundamental knowledge of Kubernetes and integrates more tightly with the cluster.
You can quickly build a PoC and start getting familiar with these modern app capabilities by using this repo:

https://github.com/dober-man/k8s-sitetype-ce

This repo is focused on automating the deployment of the k8s-sitetype CE in a Kubernetes cluster. It provides scripts to simplify the process of setting up a secure site gateway for handling network traffic between cloud environments, on-premises infrastructure, and edge locations. The readme file documents the prerequisites, but you will at a minimum need an XC tenant and an Ubuntu 22.04 server. If you do not have an XC tenant, reach out to your local F5 account team. Please use the issues feature of GitHub to report any discrepancies with the builds or documentation.

Summary - F5 Distributed Cloud offers a number of Kubernetes integration options for service discovery, but also offers several other capabilities, including Virtual K8s (Namespace as a Service) and Managed K8s, which will be covered in future articles.

Please feel free to drop a like or leave a comment below.
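A minimal sketch of getting started with either lab repo, assuming the prerequisites above (XC tenant, CE where required, Ubuntu 22.04 host) are in place; always review the readme and scripts before running anything.

# Clone whichever architecture you want to evaluate and review its readme first.
git clone https://github.com/dober-man/ve-ce-secure-k8s-gw.git   # Architecture 1
# git clone https://github.com/dober-man/k8s-sitetype-ce.git     # Architecture 2
cd ve-ce-secure-k8s-gw
cat README.md   # prerequisites, script descriptions, and run order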