distributed cloud
How I Did it - Migrating Applications to Nutanix NC2 with F5 Distributed Cloud Secure Multicloud Networking
In this edition of "How I Did it", we will explore how F5 Distributed Cloud Services (XC) enables seamless application extension and migration from an on-premises environment to Nutanix NC2 clusters.

NGINX Unit running in Distributed Cloud vK8s
F5 Distributed Cloud provides a mechanism to easily deploy applications using virtual Kubernetes (vK8s) across a global network. This results in applications running closer to the end user. This article will demonstrate creating an NGINX Unit container to run a simple server-side WebAssembly (Wasm) application and deliver it via Distributed Cloud Regional Edge (RE) sites. Distributed Cloud provides a vK8s infrastructure for deploying modern apps as well as load balancing and security services to securely and reliably deliver applications at the edge.

NGINX Unit Pod

A Kubernetes pod is the smallest execution unit that can be deployed in Kubernetes. A pod is a single instance of an application and may contain single or multiple containers. For this article, a single NGINX Unit container will make up the application pod, and replicas of the pod will be deployed to each of the configured Distributed Cloud Points of Presence (PoPs).

Building the NGINX Unit Container

The first step is to build a server-side WebAssembly (Wasm) application that Unit can execute. Instructions on creating a simple Hello World Wasm application are available here. After the Hello World Wasm component is built, an NGINX Unit container needs to be created to execute the application. To build the NGINX Unit container, download the Unit Wasm Dockerfile that is available on the NGINX Unit GitHub repo. The Dockerfile needs to be modified to allow the container to run as a non-root user in the Distributed Cloud virtual Kubernetes (vK8s) environment. Below is the Dockerfile that I used for this demonstration.

FROM debian:bullseye-slim

LABEL org.opencontainers.image.title="Unit (wasm)"
LABEL org.opencontainers.image.description="Official build of Unit for Docker."
LABEL org.opencontainers.image.url="https://unit.nginx.org"
LABEL org.opencontainers.image.source="https://github.com/nginx/unit"
LABEL org.opencontainers.image.documentation="https://unit.nginx.org/installation/#docker-images"
LABEL org.opencontainers.image.vendor="NGINX Docker Maintainers <docker-maint@nginx.com>"
LABEL org.opencontainers.image.version="1.32.1"

RUN set -ex \
    && savedAptMark="$(apt-mark showmanual)" \
    && apt-get update \
    && apt-get install --no-install-recommends --no-install-suggests -y ca-certificates git build-essential libssl-dev libpcre2-dev curl pkg-config \
    && mkdir -p /usr/lib/unit/modules /usr/lib/unit/debug-modules \
    && mkdir -p /usr/src/unit \
    && mkdir -p /unit/var/lib/unit \
    && mkdir -p /unit/var/run \
    && mkdir -p /unit/var/log \
    && mkdir -p /unit/var/tmp \
    && mkdir /app \
    && chmod -R 777 /unit/var/lib/unit \
    && chmod -R 777 /unit/var/run \
    && chmod -R 777 /unit/var/log \
    && chmod -R 777 /unit/var/tmp \
    && chmod 777 /app \
    && cd /usr/src/unit \
    && git clone --depth 1 -b 1.32.1-1 https://github.com/nginx/unit \
    && cd unit \
    && NCPU="$(getconf _NPROCESSORS_ONLN)" \
    && DEB_HOST_MULTIARCH="$(dpkg-architecture -q DEB_HOST_MULTIARCH)" \
    && CC_OPT="$(DEB_BUILD_MAINT_OPTIONS="hardening=+all,-pie" DEB_CFLAGS_MAINT_APPEND="-Wp,-D_FORTIFY_SOURCE=2 -fPIC" dpkg-buildflags --get CFLAGS)" \
    && LD_OPT="$(DEB_BUILD_MAINT_OPTIONS="hardening=+all,-pie" DEB_LDFLAGS_MAINT_APPEND="-Wl,--as-needed -pie" dpkg-buildflags --get LDFLAGS)" \
    && CONFIGURE_ARGS_MODULES="--prefix=/usr \
        --statedir=/unit/var/lib/unit \
        --debug \
        --control=unix:/unit/var/run/control.unit.sock \
        --runstatedir=/unit/var/run \
        --pid=/unit/var/run/unit.pid \
        --logdir=/unit/var/log \
        --log=/unit/var/log/unit.log \
        --tmpdir=/unit/var/tmp \
        --openssl \
        --libdir=/usr/lib/$DEB_HOST_MULTIARCH" \
    && CONFIGURE_ARGS="$CONFIGURE_ARGS_MODULES \
        --njs" \
    && make -j $NCPU -C pkg/contrib .njs \
    && export PKG_CONFIG_PATH=$(pwd)/pkg/contrib/njs/build \
    && ./configure $CONFIGURE_ARGS --cc-opt="$CC_OPT" --ld-opt="$LD_OPT" --modulesdir=/usr/lib/unit/debug-modules --debug \
    && make -j $NCPU unitd \
    && install -pm755 build/sbin/unitd /usr/sbin/unitd-debug \
    && make clean \
    && ./configure $CONFIGURE_ARGS --cc-opt="$CC_OPT" --ld-opt="$LD_OPT" --modulesdir=/usr/lib/unit/modules \
    && make -j $NCPU unitd \
    && install -pm755 build/sbin/unitd /usr/sbin/unitd \
    && make clean \
    && apt-get install --no-install-recommends --no-install-suggests -y libclang-dev \
    && export RUST_VERSION=1.76.0 \
    && export RUSTUP_HOME=/usr/src/unit/rustup \
    && export CARGO_HOME=/usr/src/unit/cargo \
    && export PATH=/usr/src/unit/cargo/bin:$PATH \
    && dpkgArch="$(dpkg --print-architecture)" \
    && case "${dpkgArch##*-}" in \
        amd64) rustArch="x86_64-unknown-linux-gnu"; rustupSha256="0b2f6c8f85a3d02fde2efc0ced4657869d73fccfce59defb4e8d29233116e6db" ;; \
        arm64) rustArch="aarch64-unknown-linux-gnu"; rustupSha256="673e336c81c65e6b16dcdede33f4cc9ed0f08bde1dbe7a935f113605292dc800" ;; \
        *) echo >&2 "unsupported architecture: ${dpkgArch}"; exit 1 ;; \
    esac \
    && url="https://static.rust-lang.org/rustup/archive/1.26.0/${rustArch}/rustup-init" \
    && curl -L -O "$url" \
    && echo "${rustupSha256} *rustup-init" | sha256sum -c - \
    && chmod +x rustup-init \
    && ./rustup-init -y --no-modify-path --profile minimal --default-toolchain $RUST_VERSION --default-host ${rustArch} \
    && rm rustup-init \
    && rustup --version \
    && cargo --version \
    && rustc --version \
    && make -C pkg/contrib .wasmtime \
    && install -pm 755 pkg/contrib/wasmtime/target/release/libwasmtime.so /usr/lib/$(dpkg-architecture -q DEB_HOST_MULTIARCH)/ \
    && ./configure $CONFIGURE_ARGS_MODULES --cc-opt="$CC_OPT" --modulesdir=/usr/lib/unit/debug-modules --debug \
    && ./configure wasm --include-path=`pwd`/pkg/contrib/wasmtime/crates/c-api/include --lib-path=/usr/lib/$(dpkg-architecture -q DEB_HOST_MULTIARCH)/ && ./configure wasm-wasi-component \
    && make -j $NCPU wasm-install wasm-wasi-component-install \
    && make clean \
    && ./configure $CONFIGURE_ARGS_MODULES --cc-opt="$CC_OPT" --modulesdir=/usr/lib/unit/modules \
    && ./configure wasm --include-path=`pwd`/pkg/contrib/wasmtime/crates/c-api/include --lib-path=/usr/lib/$(dpkg-architecture -q DEB_HOST_MULTIARCH)/ && ./configure wasm-wasi-component \
    && make -j $NCPU wasm-install wasm-wasi-component-install \
    && cd \
    && rm -rf /usr/src/unit \
    && for f in /usr/sbin/unitd /usr/lib/unit/modules/*.unit.so; do \
        ldd $f | awk '/=>/{print $(NF-1)}' | while read n; do dpkg-query -S $n; done | sed 's/^\([^:]\+\):.*$/\1/' | sort | uniq >> /requirements.apt; \
    done \
    && apt-mark showmanual | xargs apt-mark auto > /dev/null \
    && { [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; } \
    && /bin/true \
    && mkdir -p /var/lib/unit/ \
    && mkdir -p /docker-entrypoint.d/ \
    && apt-get update \
    && apt-get --no-install-recommends --no-install-suggests -y install curl $(cat /requirements.apt) \
    && apt-get purge -y --auto-remove build-essential \
    && rm -rf /var/lib/apt/lists/* \
    && rm -f /requirements.apt \
    && ln -sf /dev/stderr /var/log/unit.log

WORKDIR /app
COPY --chmod=755 hello_wasi_http.wasm /app
COPY ./*.json /docker-entrypoint.d/
COPY --chmod=755 docker-entrypoint.sh /usr/local/bin/
COPY welcome.* /usr/share/unit/welcome/

STOPSIGNAL SIGTERM

ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
EXPOSE 8000
CMD ["unitd", "--no-daemon", "--control", "unix:/unit/var/run/control.unit.sock"]
"unix:/unit/var/run/control.unit.sock"] The changes to the Dockerfile are due to the non-root user not being able to write to the /var directory. Rather than modify the permissions on /var and its subdirectories, this docker file creates a /unit directory that is used for storing process information and logs. The docker-entrypoint.sh script also needed to be modified to use the /unit directory structure. The updated docker-entrypoint.sh script is shown below. #!/bin/sh set -e WAITLOOPS=5 SLEEPSEC=1 curl_put() { RET=$(/usr/bin/curl -s -w '%{http_code}' -X PUT --data-binary @$1 --unix-socket /unit/var/run/control.unit.sock http://localhost/$2) RET_BODY=$(echo $RET | /bin/sed '$ s/...$//') RET_STATUS=$(echo $RET | /usr/bin/tail -c 4) if [ "$RET_STATUS" -ne "200" ]; then echo "$0: Error: HTTP response status code is '$RET_STATUS'" echo "$RET_BODY" return 1 else echo "$0: OK: HTTP response status code is '$RET_STATUS'" echo "$RET_BODY" fi return 0 } if [ "$1" = "unitd" ] || [ "$1" = "unitd-debug" ]; then if /usr/bin/find "/unit/var/lib/unit/" -mindepth 1 -print -quit 2>/dev/null | /bin/grep -q .; then echo "$0: /unit/var/lib/unit/ is not empty, skipping initial configuration..." else echo "$0: Launching Unit daemon to perform initial configuration..." /usr/sbin/$1 --control unix:/unit/var/run/control.unit.sock for i in $(/usr/bin/seq $WAITLOOPS); do if [ ! -S /unit/var/run/control.unit.sock ]; then echo "$0: Waiting for control socket to be created..." /bin/sleep $SLEEPSEC else break fi done # even when the control socket exists, it does not mean unit has finished initialisation # this curl call will get a reply once unit is fully launched /usr/bin/curl -s -X GET --unix-socket /unit/var/run/control.unit.sock http://localhost/ if /usr/bin/find "/docker-entrypoint.d/" -mindepth 1 -print -quit 2>/dev/null | /bin/grep -q .; then echo "$0: /docker-entrypoint.d/ is not empty, applying initial configuration..." echo "$0: Looking for certificate bundles in /docker-entrypoint.d/..." for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.pem"); do echo "$0: Uploading certificates bundle: $f" curl_put $f "certificates/$(basename $f .pem)" done echo "$0: Looking for JavaScript modules in /docker-entrypoint.d/..." for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.js"); do echo "$0: Uploading JavaScript module: $f" curl_put $f "js_modules/$(basename $f .js)" done echo "$0: Looking for configuration snippets in /docker-entrypoint.d/..." for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.json"); do echo "$0: Applying configuration $f"; curl_put $f "config" done echo "$0: Looking for shell scripts in /docker-entrypoint.d/..." for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -name "*.sh"); do echo "$0: Launching $f"; "$f" done # warn on filetypes we don't know what to do with for f in $(/usr/bin/find /docker-entrypoint.d/ -type f -not -name "*.sh" -not -name "*.json" -not -name "*.pem" -not -name "*.js"); do echo "$0: Ignoring $f"; done else echo "$0: /docker-entrypoint.d/ is empty, creating 'welcome' configuration..." curl_put /usr/share/unit/welcome/welcome.json "config" fi echo "$0: Stopping Unit daemon after initial configuration..." kill -TERM $(/bin/cat /unit/var/run/unit.pid) for i in $(/usr/bin/seq $WAITLOOPS); do if [ -S /unit/var/run/control.unit.sock ]; then echo "$0: Waiting for control socket to be removed..." 
                /bin/sleep $SLEEPSEC
            else
                break
            fi
        done

        if [ -S /unit/var/run/control.unit.sock ]; then
            kill -KILL $(/bin/cat /unit/var/run/unit.pid)
            rm -f /unit/var/run/control.unit.sock
        fi

        echo
        echo "$0: Unit initial configuration complete; ready for start up..."
        echo
    fi
fi

exec "$@"

The Dockerfile copies all files ending in .json to a directory named /docker-entrypoint.d. The docker-entrypoint.sh script checks the /docker-entrypoint.d directory for configuration files that are used to configure NGINX Unit. The following config.json file was created in the same directory as the Dockerfile and docker-entrypoint.sh script.

{
    "listeners": {
        "*:8000": {
            "pass": "applications/wasm"
        }
    },
    "applications": {
        "wasm": {
            "type": "wasm-wasi-component",
            "component": "/app/hello_wasi_http.wasm"
        }
    }
}

This config file instructs NGINX Unit to listen on port 8000 and pass any client request to the Wasm application component located at /app/hello_wasi_http.wasm. The last step before building the container is to copy the Wasm application component into the same directory as the Dockerfile, docker-entrypoint.sh, and config.json files. With these four files in the same directory, the docker build command is used to create a container image:

docker build --no-cache --platform linux/amd64 -t hello-wasi-xc:1.0 .

Here I use the --platform option to specify that the container will run on a linux/amd64 platform. I also use -t to give the image a name. The portion of the name after the ":" can be used for versioning. For example, if I added a new feature to my application, I could create a new image with -t hello-wasi-xc:1.1 to represent version 1.1 of the hello-wasi-xc application. Once the image is built, it can be uploaded to the container registry of your choice. I have access to an Azure Container Registry, so that is what I chose to use. Distributed Cloud supports pulling images from both public and private container registries. To push the image to my registry, I had to first tag the image with the registry location ({{registry_name}} should be replaced with the name of your registry):

docker tag hello-wasi-xc:1.0 {{registry_name}}/examples/hello-wasi-xc:1.0

Next, I logged into my Azure registry:

az login
az acr login --name {{registry_name}}

I then pushed the image to the registry:

docker push {{registry_name}}/examples/hello-wasi-xc:1.0

F5 Distributed Cloud Virtual Kubernetes

F5 Distributed Cloud Services support a Kubernetes-compatible API for centralized orchestration of applications across a fleet of sites (customer sites or F5 Distributed Cloud Regional Edges). Distributed Cloud utilizes a distributed control plane to manage scheduling and scaling of applications across multiple sites.

Kubernetes Objects

Virtual Kubernetes supports the following Kubernetes objects: Deployments, StatefulSets, Jobs, CronJob, DaemonSet, Service, ConfigMap, Secrets, PersistentVolumeClaim, ServiceAccount, Role, and Role Binding. For this article I will focus on the Deployments object. A deployment is commonly used for stateless applications like web servers, front-end web UIs, etc. A deployment works well for this use case because the NGINX Unit container is serving up a stateless web application.
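Because vK8s accepts standard Deployment objects, the same container could also be deployed declaratively with a manifest applied through the vK8s kubeconfig, as an alternative to the Workload form used in the remainder of this walkthrough. The sketch below is illustrative only: the object name, labels, and image path are hypothetical placeholders and would need to match your own registry and namespace.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-wasi-xc            # hypothetical name
  labels:
    app: hello-wasi-xc
spec:
  replicas: 1                    # vK8s schedules replicas across the selected sites
  selector:
    matchLabels:
      app: hello-wasi-xc
  template:
    metadata:
      labels:
        app: hello-wasi-xc
    spec:
      containers:
        - name: hello-wasi-xc
          # placeholder registry path; use the image pushed earlier
          image: myregistry.azurecr.io/examples/hello-wasi-xc:1.0
          ports:
            - containerPort: 8000   # NGINX Unit listener port from config.json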
Workload Object

A workload within Distributed Cloud is used to configure and deploy components of an application in Virtual Kubernetes. A workload encapsulates all the operational characteristics of Kubernetes workload, storage, and network objects (deployments, statefulsets, jobs, persistent volume claims, configmaps, secrets, and services), as well as configuration related to where the workload is deployed and how it is advertised using L7 or L4 load balancers. Within Distributed Cloud there are four types of workloads: Simple Service, Service, Stateful Service, and Job.

Namespaces

In Distributed Cloud, tenant configuration objects are grouped under namespaces. Namespaces can be thought of as administrative domains. Within each Namespace, a user can create an object called vK8s and use that for application management. Each Namespace can have a maximum of one vK8s object.

Distributed Cloud vK8s Configuration

vK8s is configured under the Distributed Apps tile within the Distributed Cloud console. After clicking on that tile, I created a new Virtual K8s by clicking Add Virtual K8s from the Virtual K8s menu under Applications. This brings up the Virtual K8s configuration form. I provided a name for my vK8s site and clicked Save and Exit. This initiates the vK8s build.

Deploy the Application

Once the vK8s cluster is ready, I clicked on the name of my vK8s to configure the deployment of my application. For this demo, I am going to deploy my application via Workloads. I clicked on Workloads and then Add VK8s Workload. This brings up the Workload configuration form. I provided my Workload a name, selected Service for Workload type, and clicked Configure. Using Service allows me to have more granular control over how my application will be advertised. Next, I clicked Add Item under Containers to specify to Distributed Cloud where it can pull my container image from. In the resulting form, I supplied a name for my container, supplied the public registry FQDN along with the container name and version, and clicked Apply. Next, I configured where to advertise the application. I want to manually configure my HTTP LB, so I chose to advertise in Cluster. Manually configuring the LB allows me to configure additional options such as Web Application Firewall (WAF). Within the Advertise in Cluster form, I specified port 8000, because that is the port my container is listening on. Click Apply and then click Save and Exit.

Create an HTTP Load Balancer to Advertise the Application

The next step is to create an HTTP Load Balancer to expose the application to the Internet. This can be done by clicking Manage, selecting Load Balancers, and clicking on HTTP Load Balancers. Next, click on Add HTTP Load Balancer to create a new load balancer. I gave the LB a Name, Domain, and Load Balancer Type, and checked the box to Automatically Manage DNS Records. Then I configured the Origin Pool by clicking Add Item in the Origin Pool section and then clicking Add Item under the Origin Pool. On the resulting form, I created a name for my Origin Pool and then clicked Add Item under Origin Servers to add an origin server. In the Origin Server form, I selected K8s Service Name of Origin Server on given Sites, Service Name, and provided my vK8s workload name along with my namespace. I also selected Virtual Site, created a Virtual Site for the REs I wanted to use to access my application, and clicked Continue. Back on the Origin Server form, I selected vK8s Networks on Site from the Select Network on the site drop-down and then clicked Apply. On the Origin Pool form, I specified port 8000 for my origin server Port and then clicked Continue.
I then clicked Apply to return to the LB configuration form. I added a WAF policy to protect my application and then clicked Save and Exit. At this point my application is accessible on the domain name I specified in the HTTP Load Balancer config (http://aconley-wasm.amer-ent.f5demos.com/).

Conclusion

This is a basic demo of using NGINX Unit and Distributed Cloud Virtual Kubernetes to deploy a server-side Wasm application. The principles in this demo could be expanded to build more complex applications.

F5 Distributed Cloud - Mitigation for Cross Tenant Origin Exposure (CTOE)
F5 Distributed Cloud (XC) offers a suite of powerful features designed to simplify the lives of administrators and engineers. A key aspect of this ease of use comes from shared objects, such as Regional Edge proxies, which utilize well-known public IP addresses. However, while this shared infrastructure enhances scalability and efficiency, it can also present risks if leveraged by attackers; in this case, cross tenant origin exposure (CTOE). For instance:

Customer(x) has tenant(x) in XC with a Load Balancer pointing to their public IP origin servers. These may be behind a perimeter firewall NAT (as diagrammed below) or be actual public IPs on the servers.
The customer's perimeter firewall is configured to deny all inbound traffic to the public IP for site1.example.com.
The perimeter firewall is configured to allow inbound traffic to the public IP for site1.example.com from XC IPs (which are a well-known and public shared IP range). XC Proxy IP's Reference Doc

This setup is generally considered a minimum best practice because it restricts traffic to only those sources originating from XC. However, depending on the organization's risk appetite, this level of security may be insufficient.

The Risk

Another account/tenant(y) within Distributed Cloud could create a load balancer and point it to the public IP or DNS name of the origin pools for tenant(x). The attacker must know or learn the actual origin server's IP or network segment to perform this attack. This discovery is fairly trivial and there are many approaches. In addition, what if the origin pool in tenant(x) is pointing to a DNS name that resolves to public IPs? This is common with SaaS API gateways such as AWS and Azure, to name a few, and these gateways all use the same DNS name for the gateway respective to their cloud. Same DNS = same IPs = easy to learn or guess origin IPs. For instance, a common flow where a customer is using XC for WAF/WAAP and a 3rd-party SaaS solution for an API gateway may be Client–>XC(LB-WAAP)–>APIGW(pub-ip)–>API. In this default configuration, an attacker could learn the customer's public NAT IP and add it to their origin pool. They can now instantiate attacks from their tenant(y) which will be sourced from the XC IPs and allowed by the customer(x) perimeter firewall.

Mitigation

There are at least 4 ways to mitigate this risk.

1. L7 Header - If the origin servers (on-prem or SaaS) have something in front of them that is "L7 aware", or they themselves can be configured to do header validation, a custom HTTP request header could be injected into the flow by the load balancer in "tenant x". Tenant y would not know or be able to see this header. Of course, traffic not containing this header would still make it all the way to the L7-aware service before being dropped. While this would suffice for an L7 DoS or other L7-type attack, it would not help with an L3/4-type attack, which could still make its way through the infrastructure.

2. MTLS - A unique differentiator for F5 XC is our ability to use server-side mTLS. If a customer has the capability on the web server/service, or on something in front of it similar to the previous L7 header example, then we can add an additional layer of source validation by using mutual certificate authentication (mTLS). Even a self-signed cert would add a lot of value here. No cert = no layer 7 access to the app or service. This does not prevent an L3/4 attack but will prevent unwanted application access.
3. Customer Edge (CE) - CE proxies are deployable software that creates a private mesh back to our Application Delivery Network (ADN). These come with additional cost and need to be deployed at each location, thus creating a private mesh or overlay network that is unavailable outside of the tenant. In this scenario, the attacker traffic could potentially make it to the public IP of (or in front of) the CE and be dropped, thus protecting the application itself but still potentially allowing bad L3/4 traffic.

4. Private Link - Private Link is a paid add-on to XC that enables connectivity between XC, clients, and resources. It offers many advantages, particularly when addressing regulatory and other security compliance requirements. Perimeter firewall rules can be simplified to allow traffic exclusively from Private Links, which are accessible only from the designated tenancy. Private Links can mitigate L3-L7 attacks because the link is entirely private by design. XC Private Link Overview

A Word on L3/4 DDoS

L3/4 attacks were brought up several times above when talking about the technicalities of each mitigation method. While an L3/4 attack is not always distributed by nature, most are. One very important concept to keep in mind is the fact that XC natively provides L3/4 DDoS mitigation at our Regional Edges. Even in the examples above where "attack" traffic could make it all the way to the app, or at least to the perimeter, if it was a true DDoS, it would get picked up by our Regional Edges and automatically mitigated.

Conclusion

In today's interconnected cloud ecosystems, mitigating CTOE attacks is crucial to maintaining service availability and performance. By understanding the vulnerabilities that stem from cross-cloud communications and applying best practices, organizations can safeguard their systems from exploitation. As we continue to expand our cloud footprints, proactive security measures are not only necessary but must evolve alongside the complexity of the environments we manage. Effective CTOE prevention is an essential part of ensuring a resilient, high-performing network in this cloud-driven world.

Like this article? Please drop a like or line below!

Seamless Application Migration to OpenShift Virtualization with F5 Distributed Cloud
As organizations endeavor to modernize their infrastructure, migrating applications to advanced virtualization platforms like Red Hat OpenShift Virtualization becomes a strategic imperative. However, they often encounter challenges such as minimizing downtime, maintaining seamless connectivity, ensuring consistent security, and reducing operational complexity. Addressing these challenges is crucial for a successful migration. This article explores how F5 Distributed Cloud (F5 XC), in collaboration with Red Hat's Migration Toolkit for Virtualization (MTV), provides a robust solution to facilitate a smooth, secure, and efficient migration to OpenShift Virtualization.

The Joint Solution: F5 XC CE and Red Hat MTV

Building upon our previous work on deploying F5 Distributed Cloud Customer Edge (XC CE) in Red Hat OpenShift Virtualization, we delve into the next phase of our joint solution with Red Hat. By leveraging F5 XC CE in both VMware and OpenShift environments, alongside Red Hat's MTV, organizations can achieve a seamless migration of virtual machines (VMs) from VMware NSX to OpenShift Virtualization. This integration not only streamlines the migration process but also ensures continuous application performance and security throughout the transition.

Key Components:

Red Hat Migration Toolkit for Virtualization (MTV): Facilitates the migration of VMs from VMware NSX to OpenShift Virtualization, an add-on to OpenShift Container Platform.
F5 Distributed Cloud Customer Edge (XC CE) in VMware: Manages and secures application traffic within the existing VMware NSX environment.
F5 XC CE in OpenShift: Ensures consistent load balancing and security in the new OpenShift Virtualization environment.

Demonstration Architecture

To illustrate the effectiveness of this joint solution, let's look at the architecture employed in our demo: The architecture leverages F5 XC CE in both environments to provide a unified and secure load balancing mechanism. Red Hat MTV acts as the migration engine, seamlessly transferring VMs while F5 XC CE manages traffic distribution to ensure zero downtime and maintain application availability and security.

Benefits of the Joint Solution

1. Seamless Migration:
Minimal Downtime: The phased migration approach ensures that applications remain available to users throughout the process.
IP Preservation: Maintaining the same IP addresses reduces the complexity of network reconfiguration and minimizes potential disruptions.

2. Enhanced Security:
Consistent Policies: Security measures such as Web Application Firewalls (WAF), bot detection, and DoS protection are maintained across both environments.
Centralized Management: F5 XC CE provides a unified interface for managing security policies, ensuring robust protection during and after migration.

3. Operational Efficiency:
Unified Platform: Consolidating legacy and cloud-native workloads onto OpenShift Virtualization simplifies management and enhances operational workflows.
Scalability: Leveraging Kubernetes and OpenShift's orchestration capabilities allows for greater scalability and flexibility in application deployment.

4. Improved User Experience:
Continuous Availability: Users experience uninterrupted access to applications, unaware of the backend migration activities.
Performance Optimization: Intelligent load balancing ensures optimal application performance by efficiently distributing traffic across environments.
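To make the MTV migration step more concrete, the sketch below shows roughly what an MTV migration Plan custom resource can look like when moving VMs from a VMware provider to OpenShift Virtualization. All names (providers, maps, namespaces, VM names) are hypothetical, and the exact fields should be verified against the MTV documentation for your version; the demo itself drives the migration through the MTV console.

apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: vmware-to-ocp-virt          # hypothetical plan name
  namespace: openshift-mtv
spec:
  warm: false                        # cold migration; warm migration pre-copies disks
  targetNamespace: migrated-apps     # where the VMs land on OpenShift Virtualization
  provider:
    source:
      name: vmware-nsx-provider      # Provider resource describing the VMware environment
      namespace: openshift-mtv
    destination:
      name: host                     # the local OpenShift Virtualization cluster
      namespace: openshift-mtv
  map:
    network:
      name: vmware-network-map       # NetworkMap pairing source networks with NADs
      namespace: openshift-mtv
    storage:
      name: vmware-storage-map       # StorageMap pairing datastores with storage classes
      namespace: openshift-mtv
  vms:
    - name: app-vm-01                # VMs selected for this migration plan
    - name: app-vm-02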
Watch the Demo Video

To see this joint solution in action, watch our detailed demo video on the F5 DevCentral YouTube channel. The video walks you through the migration process, showcasing how F5 XC CE and Red Hat MTV work together to facilitate a smooth and secure transition from VMware NSX to OpenShift Virtualization.

Conclusion

Migrating virtual machines (VMs) from VMware NSX to OpenShift Virtualization is a significant step towards modernizing your infrastructure. With the combined capabilities of F5 Distributed Cloud Customer Edge and Red Hat's Migration Toolkit for Virtualization, organizations can achieve this migration with confidence, ensuring minimal disruption, enhanced security, and improved operational efficiency.

Related Articles:
Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization
BIG-IP VE in Red Hat OpenShift Virtualization
VMware to Red Hat OpenShift Virtualization Migration
OpenShift Virtualization

Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization
Introduction

Red Hat OpenShift Virtualization is a feature that brings virtual machine (VM) workloads into the Kubernetes platform, allowing them to run alongside containerized applications in a seamless, unified environment. Built on the open-source KubeVirt project, OpenShift Virtualization enables organizations to manage VMs using the same tools and workflows they use for containers.

Why OpenShift Virtualization?

Organizations today face critical needs such as:

Rapid Migration: "I want to migrate ASAP" from traditional virtualization platforms to more modern solutions.
Infrastructure Modernization: Transitioning legacy VM environments to leverage the benefits of hybrid and cloud-native architectures.
Unified Management: Running VMs alongside containerized applications to simplify operations and enhance resource utilization.

OpenShift Virtualization addresses these challenges by consolidating legacy and cloud-native workloads onto a single platform. This consolidation simplifies management, enhances operational efficiency, and facilitates infrastructure modernization without disrupting existing services. Integrating F5 Distributed Cloud Customer Edge (XC CE) into OpenShift Virtualization further enhances this environment by providing advanced networking and security capabilities. This combination offers several benefits:

Multi-Tenancy: Deploy multiple CE VMs, each dedicated to a specific tenant, enabling isolation and customization for different teams or departments within a secure, multi-tenant environment.
Load Balancing: Efficiently manage and distribute application traffic to optimize performance and resource utilization.
Enhanced Security: Implement advanced threat protection at the edge to strengthen your security posture against emerging threats.
Microservices Management: Seamlessly integrate and manage microservices, enhancing agility and scalability.

This guide provides a step-by-step approach to deploying XC CE within OpenShift Virtualization, detailing the technical considerations and configurations required.

Technical Overview

Deploying XC CE within OpenShift Virtualization involves several key technical steps:

Preparation:
Cluster Setup: Ensure an operational OpenShift cluster with OpenShift Virtualization installed.
Access Rights: Confirm administrative permissions to configure compute and network settings.
F5 XC Account: Obtain access to generate node tokens and download the XC CE images.

Resource Optimization:
Enable CPU Manager: Configure the CPU Manager to allocate CPU resources effectively.
Configure Topology Manager: Set the policy to single-numa-node for optimal NUMA performance.

Network Configuration:
Open vSwitch (OVS) Bridges: Set up OVS bridges on worker nodes to handle networking for the virtual machines.
NetworkAttachmentDefinitions (NADs): Use Multus CNI to define how virtual machines attach to multiple networks, supporting both external and internal connectivity.

Image Preparation:
Obtain XC CE Image: Download the XC CE image in qcow2 format suitable for KubeVirt.
Generate Node Token: Create a one-time node token from the F5 Distributed Cloud Console for node registration.
User Data Configuration: Prepare cloud-init user data with the node token and network settings to automate the VM initialization process.

Deployment:
Create DataVolumes: Import the XC CE image into the cluster using the Containerized Data Importer (CDI); a hedged manifest sketch follows this list.
Deploy VirtualMachine Resources: Apply manifests to deploy XC CE instances in OpenShift.
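As a rough illustration of the DataVolume step called out above, the sketch below imports a qcow2 XC CE image with CDI from an HTTP location. The URL, names, and storage size are assumptions for the example; in practice the image could also be uploaded with virtctl or pulled from a registry, and the size should match the XC CE image requirements.

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: xc-ce-rootdisk              # hypothetical name, referenced later by the VM
  namespace: f5-ce
spec:
  source:
    http:
      url: "http://images.example.internal/xc-ce.qcow2"   # placeholder image location
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 80Gi               # assumed size; adjust to the XC CE image requirements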
Network Configuration

Setting up the network involves creating Open vSwitch (OVS) bridges and defining NetworkAttachmentDefinitions (NADs) to enable multiple network interfaces for the virtual machines.

Open vSwitch (OVS) Bridges

Create a NodeNetworkConfigurationPolicy to define OVS bridges on all worker nodes:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-vms
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
      - name: ovs-vms
        type: ovs-bridge
        state: up
        bridge:
          allow-extra-patch-ports: true
          options:
            stp: true
          port:
            - name: eno1
    ovn:
      bridge-mappings:
        - localnet: ce2-slo
          bridge: ovs-vms
          state: present

Replace eno1 with the appropriate physical network interface on your nodes. This policy sets up an OVS bridge named ovs-vms connected to the physical interface.

NetworkAttachmentDefinitions (NADs)

Define NADs using Multus CNI to attach networks to the virtual machines.

External Network (ce2-slo): Connects VMs to the physical network with a specific VLAN ID. This setup allows the VMs to communicate with external systems, services, or networks, which is essential for applications that require access to resources outside the cluster or need to expose services to external users.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-slo
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-slo",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "f5-ce/ce2-slo",
      "mtu": 1500,
      "vlanID": 3052,
      "ipam": {}
    }

Internal Network (ce2-sli): Provides an isolated Layer 2 network for internal communication. By setting the topology to "layer2", this network operates as an internal overlay network that is not directly connected to the physical network infrastructure. The mtu is set to 1400 bytes to accommodate any overhead introduced by encapsulation protocols used in the internal network overlay.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-sli
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-sli",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "f5-ce/ce2-sli",
      "mtu": 1400,
      "ipam": {}
    }

VirtualMachine Configuration

Configuring the virtual machine involves preparing the image, creating cloud-init user data, and defining the VirtualMachine resource.

Image Preparation

Obtain XC CE Image: Download the qcow2 image from the F5 Distributed Cloud Console.
Generate Node Token: Acquire a one-time node token for node registration.

Cloud-Init User Data

Create a user-data configuration containing the node token and network settings:

#cloud-config
write_files:
  - path: /etc/vpm/user_data
    content: |
      token: <your-node-token>
      slo_ip: <IP>/<prefix>
      slo_gateway: <Gateway IP>
      slo_dns: <DNS IP>
    owner: root
    permissions: '0644'

Replace placeholders with actual network configurations. This file automates the VM's initial setup and registration.

VirtualMachine Resource Definition

Define the VirtualMachine resource, specifying CPU, memory, disks, network interfaces, and cloud-init configurations.

Resources: Allocate sufficient CPU and memory.
Disks: Reference the DataVolume containing the XC CE image.
Interfaces: Attach NADs for network connectivity.
Cloud-Init: Embed the user data for automatic configuration.
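Putting the pieces above together, a VirtualMachine manifest for an XC CE instance could look roughly like the sketch below. The resource sizes, names, and DataVolume reference are assumptions for illustration; consult the attached guide for the values appropriate to your environment.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: xc-ce-node1                     # hypothetical VM name
  namespace: f5-ce
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 8                      # assumed sizing; follow XC CE requirements
        resources:
          requests:
            memory: 32Gi                # assumed sizing
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: slo                 # external interface via the ce2-slo NAD
              bridge: {}
            - name: sli                 # internal interface via the ce2-sli NAD
              bridge: {}
      networks:
        - name: slo
          multus:
            networkName: f5-ce/ce2-slo
        - name: sli
          multus:
            networkName: f5-ce/ce2-sli
      volumes:
        - name: rootdisk
          dataVolume:
            name: xc-ce-rootdisk        # DataVolume holding the imported qcow2 image
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              write_files:
                - path: /etc/vpm/user_data
                  content: |
                    token: <your-node-token>
                    slo_ip: <IP>/<prefix>
                    slo_gateway: <Gateway IP>
                    slo_dns: <DNS IP>
                  owner: root
                  permissions: '0644'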
Conclusion

Deploying F5 Distributed Cloud CE in OpenShift Virtualization enables organizations to leverage advanced networking and security features within their existing Kubernetes infrastructure. This integration facilitates a more secure, efficient, and scalable environment for modern applications. For detailed deployment instructions and configuration examples, please refer to the attached PDF guide.

Related Articles:
BIG-IP VE in Red Hat OpenShift Virtualization
VMware to Red Hat OpenShift Virtualization Migration
OpenShift Virtualization

F5 XC Session tracking with User Identification Policy
With F5 AWAF/ASM there is a feature called session tracking that allows tracking and blocking users that commit too many violations, based not only on IP address but also on identifiers such as the BIG-IP AWAF/ASM session cookie. What about F5 XC Distributed Cloud? Well, now we will answer that question 😉

Why is tracking on IP addresses sometimes not enough?

XC has a feature called Malicious Users that allows blocking users if they generate too many service policy, WAF, bot, or other violations. By default, users are tracked based on source IP address, but what happens if there are proxies in front of the XC Cloud or NAT devices? Then all traffic for many users will come from a single IP address, and when that IP address is blocked, many users get blocked, not just the one that committed the violation. Now that we have answered this question, let's see what options we have.

Reference: AI/ML detection of Malicious Users using F5 Distributed Cloud WAAP

Trusted Client IP header

This option is useful when the clients' real IP addresses are carried in something like an XFF header that the proxy in front of F5 XC adds. When this option is enabled, XC will automatically use this header, rather than the IP packet, to determine the client IP address and enforce rate limiting, Malicious Users blocking, and so on. In the XC logs, the IP address from the header will be shown as the source IP, and if no such header is present, the IP address in the packet is used as a fallback.

Reference: How to setup a Client IP as the Source IP on the HTTP Load Balancer headers? – F5 Distributed Cloud Services (zendesk.com)
Overview of Trusted Client IP Headers in F5 Distributed Cloud Platform

User Identification Policies

The second, more versatile feature is XC user identification policies. The default identifier is "Client IP", which is the client IP from the IP packet or, if a Trusted Client IP header is configured, the IP address from that header. When customized, the feature allows the use of TLS fingerprints, HTTP headers such as the "Authorization" header, and other identifiers to track users and enforce rate limiters on them, and, if Malicious Users is enabled, to block them based on the configured identifier when they generate too many WAF or other violations. User identification falls back to the IP address in the packet if it cannot identify the source user, and multiple identification rules can be configured and evaluated one after another, so the fallback to the packet IP address only happens if no identification rule can be matched!

If the backend upstream origin server's application cookie is used for user identification and the XC WAF App Firewall is enabled, you can also use Cookie Protection to protect the cookie from being sent from another IP address! The demo Juice Shop app at https://demo.owasp-juice.shop/ can be used for such testing!

References
Lab 3: Malicious Users (f5.com)
Malicious Users | F5 Distributed Cloud Technical Knowledge
Configuring user session tracking (f5.com)
How to configure Cookie Protection – F5 Distributed Cloud Services (zendesk.com)

Seamless App Connectivity with F5 and Nutanix Cloud Services
Modern enterprise networks face the challenge of connecting diverse cloud services and partner ecosystems, expanding routing domains and distributed application connections. As network complexity grows, the need to onboard applications quickly becomes paramount despite intricate networking operations. Automation tools like Ansible help manage network and security devices, but it is still hard to connect distributed applications across newer systems such as clouds and Kubernetes. This article explores F5 Distributed Cloud Services (XC), which facilitates intent-based application-to-application connectivity across varied infrastructures.

Expanding and increasingly sophisticated enterprise networks

Traditional data center networks typically rely on VLAN-isolated architectures with firewall ACLs for intra-network security. With digital transformation and cloud migrations, new systems must work with existing networks, moving from VLANs to labels and namespaces in platforms like Kubernetes and Red Hat OpenShift. Future trends point to increased edge computing adoption, further complicating network infrastructures. Network infrastructure is becoming more complex due to factors such as increased NAT settings, routing adjustments, added ACLs, and the need to remove duplicate IP addresses when integrating new and existing networks. This growing complexity drives up the costs of design, implementation, and testing, amplifying the impact of network changes.

A typical network uses firewalls to control access between VLANs or isolated networks based on security policies. Access lists (ACLs) are organized by source and destination applications or clients. The order of ACLs is important because shorter prefixes can override longer ones. In some cases, ACLs exceed 10,000 lines, making it difficult to identify which rules impact application connectivity. As a result, ACLs for discontinued applications often remain in the firewall configuration, posing potential security risks. However, refactoring ACLs requires significant effort and testing.

F5 XC addresses the ACL issue by managing all application connectivity across on-premises, data centers, and clouds. Its load balancer connects applications through VLANs, VPCs, and other networks. Each load balancer has its own application control and associated ACLs, ensuring changes to one load balancer don't affect others. When an application is discontinued, its load balancer is deleted, preventing leftover configurations and reducing security risks.

SNAT/DNAT is commonly used to resolve IP conflicts, but it reduces observability since each NAT table between the client and application must be referenced. ACL implementations vary across router/firewall vendors; some apply filters before NAT, others after. Logs may follow this behavior, making IP address normalization difficult. Pre-NAT IP addresses must be managed across the entire network, adding complexity and increasing costs.

F5 XC simplifies distributed load balancing by terminating client sessions and forwarding them to endpoints. Clients and endpoints only need IP reachability to the CE interface. The load balancer can use a VIP for client access, but global management isn't required, as the client-side network sets it. It provides clear source/destination data, making application access easy to track without managing NAT IP addresses.
F5 Distributed Cloud MCN / Network as a Service

F5 XC offers an all-in-one software package for networking and security, managed through a single console with intent-based configuration. This eliminates the need for separate appliance setups. Furthermore, firewall, load balancer, and WAF changes, as well as function updates, are done via SDN without causing downtime. The F5 XC Customer Edge (CE) works across physical appliances, VMs, containers, and clouds, creating overlay tunnels between CEs. Acting as an application gateway, it isolates routing domains and terminates L3 routing, while maintaining best practices for cloud, Kubernetes, and on-prem networks. This simplifies physical network configurations, speeding up provisioning and reducing costs.

How CE works in an enterprise network

This design is straightforward: CEs are deployed in the cloud and in the user's data center. In this example, the data center CE connects four networks:

Underlay network for CE-to-CE connectivity
Cloud application network (VPC/VNET)
Branch office network
Data center application network

The data center CE connects the branch office and application networks to their respective VRFs, while the cloud CE links the cloud network to a VRF. The application runs on VLAN 100 (192.168.1.1) in the data center, while clients are in the branch office. The load balancer configuration is as follows:

Load balancer: Branch office VRF
Domain: ent-app1.com
Endpoint (Origin pool): Onpre-VLAN100 VRF, 192.168.1.1

Clients access "ent-app1.com," which resolves to a VIP or CE interface address. The CE identifies the application's location, and since the endpoint is in the same CE within the Onpre-VLAN100 VRF, it sets the source IP as the CE and forwards traffic to 192.168.1.1.

Next, the application is migrated to the cloud. The CE facilitates this transition from on-premises to the cloud by specifying the endpoint. The load balancer configuration is as follows:

Load Balancer: Branch Office VRF
Domain: ent-app1.com
Endpoint (Origin Pool): AWS-VPC VRF, 10.0.0.1

When traffic arrives at the CE from a client in the branch office, the CE in the data center forwards the traffic to the CE on AWS through an overlay tunnel. The only configuration change required is updating the endpoint via the console. There is no need to modify the firewall, switch, or router in the underlay network.

Kubernetes networking is unique because it abstracts IP addresses. Applications use FQDNs instead of IP addresses and are isolated by namespace or label. When accessing the external network, Kubernetes uses SNAT to map the IP address to that of the Kubernetes node, masking the source of the traffic. A CE can run as a pod within a Kubernetes namespace. Combining a CE on Kubernetes with a CE on-premises offers flexible external access to Kubernetes applications without complex configurations like Multus. F5 XC simplifies connectivity by providing a common configuration for linking Kubernetes to on-premises and cloud environments via a load balancer. The load balancer configuration is as follows:

Load Balancer: Kubernetes
Domain: ent-app1.com
Endpoint (Origin Pool): Onprem-vlan100 VRF, 192.168.1.1 / AWS-VPC VRF, 10.0.0.1

* Related blog: https://community.f5.com/kb/technicalarticles/multi-cluster-multi-cloud-networking-for-k8s-with-f5-distributed-cloud-%E2%80%93-archite/307125

Load balancer observability

The Load Balancer in F5 XC offers advanced observability features, including end-to-end network latency, L7 request logs, and API visualization.
It automatically aggregates data from various logs and metrics (see below). You can also generate API graphs from the aggregated data. This allows users to identify which APIs are in use and create policies to block unnecessary ones.

Demo movie

The video demonstration shows how F5 XC connects applications across different environments, including VM, Kubernetes, on-premises, and cloud. CE delivers both L3 and L7 connectivity across these environments.

Conclusion

F5 Distributed Cloud Services (F5 XC) simplifies enterprise networking by integrating applications seamlessly across VMs, Kubernetes, on-premises, and cloud environments. By minimizing physical network modifications and leveraging best practices from diverse infrastructure systems, F5 XC enables efficient, scalable, and secure application connectivity.

Key Benefits:
- Simplified physical network design
- Network as a Service for seamless application communication and visibility
- Integration with new networking paradigms like Kubernetes and SASE

Looking Ahead

As new networking systems emerge, F5 XC remains adaptable, leveraging APIs for self-service network configurations and enabling future-ready enterprise networks.

Experience the power of F5 NGINX One with feature demos
Introduction

Introducing F5 NGINX One, a comprehensive solution designed to enhance business operations significantly through improved reliability and performance. At the core of NGINX One is our data plane, which is built on our world-class, lightweight, and high-performance NGINX software. This foundation provides robust traffic management solutions that are essential for modern digital businesses. These solutions include API Gateway, Content Caching, Load Balancing, and Policy Enforcement.

NGINX One includes a user-friendly, SaaS-based NGINX One Console that provides essential telemetry and oversees operations without requiring custom development or infrastructure changes. This visibility empowers teams to promptly address customer experience, security vulnerabilities, network performance, and compliance concerns.

NGINX One's deployment across various environments empowers businesses to enhance their operations with improved reliability and performance. It is a versatile tool for strengthening operational efficiency, security posture, and overall digital experience.

NGINX One has several promising features on the horizon. Let's highlight three key features: Monitor Certificates and CVEs, Edit and Update Configurations, and Config Sync Groups. Let's delve into these in detail.

Monitor Certificates and CVEs

One of NGINX One's standout features is its ability to monitor Common Vulnerabilities and Exposures (CVEs) and certificate status. This functionality is crucial for maintaining application security integrity in a continually evolving threat landscape. The CVE and certificate monitoring capability of NGINX One enables teams to:

Prioritize Remediation Efforts: With an accurate and up-to-date database of CVEs and a comprehensive certificate monitoring system, NGINX One assists teams in prioritizing vulnerabilities and certificate issues according to their severity, guaranteeing that essential security concerns are addressed without delay.
Maintain Compliance: Continuous monitoring for CVEs and certificates ensures that applications comply with security standards and regulations, crucial for industries subject to stringent compliance mandates.

Edit and Update Configurations

This feature empowers users to efficiently edit configurations and perform updates directly within the NGINX One Console interface. With configuration editing, you can:

Make Configuration Changes: Quickly adapt to changing application demands by modifying configurations, ensuring optimal performance and security.
Simplify Management: Eliminate the need to SSH directly into each instance to edit or update configurations.
Reduce Errors: The intuitive interface minimizes potential errors in configuration changes, enhancing reliability by offering helpful recommendations.
Enhance Automation with the NGINX One SaaS Console: Integrates seamlessly into CI/CD and GitOps workflows, including GitHub, through a comprehensive set of APIs.

Config Sync Groups

The Config Sync Group feature is invaluable for environments running multiple NGINX instances. This feature ensures consistent configurations across all instances, enhancing application reliability and reducing administrative overhead. The Config Sync Group capability offers:

Automated Synchronization: Configurations are seamlessly synchronized across NGINX instances, guaranteeing that all applications operate with the most current and secure settings.
When a configuration sync group already has a defined configuration, it will be automatically pushed to instances as they join.
Scalability Support: Organizations can easily incorporate new NGINX instances without compromising configuration integrity as their infrastructure expands.
Minimized Configuration Drift: This feature is crucial for maintaining consistency across environments and preventing potential application errors or vulnerabilities from configuration discrepancies.

Conclusion

NGINX One Cloud Console redefines digital monitoring and management by combining all the NGINX core capabilities and use cases. This all-encompassing platform is equipped with sophisticated features to simplify user interaction, drastically cut operational overhead and expenses, bolster security protocols, and broaden operational adaptability. Read our announcement blog for more details on the launch. To explore the platform's capabilities and see it in action, we invite you to tune in to our webinar on September 25th. This is a great opportunity to witness firsthand how NGINX One can revolutionize your digital monitoring and management strategies.

VIPTest: Rapid Application Testing for F5 Environments
VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.

Site stuck during provisioning?
Hi All, yesterday I installed the site with Secure Mesh, and after some time the site got stuck in the provisioning state. When I ran some commands to check the status of the site, it showed the output below:

"status ver": "The service is being restarted"
"config-network": "The site is in provisioning state"

Moreover, SSH and HTTPS access to the site was also lost, and the "health" command was not showing any IP address configured. I rebooted the site and nothing worked. When I checked the main console, it showed that the site was being upgraded. I waited for 2 to 3 hours with no difference. I also checked the F5 Zendesk document but couldn't relate it to my situation. I left the site overnight, and in the morning I just rebooted it and it actually provisioned.

I observed similar behavior during the F5 lab when the site was upgraded: it took at least 6 to 7 hours to upgrade, and then after a manual reboot the site started working. I just want to know: is this default behavior, or did I miss something?