Distributed Cloud Users
Discuss the integration of security, networking, and application delivery services


Bolt-on Auth with NGINX Plus and F5 Distributed Cloud

Inarguably, we are well into the age wherein the user interface for a typical web application has shifted from server-generated markup to APIs as the preferred point of interaction. As developers, we are presented with a veritable cornucopia of tools, frameworks, and standards to aid us in the development of these APIs and the services behind them.

What about securing these APIs? Now more than ever, attackers have focused their efforts on abusing APIs to exfiltrate data or compromise systems at an increasingly alarming rate. In fact, many of the items on the 2023 OWASP Top 10 API Security Risks list stem from missing or insufficient authentication and authorization. How can we protect existing APIs against unauthorized access? What if my APIs have already been developed without considering access control? What are my options now?

Enter the use of a proxy to provide security services. Solutions such as F5 NGINX Plus can easily be configured to provide authorization and auditing for your APIs - irrespective of where they are deployed. For instance, you can enable OpenID Connect (OIDC) on NGINX Plus to provide authentication and authorization for your applications (including APIs) without having to change a single line of code.

In this article, we will present an existing application with an API deployed in an F5 Distributed Cloud cluster. This application lacks authentication and authorization features. The app we will be using is the Sentence demo app, deployed into a Kubernetes cluster on Distributed Cloud. The Kubernetes cluster we will be using in this walkthrough is a Distributed Cloud Virtual Kubernetes (vk8s) instance deployed to host application services in more than one Regional Edge site. Why? An immediate benefit is that as a developer, I don’t have to be concerned with managing my own Kubernetes cluster. We will use automation to declaratively configure a virtual Kubernetes cluster and deploy our application to it in a matter of seconds!

Once the Sentence demo app is up and running, we will deploy NGINX Plus into another vk8s cluster for the purpose of providing authorization services. What about authentication? We will walk through configuring Microsoft Entra ID (formerly Azure Active Directory) as the identity provider for our application, and then configure NGINX Plus to act as an OIDC Relying Party to provide security services for the deployed API.
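To make the flow concrete, here is what NGINX Plus ultimately works with: tokens issued by the identity provider. The sketch below decodes the payload of a JWT to inspect its claims. The token and claim values are fabricated for illustration, and no signature verification is performed here; a real relying party such as NGINX Plus validates the signature against the IdP's published keys before trusting anything in the token.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT.

    Illustrative only: a real OIDC relying party validates the
    signature against the IdP's JWKS before trusting any claim."""
    payload_b64 = token.split(".")[1]
    # Base64url payloads omit padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A made-up, unsigned token with Entra ID-style claims (all values invented).
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=")
claims = {"iss": "https://login.microsoftonline.com/contoso/v2.0",
          "aud": "api://sentence-demo", "sub": "user-1234"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=")
token = b".".join([header, payload, b""]).decode()

decoded = decode_jwt_payload(token)
print(decoded["aud"])  # api://sentence-demo
```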

Finally, we will make use of Distributed Cloud HTTP load balancers. We will provision one publicly available load balancer that will securely route traffic to the NGINX Plus authorization server. We will then provision an additional Load Balancer to provide application routing services to the Sentence app. This second load balancer differs from the first in that it is only “advertised” (and therefore only reachable) from services inside the namespace. This results in a configuration that makes it impossible for users to bypass the NGINX authorization server in an attempt to directly consume the Sentence app.

The following is a diagram representing what will be deployed:
Solution deployment diagram
Let’s get to it!

Deployment Steps

The detailed steps to deploy this solution are located in a GitHub repository accompanying this article. Follow the steps here, and be sure to come back to this article for the wrap-up!


You did it! With the power and reach of Distributed Cloud combined with the security that NGINX Plus provides, we have been able to easily provide authorization for our example API-based application.

Where could we go from here? Do you remember we deployed these applications to two specific geographical sites? You could very easily extend the reach of this solution to more regions (distributed globally) to provide reliability and low-latency experiences for the end users of this application. Additionally, you can easily attach Distributed Cloud’s award-winning DDoS mitigation, WAF, and Bot mitigation to further protect your applications from attacks and fraudulent activity.

Thanks for taking this journey with me, and I welcome your comments below.


This article wouldn’t have been the same without the efforts of @Fouad_Chmainy, @Matt_Dierick, and Alexis Da Costa. They are the original authors of the distributed design, the Sentence app, and the NGINX Plus OIDC image optimized for Distributed Cloud. Additionally, special thanks to @Cody_Green and @Kevin_Reynolds for inspiration and assistance in the Terraform portion of the solution. Thanks, guys!

by Daniel_Edgar | F5 Employee
Posted in TechnicalArticles Sep 28, 2023 5:00:00 AM

Adaptive Apps: Replicate & deploy WAF application security policies across F5's security portfolio


Adaptive applications utilize an architectural approach that facilitates rapid and often fully-automated responses to changing conditions—for example, new cyberattacks, updates to security posture, application performance degradations, or conditions across one or more infrastructure environments.

Unlike the current state of many apps today that are labor-intensive to secure, deploy, and manage, adaptive apps are enabled by the collection and analysis of live application performance and security telemetry, service management policies, advanced analytic techniques such as machine learning, and automation toolchains.

This example seeks to demonstrate value in two key components of F5's Adaptive Apps vision: helping our customers more rapidly detect and neutralize application security threats and helping to speed deployments of new applications.

In today's interconnected digital landscape, the ability to share application security policies seamlessly across data centers, public clouds, and Software-as-a-Service (SaaS) environments is of paramount importance. As organizations increasingly rely on a hybrid IT infrastructure, where applications and data are distributed across various cloud providers and security platforms, maintaining consistent and robust security measures becomes a challenging task.

Using a consistent & centralized security policy architecture provides the following key benefits:

  • Reduced Infrastructure Complexity: Modern businesses often employ a combination of on-premises data centers, public cloud services, and SaaS applications. Managing separate security policies for each platform introduces complexity, making it challenging to ensure consistent protection and adherence to security standards.
  • Consistent Protection: A unified security policy approach guarantees consistent protection for applications and data, regardless of their location. This reduces the risk of security loopholes and ensures a standardized level of security across the entire infrastructure.

  • Improved Threat Response Efficiency: By sharing application security policies, organizations can respond more efficiently to emerging threats. A centralized approach allows for quicker updates and patches to be applied universally, strengthening the defense against new vulnerabilities.

  • Regulatory Compliance: Many industries have strict compliance requirements for data protection. Sharing security policies helps organizations meet these regulatory demands across all environments, avoiding compliance issues and potential penalties.

  • Streamlined Management: Centralizing security policies simplifies the management process. IT teams can focus on maintaining a single set of policies, reducing complexity, and ensuring a more effective and consistent security posture.

  • Cost-Effective Solutions: Investing in separate security solutions for each platform can be expensive. Sharing policies allows businesses to optimize security expenditure and resource allocation, achieving cost-effectiveness without compromising on protection.

  • Enhanced Collaboration: A shared security policy fosters collaboration among teams working with different environments. This creates a unified security culture, promoting information sharing and best practices for overall improvement.

  • Improved Business Agility: A unified security policy approach facilitates smoother transitions between different platforms and environments, supporting the organization's growth and scalability.

By having a consistent security policy framework, businesses can ensure that critical security policies, access controls, and threat prevention strategies are applied uniformly across all their resources. This approach not only streamlines the security management process but also helps fortify the overall defense against cyber threats, safeguard sensitive data, and maintain compliance with industry regulations. Ultimately, the need for sharing application security policies across diverse environments is fundamental in building a resilient and secure digital ecosystem.

In the spirit of enabling a unified security policy framework, this example shows the following two key use cases:

  1. Replicating and deploying an F5 BIG-IP Advanced WAF (AWAF) security policy to F5 Distributed Cloud WAAP (F5 XC WAAP)
  2. Replicating and deploying an F5 NGINX App Protect (NAP) security policy to F5 XC WAAP

Specifically, we show how to use F5's Policy Supervisor and Policy Supervisor Conversion Utility to import, convert, replicate, and deploy WAF policies across the F5 security proxy portfolio. Here we will show how the Policy Supervisor tool provides flexibility in offering both automated and manual ways to replicate and deploy your WAF policies across the F5 portfolio. Regardless of the use case, the steps are the same, enabling a consistent and simple methodology.

We'll show the following 2 use cases:

1. Manual BIG-IP AWAF to F5 XC WAAP policy replication & deployment:

  • Private BIG-IP AWAF deployment with security policy blocking specific attacks
  • Manual conversion of this BIG-IP AWAF policy to F5 XC WAAP policy using the Policy Supervisor Conversion Utility
  • F5 XC WAAP environment without application security policy
  • Manual deployment of converted BIG-IP AWAF security policy into F5 XC WAAP environment showing enablement of equivalent attack blocking

2. Automated NGINX NAP to F5 XC WAAP policy replication & deployment:

  • Private NGINX NAP deployment with security policy blocking specific attacks
  • Automated conversion of this NGINX NAP policy to F5 XC WAAP policy using the Policy Supervisor tool
  • F5 XC WAAP environment without application security policy
  • Automated deployment of converted NGINX NAP security policy into F5 XC WAAP environment showing enablement of equivalent attack blocking
Note that there are additional resources available from F5's Technical Marketing team to help you better understand the capabilities of the F5 Policy Supervisor.

Use Case

Simple, easy way to replicate & deploy WAF application security policies across F5's BIG-IP AWAF, NGINX NAP, and F5 XC WAAP security portfolio.

While the Policy Supervisor supports all of the possible security policy replication & migration paths shown on the left below, this example is focused on demonstrating the two specific paths shown on the right below.


Solution Architecture


Problem Statement

Customers find it challenging, complex, and time-consuming to replicate & deploy application security policies across their WAF deployments which span the F5 portfolio (including BIG-IP, NAP, and F5XC WAAP) within on-prem, cloud, and edge environments.

Customer Outcome

By enforcing consistent WAAP security policies across multiple clouds and SaaS environments, organizations can establish a robust and standardized security posture, ensuring comprehensive protection, simplified management, and adherence to compliance requirements.

The Guide

Please refer to https://github.com/f5devcentral/adaptiveapps for detailed instructions and artifacts for deploying this example use case.

Demo Video

Watch the demo video: 

by Kevin_Delgadillo | F5 Employee
Posted in TechnicalArticles Sep 19, 2023 5:00:00 AM

Minimizing Security Complexity: Managing Distributed WAF Policies


In today's digital landscape, where cyber threats constantly evolve, safeguarding an enterprise's web applications is of paramount importance.  However, for security engineers tasked with protecting a large enterprise equipped with a substantial deployment of web application firewalls (WAFs), the task of managing distributed security policies across the entire application landscape presents a significant challenge.  Ensuring consistency and coherence, in both the effectiveness and deployment of these policies is essential, yet it's far from straightforward.  In this article and demo, we'll explore a few best practices and tools available to help organizations maintain robust security postures across their entire WAF infrastructure, and how embracing modern approaches like DevSecOps and the F5 Policy Supervisor and Conversion tools can help overcome these challenges.

Security Policy as Code:

Storing your WAF policies as code within a secure repository is a DevSecOps best practice that extends beyond consistency and tracking.  It's also the first step in making security an integral part of the development process, fostering a culture of security throughout the entire software development and delivery lifecycle.  This shift-left approach ensures that security concerns are addressed early in the development process, reducing the risk of vulnerabilities and enhancing collaboration between security, development, and operations teams.  It enables automation, version control, and rapid response to evolving threats, ultimately resulting in the delivery of secure applications with speed and quality.  

To help facilitate this, the entire F5 security product portfolio supports the ingestion of WAF policy in JSON format.  This enables you to store your policies as code in a Git repository and seamlessly reference them during your automation-driven deployments, guaranteeing that every WAF deployment is well-prepared to safeguard your critical applications. 

"wafPolicy": {
    "class": "WAF_Policy",
    "url": "https://raw.githubusercontent.com/knowbase/architectural-octopod/main/awaf/owasp-auto-tune.json",
    "enforcementMode": "blocking",
    "ignoreChanges": true

F5 Policy Supervisor:

Considering the sheer number of WAFs in large enterprises, managing distributed policies can easily overwhelm security teams.  Coordinating updates, rule changes, and incident response across the entire application security landscape requires efficient policy lifecycle management tools.  Using a centralized management system that provides visibility into the security posture of all WAFs and the state of deployed policies can help streamline these operations.  The F5 Policy Supervisor was designed to meet this critical need.

The Policy Supervisor allows you to easily create, convert, maintain, and deploy WAF policies across all F5 Application Security platforms.  With both an easily navigated UI and a robust API, the Policy Supervisor tool greatly enhances your ability to manage security policies at scale.


In the context of the Policy Supervisor, providers are remote instances that provide WAF services, such as NGINX App Protect (NAP), BIG-IP Advanced WAF (AWAF), or F5 Distributed Cloud Web App and API Security (XC WAAP).  The "Providers" section serves as the command center where we onboard all of our WAF instances and gain insight into their status and deployments.  For BIG-IP and NGINX we employ agents to perform the onboarding.  An agent is a lightweight container that stores secrets in a vault and connects the instances to the SaaS layer.  For XC we use an API token, which can easily be generated by navigating to Account > Account Settings > Personal Management > Credentials > Add Credentials in the XC console.  Detailed instructions for adding both types of providers are readily accessible during the "Add Provider" workflow.
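Once generated, the API token is presented on each request to the XC API. A minimal sketch, assuming the token goes in an Authorization header using XC's APIToken scheme (verify the exact format against the XC API documentation for your tenant):

```python
def xc_auth_headers(api_token: str) -> dict:
    """Build request headers for calls to the F5 XC API.

    Assumption: the 'APIToken' authorization scheme; confirm the exact
    header format in the F5 XC API docs before relying on it."""
    return {"Authorization": f"APIToken {api_token}",
            "Content-Type": "application/json"}

headers = xc_auth_headers("example-token-value")
print(headers["Authorization"])
```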


After successfully onboarding our providers, we can ingest the currently deployed policies and begin managing them on the platform.


The "Policies" section serves as the central hub for overseeing the complete lifecycle of policies onboarded onto the platform.  Within this section, we gain access to policy insights, including their current status and the timestamp of their last modification.  Selecting a specific policy opens up the "Policy Details" panel, offering a comprehensive suite of options.  Here, you can edit, convert, deploy, export, or remove the policy, while also accessing essential information regarding policy-related actions and reports detailing those actions.


The tool additionally features an editor equipped with real-time syntax validation and auto-completion, allowing you to create new or edit existing policies on the fly.


Policy Deployment:

Navigating the policy deployment process within the Policy Supervisor is a seamless and user-friendly experience.  To initiate the process, select "Deploy" from the "Policy Details" panel, then select the source and the target or targets.  The platform first begins the conversion process to ensure the policy aligns with the features supported by the targets.  Following this conversion, you'll receive a detailed report on what was and was not converted.  Once you've reviewed the conversion results and are satisfied with the outcome, select the endpoints to apply the policy to, and click deploy.  That's it, it's that easy.



F5 Policy Conversion Utility:

The F5 Policy Conversion tool allows you to transform JSON or XML formatted policies from an NGINX or BIG-IP instance into a format compatible with your desired target - any application security product in the F5 portfolio. This user-friendly tool requires no authentication, offering hassle-free access at https://policysupervisor.io/convert.

The interface has an intuitive design, simplifying the process: select your source and target types, upload your JSON or XML formatted policy, and with a simple click, initiate the conversion.  Upon completion, the tool provides a comprehensive package that includes a detailed report on the conversion process and your newly adapted policies, ready for deployment onto your chosen target.


Whether you are augmenting an F5 BIG-IP Advanced WAF fleet with F5 XC WAAP at the edge, decomposing a monolithic application and protecting the new microservices with NGINX App Protect, or strengthening a multi-cloud security strategy with F5 XC WAAP, the Policy Conversion utility can help ensure you are providing consistent and robust protection across each platform.


Managing security policies across a large WAF footprint is a complex undertaking that requires constant vigilance, adaptability, and coordination. Security engineers must strike a delicate balance between safeguarding applications and ensuring their uninterrupted functionality while also staying ahead of evolving threats and maintaining a consistent security posture across the organization.  By harnessing the F5 Policy Supervisor and Conversion tools, coupled with DevSecOps principles, organizations can easily deploy and maintain consistent WAF policies throughout the organization's entire application security footprint.



F5 Hybrid Security Architectures:

F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF) 
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)

For further information or to get started:


by Cameron_Delano | F5 Employee
Posted in TechnicalArticles Sep 17, 2023 9:18:59 PM

APIs Everywhere

Size of the problem

In a recent conversation, a customer mentioned they figured they had something on the order of 6000 API endpoints in their environment.  This struck me as odd, as I am pretty sure they have 1000+ HTTP-based applications running on their current platform.  If the 6000 API number were correct, each application would have only six endpoints.  In reality, most apps have dozens or hundreds of endpoints... that means there are probably tens of thousands of API endpoints in their environment!
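The back-of-the-envelope arithmetic is worth making explicit (all figures are illustrative):

```python
reported_endpoints = 6000
apps = 1000

# If the reported number were right, each app would average:
print(reported_endpoints / apps)      # 6.0 endpoints per app

# A more realistic "dozens to hundreds" per app implies:
low, high = 24, 200                   # assumed per-app endpoint counts
print(apps * low, "to", apps * high)  # 24000 to 200000
```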

And you thought it was a pain to manage a WAF when you had a thousand apps!

But the good news is that you're not using all of them.  The further good news is that you can REDUCE your security exposure. 

When was the last time someone took things OFF your To-do list?

Tell me how!

The answer is to profile your application landscape.  Much like the industry did in the early 20-teens with Web Application Security, understanding your attack surface is the key to defining a plan to defend it.  This is what we call API Discovery.

By allowing your traffic to be profiled and APIs uncovered, you can begin to understand the scale and scope of your security journey.

You can do this by putting your client traffic through an engine that offers this, like F5's Distributed Cloud (or F5 XC).  With F5 XC, you can build a list of the URIs and their metadata and generate a threat assessment and data profile of the traffic it sees.

Interactive view of API calls

This is a fantastic resource if you can push your traffic through an XC Load Balancer, but that isn't always possible.

Out of Band API Analysis

What are your options when you want to do this "Out of Band"?  Out of Band (or OOB) presents challenges, but luckily, F5 has answers.

If we can gather the traffic and make it available to the XC API Discovery process, generating the above graphic for your traffic is easy.

Replaying the Traffic

Replaying, or more accurately, "mimicking" the traffic can be done using a logging process on the main proxy - BIG-IP or NGINX are good examples, but any would work - and then sending that logged traffic to a process that generates a request and response that traverse an XC Load Balancer.

API Discovery traffic flow

This diagram shows using an iRule to gather the request and response data, which is then sent to a custom logging service.  This service uses the data to recreate the request (and response) and sends that through the XC Load Balancer.

Both the iRule and the Logger service are available as open-source code here.
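Conceptually, the logger service only needs enough request metadata to re-issue an equivalent request through an XC Load Balancer. The sketch below shows the idea using a made-up log record layout; the real iRule and logger linked above define their own schema, and the replay base URL is a placeholder.

```python
import json
from urllib.request import Request

def build_replay_request(record: dict, replay_base: str) -> Request:
    """Rebuild an HTTP request from a logged record so it can be
    replayed through an XC Load Balancer for API Discovery.

    The record layout here is hypothetical; adapt it to whatever
    your logging iRule actually emits."""
    url = replay_base.rstrip("/") + record["uri"]
    body = record.get("body", "").encode() or None
    req = Request(url, data=body, method=record["method"])
    for name, value in record.get("headers", {}).items():
        req.add_header(name, value)
    return req

# Example logged record (invented); we only build the request here,
# nothing is sent over the network.
record = json.loads('{"method": "GET", "uri": "/api/v1/sentence",'
                    ' "headers": {"Accept": "application/json"}}')
req = build_replay_request(record, "https://discovery.example.com")
print(req.get_method(), req.full_url)
```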

If you're interested in deploying this, F5 is here to help, but if you would like to deploy it on your own, here is a suggested architecture:


Deploying the logger as a container on F5 Distributed Cloud's AppStack on a Customer Edge instance allows the traffic to remain within your network enclave.  The metadata is pushed to the XC control plane, where it is analyzed, and the API characteristics are recorded.

What do you get?

The analysis provided in the dashboard is invaluable for determining your threat levels and attack surfaces and helping you build a mitigation plan.

From the main dashboard shown here, the operator can see if any sensitive data was exposed (and what type it might be), the threat level assessment and the authorization method.  Each can help determine a course of action to protect from data leakage or future breach attempts.


Drilling into these items, the operator is presented with details on the performance of the API (shown below).

endpoint details

To promote sharing of information, all of the data gathered is exportable in Swagger/OpenAPI format:

swagger export
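To see what such an export contains, here is a sketch that assembles a minimal OpenAPI 3.0 skeleton from a hypothetical list of discovered (method, path) pairs. The XC console generates the real spec for you; this only illustrates the shape of the output.

```python
import json

def to_openapi(title: str, endpoints: list) -> dict:
    """Build a minimal OpenAPI 3.0 skeleton from (method, path) pairs."""
    paths = {}
    for method, path in endpoints:
        paths.setdefault(path, {})[method.lower()] = {
            "responses": {"200": {"description": "OK"}}
        }
    return {"openapi": "3.0.0",
            "info": {"title": title, "version": "1.0.0"},
            "paths": paths}

# Invented endpoints standing in for real discovery output.
discovered = [("GET", "/api/v1/sentence"), ("POST", "/api/v1/sentence")]
spec = to_openapi("Sentence API (discovered)", discovered)
print(json.dumps(spec["paths"], indent=2))
```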


Where to from here?

We will publish more on this in the coming weeks, so stay tuned.


by Scheff | F5 Employee
Posted in TechnicalArticles Sep 14, 2023 5:00:00 AM

Securing Applications using mTLS Supported by F5 Distributed Cloud


Mutual Transport Layer Security (mTLS) is a process that establishes an encrypted, secure TLS connection between two parties and ensures that both use X.509 digital certificates to authenticate each other. It helps prevent malicious third-party attacks that imitate genuine applications. This authentication method is useful when a server needs to ensure the authenticity and validity of a specific user or device. As SSL became outdated, companies such as Skype and Cloudflare moved to mTLS to secure their business servers. Using TLS or other encryption tools without strong authentication of both parties leaves connections open to man-in-the-middle attacks. With mTLS, each side presents an identity that can be cryptographically verified, making your resources more trustworthy and flexible.

mTLS with XFCC Header

Beyond supporting the mTLS process itself, F5 Distributed Cloud WAF can forward client certificate attributes (subject, issuer, root CA, etc.) to the origin server via the x-forwarded-client-cert (XFCC) header. This provides an additional level of security when the origin server needs to authenticate clients across requests from many different sources. The XFCC header can contain the following attributes, and is supported by multiple load balancer types, including HTTPS with Automatic Certificate and HTTPS with Custom Certificate:

  • Cert 
  • Chain 
  • Subject 
  • URI 
  • DNS
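On the origin side, the XFCC header arrives as semicolon-separated key=value elements, with double-quoted values for fields that may themselves contain ';' or ','. A minimal parsing sketch (the sample header value is made up; the exact fields you receive depend on your load balancer configuration):

```python
import re

def parse_xfcc(header: str) -> dict:
    """Parse an x-forwarded-client-cert header into its key/value
    elements. Values containing ';' or ',' are double-quoted, so a
    naive split on ';' is not enough."""
    pairs = re.findall(r'(\w+)=("(?:[^"\\]|\\.)*"|[^;]*)', header)
    return {key: value.strip('"') for key, value in pairs}

# Illustrative header value; real contents depend on your LB setup.
xfcc = 'URI=spiffe://client;Subject="CN=test-domain1.local,O=Demo";DNS=test-domain1.local'
fields = parse_xfcc(xfcc)
print(fields["Subject"])  # CN=test-domain1.local,O=Demo
```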


How to Configure mTLS

In this demo we use httpbin as an origin server fronted by an F5 XC Load Balancer. Here is the procedure to deploy the httpbin application, create the custom certificates, and configure mTLS step by step with the different LB (Load Balancer) types in F5 XC.

  • Deploying HttpBin Application 

    Here is the link to deploy the application using docker commands. 
  • Signing a server/leaf cert with a locally created root CA

    Commands to generate the CA key and cert:
        openssl genrsa -out root-key.pem 4096
        openssl req -new -x509 -days 3650 -key root-key.pem -out root-crt.pem
    Commands to generate the server certificate:
        openssl genrsa -out cert-key2.pem 4096
        openssl req -new -sha256 -subj "/CN=test-domain1.local" -key cert-key2.pem -out cert2.csr
        echo "subjectAltName=DNS:test-domain1.local" >> extfile.cnf
        openssl x509 -req -sha256 -days 501 -in cert2.csr -CA root-crt.pem -CAkey root-key.pem -out cert2.pem -extfile extfile.cnf -CAcreateserial
    Add the TLS certificate to the XC console, create an LB (HTTP/TCP), and attach origin pools and TLS certificates to it.
    In Ubuntu:
    Move the root CA certificate created above (root-crt.pem) to /usr/local/share/ca-certificates/ and modify "/etc/hosts" to map the VIP (you can get this from your configured LB -> DNS info -> IP Addr) to the domain, in this case test-domain1.local.
  • mTLS with HTTPS Custom Certificate

    Log in to the F5 Distributed Cloud Console and navigate to the “Web APP & API Protection” module.
    Go to Load Balancers and click ‘Add HTTP Load Balancer’.
    Give the LB Name (test-mtls-cust-cert), Domain name (mtlscusttest.f5-hyd-demo.com), and LB Type as HTTPS with Custom Certificate. Select the TLS configuration as Single Certificate and configure the certificate details.
    Click ‘Add Item’ under TLS Certificates and upload the cert and key files by clicking on import from files.
    Click on ‘Apply’, then enable mutual TLS, import the root cert info, and add the XFCC header value.
    Configure the origin pool by clicking ‘Add Item’ under Origins. Select the created origin pool for httpbin.
    Click on ‘Apply’ and then save the LB configuration with ‘Save and Exit’.
    We have created the Load Balancer with mTLS parameters. Let us verify it with the origin server.

  • mTLS with HTTPS Automatic Certificate

    Log in to the F5 Distributed Cloud Console and navigate to the “Web APP & API Protection” module.
    Go to Load Balancers and click ‘Add HTTP Load Balancer’.
    Give the LB Name (mtls-auto-cert), Domain name (mtlstest.f5-hyd-demo.com), and LB Type as HTTPS with Automatic Certificate. Enable mutual TLS and add the root certificate. Also enable the x-forwarded-client-cert header to add the parameters.
    Configure the origin pool by clicking ‘Add Item’ under Origins. Select the created origin pool for httpbin.
    Click on ‘Apply’ and then save the LB configuration with ‘Save and Exit’.
    We have created the HTTPS Auto Cert Load Balancer with mTLS parameters. Let us verify it with the origin server.
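To verify the setup from a client, a TLS context needs both the root CA (to trust the server) and a client certificate (to satisfy mutual TLS). A minimal Python sketch; the file names follow the openssl steps above, and the load calls are commented out so the snippet runs even without the files present:

```python
import ssl

# Require the server to present a certificate valid for the requested
# hostname; by default only well-known CAs are trusted, so the locally
# created root CA must be added explicitly.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.check_hostname = True
# ctx.load_verify_locations("root-crt.pem")          # root CA from the steps above
# ctx.load_cert_chain("cert2.pem", "cert-key2.pem")  # client cert + key for mTLS

print(ctx.verify_mode == ssl.CERT_REQUIRED)
# The context would then be used like:
# with socket.create_connection(("test-domain1.local", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="test-domain1.local") as tls:
#         ...
```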


As you can see from the demonstration, F5 Distributed Cloud WAF provides additional security to origin servers by forwarding the client certificate info in the mTLS XFCC header.

Reference Links

by Shajiya_Shaik | F5 Employee
Posted in TechnicalArticles Sep 13, 2023 5:00:00 AM

F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)


For those of you following along with the F5 Hybrid Security Architectures series, welcome back!  If this is your first foray into the series and you would like some background, have a look at the intro article.  This series uses the F5 Hybrid Security Architectures GitHub repo and CI/CD platform to deploy F5-based hybrid security solutions based on DevSecOps principles.  This repo is a community-supported effort to provide not only a demo and workshop, but also a stepping stone for utilizing these practices in your own F5 deployments.  If you find any bugs or have any enhancement requests, open an issue or, better yet, contribute!

Use Case:

Here in this example solution, we will be using DevSecOps practices to deploy an AWS Elastic Kubernetes Service (EKS) cluster running the Brewz test web application serviced by F5 NGINX Ingress Controller.  To secure our application and APIs, we will deploy F5 Distributed Cloud's Web App and API Protection service as well as F5 BIG-IP Access Policy Manager and Advanced WAF.  We will then use F5 Container Ingress Services and IngressLink to tie it all together.

Distributed Cloud WAAP: Available for SaaS-based deployments and provides comprehensive security solutions designed to safeguard web applications and APIs from a wide range of cyber threats. 

BIG-IP Access Policy Manager (APM) and Advanced WAF:  Available for on-premises / data center and public or private cloud (virtual edition) deployment, for robust, high-performance web application and API security with granular, self-managed controls.

BIG-IP Container Ingress Services: A container integration solution that helps developers and system teams manage Ingress HTTP routing, load-balancing, and application services in container deployments.  

F5 IngressLink: Combines BIG-IP, Container Ingress Services (CIS), and NGINX Ingress Controller to deliver unified app services for fast-changing, modern applications in Kubernetes environments.

NGINX Ingress Controller for Kubernetes: A lightweight software solution that helps manage app connectivity at the edge of a Kubernetes cluster by directing requests to the appropriate services and pods.


XC WAAP + BIG-IP Access Policy Manager + F5 Container Ingress Services + NGINX Ingress Controller Workflow

GitHub Repo: 

F5 Hybrid Security Architectures



  • xc: F5 Distributed Cloud WAAP
  • nic: NGINX Ingress Controller
  • bigip-base: F5 BIG-IP Base deployment
  • bigip-cis: F5 Container Ingress Services
  • infra: AWS Infrastructure (VPC, IGW, etc.)
  • eks: AWS Elastic Kubernetes Service
  • brewz: Brewz SPA test web application


  • Cloud Provider: AWS
  • Infrastructure as Code: Terraform
  • Infrastructure as Code State: Terraform Cloud
  • CI/CD: GitHub Actions

Terraform Cloud

Workspaces: Create a workspace for each asset in the workflow chosen

Workflow: xcbn-cis
Workspaces: infra, bigip-base, bigip-cis, eks, nic, brewz, xc
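If you prefer not to click through the console, the workspaces can also be created in bulk via the Terraform Cloud Workspaces API. This is a hedged sketch: the endpoint and JSON shape follow HashiCorp's documented workspaces API, while `TFC_ORG` and `TFC_TOKEN` are placeholders you must supply; the `curl` call is left commented so the snippet only previews the payloads.

```shell
# Build the JSON payload the Terraform Cloud workspaces API expects.
workspace_payload() {
  printf '{"data":{"type":"workspaces","attributes":{"name":"%s"}}}' "$1"
}

# Preview one payload per workspace in the xcbn-cis workflow.
for ws in infra bigip-base bigip-cis eks nic brewz xc; do
  workspace_payload "$ws"; echo
  # Uncomment to actually create the workspace (TFC_ORG / TFC_TOKEN are yours):
  # curl -s -X POST "https://app.terraform.io/api/v2/organizations/${TFC_ORG}/workspaces" \
  #      -H "Authorization: Bearer ${TFC_TOKEN}" \
  #      -H "Content-Type: application/vnd.api+json" \
  #      -d "$(workspace_payload "$ws")"
done
```

Each POST creates one workspace, so the loop leaves your organization matching the table above.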

Your Terraform Cloud console should resemble the following:

Screenshot 2023-08-21 at 11.25.15 AM.png

Variable Set: Create a Variable Set with the following values.
IMPORTANT: Ensure sensitive values are appropriately marked.

  • AWS_ACCESS_KEY_ID: Your AWS Access Key ID - Environment Variable
  • AWS_SECRET_ACCESS_KEY: Your AWS Secret Access Key - Environment Variable
  • AWS_SESSION_TOKEN: Your AWS Session Token - Environment Variable
  • VOLT_API_P12_FILE: Your F5 XC API certificate. Set this to api.p12 - Environment Variable
  • VES_P12_PASSWORD: Set this to the password you supplied when creating your F5 XC API key - Environment Variable
  • nginx_jwt: Your NGINX JSON Web Token (JWT) associated with your NGINX license - Terraform Variable
  • tf_cloud_organization: Your Terraform Cloud Organization name - Terraform Variable

Your Variable Set should resemble the following:

Screenshot 2023-06-26 at 1.59.11 PM.png


Fork and Clone Repo: F5 Hybrid Security Architectures  


Actions Secrets:
Create the following GitHub Actions secrets in your forked repo

  • XC_P12: The base64 encoded F5 XC API certificate
  • TF_API_TOKEN: Your Terraform Cloud API token
  • TF_CLOUD_ORGANIZATION: Your Terraform Cloud Organization
  • TF_CLOUD_WORKSPACE_workspace: Create for each workspace used in your workflow. EX: TF_CLOUD_WORKSPACE_XC would be created with the value xc

Your GitHub Actions Secrets should resemble the following:

Screenshot 2023-08-21 at 11.32.45 AM.png
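The per-workspace secret names follow a mechanical rule: uppercase the workspace name and swap `-` for `_`. A small sketch of that convention, with the actual `gh secret set` call commented out (it is illustrative and assumes the GitHub CLI is installed and you are inside your forked repo clone):

```shell
# Derive TF_CLOUD_WORKSPACE_* secret names from the workspace names:
# lowercase -> uppercase, '-' -> '_'.
for ws in infra bigip-base bigip-cis eks nic brewz xc; do
  name="TF_CLOUD_WORKSPACE_$(printf '%s' "$ws" | tr 'a-z-' 'A-Z_')"
  echo "$name=$ws"
  # gh secret set "$name" --body "$ws"   # uncomment to create the secret
done
```

For example, the `bigip-base` workspace becomes the secret `TF_CLOUD_WORKSPACE_BIGIP_BASE` with the value `bigip-base`.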

Setup Deployment Branch and Terraform Local Variables:

Step 1: Check out a branch for the deploy workflow using the following naming convention

xcbn-cis deployment branch: deploy-xcbn-cis

Screenshot 2023-08-21 at 11.37.36 AM.png
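Creating the deployment branch is a single git command; the branch name is what selects the workflow. Sketched here in a throwaway scratch repository so nothing in your actual clone is touched (in practice you would run only the checkout inside your fork):

```shell
# Scratch repo so the demo is side-effect free.
cd "$(mktemp -d)" && git init -q .

# Branch name convention: deploy-<workflow>; here, the xcbn-cis workflow.
git checkout -b deploy-xcbn-cis
git branch --show-current
```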

Step 2: Upload the Brewz OAS file to XC
             * From the side menu under Manage, navigate to Files->Swagger Files and choose Add Swagger File

Screenshot 2023-08-21 at 12.09.12 PM.png

             * Upload the Brewz OAS file from the repo at f5-hybrid-security-architectures/brewz/brewz-oas.yaml

Screenshot 2023-08-21 at 11.58.36 AM.png

Step 3: Rename infra/terraform.tfvars.examples to infra/terraform.tfvars and add the following data


project_prefix = "Your project identifier"
resource_owner = "You"

aws_region = "Your AWS region"            # e.g. us-west-1
azs = "Your AWS availability zones"       # e.g. ["us-west-1a", "us-west-1b"]

nic = true
nap = false
bigip = true
bigip-cis = true


Step 4: Rename xc/terraform.tfvars.examples to xc/terraform.tfvars and add the following data


#XC Global
api_url = "https://<Your Tenant>.console.ves.volterra.io/api"
xc_tenant = "Your XC Tenant ID"
xc_namespace = "Your XC namespace"

app_domain = "Your App Domain"

xc_waf_blocking = true

#XC AI/ML Settings for MUD, APIP - NOTE: Only set if using AI/ML settings from the shared namespace
xc_app_type = []
xc_multi_lb = false

#XC API Protection and Discovery
xc_api_disc = true
xc_api_pro = true
xc_api_spec = ["Path to uploaded API spec"]   # See the screenshot below for how to obtain this value

#XC Bot Defense
xc_bot_def = false

xc_ddos = false

#XC Malicious User Detection
xc_mud = false


* For the path to the API spec, navigate to Manage->Files->Swagger Files, click the three dots next to your OAS, and choose "Copy Latest Version's URL".  Paste this into xc_api_spec in xc/terraform.tfvars.

Screenshot 2023-06-26 at 2.07.20 PM.png

Step 5: On line 16 of the .gitignore, comment out the *.tfvars line with a # and save the file

Screenshot 2023-02-21 at 8.14.58 AM.png
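The Step 5 edit can also be scripted. This is a sketch that assumes GNU sed and that the ignore entry is exactly `*.tfvars`; it is demonstrated on a scratch copy of a .gitignore so nothing in your repo is touched (in your clone you would run only the `sed` line):

```shell
# Scratch directory with a stand-in .gitignore.
cd "$(mktemp -d)"
printf '*.terraform\n*.tfvars\n' > .gitignore

# Comment out the *.tfvars entry in place (GNU sed).
sed -i 's/^\*\.tfvars$/# *.tfvars/' .gitignore
cat .gitignore
```

On macOS/BSD sed the in-place flag needs a suffix argument (`sed -i ''`).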

Step 6: Commit your changes
 Screenshot 2023-08-21 at 11.45.28 AM.png


Deployment:

Step 1: Push your deploy branch to the forked repo

Screenshot 2023-08-21 at 11.45.28 AM.png

Step 2: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build


Screenshot 2023-08-21 at 11.43.51 AM.png

Step 3: Once the pipeline completes, verify your assets were deployed to AWS and F5 XC

Screenshot 2023-08-21 at 11.48.38 AM.png

Step 4: Check your Terraform Outputs for XC and verify your app is available by navigating to the FQDN

Screenshot 2023-06-26 at 2.17.31 PM.png

Step 5: Configure F5 APM and Advanced WAF following the guide here.

API Discovery:

The F5 XC WAAP platform learns the schema structure of the API by analyzing sampled request data, then reverse-engineers the schema to generate an OpenAPI spec.  The platform validates what is deployed against what is discovered and tags any Shadow APIs that are found.  We can then download the learned schema and use it to augment our BIG-IP APM API protection configuration.

Screenshot 2023-06-26 at 2.19.56 PM.png 

Deployment Teardown:

Step 1: From your deployment branch check out a branch for the destroy workflow using the following naming convention

xcbn-cis destroy branch: destroy-xcbn-cis

Screenshot 2023-08-21 at 11.52.44 AM.png

Step 2: Push your destroy branch to the forked repo

Screenshot 2023-08-21 at 11.56.49 AM.png

Step 3: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build


Screenshot 2023-08-21 at 12.13.37 PM.png

Step 4: Once the pipeline completes, verify your assets were destroyed

Screenshot 2023-08-21 at 11.51.17 AM.png


In this article we have shown how to utilize the F5 Hybrid Security Architectures GitHub repo and CI/CD pipeline to deploy a tiered security architecture utilizing F5 XC WAAP, F5 BIG-IP, and NGINX Ingress Controller to protect a test API running in AWS EKS.  While the code and security policies deployed are generic and not inclusive of all use cases, they can serve as a stepping stone for deploying F5-based hybrid architectures in your own environments. 

Workloads are increasingly deployed across multiple diverse environments and application architectures. Organizations need the ability to protect their essential applications regardless of deployment or architecture circumstances.  Equally important is the need to deploy these protections with the same flexibility and speed as the apps they protect.  With the F5 WAF portfolio, coupled with DevSecOps principles, organizations can deploy and maintain industry-leading security without sacrificing the time to value of their applications.  Not only can Edge and Shift Left principles exist together, but they can also work in harmony to provide a more effective security solution.


Article Series:

F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF) 
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)

For further information or to get started:

  • F5 Distributed Cloud Platform (Link)
  • F5 Distributed Cloud WAAP Services (Link)
  • F5 Distributed Cloud WAAP YouTube series (Link)
  • F5 Distributed Cloud WAAP Get Started (Link)
by Cameron_Delano | F5 Employee
Posted in TechnicalArticles Sep 11, 2023 5:00:00 AM

Overview of F5 Distributed Cloud Dashboards

As the modern digital application world keeps evolving and innovating, organizations are faced with an overwhelming amount of data coming from various sources. Navigating this sea of data can be a daunting task, often leading to confusion and inefficiency in decision-making. Making sense of this data and extracting valuable insights is crucial to making the right decisions for protecting applications and boosting their performance. This is where dashboards come to the rescue. Dashboards are powerful visual tools that consolidate complex data sets into user-friendly, interactive displays, offering a comprehensive overview of key metrics, trends, and insights in one place. 

By grouping different types of service details into visuals like graphs, charts, tables, and metrics, and displaying those visuals on a single page, dashboards provide valuable insights. They help users review a regular summary that highlights key issues, security risks, and current business trends, and they deliver quick, easy-to-understand, real-time insights with analysis. They can also be made interactive through advanced options such as global search and filters, so each user can view the data that best suits their needs.  

In a nutshell, "Dashboards are like a canvas of your business data, offering a panoramic view of your application data landscape that illuminates the hidden insights driving application security decisions."


In this dashboards overview article, we will walk you through some of the enhanced F5 Distributed Cloud (XC) dashboards and their key insights. 


Security Dashboards

  • Web Application and API Protection (WAAP) dashboard

    The WAAP service has been upgraded with enhanced dashboards and new features like Trends, which puts application security at a glance. This dashboard captures performance and security with multiple types of metrics, giving users a summary of sections such as malicious users, security events, threat intelligence, DDoS activity, performance statistics, throughput, etc. for the current applications in that namespace. The dashboards show metrics such as threat campaign data, security event details, DDoS and bot traffic data, top attack sources, API endpoints, load balancer health, and active features.  
    Fig 1: Image showing WAAP dashboard
    Fig 2: Image showing WAAP performance


  • Client-Side Defense (CSD) Dashboard

    CSD is also part of the security controls available in Distributed Cloud, and its monitoring dashboard presents details such as detected third-party domains, mitigated & allowed domains, transactions observed, last seen, etc., as shown below - 
    Fig 3: Image showing CSD dashboard
  • Bot Defense Dashboard

    The Bot Defense service has also been enhanced to deliver a polished UI experience and a clear picture of the current bot defense posture of existing applications. The UI has different widgets like bad traffic, traffic chart, API latency, etc. to analyze traffic from multiple perspectives. 
    Fig 4: Image showing Bot Defense dashboard

You can explore more about security dashboards in simulator by clicking this link: https://simulator.f5.com/s/xc-dashboards. 

Multi-Cloud Dashboards

  • Multi-Cloud Network Connect Dashboard

    Multi-Cloud Network Connect is an L3 routing service between CEs and between REs; it uses a combination of point-to-point VPNs and the Global Network to provide the fastest multi-path mesh routing service. Network Connect dashboards provide visibility into networking paths and infrastructure and, for cloud CEs, additional visibility into connected services at the cloud provider, routing tables, running instances, availability zones, etc. 

    The Multi-Cloud Network Connect service provides details so network operators can observe and act on their multi-cloud network, with dedicated views for Networking, Performance, Network Security, and Site Management. Users can navigate to each section of these dashboards to understand the insights, including details about components such as interfaces, data plane, control plane, and the top 10 links and their statistics. 
    Fig 5: Image showing MCN Performance dashboard
    Fig 6: Image showing MCN Security dashboard

    For more details on this feature check this MCN article. 

  • Multi-Cloud App Connect Dashboard

    App Connect is an L7 full-proxy service that uses the F5 Global Network to give apps effective local connectivity. App Connect dashboards focus on how an app is connected both internally and externally by visualizing traffic ingress to the front end and egress to each service endpoint. This service has also joined the trend by serving rich dashboards focused on application delivery: application owners can now observe and act on applications delivered across their multi-cloud network. The Performance dashboard shows details like HTTP & TCP traffic overview, throughput, and top load balancers, while the Application dashboard focuses on load balancer health, active alerts, and the list of existing load balancers. 
    Fig 7: Image showing App Connect Dashboard
    Fig 8: Image showing App dashboard

Content Delivery Network (CDN) Dashboard

When it comes to content delivery, performance plays a major role in smooth application streaming. With this in mind, the XC console has released a CDN performance dashboard that features the cache hit ratio, allowing network operators and app owners to optimize the regional delivery of content that can be cached. It also shows existing CDN distributions along with their metrics like request count, data transfer, etc.
Fig 9: Image showing CDN dashboard

Note: This is the first overview article in our XC dashboards series; stay tuned for upcoming articles on these newly implemented rich dashboards. 


Dashboards are highly recommended tools for visualizing data in a simple and clear way. In this article, we have provided some insights into the newly enhanced security dashboards for important features, which help users identify application concerns and take necessary action. 


For more details refer below links: 

  1. Overview of WAAP 
  2. Get started with F5 Distributed Cloud 
  3. Security Dashboards Simulator 
by Janibasha | F5 Employee
Posted in TechnicalArticles Sep 10, 2023 6:00:00 PM

Demo Guide: F5 Distributed Cloud DNS (SaaS Console)

DNS, the Domain Name System, is the mechanism by which humans and machines discover where to connect. It is the universal directory mapping names to addresses, and every service on the Internet depends on it. Keeping it available is critical to keeping our organizations online in the midst of DDoS attacks.

We often encounter DNS failure scenarios: a single on-prem, CPE-based DNS solution with a lone backup, or a single cloud DNS solution struggling with increasing traffic demands. And when we extend traditional DNS to an organization's websites and applications across different environments, most on-premises DNS solutions don't scale efficiently to support today's ever-expanding app footprints.

F5 Distributed Cloud DNS simplifies all of this by acting as either a primary or a secondary nameserver, and it provides global security, automatic failover, DDoS protection, TSIG authentication support, and, when used as a secondary DNS, DNSSEC support. As more apps are deployed in the cloud, F5 XC DNS scales with them and provides regional DNS as well. 

It also acts as an intelligent DNS load balancer from F5, directing application traffic across environments globally. It performs health checks, provides disaster recovery, and automates responses to activities and events to maintain high application performance. Moreover, its regional DNS capability redirects traffic according to geographic location, reducing the load on any single DNS server.

Here are the key areas where F5 Distributed Cloud DNS plays a vital role to solve:

  • Failover with Secondary DNS
  • Secure secondary DNS 
  • Primary DNS
  • Powerful DNS Load Balancing and Disaster Recovery

A GitHub repo is available that helps deploy the services for the key features above.

Finally, this demo guide supports customers by giving a clear set of instructions to deploy these services using F5 Distributed Cloud DNS.


by Shajiya_Shaik | F5 Employee
Posted in TechnicalArticles Sep 6, 2023 5:00:00 AM

Testing the security controls for a notional FDX Open Banking deployment


Unlike other Open Banking initiatives that are mandate-driven in a top-down approach, the North-American Open Banking standardisation effort is industry-led, in a bottom-up fashion by the Financial Data Exchange (FDX), a non-profit body. FDX's members are financial institutions, fintechs, payment networks and financial data stakeholders, collaboratively defining the technical standard for financial data sharing, known as FDX API.
As security is a core principle followed in the development of the FDX API, it's worth examining one of the ways in which F5 customers can secure and test their FDX deployments.


To understand the general architecture of an Open Banking deployment like FDX, it is helpful to visualise the API endpoints and components that play a central role in the standard, versus the back-end functions of typical Financial Institutions (the latter elements displayed as gray in the following diagram):


In typical Open Banking deployments, technical functions can be broadly grouped in Identity & Consent, Access and API management areas. These are core functions of any Open Banking standard, including FDX.

If we are to start adding the Security Controls (green) to the diagram and also show the actors that interact with the Open Banking deployment, the architecture becomes:


It is important to understand that Security Controls like the API Gateway, Web Application and API Protection or Next Generation Firewalls are just functions, rather than instances or infrastructure elements. In some architectures these functions could be implemented by the same instances/devices while in some other architectures they could be separate instances.
To help decide the best architecture for Open Banking deployments, it is worth checking the essential capabilities that these Security Controls should have:

WAAP (Web Application and API Protection)
  • Negative security model / known attack vectors database
  • Positive security model / zero-day attack detection
  • Source reputation scoring
  • Security event logging
  • L7 Denial of Service attack prevention
  • Brute-force and leaked-credential attack protection
  • Logging and SIEM/SOAR integration
  • Bot identification and management
  • Denial of Service Protection
  • Advanced API Security:
      Adherence to the FDX API OpenAPI spec
      Discovery of shadow APIs
API Gateway
  • Authentication and authorization
  • Quota management
  • Layer 3-4 Denial of Service attack prevention
  • Prevention of port scanning
  • Anomaly detection
  • Privacy protection for data at-rest
Client-side protection
  • Fraud detection

One possible architecture that could satisfy these requirements would look similar to the one depicted in the following high-level diagram, where NGINX is providing API Gateway functionality while F5 Distributed Cloud provides WAF, Bot Management and DDoS protection.


In this case, just for demo purposes, the notional FDX backend has been deployed as a Kubernetes workload on GKE and NGINX API Gateway was deployed as an Ingress Controller while Distributed Cloud functionality was implemented in F5's Distributed Cloud (XC) Regional Edges, however there is a great degree of flexibility in deploying these elements on public/private clouds or on-premises.
To learn more on the flexibility in deploying XC WAAP, you can read the article Deploy WAAP Anywhere with F5 Distributed Cloud

Automated security testing

Once the architectural decisions have been made, the next critical step is testing this deployment (with a focus on Security Controls testing) and adjust the security policies. This, of course, should be done continuously throughout the life of the application, as it evolves.
The challenge in testing such an environment comes from the fact that the Open Banking API is generally protected against unauthorised access via JSON Web Tokens (JWT), checked for authentication and authorisation at the API Gateway level. "Fixing" the JWT to a static value defeats the purpose of testing the actual configuration that is in (or will be moved to) Production, while generating the JWT automatically to enable scripted testing is fairly complex, as it involves going through all the stages a real user would go through to perform a financial transaction.

As an example, the consent journey that an end-user and the Data Recipient go through to obtain the JWT can be seen in the following diagram:


One solution to this challenge would be to use an API Tester that can perform the same actions as a real end-user: obtain the JWT in a pre-testing stage and feed it as an input to the security testing stages.
One such tool was built using the Open Source components described in the diagram below and is available on GitHub.

The API Tester uses Robot Framework as a testing framework, orchestrating the other components. Selenium WebDriver is used to automate the end-user session that authenticates to the Financial Institution and gives the user consent for a particular type of transaction. The JWT that is obtained is then passed by Robot to the other testing stages which, for demo purposes, perform functionality tests (ensuring valid calls are being allowed) and security testing (ensuring, for example, known API attacks are being blocked).
Valentin_Tobi_0-1692239242918.png
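The JWT hand-off between stages can be sketched in shell: a later test stage typically decodes the token's payload (the second, base64url-encoded, dot-separated segment) to inspect claims before reuse. This is a minimal illustration, not part of the actual Robot Framework tester, and the token below is a made-up example, not a real credential.

```shell
# Decode a JWT payload: take segment 2, convert base64url to base64,
# restore padding, then decode.
decode_jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  pad=$(printf '%*s' $(( (4 - ${#seg} % 4) % 4 )) '' | tr ' ' '=')
  printf '%s%s' "$seg" "$pad" | base64 -d
}

# Hypothetical token with a single "scope" claim.
jwt='eyJhbGciOiJIUzI1NiJ9.eyJzY29wZSI6ImFjY291bnRzOnJlYWQifQ.sig'
decode_jwt_payload "$jwt"   # -> {"scope":"accounts:read"}
```

Note this only decodes the payload; signature verification is a separate step performed by the API Gateway.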


The API Tester is automatically deployed and run using GitHub Actions and Terraform Cloud. A full pipeline goes through the deployment of the GCP GKE infrastructure required to host the notional FDX back-end and the NGINX Ingress Controller API Gateway, the F5 XC WAAP (Web Application and API Protection), and the API Tester hosted on the F5 XC vk8s infrastructure.
A run is initiated by creating a repository branch and, following the deployment and test run, a report is received via email.

Here's the API Tester in action:


F5 XC WAAP and NGINX API Gateway can provide the levels of protection required by the Financial Services industry; this article focused on a possible security architecture for FDX, the North American standard for Open Banking.
To test the security posture of the FDX Security Controls, a new API Tester framework is needed; the main challenge it solves is the automated generation of the JWT, following the same journey as a real end-user.
This allows testing deployments whose configuration is similar to the one found in Production.

For more information or to get started:

by Valentin_Tobi | F5 Employee
Posted in TechnicalArticles Sep 5, 2023 5:00:00 AM

F5 Distributed Cloud - Service Policy - Header Matching Logic & Processing


Who knows what an iRule is?  iRules have been used by F5 BIG-IP customers for a quarter of a century!  One of the most common use cases for iRules is security decisions.  If you're not coming from a BIG-IP and iRules background, what if I told you that you could apply thousands of combinations of L4-L7 match criteria in order to take action on specific traffic?  This is what a Service Policy provides, similar to iRules: the ability to match traffic and allow, deny, flag, or tune application security policy based on that match.  I am often asked, "Can F5 Distributed Cloud block ____ the same way I do with iRules?", and most commonly the answer is: absolutely, with a Service Policy.  

Story time

Recently, I had a customer come to me with a challenge: blocking a specific attack based on a combination of headers.  This is a common application security practice, specifically for L7 DDoS attacks, or even Account Take Over (ATO) attempts via Credential Stuffing/Brute Force Login.  While F5 Distributed Cloud's Bot Defense or Malicious Users feature sets might be more dynamic tools in the toolbox for these attacks, a Service Policy is great for taking quick action.  It is critical that you clearly identify the match criteria in order to ensure your service policy will not block good traffic.  

Service Policy Logic

As stated earlier, the attack was identified by a specific combination of headers and values of these headers.  The specific headers looked something like below (taken from my test environment and curl tests):

curl -I --location --request GET 'https://host2.domain.com' \
--header 'User-Agent: GoogleMobile-9.1.76' \
--header 'Content-Type: application/json; charset=UTF-8' \
--header 'Accept-Encoding: gzip, deflate, br' \
--header 'partner-name: GOOGLE' \
--header 'Referer: https://host.domain.com/'

The combination of these headers all had to be present, meaning we needed "and" logic for matching the headers and their values.  Seems pretty simple, but this is where the conversation between the customer and myself came into play.  When applying all of the headers to match as shown below, they were not matching.  Can you guess why?

Figure A: Headers - Flat

The first thought that comes to mind is probably case sensitivity in the values.  However, if we take a closer look specifically at the 'partner-name' header configuration, I've placed a transformation on this specific header.  So the 'partner-name' isn't the problem.

Figure B: A transformer is applied to the request traffic attribute values before evaluating for match.

Give up?  The issue in this Service Policy configuration is the 'Accept-Encoding' header, specifically the ',' {comma} character in the value.  In the F5 Distributed Cloud Service Policy feature, we treat commas as separate headers, each with an individual value.  The reason for this is that a request can have the same header multiple times, or it can have multiple values in a single header.  In order to keep parsing consistent, headers with comma-delimited values are separated into multiple headers before matching.

I thought I could be smart when initially testing this, and added multiple values to a single header.  This will not match: for one, because they are not separate headers with values, but also because of how multiple values within a single header are evaluated.  Multiple values in a single header configuration within the service policy creates "or" logic, and we're looking for "and" logic across all headers and their exact values. 

Figure C: Multiple Values in Single Header
Figure D: Multiple Values within a Single Header - "or" Logic for this header

In order to get the proper match with "and" logic across all headers and their values, we need to apply the same header name multiple times.  Important to note: the Content-Type header has a ';' {semicolon}, which is not a delimiter in F5 Distributed Cloud service policy logic, and will match just fine as defined in the policy shown below.

Figure E: Multiple Headers defined, with individual values, will provide "and" logic for all headers and their values.
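The comma-splitting and "and"/"or" behavior described above can be modeled with a toy shell function. This is purely an illustration of the matching rules, not XC internals: the `header_matches` helper is hypothetical, commas split a received value into separate values before matching, multiple configured values for one header combine with "or", and distinct headers combine with "and".

```shell
# $1 = received header value; remaining args = configured values ("or" logic).
header_matches() {
  recv=$1; shift
  for want in "$@"; do
    # Split on commas (one value per line), trim leading spaces, exact match.
    printf '%s\n' "$recv" | tr ',' '\n' | sed 's/^ *//' | grep -qx "$want" && return 0
  done
  return 1
}

# "Accept-Encoding: gzip, deflate, br" is treated as three separate values:
header_matches 'gzip, deflate, br' 'deflate' && echo 'or: matched'

# "and" across headers = every header_matches call must succeed:
header_matches 'gzip, deflate, br' 'gzip' \
  && header_matches 'GOOGLE' 'GOOGLE' \
  && echo 'and: matched'
```

Removing the 'g' from gzip in the received value would make the second example fail, mirroring the curl tests below.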


In these tests, I am going to first provide an exact match to block the traffic.  When we match, we provide a 403 response code back to the client.  Within the individual Load Balancer objects of F5 Distributed Cloud, you can customize the messaging that comes along with the 403 response code, or any response code for that matter.  For my tests, I'll simply use curl and update the different headers.  After this initial successful block, I'll show a few examples of changing the headers sent with curl.  For the "and" logic, any change to the headers should result in a 200 response code.  For the "or" logic, it'll depend on how I change the headers.

"and" logic

In this testing section, the service policy is configured like Figure E above.

All values are an exact match; with "and" logic, the 403 response code identifies the block from F5 Distributed Cloud.
When removing the 'g' character from gzip, the "and" logic no longer matches, as not every value is exact. This results in a 200 response code from the origin server and F5 Distributed Cloud.

We've focused on the Accept-Encoding header, but within "and" logic, it doesn't matter which header we change. If all headers do not match, we will not block. In this case, we updated the User-Agent header, and received a response code of 200.

"or" logic

In this testing section, the service policy is configured like Figure D above.  

This is an exact match, and the Service Policy blocked the request, sending a 403 response code back to the client.

With "or" logic on the Accept-Encoding header, one of the values must match. Since I removed the first letter of every value, there was not a match, and F5 Distributed Cloud passed the traffic to the origin server. The origin and F5 Distributed Cloud returned a 200 response code.

When adding the 'g' back to gzip, but leaving all other values missing their first character, we once again get a block at the service policy, and a 403 response code. Again, this is "or" logic, so only one value must match.


A Service Policy is a very powerful engine within F5 Distributed Cloud.  We've scratched the surface of service policies in this article as it pertains to header matching and logic.  Other match criteria examples are IP Threat Category (Reputation), ASN, HTTP Method, HTTP Path, HTTP Query Parameters, HTTP Headers, Cookies, Arguments, Request Body, and so on.  The combination of these match criteria, and the order of operations of each service policy rule, can make a huge difference in the security posture of your application.  These capabilities within the application layer are critical to the security of your application services.  As F5 Distributed Cloud is your strategic point of application delivery and control, I hope you're able to use service policies to elevate your application security posture.

by MattHarmon | F5 Employee
Posted in TechnicalArticles Aug 29, 2023 5:00:00 AM

F5 Distributed Cloud Network Connect and NetApp BlueXP - Data Disaster Recovery Across Hybrid Cloud

An important and long-standing need for enterprise storage is the ability to recover from disasters through both rapid and easy access to constantly replicated data volumes.   Beyond reducing corporate downtime from recovery events, the replicated volumes are also critical for cloning purposes to facilitate items such as research into data trends or to perform advanced analytics on enterprise data.  

A modern need exists to quickly replicate data across a wide breadth of sites, with diversity among the major cloud providers, such as AWS, Azure, and Google. The ability to simultaneously replicate critical data to more than one of these hyperscalers addresses a major industry concern: vendor lock-in. Modern data stores must be efficiently and quickly saved to, and acted upon, using whichever cloud provider an enterprise desires. Principal reasons for this hybrid cloud requirement include maximizing return on investment by shopping for attractive price points or more 9's of reliability.

Although major cloud providers offer individual, unique VPN-style solutions to support data replication, for example Microsoft Azure VPN Gateway deployments, running concurrent, differing solutions can quickly become an administrative burden. Each cloud provider offers slightly distinctive networking and security tooling. A critical concern is the shortage of the advanced skill sets often required to maintain configuration and diagnostic processes for competing cloud storage solutions. With flux to be expected in staffing, the long-term cost of stitching disparate cloud technologies into one cohesive enterprise offering has proven high.

This is precisely the multi-cloud scenario where F5 Distributed Cloud (XC) can complement industry-leading, enterprise-grade storage solutions from a major player like NetApp. With F5 Distributed Cloud Network Connect, multiple points of presence of an enterprise, including on-prem data centers and a multitude of cloud properties, are seamlessly tied together through a multi-cloud networking (MCN) offering that leverages a 20 Tbps backbone. Service turn-up is measured in minutes, not days.

An excellent, complementary use of the F5 XC hybrid secure network offering is NetApp's modern approach to managing enterprise data estates, NetApp BlueXP. This unified, cloud-based control plane from NetApp allows an enterprise to manage volumes both on-prem and in major cloud providers, and in turn set up high-value services like data replication. Like the simple workflows F5 XC delivers for secure networking setup, NetApp BlueXP also consists of intuitive workflows. For instance, simply drag one volume onto another volume on a point-and-click working canvas and standard SnapMirror is enacted. F5 XC can underpin the connectivity requirement of a multi-cloud hybrid environment by handling truly seamless and secure network communications.

F5 Distributed Cloud Multi-Cloud Network (MCN) Setup

The first step in demonstrating the F5 and NetApp solutions working in concert to provide efficient disaster recovery of enterprise volumes was to set up F5 XC customer edge (CE) sites within Azure, AWS and On-Prem data center locations.   The CE is a security demarcation point, a virtualized or dedicated server appliance, allowing highly controlled access to key enterprise resources from specific locales selected by the enterprise.   For instance, a typical CE deployment for MCN purposes is a 2-port device with inside ports permitting selective access to important networks and resources.

Each CE automatically multi-homes to geographically close F5 regional edge (RE) sites; no administrative burden is incurred and no networking command-line workflows need be learned, since CE deployments are wizard-based workflows with automatic encrypted tunnels established. The following screenshot demonstrates, in the red highlighted area, that a sample Azure CE site freshly deployed in the Azure Americas-2 region has automatic encrypted tunnels set up to New York and Washington, DC RE nodes.



Regardless of the site, be it an AWS VPC, an AWS services VPC supporting transit gateway (TGW), an Azure VNET, or an on-prem location, the net result is always a rapid setup with redundant auto-tunneling to the F5 international fabric provided by the global RE network. Other CE attachment models can be invoked, such as direct site-to-site connectivity that bypasses the RE network; however, the focus of this document is the most prevalent approach, which harnesses the uptime and bandwidth advantages offered by the RE network's gluing together of customer sites.

With connectivity available between the inside interfaces of deployed CEs, standard firewall rules easily added, and the option of service insertion of third-party NGFW technology such as Palo Alto firewall instances, the plumbing to efficiently interconnect NetApp volumes for ongoing replication is now in place.

The objective for the F5 XC deployment was to utilize the NetworkConnect module to allow layer 3 connectivity between the inside ports of CEs regardless of site type. In other words, connectivity between networked resources at on-prem, AWS, or Azure sites is enabled quickly with a consistent and simple guided workflow. The practical application of this layer-3 style of MCN was the connectivity of NetApp volumes, as depicted in the following diagram.


NetApp BlueXP Data Management Overview

A widely embraced enterprise-class file storage offering is the industry-leading NetApp ONTAP solution.   When deployed on-prem, the solution allows shared file storage, often using NFS or SMB protocols for file storage, frequently with multiple nodes used to create a storage cluster.   Although originally hardware appliance-oriented in nature, modern incarnations of on-prem ONTAP solutions can easily and frequently utilize virtualized appliances.

Both NetApp and F5, in keeping with modern control plane industry trends, have moved towards a centralized, portal-based approach to configuration, whether it be storage appliances (NetApp) or multi cloud networking (F5).   This SaaS approach to configuration and monitoring means control plane software is always up-to-date and requires no day-to-day management.  In the case of NetApp, this modern control plane is instantiated with the BlueXP cloud-based portal.



The sample BlueXP canvas displayed above demonstrates the diversity of data estate entities that can be managed from one workspace, with volumes both on-premises and AWS cloud-based, along with Amazon S3 storage seen.

NetApp offers a widely used cloud-based implementation of file storage, Cloud Volumes ONTAP (CVO), which serves as an excellent repository for replicating traditional on-premises volumes. In the demonstration environment, both AWS and Azure were harnessed to quickly set up CVO instances. For BlueXP to establish a workspace involving a managed CVO instance, a "Connector" is deployed in the AWS VPC or Azure VNet. This connector facilitates the BlueXP control plane management functions for hybrid-cloud storage.



Upon establishing on-premises to AWS and Azure connectivity, enabled by the F5 XC Customer Edge (CE) nodes deployed at each site, a vast and mature range of features is provided to the BlueXP operator.



As highlighted above, a core function of the BlueXP services is replication; in this workspace one can see the on-premises cluster being replicated automatically to an Azure CVO instance.



F5 and NetApp Joint Deployment Summary

The result of combining the F5 Distributed Cloud multi-cloud networking support with the NetApp ability to safeguard mission critical enterprise data, anywhere, was found to be a smooth, intuitive set of guided configuration steps.   Within an hour, protected inside networks were established in two popular cloud providers, AWS and Azure, as well as in an existing on premises data center.   With the connectivity encrypted and standard firewall rules available, including the option to run data flows through inline third-party NGFW instances, the focus upon practical usage of the cloud infrastructure could commence.

A multi-site file storage solution was deployed using the NetApp BlueXP SaaS console, whereby an on premises ONTAP cluster received local files through the NFS protocol.   To demonstrate the value of a multi-cloud deployment, the F5 XC NetworkConnect module allowed real-time file replication of the on-prem cluster contents to separate and independent volumes securely located within an AWS VPC and Azure VNet, respectively.  Using F5 XC, the target networks within the cloud providers were highly secured, only permitting access from the data center.

The net result is a solution that can accommodate disaster recovery requirements, for instance a clone of the AWS or Azure volumes could be created and utilized for business continuity in the event of data corruption or disk failure on premises.   Other use cases would be to clone the cloud-based volumes for research and development purposes, analytics, and further backup purposes that could utilize snapshotting or imaging of the data.   The inherent redundancy offered by using multiple, secured cloud instances could be enhanced easily by expanding to other hyperscalers, for instance Google Cloud Platform when business purposes dictate such a configuration is prudent.

Additional Resources

A simple and intuitive simulator is available to walk users quickly through the setup of an F5 Distributed Cloud MCN deployment such as the one reflected in this article.  The simulator can be found here.

For a complete, comprehensive walk-through of F5 Distributed Cloud Multi-Cloud Networking (MCN), including setup steps, please see this DevCentral article - Multi-Cloud Networking Walkthrough with Distributed Cloud.

by Steve_Gorman | F5 Employee
Posted in TechnicalArticles Aug 28, 2023 5:00:00 AM

Insights of F5 Distributed Cloud WAAP Events Export and Trends features

As part of its release cycle management, F5 Distributed Cloud (F5 XC) keeps releasing new features. The July[1] upgrade released two new features in Web Application and API Protection (WAAP).

Let’s dive into them one by one. 

WAAP Events Export: 

Security dashboards capture different types of logging metrics, and sometimes users need these logs for offline analysis. The WAAP Events Export feature addresses this need by exporting the latest 500 security-related logs in CSV format. Users can export logs from the Events, Incidents, and Requests tabs of the security dashboard.

The feature can be explored with the following steps:

  1. Log in to the F5 XC console and navigate to the “Distributed Apps” menu
  2. Under the “Load Balancers” section, click on the “HTTP Load Balancers” page
  3. Click on the “Security Monitoring” link under your load balancer name
  4. Navigate to the “Security Analytics” tab
  5. Filter your needed logs and then click on the “Download” button as below
    Fig 1: Image showing navigation path

  6. Logs can also be downloaded from “Incidents” and “Requests” tabs as below
    Fig 2: Image showing export feature in Incidents tab
    Fig 3: Image showing export feature in Requests tab

WAAP Trends: 

Production security dashboards show plenty of logging information for understanding the current security posture of apps and APIs for ongoing traffic. Owners can go through them to analyze the traffic and decide whether the ongoing data is malicious and poses any threats. This process is somewhat time-consuming and needs human expertise in traffic analysis. Users are looking for a top-level overview of how many attacks were seen in a specific period compared to the previous period.

The WAAP Trends feature in the security dashboards of the HTTP load balancer enables users to view the change in metrics (up or down) compared with the previous period. Incoming traffic is analyzed using internal tools to decide the sentiment (positive, negative, or neutral), which is displayed in the UI, thereby saving a lot of time. Users can instantly check the sentiment and, if needed, update the existing configurations to safeguard their applications.
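The period-over-period comparison the Trends widgets surface can be sketched as below. The actual computation inside F5 XC is internal; this helper is purely illustrative:

```python
def trend(current, previous):
    """Direction of change between two reporting periods (illustrative only)."""
    if previous == 0:
        # No baseline: any activity counts as an upward trend.
        return "up" if current > 0 else "flat"
    if current > previous:
        return "up"
    if current < previous:
        return "down"
    return "flat"

print(trend(120, 100))  # up
print(trend(80, 100))   # down
print(trend(100, 100))  # flat
```

A widget would pair this direction with the raw counts so an operator can see at a glance whether, say, WAF or Bot Defense events rose relative to the prior period.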

As I was writing this article, I kept remembering the famous quote "The trend is your friend," which conveys the importance of identifying the current trend when safeguarding your applications.

The feature can be explored with the following steps:

  1. Log in to the F5 XC console and navigate to the “Distributed Apps” menu
  2. Under the “Load Balancers” section, click on the “HTTP Load Balancers” page
  3. Click on the “Security Monitoring” link under your load balancer name
  4. Trends are available for different features like API Security, Bot Defense, WAF, Security Policy, etc. Check the current trend in each widget field as shown below -
    Fig 4: Image showing trends in security dashboard

  5. The Trend feature is also available in the Bot Defense dashboard for fields like Human & Malicious traffic, Good bots, etc., as displayed below -
    Fig 5: Image showing Bot Defense Trends

I hope this article has provided a useful summary of the newly implemented WAAP Events Export and Trends features, which focus on logging and security dashboards.

Stay tuned for more feature articles. For more details, refer to the links below:

  1. Overview of WAAP
  2. Load balancer creation steps
  3. Monitoring load balancer
  4. Get started with Distributed Cloud 
by Janibasha | F5 Employee
Posted in TechnicalArticles Aug 22, 2023 5:00:00 AM

Getting Started with Automating Deployment of MCN & Edge Compute


F5 Distributed Cloud services (XC) provide full REST APIs to enable automation of the deployment and management of multi-cloud infrastructure. Organizations looking to implement infrastructure-as-code operations for modern apps, or to distribute and secure multi-cloud deployments, can utilize and adapt the Terraform and Ansible scripts in the many F5 DevCentral articles that cover automation topics for F5 Distributed Cloud. Typically, these scripts automate and help to consistently:

  • deliver resources, services, and apps into multiple cloud environments (AWS, Azure), configure app resources such as Kubernetes (K8s), and set up Multi-Cloud Networking (MCN) between environments using Terraform scripts with a Terraform provider configuration for each target cloud;
  • secure application resources across customers' distributed cloud infrastructure with consistent networking and security policies;
  • operate and manage such configurations across multiple clouds and across app stack layers (VMs, K8s, runtimes), networking configuration (MCN), and app connectivity (App Connect).

This article focuses only on the Deliver part of the distributed app lifecycle, where, using Terraform scripts with F5 Distributed Cloud Services, organizations can easily deploy and configure multi-cloud networking & app connectivity for their distributed applications that span across:

  • Customer Edge (CE) public cloud
  • Edge Compute (Appstack, for such scenarios as Retail Branches or compute on-prem)
  • Regional Edge (RE) deployments

Getting Started with Automation

The easiest place to get started with automation of Multi-Cloud Networking (MCN) and Edge Compute scenarios is by cloning the corresponding GitHub repositories from the Demo Guides, which include sample applications and provide opportunities to see the automation scripts in action. The Terraform scripts within the following Demo Guides can be used as templates: with a quick update of the variables unique to your environment, you can customize them to your organization's requirements to automate repetitive tasks or the creation of resources.

The Multi-cloud networking use-cases Demo Guide lets you use Terraform to enable connectivity for multiple clouds and explore using HTTP and TCP load balancers to connect the provided sample application. You can use the provided scripts in the GitHub repositories to deploy the required sample app and other components representative of a traditional 3-tier app architecture (backend + database + frontend).

Furthermore, the scripts provide flexibility of choosing the target clouds (AWS or Azure or both), which you can adapt to your environment and app topologies based on which clouds the different app services should be deployed to. Use the guide to get familiar with how to update variables for each cloud configuration, so that you can further customize to your environment to help automate and simplify deployment of the networking topologies across Azure and AWS, ultimately saving time and effort.

The Edge Compute for Multi-cloud Apps Demo Guide uses Terraform scripts to help automate deployment of the application infrastructure across AWS (sample app and other components representative of a traditional 3-tier app architecture - backend, database, frontend). The result is a multi-cloud architecture, with components deployed on Microsoft Azure and Amazon AWS.

By adapting the included Terraform script you can easily deploy and securely network app services to create a distributed app model that spans across:

  • Customer Edge (CE) public cloud
  • Retail Branch (AppStack on a private cloud)
  • Regional Edge (RE)

In the process you get familiar with the configuration of TCP and HTTP Load Balancers, create a vK8s that spans multiple locations / clouds, and deploy distributed app services across those locations with the help of the Terraform scripts.

The Deploying high-availability configurations Demo Guide is an important resource for getting familiar with and automating High-Availability (HA) configuration of backend resources. In this guide, as an example, you can use a PostgreSQL database HA deployment on a CE (Customer Edge), a common use-case leveraging an F5 Distributed Cloud Customer Edge for deploying a backend. First, deploy the AWS Site environment, followed by deployment of a vK8s, and then customize and run the Bitnami Helm chart to configure a multi-node PostgreSQL deployment.

Of course, you can leverage this type of automation with a Helm chart of your choice to configure a different backend resource or database type. Adapt to your environment with a few changes to the script variables, and feel free to combine with scripts from the other two guides to deploy the app(s) and configure networking (MCN) should you choose to automate the entire workflow.

Customization and Adaptation

Terraform scripts represent ready-to-use code, which you can easily adapt to your own apps, environments, and services, or extend as needed. The baseline for most scripts is the Volterra Provider, with the required edits/updates of the variables in Terraform. These variables are special elements that allow us to store and pass values to different aspects of modules without changing the code in the main configuration file. Variables offer the flexibility to update the settings and parameters of the infrastructure, which facilitates its configuration and support.

Variables are stored in the .tf files of the respective folders. Using the Deploying high-availability configurations Demo Guide as an example, you change the environment variable values related to your app, which you can find in the terraform folder and the application subfolder. Open the var.tf file to update the values:


More detailed information on variables can be found here.


In summary, Demo Guide repositories include Terraform scripts used to help automate different operations, including deployment of the environment required for the sample distributed app, as well as deploying the app itself. You can take a closer look at the Demo Guide use-cases together with their respective Terraform scripts, run a quick test to get familiar with the use-case, and then adapt the scripts to your environment and your applications.

Whether your app has high-availability requirements or a distributed multi-cloud infrastructure, using Terraform with F5 Distributed Cloud Services can simplify deployment, automate infrastructure on any cloud, and save time and effort managing and securing app resources in any cloud or data center.


Edge Compute for Multi-cloud Apps Demo Guide

Terraform scripts & assets for the Edge Compute Demo Guide



by Nik_Garkusha | F5 Employee
Posted in TechnicalArticles Aug 21, 2023 5:00:00 AM

Making Mobile SDK Integration Ridiculously Easy with F5 XC Mobile SDK Integrator


To prevent attackers from exploiting mobile apps to launch bots, F5 provides customers with the F5 Distributed Cloud (XC) Mobile SDK, which collects signals for the detection of bots. To gain this protection, the SDK must be integrated into mobile apps, a process F5 explains in clear, step-by-step technical documentation. Now, F5 provides an even easier option: the F5 Distributed Cloud Mobile SDK Integrator, a console app that performs the integration directly on app binaries without any need for coding, which means no need for programmer resources and no integration delays.

The Mobile SDK Integrator supports most iOS and Android native apps. As a console application, it can be tied directly into CI/CD pipelines to support rapid deployments.


Use Cases

While motivations for using SDK Integrator may vary, below are some of the more common reasons:

  1. Emergency integrations can be accomplished quickly and correctly. Customers experiencing active bot attacks may need to integrate with F5 Distributed Cloud Bot Defense immediately and minimize integration risks.
  2. Apps using 3rd-party libraries may not be suitable for manual integration, particularly when these libraries do not provide APIs for adding HTTP headers into network requests. In such cases, the SDK Integrator can inject SDK calls into the underlying network stack, bypassing the limitations of the network library.
  3. Customers who own multiple apps, which may have different architectures, or are managed by different owners, need a single integration method, one which works for all app architectures and is simple to roll out to multiple teams. The SDK Integrator facilitates a universal integration approach.

How It Works

The work of the SDK Integrator is done through two commands: the first command creates a configuration profile for the SDK injection, and the second performs the injection.

Step 1:

$ python3 ./create_config.py --target-os Android --apiguard-config ./base_configuration_android.json --url-filter "*.domain.com/*/login" --enable-logs --outfile my_app_android_profile.dat

In Step 1, apiguard-config lets the user specify the base configuration to be used in the integration. With url-filter, we specify the pattern for URLs that require Bot Defense protection; enable-logs allows APIGuard logs to be seen in the console; and outfile specifies the name of this integration profile.

Step 2:

$ java -jar SDK-Integrator.jar --plugin F5-XC-Mobile-SDK-Integrator-Android-plugin-4.1.1-4.dat --plugin my_app_android_profile.dat ./input_app.apk --output ./output_app.apk --keystore ~/my-key.keystore --keyname mykeyname --keypass xyz123 --storepass xyz123     

In Step 2, we specify which SDK Integrator plugin and configuration profile should be used. In the same step, we can optionally pass parameters for app signing: keystore, keyname, keypass, and storepass. The output parameter specifies the resulting file name. The resulting .apk or .aab file is a fully integrated app, which can be tested and released.

Injection steps for iOS are similar. The commands are described in greater detail in the SDK Integrator user guides distributed with the SDK Integrator.


Mobile SDK Integrator Video



In Conclusion

To thwart attackers from capitalizing on mobile apps to initiate automated bots, the F5 Distributed Cloud Mobile SDK Integrator seamlessly incorporates the SDK into app binaries, completely bypassing the need for coding and making the process easy and fast.

by Kyle_Roberts | F5 Employee
Posted in TechnicalArticles Aug 18, 2023 4:17:57 PM

Re: Export BIG-IP AWAF URLs to Swagger File

To add to Nikoolayy1's comment, F5 can generate API Definitions within XC, and we are working on integration for BIG-IP and NGINX deployments. This will allow deployments that do not use XC for client traffic to obtain the Swagger/OpenAPI files and security assessments available for XC Load Balancers.

Currently, you use a logger on the proxy (BIG-IP or NGINX) to gather the request and response data, which is then sent to XC via a separate service. The advantage here is that you don't need to change the traffic flow of your client traffic.

If you're interested in learning more, please PM me or reach out to your local account team.


Tags: F5 XC
by Scheff | F5 Employee
Posted in TechnicalForum Aug 18, 2023 9:45:57 AM
Distributed Cloud Users Hub

An open group to foster discourse around the integration of security, networking, and application management services across public/private cloud and network edge compute services.
