Distributed Cloud Users
Discuss the integration of security, networking, and application delivery services

F5 Distributed Cloud Origin Server Subset Rules


F5 Distributed Cloud (XC) origin server subset rules let you create match conditions on incoming source traffic to the HTTP load balancer. The match conditions include country, ASN, Regional Edge (RE), IP address, and client label selectors, which select a subset of destination origin servers. This enables customized routing based on request attributes.

Scenario description:

Holiday retail sales increase every year, driving a surge in e-commerce shopping around Thanksgiving, Black Friday, Cyber Monday, and the broader holiday season. Web traffic has been observed to spike by 38% during this window, Black Friday sees 3x normal traffic, and the global holiday season has drawn 1.7 billion online visits. Under these circumstances, users in certain locations consume more than 50% of global traffic. An event of this nature requires infrastructure that can easily scale up to match the surge.

Solution suggested by F5 XC:

One of the most suitable solutions for this challenge is to identify users' demand and geographical location, then distribute the traffic by adding bandwidth to existing or new servers. Diversifying traffic by geolocation lets users reach servers provisioned for their region, avoiding wait times and outages during this period.

This is achieved using F5 XC origin server subset rules, which redirect traffic based on geolocation.

F5 XC SaaS Console Configs:

Follow the steps below to redirect traffic and address the scenario described above:

  1. Create a label (key-value pair)
  2. Add labels to one or more origin servers
  3. Create a subset rule in the Load Balancer
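Conceptually, the three steps above amount to labeling origins, declaring a subset class on the label key, and routing by client attributes. The following Python sketch illustrates that selection logic; it is not F5 XC internals. The "subset-geo" key and the "US"/"other" values come from this article, while the server names and rule structure are hypothetical:

```python
# Illustrative sketch (not F5 XC internals) of label-based subset selection.

# Steps 1 and 2: origin servers tagged with a label (key-value pair).
ORIGIN_SERVERS = {
    "origin-1": {"subset-geo": "US"},
    "origin-2": {"subset-geo": "US"},
    "origin-3": {"subset-geo": "other"},
    "origin-4": {"subset-geo": "other"},
}

# Step 3: subset rules map a client attribute (country) to label values.
# None acts as the catch-all default rule.
SUBSET_RULES = [
    {"match_country": "US", "labels": {"subset-geo": "US"}},
    {"match_country": None, "labels": {"subset-geo": "other"}},
]

def select_subset(client_country: str) -> list[str]:
    """Return the origin servers whose labels match the first rule hit."""
    for rule in SUBSET_RULES:
        if rule["match_country"] in (client_country, None):
            wanted = rule["labels"]
            return [name for name, labels in ORIGIN_SERVERS.items()
                    if all(labels.get(k) == v for k, v in wanted.items())]
    return list(ORIGIN_SERVERS)  # no rule matched: fall back to the full pool

print(select_subset("US"))  # ['origin-1', 'origin-2']
print(select_subset("DE"))  # ['origin-3', 'origin-4']
```

US clients land only on the US-labeled servers, while everyone else is balanced across the rest of the pool, which is exactly the traffic split the scenario calls for.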

Step 1: Creating a label (key-value pair).

  • Log in to the F5 XC Console and select the Shared Configuration box.


  • Select Manage in the left menu, then select Labels > Known Keys. Click the Add Known Key button.


  • In the tab that opens, enter the label key name along with values for the key.
  • Use the + Add Label Value link to add more than one value.
  • Click the Add Key button to finish creating the key-value pair.


  • Verify the labels, displayed as key-value pairs, by navigating to Manage > Labels > Known Labels.


Step 2: Adding labels to one or more Origin Servers.

  • From the F5 XC home page, click Multi-Cloud App Connect and navigate to Manage > Load Balancers > Origin Pools. Select the origin pool to which labels must be assigned and click Edit Configuration.
  • From the Origin server section, click on the pencil icon from the Actions column to add labels to the origin servers.


  • Enable the Show Advanced Fields option at the top right of the Origin Servers section.
  • Click Add Labels under Origin Server Labels, select the key "subset-geo", and assign the value "US". Click Apply.



  • The labels are now displayed in the Labels column for the respective origin server, as shown below.


  • Repeat the same process for every server that should receive user traffic from the US geolocation.


  • Assign the remaining servers in the pool their associated labels as required. In this scenario, the key "subset-geo" is assigned the value "other".
  • Click on Apply. This assigns the Labels to the Origin Servers.


  • The resultant display is shown above.
  • Scroll down to the Other Settings section and click on Configure.


  • In the Origin Server Subsets section, select Enable Subset Load Balancing from the Enable/Disable Subset Load Balancing drop-down and click Configure.


  • Click Add Item under the Subset Classes section.


  • Enter the key created in Step 1 in the List of Keys for Subset section and click the Apply button.


  • Click Apply twice to return to the origin pool configuration, then click Save and Exit to save the origin pool.

Step 3: Creating subset rule in Load Balancer.

  • From the F5 XC home page, click Multi-Cloud App Connect and navigate to Manage > Load Balancers > HTTP Load Balancers.
  • Select the load balancer on which the subset rule must be configured and click Manage Configuration.


  • Enable the Show Advanced Fields option at the top right of the Origins section and click Configure under Origin Server Subset Rules.


  • Click on Add Item in Origin Server Subset Rules section to add the Origin Server Subset rules.


  • Create a named rule. Under the Action section, click Add Label and provide the key-value pair created in Step 1.
  • From the Clients section, select United States from the Country Codes List drop-down menu. Click Apply.


  • Create another rule as above, using the key-value pair "subset-geo" and "other" under the Action section.
  • Include the necessary countries in the Country Codes List in the Clients section.

This second rule redirects traffic from the listed countries to servers other than those allotted to the United States, preserving bandwidth for users in the United States.

  • Scroll down to the bottom and click on Apply.



  • Click on Apply and then Save and Exit.


The logs above show that users from the US geolocation are directed according to the origin server label associated with them.


Users outside the US are load-balanced to the other origin servers in the pool, per the label configuration.

As a result, US users benefit from the full capacity of their allocated servers, which helps avoid outages and bottlenecks.

Note: The same requirement can also be met with the RE match condition by adding the necessary REs, as shown below.



F5 XC analyzes traffic based on attributes of its origin, such as Regional Edge, geolocation, and IP address, and redirects it according to the Origin Server Subset Rules configuration. This simple, effective technique helps meet user demand quickly and mitigates major issues during the peak usage hours of e-commerce sites.


by chaithanya_dileep | F5 Employee
Posted in TechnicalArticles Nov 21, 2023 5:00:00 AM

Support of WAF Signature Staging in F5 Distributed Cloud (XC)


Attack signatures are rules and patterns that identify attacks against your web application. When the load balancer in the F5 Distributed Cloud (XC) console receives a client request, it compares the request against the attack signatures associated with your WAF policy. A match triggers an "attack signature detected" violation, which either alarms or blocks depending on the enforcement mode of your WAF policy. A well-tuned WAF policy includes only the attack signatures needed to protect your application: include too many and you waste resources keeping up with signatures you don't need; include too few and an attack might compromise your application.

F5 XC WAF supports multiple attack signature states: enable, disable, suppress, auto-suppress, and staging. This article focuses on how F5 XC WAF supports staging: detecting staged attack signatures and reporting their details while still allowing the requests through to the application.


A request that triggers a staged signature will not be blocked, but the signature trigger details appear in the security event. When a new or updated attack signature is automatically placed in staging, you won't know how it will affect your application until you have had time to test it. After testing the new signatures, you can take them out of staging and apply the respective event action to protect your application.
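The staging semantics described above can be sketched as follows. This is an illustrative model, not F5 XC's implementation; the `Signature` type is hypothetical and the signature ID reuses the example from this article:

```python
# Illustrative sketch of WAF signature staging semantics: a staged
# signature that matches only logs a security event; an enforced
# signature blocks when the policy's enforcement mode is "blocking".
from dataclasses import dataclass

@dataclass
class Signature:
    sig_id: int
    staged: bool

def evaluate(matched: list, enforcement_mode: str = "blocking") -> str:
    """Return the action the WAF would take for the matched signatures."""
    enforced_hit = any(not s.staged for s in matched)
    staged_hit = any(s.staged for s in matched)
    if enforced_hit and enforcement_mode == "blocking":
        return "block"   # enforced signature in blocking mode
    if enforced_hit or staged_hit:
        return "alarm"   # log a security event, let the request pass
    return "allow"

# A new/updated signature sits in staging first, so a match only alarms:
print(evaluate([Signature(200103281, staged=True)]))   # alarm
# Once taken out of staging, the same match blocks:
print(evaluate([Signature(200103281, staged=False)]))  # block
```

This is why staging is safe to leave on in production: the request is never dropped on a staged match, yet the event log still tells you the signature would have fired.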


  • F5 Distributed Cloud Console
  • Security Dashboard


Here is the step-by-step process of configuring the WAF Staging Signatures and validating them with new and updated signature attacks.

  • Log in to the F5 Distributed Cloud Console, navigate to "Web App & API Protection" -> App Firewall, and click `Add App Firewall`.
  • Name the App Firewall Policy and configure it with given values.
  • Navigate to "Web App & API Protection" -> Load Balancers -> HTTP Load Balancers and click `Add HTTP Load Balancers`.
  • Name the Load Balancer and Configure it with given values and associate the origin pool.
  • Origin pool ``petstore-op`` configuration.
  • Associate the initially created APP firewall ``waf-sig-staging`` under LB WAF configuration section.

  • ``Save and Exit`` the configuration and verify that the load balancer has been created successfully.


    To verify the staged attacks, you need attacks listed in the attack signature DB. This demo uses the newly added attack signature ID (200104860) and the updated attack signature ID (200103281) below.


    Now, let's access the LB domain with the updated attack signature ID, 200103281, and verify that the LB dashboard has detected the staged attack signature and reflects its details.


    F5 XC Dashboard Event Log:


    Now access the LB domain with the new attack signature by adding the cookie to the request header.


    F5 XC Dashboard Event Log:


    Now, disable staging in the WAF policy ``waf-sig-staging``.


    Let's access the LB domain with the new attack signature again.


    F5 XC Dashboard Event Log:



    As the demo shows, the F5 XC WAF staging feature enhances the testing scope for newly added and updated attack signatures.


    F5 Distributed Cloud WAF

    Attack Signatures



by Shajiya_Shaik | F5 Employee
Posted in TechnicalArticles Oct 16, 2023 5:00:00 AM

Harnessing F5 Distributed Cloud Customer Edge on the HPE GreenLake Platform

In today's fast-paced digital landscape, businesses are constantly seeking ways to enhance their IT infrastructure's performance, scalability, and security while optimizing costs. One solution to meet these demands is the integration of F5 Distributed Cloud (XC) Customer Edge (CE) within the HPE GreenLake platform. This strategic collaboration brings forth a combination of application delivery, security, and flexible consumption models that help organizations in their hybrid and multi-cloud environments.




What is F5 Distributed Cloud?

F5 Distributed Cloud Services are SaaS-based security, networking, and application management services that enable customers to deploy, secure, and operate their applications in a cloud-native environment wherever needed–data center, multi-cloud, or the network or enterprise edge.

What is HPE GreenLake?

HPE GreenLake provides companies with an easy way to use cloud computing services. It lets businesses pay only for the IT infrastructure they need and use. With HPE GreenLake, companies don't have to purchase and manage their own IT hardware and software. Instead, HPE sets up the cloud services and handles maintaining and upgrading the infrastructure. This flexible approach makes it simpler and more affordable for enterprises to leverage the power of the cloud. It also gives companies access to the latest technology from HPE without large upfront investments. 

The Power of F5 XC Customer Edge

F5 XC CE is an application delivery and security software from F5 that can improve company IT systems in several ways. When businesses use F5 XC CE with HPE GreenLake cloud services, they get a powerful combined solution.

The F5 software helps ensure applications run fast and reliably by optimizing how they are delivered to users. It also strengthens application security against threats.

By implementing F5 XC CE through HPE GreenLake's flexible cloud platform, companies can deploy and manage these benefits faster and more easily. They don't need to purchase and maintain the infrastructure on their own.

Together, F5 XC CE and HPE GreenLake provide companies with an efficient way to boost application performance, enhance security, simplify IT operations, and reduce costs. The integrated solution transforms IT infrastructure into a strategic advantage that aligns with business goals.

Optimal Application Delivery

F5 XC CE provides traffic management and optimization methods to keep applications running fast and smoothly. The software balances user requests across available servers to avoid overloading any one server. It also optimizes how content is delivered based on application and network conditions.

These features maximize application performance and maintain consistent speeds for users even when traffic spikes occur. If demand increases, companies can rapidly scale up their infrastructure through HPE GreenLake's flexible cloud platform. The service allows expanding IT resources on demand to support more users and heavier workloads.

By working together, F5 XC CE's application optimization and HPE GreenLake's scalable cloud infrastructure ensure applications stay speedy and reliable at all usage levels. Companies don't have to sacrifice performance as their needs grow.

Robust Security Posture

F5 XC CE provides powerful application security capabilities that protect companies from cyber threats.  It includes features like:

  • Web Application Firewall to safeguard against attacks aimed at websites and web apps
  • DDoS protection to block malicious traffic floods
  • SSL/TLS encryption to secure sensitive data in transit

By using F5 XC CE with HPE GreenLake, businesses get robust, layered security for their applications and data.

HPE GreenLake adds extra defenses at the cloud infrastructure level. Together, the solutions create an end-to-end security envelope to safeguard critical systems and information.

Companies can deploy F5 XC CE's security easily and cost-effectively through HPE GreenLake's cloud platform. The service handles the deployment and infrastructure management.

With cyberattacks growing, applications need strong security. F5 XC CE and HPE GreenLake together provide a flexible, comprehensive security environment. Companies can protect their apps, data, and users across cloud, on-premises, and hybrid environments.

Cost Efficiency and Flexibility

Together, the solutions let organizations add or reduce cloud services and F5 capabilities on demand. Companies can scale up seamlessly during busy periods and scale down during slower times.

This flexibility optimizes costs. Businesses don't pay for more than they need. However, they can expand resources instantly to maintain performance and security when workloads increase.

Simplified Management

F5 XC CE and HPE GreenLake make managing IT infrastructure easier.

The solutions provide:

  • Centralized control to manage resources from one platform
  • Automation to reduce manual, repetitive management tasks
  • Streamlined provisioning and scaling of resources
  • Simplified monitoring

With these capabilities, the joint solution minimizes the workload for IT teams. It allows them to spend less time on routine IT management. Instead, they can focus on delivering more business value through strategic initiatives and innovation.

Hybrid Cloud Enablement

F5 XC CE and HPE GreenLake are designed with hybrid and multi-cloud environments in mind. This compatibility ensures seamless integration between on-premises and cloud-based resources, allowing organizations to embrace cloud-native strategies while preserving investments in existing infrastructure.


Together, F5 XC CE and HPE GreenLake provide a strong IT infrastructure solution.

The key benefits:

- Optimized application performance
- Enhanced security
- Flexible, pay-as-you-go model
- Simplified infrastructure management

This partnership empowers businesses to:

- Meet changing needs
- Protect critical data
- Stay competitive

By combining F5's application expertise with HPE GreenLake's cloud platform, companies can confidently navigate technology challenges.

Explore the demonstration below, which walks through deploying F5 Distributed Cloud Customer Edge within HPE GreenLake Central.


Tags: F5 XC
by Sanjay_Shitole | F5 Employee
Posted in TechnicalArticles Oct 13, 2023 5:00:00 AM

Is there a list of the F5 XC API protection OpenAPI/Swagger custom supported extensions?

Hello to Everyone,



After investigating the OpenAPI/Swagger options for AWAF/ASM (see my open questions under "F5 AWAF/ASM support for wildcard url and parameter..." on DevCentral), I now have the same questions about the Swagger/OpenAPI custom options XC supports. Is there a list?


I see that XC supports regex for wildcards in the path, as shown below, but what about parameters with wildcard names? Could there be wildcard support for methods (for example, a custom word like "any"), so that each method does not have to be specified under an HTTP path? Beyond that, it would be nice if a parameter could be specified as appearing in any location, not just the query or request body.


Any help will be appreciated.





"servers": [
  { "url": "/" }
],
"paths": {
  "/niki.*": {
    "parameters": [
      {
        "in": "query",
        "name": "userId",
        "schema": { "type": "integer" },
        "required": true,
        "description": "Numeric ID of the user to get"
      }
    ]
  }
}

by Nikoolayy1 | MVP
Posted in TechnicalForum Oct 6, 2023 5:37:58 AM

SIEM news! F5 Distributed Cloud’s remote logging adds IBM’s QRadar

Along with the likes of Splunk and Datadog, we can add another SIEM vendor to the Distributed Cloud (XC) external logging lineup. QRadar now has its own native integration drop-down in the Global Log Receiver menu.



We know Distributed Cloud's innate security and performance dashboards are rich with data. Even so, many customers prefer to ingest the security events generated by Distributed Cloud into their existing SIEM environment. In support of this, a custom F5 XC-specific content pack was created to ease adoption within QRadar itself. The content pack is a zip file containing what IBM calls a DSM (Device Support Module), which collects, maps, and parses the security events in JSON format. The F5 XC content pack covers both security and access logs.
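To illustrate what a DSM-style parser does, here is a minimal Python sketch that normalizes a JSON-formatted security event into named fields. The field names in the sample event are hypothetical, not the actual F5 XC log schema; consult the content pack for the real mapping:

```python
# Sketch of DSM-style parsing: extract normalized fields from a JSON
# security event. The event schema below is hypothetical.
import json

def parse_event(raw: str) -> dict:
    """Map a raw JSON security event onto normalized SIEM fields."""
    event = json.loads(raw)
    return {
        "source_ip": event.get("src_ip", "unknown"),
        "action": event.get("action", "unknown"),
        "signature_ids": event.get("signatures", []),
    }

sample = '{"src_ip": "203.0.113.7", "action": "block", "signatures": [200103281]}'
print(parse_event(sample))
```

The real DSM also handles field typing and event categorization inside QRadar, but the core job is this kind of JSON-to-schema mapping.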

The content pack is discoverable on IBM’s X-Force App Exchange under F5 Distributed Cloud. 

QRadar can collect events forwarded via HTTP or HTTPS. For a deeper technical walkthrough, please see the video I've created.


by KristyM_F5 | F5 Employee
Posted in TechnicalArticles Oct 5, 2023 5:00:00 AM

Bolt-on Auth with NGINX Plus and F5 Distributed Cloud

Inarguably, we are well into the age wherein the user interface for a typical web application has shifted from server-generated markup to APIs as the preferred point of interaction. As developers, we are presented with a veritable cornucopia of tools, frameworks, and standards to aid us in the development of these APIs and the services behind them.

What about securing these APIs? Now more than ever, attackers have focused their efforts on abusing APIs to exfiltrate data or compromise systems at an increasingly alarming rate. In fact, a large portion of the 2023 OWASP Top 10 API Security Risks list items are caused by a lack of (or insufficient) authentication and authorization. How can we provide protection for existing APIs to prevent unauthorized access? What if my APIs have already been developed without considering access control? What are my options now?

Enter the use of a proxy to provide security services. Solutions such as F5 NGINX Plus can easily be configured to provide authorization and auditing for your APIs - irrespective of where they are deployed. For instance, you can enable OpenID Connect (OIDC) on NGINX Plus to provide authentication and authorization for your applications (including APIs) without having to change a single line of code.
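To give a flavor of what an OIDC relying party does once a token arrives, here is a minimal Python sketch of the claim checks (issuer, audience, expiry) that follow signature verification. Signature verification is deliberately omitted here and must never be skipped in practice; the issuer and audience values are hypothetical:

```python
# Sketch of OIDC relying-party claim checks on a JWT whose signature
# has already been verified (verification omitted for brevity).
import base64, json, time

def decode_payload(jwt: str) -> dict:
    """Decode the payload segment of a (signature-verified) JWT."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def claims_ok(claims: dict, issuer: str, audience: str, now=None) -> bool:
    """Check the standard iss/aud/exp claims against expected values."""
    now = time.time() if now is None else now
    return (claims.get("iss") == issuer
            and claims.get("aud") == audience
            and claims.get("exp", 0) > now)

# Build a toy token with hypothetical issuer/audience for demonstration.
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').decode().rstrip("=")
body = base64.urlsafe_b64encode(json.dumps(
    {"iss": "https://idp.example.com", "aud": "sentence-app", "exp": 4102444800}
).encode()).decode().rstrip("=")
token = f"{header}.{body}.sig"
print(claims_ok(decode_payload(token), "https://idp.example.com", "sentence-app"))
```

When NGINX Plus acts as the relying party, this validation happens in the proxy layer, which is precisely why the backend API needs no code changes.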

In this article, we will present an existing application with an API deployed in an F5 Distributed Cloud cluster. This application lacks authentication and authorization features. The app we will be using is the Sentence demo app, deployed into a Kubernetes cluster on Distributed Cloud. The Kubernetes cluster we will be using in this walkthrough is a Distributed Cloud Virtual Kubernetes (vk8s) instance deployed to host application services in more than one Regional Edge site. Why? An immediate benefit is that as a developer, I don’t have to be concerned with managing my own Kubernetes cluster. We will use automation to declaratively configure a virtual Kubernetes cluster and deploy our application to it in a matter of seconds!

Once the Sentence demo app is up and running, we will deploy NGINX Plus into another vk8s cluster for the purpose of providing authorization services. What about authentication? We will walk through configuring Microsoft Entra ID (formerly Azure Active Directory) as the identity provider for our application, and then configure NGINX Plus to act as an OIDC Relying Party to provide security services for the deployed API.

Finally, we will make use of Distributed Cloud HTTP load balancers. We will provision one publicly available load balancer that will securely route traffic to the NGINX Plus authorization server. We will then provision an additional Load Balancer to provide application routing services to the Sentence app. This second load balancer differs from the first in that it is only “advertised” (and therefore only reachable) from services inside the namespace. This results in a configuration that makes it impossible for users to bypass the NGINX authorization server in an attempt to directly consume the Sentence app.

The following is a diagram representing what will be deployed:
Solution deployment diagram
Let’s get to it!

Deployment Steps

The detailed steps to deploy this solution are located in a GitHub repository accompanying this article. Follow the steps here, and be sure to come back to this article for the wrap-up!


You did it! With the power and reach of Distributed Cloud combined with the security that NGINX Plus provides, we have been able to easily provide authorization for our example API-based application.

Where could we go from here? Do you remember we deployed these applications to two specific geographical sites? You could very easily extend the reach of this solution to more regions (distributed globally) to provide reliability and low-latency experiences for the end users of this application. Additionally, you can easily attach Distributed Cloud’s award-winning DDoS mitigation, WAF, and Bot mitigation to further protect your applications from attacks and fraudulent activity.

Thanks for taking this journey with me, and I welcome your comments below.


This article wouldn’t have been the same without the efforts of @Fouad_Chmainy, @Matt_Dierick, and Alexis Da Costa. They are the original authors of the distributed design, the Sentence app, and the NGINX Plus OIDC image optimized for Distributed Cloud. Additionally, special thanks to @Cody_Green and @Kevin_Reynolds for inspiration and assistance in the Terraform portion of the solution. Thanks, guys!

by Daniel_Edgar | F5 Employee
Posted in TechnicalArticles Sep 28, 2023 5:00:00 AM

Adaptive Apps: Replicate & deploy WAF application security policies across F5's security portfolio


Adaptive applications utilize an architectural approach that facilitates rapid and often fully-automated responses to changing conditions—for example, new cyberattacks, updates to security posture, application performance degradations, or conditions across one or more infrastructure environments.

Unlike the current state of many apps today that are labor-intensive to secure, deploy, and manage, adaptive apps are enabled by the collection and analysis of live application performance and security telemetry, service management policies, advanced analytic techniques such as machine learning, and automation toolchains.

This example seeks to demonstrate value in two key components of F5's Adaptive Apps vision: helping our customers more rapidly detect and neutralize application security threats and helping to speed deployments of new applications.

In today's interconnected digital landscape, the ability to share application security policies seamlessly across data centers, public clouds, and Software-as-a-Service (SaaS) environments is of paramount importance. As organizations increasingly rely on a hybrid IT infrastructure, where applications and data are distributed across various cloud providers and security platforms, maintaining consistent and robust security measures becomes a challenging task.

Using a consistent & centralized security policy architecture provides the following key benefits:

  • Reduced Infrastructure Complexity: Modern businesses often employ a combination of on-premises data centers, public cloud services, and SaaS applications. Managing separate security policies for each platform introduces complexity; a centralized architecture removes that burden, making it easier to ensure consistent protection and adherence to security standards.
  • Consistent Protection: A unified security policy approach guarantees consistent protection for applications and data, regardless of their location. This reduces the risk of security loopholes and ensures a standardized level of security across the entire infrastructure.

  • Improved Threat Response Efficiency: By sharing application security policies, organizations can respond more efficiently to emerging threats. A centralized approach allows for quicker updates and patches to be applied universally, strengthening the defense against new vulnerabilities.

  • Regulatory Compliance: Many industries have strict compliance requirements for data protection. Sharing security policies helps organizations meet these regulatory demands across all environments, avoiding compliance issues and potential penalties.

  • Streamlined Management: Centralizing security policies simplifies the management process. IT teams can focus on maintaining a single set of policies, reducing complexity, and ensuring a more effective and consistent security posture.

  • Cost-Effective Solutions: Investing in separate security solutions for each platform can be expensive. Sharing policies allows businesses to optimize security expenditure and resource allocation, achieving cost-effectiveness without compromising on protection.

  • Enhanced Collaboration: A shared security policy fosters collaboration among teams working with different environments. This creates a unified security culture, promoting information sharing and best practices for overall improvement.

  • Improved Business Agility: A unified security policy approach facilitates smoother transitions between different platforms and environments, supporting the organization's growth and scalability.

By having a consistent security policy framework, businesses can ensure that critical security policies, access controls, and threat prevention strategies are applied uniformly across all their resources. This approach not only streamlines the security management process but also helps fortify the overall defense against cyber threats, safeguard sensitive data, and maintain compliance with industry regulations. Ultimately, the need for sharing application security policies across diverse environments is fundamental in building a resilient and secure digital ecosystem.

In the spirit of enabling a unified security policy framework, this example shows the following two key use cases:

  1. Replicating and deploying an F5 BIG-IP Advanced WAF (AWAF) security policy to F5 Distributed Cloud WAAP (F5 XC WAAP)
  2. Replicating and deploying an F5 NGINX App Protect (NAP) security policy to F5 XC WAAP

Specifically, we show how to use F5's Policy Supervisor and Policy Supervisor Conversion Utility to import, convert, replicate, and deploy WAF policies across the F5 security proxy portfolio. Here we will show how the Policy Supervisor tool provides flexibility in offering both automated and manual ways to replicate and deploy your WAF policies across the F5 portfolio. Regardless of the use case, the steps are the same, enabling a consistent and simple methodology.

We'll show the following 2 use cases:

1. Manual BIG-IP AWAF to F5 XC WAAP policy replication & deployment:

  • Private BIG-IP AWAF deployment with security policy blocking specific attacks
  • Manual conversion of this BIG-IP AWAF policy to F5 XC WAAP policy using the Policy Supervisor Conversion Utility
  • F5 XC WAAP environment without application security policy
  • Manual deployment of converted BIG-IP AWAF security policy into F5 XC WAAP environment showing enablement of equivalent attack blocking

2. Automated NGINX NAP to F5 XC WAAP policy replication & deployment:

  • Private NGINX NAP deployment with security policy blocking specific attacks
  • Automated conversion of this NGINX NAP policy to F5 XC WAAP policy using the Policy Supervisor tool
  • F5 XC WAAP environment without application security policy
  • Automated deployment of converted NGINX NAP security policy into F5 XC WAAP environment showing enablement of equivalent attack blocking
Note that additional resources are available from F5's Technical Marketing team to help you better understand the capabilities of the F5 Policy Supervisor.

Use Case

Simple, easy way to replicate & deploy WAF application security policies across F5's BIG-IP AWAF, NGINX NAP, and F5 XC WAAP security portfolio.

While the Policy Supervisor supports all of the possible security policy replication & migration paths shown on the left below, this example is focused on demonstrating the two specific paths shown on the right below.


Solution Architecture


Problem Statement

Customers find it challenging, complex, and time-consuming to replicate and deploy application security policies across WAF deployments that span the F5 portfolio (including BIG-IP, NAP, and F5 XC WAAP) in on-prem, cloud, and edge environments.

Customer Outcome

By enforcing consistent WAAP security policies across multiple clouds and SaaS environments, organizations can establish a robust and standardized security posture, ensuring comprehensive protection, simplified management, and adherence to compliance requirements.

The Guide

Please refer to https://github.com/f5devcentral/adaptiveapps for detailed instructions and artifacts for deploying this example use case.

Demo Video

Watch the demo video: 

by Kevin_Delgadillo | F5 Employee
Posted in TechnicalArticles Sep 19, 2023 5:00:00 AM

Minimizing Security Complexity: Managing Distributed WAF Policies


In today's digital landscape, where cyber threats constantly evolve, safeguarding an enterprise's web applications is of paramount importance.  However, for security engineers tasked with protecting a large enterprise equipped with a substantial deployment of web application firewalls (WAFs), the task of managing distributed security policies across the entire application landscape presents a significant challenge.  Ensuring consistency and coherence, in both the effectiveness and deployment of these policies is essential, yet it's far from straightforward.  In this article and demo, we'll explore a few best practices and tools available to help organizations maintain robust security postures across their entire WAF infrastructure, and how embracing modern approaches like DevSecOps and the F5 Policy Supervisor and Conversion tools can help overcome these challenges.

Security Policy as Code:

Storing your WAF policies as code within a secure repository is a DevSecOps best practice that extends beyond consistency and tracking.  It's also the first step in making security an integral part of the development process, fostering a culture of security throughout the entire software development and delivery lifecycle.  This shift-left approach ensures that security concerns are addressed early in the development process, reducing the risk of vulnerabilities and enhancing collaboration between security, development, and operations teams.  It enables automation, version control, and rapid response to evolving threats, ultimately resulting in the delivery of secure applications with speed and quality.  

To help facilitate this, the entire F5 security product portfolio supports the ingestion of WAF policy in JSON format.  This enables you to store your policies as code in a Git repository and seamlessly reference them during your automation-driven deployments, guaranteeing that every WAF deployment is well-prepared to safeguard your critical applications. 

"wafPolicy": {
    "class": "WAF_Policy",
    "url": "https://raw.githubusercontent.com/knowbase/architectural-octopod/main/awaf/owasp-auto-tune.json",
    "enforcementMode": "blocking",
    "ignoreChanges": true
}
F5 Policy Supervisor:

Considering the sheer number of WAFs in large enterprises, managing distributed policies can easily overwhelm security teams.  Coordinating updates, rule changes, and incident response across the entire application security landscape requires efficient policy lifecycle management tools.  Using a centralized management system that provides visibility into the security posture of all WAFs and the state of deployed policies can help streamline these operations.  The F5 Policy Supervisor was designed to meet this critical need.

The Policy Supervisor allows you to easily create, convert, maintain, and deploy WAF policies across all F5 Application Security platforms.  With both an easily navigated UI and a robust API, the Policy Supervisor tool greatly enhances your ability to manage security policies at scale.


In the context of the Policy Supervisor, providers are remote instances that provide WAF services, such as NGINX App Protect (NAP), BIG-IP Advanced WAF (AWAF), or F5 Distributed Cloud Web App and API Security (XC WAAP).  The "Providers" section serves as the command center where we onboard all of our WAF instances and gain insight into their status and deployments.  For BIG-IP and NGINX we employ agents to perform the onboarding; an agent is a lightweight container that stores secrets in a vault and connects the instances to the SaaS layer.  For XC we use an API token, which can easily be generated by navigating to Account > Account Settings > Personal Management > Credentials > Add Credentials in the XC console.  Detailed instructions for adding both types of providers are readily accessible during the "Add Provider" workflow.
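Once generated, an XC API token is supplied in an `Authorization: APIToken ...` header on API calls. A minimal sketch — the tenant name and token value are placeholders, and the curl line is left commented so nothing is actually sent:

```shell
# Placeholders: substitute your own tenant and token values.
XC_TENANT="your-tenant"
XC_TOKEN="REDACTED"
AUTH_HEADER="Authorization: APIToken ${XC_TOKEN}"
echo "$AUTH_HEADER"
# Example call listing namespaces (illustrative, commented out):
# curl -s -H "$AUTH_HEADER" \
#   "https://${XC_TENANT}.console.ves.volterra.io/api/web/namespaces"
```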


After successfully onboarding our providers, we can ingest the currently deployed policies and begin managing them on the platform.


The "Policies" section serves as the central hub for overseeing the complete lifecycle of policies onboarded onto the platform.  Within this section, we gain access to policy insights, including their current status and the timestamp of their last modification.  Selecting a specific policy opens up the "Policy Details" panel, offering a comprehensive suite of options.  Here, you can edit, convert, deploy, export, or remove the policy, while also accessing essential information regarding policy-related actions and reports detailing those actions.


The tool additionally features an editor equipped with real-time syntax validation and auto-completion, allowing you to create new or edit existing policies on the fly.


Policy Deployment:

Navigating the policy deployment process within the Policy Supervisor is a seamless and user-friendly experience.  To initiate the process, select "Deploy" from the "Policy Details" panel, then select the source and one or more targets.  The platform first begins the conversion process to ensure the policy aligns with the features supported by the targets.  Following this conversion, you'll receive a detailed report on what was and was not converted.  Once you've reviewed the conversion results and are satisfied with the outcome, select the endpoints to apply the policy to, and click deploy.  That's it, it's that easy.



F5 Policy Conversion Utility:

The F5 Policy Conversion tool allows you to transform JSON or XML formatted policies from NGINX or BIG-IP into a format compatible with your desired target - any application security product in the F5 portfolio.  This user-friendly tool requires no authentication, offering hassle-free access at https://policysupervisor.io/convert.

The interface has an intuitive design, simplifying the process: select your source and target types, upload your JSON or XML formatted policy, and with a simple click, initiate the conversion.  Upon completion, the tool provides a comprehensive package that includes a detailed report on the conversion process and your newly adapted policies, ready for deployment onto your chosen target.


Whether you are augmenting an F5 BIG-IP Advanced WAF fleet with F5 XC WAAP at the edge, decomposing a monolithic application and protecting the new microservice with NGINX App Protect, or building out a multi-cloud security strategy, the Policy Conversion utility can help ensure you are providing consistent and robust protection across each platform.


Managing security policies across a large WAF footprint is a complex undertaking that requires constant vigilance, adaptability, and coordination. Security engineers must strike a delicate balance between safeguarding applications and ensuring their uninterrupted functionality while also staying ahead of evolving threats and maintaining a consistent security posture across the organization.  By harnessing the F5 Policy Supervisor and Conversion tools, coupled with DevSecOps principles, organizations can easily deploy and maintain consistent WAF policies throughout the organization's entire application security footprint.



F5 Hybrid Security Architectures:

F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF) 
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)

For further information or to get started:


by Cameron_Delano | F5 Employee
Posted in TechnicalArticles Sep 17, 2023 9:18:59 PM

APIs Everywhere

Size of the problem

In a recent conversation, a customer mentioned they figured they had something on the order of 6000 API endpoints in their environment.  This struck me as odd, as I am pretty sure they have 1000+ HTTP-based applications running on their current platform.  If the 6000 API number is correct, each application has only six endpoints.  In reality, most apps will have dozens or hundreds of endpoints... that means there are probably tens of thousands of API endpoints in their environment!

And you thought it was a pain to manage a WAF when you had a thousand apps!

But the good news is that you're not using all of them.  The further good news is that you can REDUCE your security exposure. 

When was the last time someone took things OFF your To-do list?

Tell me how!

The answer is to profile your application landscape.  Much like the industry did in the early 20-teens with Web Application Security, understanding your attack surface is the key to defining a plan to defend it.  This is what we call API Discovery.

By allowing your traffic to be profiled and APIs uncovered, you can begin to understand the scale and scope of your security journey.

You can do this by putting your client traffic through an engine that offers this, like F5's Distributed Cloud (or F5 XC).  With F5 XC, you can build a list of the URIs and their metadata and generate a threat assessment and data profile of the traffic it sees.

Interactive view of API calls

This is a fantastic resource if you can push your traffic through an XC Load Balancer, but that isn't always possible.

Out of Band API Analysis

What are your options when you want to do this "Out of Band"?  Out of Band (or OOB) presents challenges, but luckily, F5 has answers.

If we can gather the traffic and make it available to the XC API Discovery process, generating the above graphic for your traffic is easy.

Replaying the Traffic

Replaying, or more accurately, "mimicking" the traffic can be done using a log process on the main proxy - BIG-IP or nginx are good examples, but any would work - and then sending that logged traffic to a process that will generate a request and response that traverses an XC Load Balancer.

API Discovery traffic flow

This diagram shows using an iRule to gather the request and response data, which is then sent to a custom logging service.  This service uses the data to recreate the request (and response) and sends that through the XC Load Balancer.

Both the iRule and the Logger service are available as open-source code here.
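As a rough sketch of what the replay step amounts to, imagine one logged entry being turned back into a request aimed at the XC load balancer. The log format, host names, and URI below are hypothetical (the open-source logger has its own format), and the curl line is commented out so nothing is sent:

```shell
# Hypothetical one-line log entry: "<method> <uri> Host:<original host>"
LOG_ENTRY='GET /api/v1/orders Host:app.internal.example.com'
METHOD=$(echo "$LOG_ENTRY" | cut -d' ' -f1)
URI=$(echo "$LOG_ENTRY" | cut -d' ' -f2)
HOST=$(echo "$LOG_ENTRY" | cut -d' ' -f3 | cut -d':' -f2)
echo "replaying: $METHOD https://xc-lb.example.com$URI (Host: $HOST)"
# curl -s -X "$METHOD" -H "Host: $HOST" "https://xc-lb.example.com$URI"
```

The real service also recreates the response side, so the API Discovery engine sees a complete transaction rather than just a request.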

If you're interested in deploying this, F5 is here to help, but if you would like to deploy it on your own, here is a suggested architecture:


Deploying the logger as a container on F5 Distributed Cloud's AppStack on a Customer Edge instance allows the traffic to remain within your network enclave.  The metadata is pushed to the XC control plane, where it is analyzed, and the API characteristics are recorded.

What do you get?

The analysis provided in the dashboard is invaluable for determining your threat levels and attack surfaces and helping you build a mitigation plan.

From the main dashboard shown here, the operator can see if any sensitive data was exposed (and what type it might be), the threat level assessment and the authorization method.  Each can help determine a course of action to protect from data leakage or future breach attempts.


Drilling into these items, the operator is presented with details on the performance of the API (shown below).

Endpoint details

To promote sharing of information, all of the data gathered is exportable in Swagger/OpenAPI format:

Swagger export


Where to from here?

We will publish more on this in the coming weeks, so stay tuned.


by Scheff | F5 Employee
Posted in TechnicalArticles Sep 14, 2023 5:00:00 AM

Securing Applications using mTLS Supported by F5 Distributed Cloud


Mutual Transport Layer Security (mTLS) establishes an encrypted and secure TLS connection in which both parties use X.509 digital certificates to authenticate each other. It helps prevent malicious third parties from imitating genuine applications, and it is especially useful when a server needs to ensure the authenticity and validity of a specific user or device. As SSL became outdated, companies such as Skype and Cloudflare adopted mTLS to secure business servers; using TLS or other encryption tools without mutually authenticated connections leaves you exposed to man-in-the-middle attacks. With mTLS, a server presents an identity that can be cryptographically verified, making your resources more secure and flexible.

mTLS with XFCC Header

Beyond supporting the mTLS process itself, F5 Distributed Cloud WAF can forward the client certificate attributes (subject, issuer, root CA, etc.) to the origin server via the x-forwarded-client-cert (XFCC) header. This provides an additional level of security when the origin server needs to authenticate clients across requests from many different clients. The XFCC header can contain the following attributes, and is supported across multiple load balancer types, including HTTPS with Automatic Certificate and HTTPS with Custom Certificate.

  • Cert 
  • Chain 
  • Subject 
  • URI 
  • DNS
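To illustrate, the origin receives these elements semicolon-delimited in a single header value. The values below are made up, and the exact formatting follows the Envoy-style XFCC convention — an assumption worth verifying against your own traffic:

```shell
# Made-up values; real hashes and subjects come from the client certificate.
XFCC='Hash=9ba61d6425303443;Subject="CN=client.test-domain1.local";DNS=client.test-domain1.local'
# Split the header into its individual elements for readability
echo "x-forwarded-client-cert: $XFCC" | tr ';' '\n'
```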


How to Configure mTLS

In this demo we are using httpbin as the origin server, fronted by an F5 XC Load Balancer. Here is the procedure to deploy the httpbin application, create the custom certificates, and configure mTLS step by step with different LB (Load Balancer) types using F5 XC.

  • Deploying HttpBin Application 

    Here is the link to deploy the application using docker commands. 
  • Signing the server/leaf cert with a locally created Root CA

    Commands to generate the CA key and certificate:
        openssl genrsa -out root-key.pem 4096
        openssl req -new -x509 -days 3650 -key root-key.pem -out root-crt.pem
    Commands to generate the server certificate:
        openssl genrsa -out cert-key2.pem 4096
        openssl req -new -sha256 -subj "/CN=test-domain1.local" -key cert-key2.pem -out cert2.csr
        echo "subjectAltName=DNS:test-domain1.local" >> extfile.cnf
        openssl x509 -req -sha256 -days 501 -in cert2.csr -CA root-crt.pem -CAkey root-key.pem -out cert2.pem -extfile extfile.cnf -CAcreateserial
    Add the TLS certificate to the XC console, create an LB (HTTP/TCP), and attach origin pools and TLS certificates to it.
    In Ubuntu:
    Copy the CA certificate created above (root-crt.pem) to /usr/local/share/ca-certificates/ and modify the "/etc/hosts" file to map the VIP (available from your configured LB -> DNS info -> IP Addr) to the domain, in this case test-domain1.local.
  • mTLS with HTTPS Custom Certificate

    Log in to the F5 Distributed Cloud Console and navigate to the "Web APP & API Protection" module.
    Go to Load Balancers and click 'Add HTTP Load Balancer'.
    Give the LB Name (test-mtls-cust-cert), Domain name (mtlscusttest.f5-hyd-demo.com), and LB Type as HTTPS with Custom Certificate. Select the TLS configuration as Single Certificate and configure the certificate details.
    Click 'Add Item' under TLS Certificates and upload the cert and key files by clicking on import from files.
    Click on Apply, enable mutual TLS, import the root cert info, and add the XFCC header value.
    Configure the origin pool by clicking 'Add Item' under Origins. Select the created origin pool for httpbin.
    Click on 'Apply' and then save the LB configuration with 'Save and Exit'.
    We have created the Load Balancer with mTLS parameters. Let us verify the same with the origin server.

  • mTLS with HTTPS Automatic Certificate

    Log in to the F5 Distributed Cloud Console and navigate to the "Web APP & API Protection" module.

    Go to Load Balancers and click 'Add HTTP Load Balancer'.

    Give the LB Name (mtls-auto-cert), Domain name (mtlstest.f5-hyd-demo.com), and LB Type as HTTPS with Automatic Certificate. Enable mutual TLS and add the root certificate. Also, enable the x-forwarded-client-cert header to add the parameters.

    Configure the origin pool by clicking 'Add Item' under Origins. Select the created origin pool for httpbin.

    Click on 'Apply' and then save the LB configuration with 'Save and Exit'.
    We have created the HTTPS Auto Cert Load Balancer with mTLS parameters. Let us verify the same with the origin server.
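Before uploading, the chain built in the certificate steps above can be sanity-checked offline. This sketch repeats the generation non-interactively (adding -subj to skip the CA prompts; file names match the article) and then verifies the leaf against the root:

```shell
# Re-create the CA and leaf non-interactively, then verify the chain.
openssl genrsa -out root-key.pem 4096 2>/dev/null
openssl req -new -x509 -days 3650 -subj "/CN=demo-root-ca" \
    -key root-key.pem -out root-crt.pem
openssl genrsa -out cert-key2.pem 4096 2>/dev/null
openssl req -new -sha256 -subj "/CN=test-domain1.local" \
    -key cert-key2.pem -out cert2.csr
echo "subjectAltName=DNS:test-domain1.local" > extfile.cnf
openssl x509 -req -sha256 -days 501 -in cert2.csr -CA root-crt.pem \
    -CAkey root-key.pem -out cert2.pem -extfile extfile.cnf -CAcreateserial 2>/dev/null
openssl verify -CAfile root-crt.pem cert2.pem
```

Once the LB is live, presenting a client certificate signed by the same root with `curl --cert`/`--key` against test-domain1.local should surface the XFCC header at the origin (httpbin's /headers endpoint is a convenient place to see it).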


As you can see from the demonstration, F5 Distributed Cloud WAF provides additional security to the origin servers by forwarding the client certificate info using the mTLS XFCC header.

Reference Links

by Shajiya_Shaik | F5 Employee
Posted in TechnicalArticles Sep 13, 2023 5:00:00 AM

F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)


For those of you following along with the F5 Hybrid Security Architectures series, welcome back!  If this is your first foray into the series and you would like some background, have a look at the intro article.  This series uses the F5 Hybrid Security Architectures GitHub repo and CI/CD platform to deploy F5 based hybrid security solutions based on DevSecOps principles.  This repo is a community supported effort to provide not only a demo and workshop, but also a stepping stone for utilizing these practices in your own F5 deployments.  If you find any bugs or have any enhancement requests, open an issue or, better yet, contribute!

Use Case:

Here in this example solution, we will be using DevSecOps practices to deploy an AWS Elastic Kubernetes Service (EKS) cluster running the Brewz test web application serviced by F5 NGINX Ingress Controller.  To secure our application and APIs, we will deploy F5 Distributed Cloud's Web App and API Protection service as well as F5 BIG-IP Access Policy Manager and Advanced WAF.  We will then use F5 Container Ingress Services and IngressLink to tie it all together.

Distributed Cloud WAAP: Available for SaaS-based deployments and provides comprehensive security solutions designed to safeguard web applications and APIs from a wide range of cyber threats. 

BIG-IP Access Policy Manager(APM) and Advanced WAF:  Available for on-premises / data center and public or private cloud (virtual edition) deployment, for robust, high-performance web application and API security with granular, self-managed controls.

BIG-IP Container Ingress Services: A container integration solution that helps developers and system teams manage Ingress HTTP routing, load-balancing, and application services in container deployments.  

F5 IngressLink: Combines BIG-IP, Container Ingress Services (CIS), and NGINX Ingress Controller to deliver unified app services for fast-changing, modern applications in Kubernetes environments.

NGINX Ingress Controller for Kubernetes: A lightweight software solution that helps manage app connectivity at the edge of a Kubernetes cluster by directing requests to the appropriate services and pods.


XC WAAP + BIG-IP Access Policy Manager + F5 Container Ingress Services + NGINX Ingress Controller Workflow

GitHub Repo: 

F5 Hybrid Security Architectures



  • xc: F5 Distributed Cloud WAAP
  • nic: NGINX Ingress Controller
  • bigip-base: F5 BIG-IP Base deployment
  • bigip-cis: F5 Container Ingress Services
  • infra: AWS Infrastructure (VPC, IGW, etc.)
  • eks: AWS Elastic Kubernetes Service
  • brewz: Brewz SPA test web application


  • Cloud Provider: AWS
  • Infrastructure as Code: Terraform
  • Infrastructure as Code State: Terraform Cloud
  • CI/CD: GitHub Actions

Terraform Cloud

Workspaces: Create a workspace for each asset in the workflow chosen

Workflow: xcbn-cis
Workspaces: infra, bigip-base, bigip-cis, eks, nic, brewz, xc

Your Terraform Cloud console should resemble the following:


Variable Set: Create a Variable Set with the following values.
IMPORTANT: Ensure sensitive values are appropriately marked.

  • AWS_ACCESS_KEY_ID: Your AWS Access Key ID - Environment Variable
  • AWS_SECRET_ACCESS_KEY: Your AWS Secret Access Key - Environment Variable
  • AWS_SESSION_TOKEN: Your AWS Session Token - Environment Variable
  • VOLT_API_P12_FILE: Your F5 XC API certificate. Set this to api.p12 - Environment Variable
  • VES_P12_PASSWORD: Set this to the password you supplied when creating your F5 XC API key - Environment Variable
  • nginx_jwt: Your NGINX JSON Web Token (JWT) associated with your NGINX license - Terraform Variable
  • tf_cloud_organization: Your Terraform Cloud Organization name - Terraform Variable

Your Variable Set should resemble the following:



Fork and Clone Repo: F5 Hybrid Security Architectures  


Actions Secrets:
Create the following GitHub Actions secrets in your forked repo

  • XC_P12: The base64 encoded F5 XC API certificate
  • TF_API_TOKEN: Your Terraform Cloud API token
  • TF_CLOUD_ORGANIZATION: Your Terraform Cloud Organization
  • TF_CLOUD_WORKSPACE_workspace: Create for each workspace used in your workflow. EX: TF_CLOUD_WORKSPACE_XC would be created with the value xc
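The XC_P12 value is the base64 encoding of the api.p12 file downloaded with your XC API credential. A sketch of producing it — the dummy file stands in for the real certificate, and the gh CLI line is illustrative and left commented out:

```shell
printf 'dummy-p12-bytes' > api.p12     # stand-in for the real api.p12
base64 < api.p12 | tr -d '\n' > xc_p12.b64   # single-line base64 value
cat xc_p12.b64
# gh secret set XC_P12 < xc_p12.b64    # or paste the value into the GitHub UI
```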

Your GitHub Actions Secrets should resemble the following:


Setup Deployment Branch and Terraform Local Variables:

Step 1: Check out a branch for the deploy workflow using the following naming convention

xcbn-cis deployment branch: deploy-xcbn-cis
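The branch step amounts to the following (shown against a throwaway repo so the commands run anywhere; in practice, run it from the root of your forked repo clone):

```shell
# Throwaway repo so the example is self-contained; use your fork in practice.
git init -q demo-repo && cd demo-repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git checkout -q -b deploy-xcbn-cis   # branch name drives the deploy workflow
git branch --show-current
# then: git push origin deploy-xcbn-cis
```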


Step 2: Upload the Brewz OAS file to XC
             * From the side menu under Manage, navigate to Files->Swagger Files and choose Add Swagger File


             * Upload Brewz OAS file from the repo f5-hybrid-security-architectures/brewz/brewz-oas.yaml


Step 3:
 Rename infra/terraform.tfvars.examples to infra/terraform.tfvars and add the following data


project_prefix = "Your project identifier"
resource_owner = "You"

aws_region = "Your AWS region" ex: us-west-1
azs = "Your AWS availability zones" ex: ["us-west-1a", "us-west-1b"] 

nic = true
nap = false
bigip = true
bigip-cis = true


Step 4: Rename xc/terraform.tfvars.examples to xc/terraform.tfvars and add the following data


#XC Global
api_url = "https://<Your Tenant>.console.ves.volterra.io/api"
xc_tenant = "Your XC Tenant ID"
xc_namespace = "Your XC namespace"

app_domain = "Your App Domain"

xc_waf_blocking = true

#XC AI/ML Settings for MUD, APIP - NOTE: Only set if using AI/ML settings from the shared namespace
xc_app_type = []
xc_multi_lb = false

#XC API Protection and Discovery
xc_api_disc = true
xc_api_pro = true
xc_api_spec = ["Path to uploaded API spec"] *See below screen shot for how to obtain this value.

#XC Bot Defense
xc_bot_def = false

xc_ddos = false

#XC Malicious User Detection
xc_mud = false


* For Path to API Spec navigate to Manage->Files->Swagger Files, click the three dots next to your OAS, and choose "Copy Latest Version's URL".  Paste this into the xc_api_spec in the xc/terraform.tfvars.


Step 5: Modify line 16 in the .gitignore and comment out the *.tfvars line with # and save the file


Step 6: Commit your changes


Step 1: Push your deploy branch to the forked repo


Step 2: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build



Step 3: Once the pipeline completes, verify your assets were deployed to AWS and F5 XC


Step 4: Check your Terraform Outputs for XC and verify your app is available by navigating to the FQDN


Step 5: Configure F5 APM and Advanced WAF following the guide here.

API Discovery:

The F5 XC WAAP platform learns the schema structure of the API by analyzing sampled request data, then reverse-engineering the schema to generate an OpenAPI spec.  The platform validates what is deployed versus what is discovered and tags any Shadow APIs that are found.  We can then download the learned schema and use it to augment our BIG-IP APM API protection configuration.


Deployment Teardown:

Step 1: From your deployment branch check out a branch for the destroy workflow using the following naming convention

xcbn-cis destroy branch: destroy-xcbn-cis


Step 2: Push your destroy branch to the forked repo


Step 3: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build



Step 4: Once the pipeline completes, verify your assets were destroyed



In this article we have shown how to utilize the F5 Hybrid Security Architectures GitHub repo and CI/CD pipeline to deploy a tiered security architecture utilizing F5 XC WAAP, F5 BIG-IP, and NGINX Ingress Controller to protect a test API running in AWS EKS.  While the code and security policies deployed are generic and not inclusive of all use cases, they can be used as a stepping stone for deploying F5 based hybrid architectures in your own environments.

Workloads are increasingly deployed across multiple diverse environments and application architectures. Organizations need the ability to protect their essential applications regardless of deployment or architecture circumstances.  Equally important is the need to deploy these protections with the same flexibility and speed as the apps they protect.  With the F5 WAF portfolio, coupled with DevSecOps principles, organizations can deploy and maintain industry-leading security without sacrificing the time to value of their applications.  Not only can Edge and Shift Left principles exist together, but they can also work in harmony to provide a more effective security solution.


Article Series:

F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF) 
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)

For further information or to get started:

  • F5 Distributed Cloud Platform (Link)
  • F5 Distributed Cloud WAAP Services (Link)
  • F5 Distributed Cloud WAAP YouTube series (Link)
  • F5 Distributed Cloud WAAP Get Started (Link)
by Cameron_Delano | F5 Employee
Posted in TechnicalArticles Sep 11, 2023 5:00:00 AM

Overview of F5 Distributed Cloud Dashboards




As the modern digital application world keeps evolving and innovating, organizations are faced with an overwhelming amount of data coming from various sources. Navigating this sea of data can be a daunting task, often leading to confusion and inefficiency in decision-making. Making sense of this data and extracting valuable insights is crucial to making the right decisions for protecting applications and boosting their performance. This is where dashboards come to the rescue. Dashboards are powerful visual tools that consolidate complex data sets into user-friendly, interactive displays, offering a comprehensive overview of key metrics, trends, and insights in one place.

By grouping different types of service details into visuals like graphs, charts, tables, and metrics, and displaying those visuals on a single page, dashboards provide valuable insights. They help users review a summary on a regular basis that highlights key issues, security risks, and current business trends. They provide users with quick, easy-to-understand, real-time insights and analysis. They can also be made interactive, with advanced options such as global search and filters to view the data that best suits each user's needs.

In a nutshell, "Dashboards are like canvas of your business data – offering a panoramic view of your applications data landscape which illuminates the hidden insights that drive application security decisions."


In this dashboards overview article, we will walk you through some of the enhanced F5 Distributed Cloud (XC) dashboards and their key insights. 


Security Dashboards

  • Web Application and API Protection (WAAP) dashboard

    The WAAP service has been upgraded with enhanced dashboards and new features like Trends, which focuses on application security at a glance. This dashboard captures performance and security with multiple types of metrics to give users a summary of different sections like malicious users, security events, threat intelligence, DDoS activity, performance statistics, throughput, etc. for current applications available in that namespace. These dashboards show multiple metrics like threat campaign data, security event details, DDoS and Bot traffic data, top attack sources, API endpoints, load balancer health, active features, etc.
    Fig 1: Image showing waap dashboard
    Fig 2: Image showing waap performance


  • Client-Side Defense (CSD) Dashboard

    CSD is also part of security controls available in distributed cloud and its monitoring dashboard presents different types of details like detected 3rd party domains, mitigated & allowed domains, transactions observed, LastSeen, etc. as shown below - 
    Fig 3: Image showing CSD dashboard
  • Bot Defense Dashboard

    Bot defense service also is enhanced to give an amazing UI experience as well as clear picture of current bot defense posture of existing applications. UI has different widgets like bad traffic, traffic chart, API latency, etc. to analyze the traffic from multiple perspectives. 
    Fig 4: Image showing Bot Defense dashboard

You can explore more about security dashboards in simulator by clicking this link: https://simulator.f5.com/s/xc-dashboards. 

Multi-Cloud Dashboards

  • Multi-Cloud Network Connect Dashboard

    The Multi-Cloud Network Connect is an L3 routing service between CEs and between REs, using a combination of point-to-point VPNs and the Global Network to provide the fastest multi-path mesh routing service. Network Connect dashboards provide visibility into networking paths and infrastructure, and for Cloud CEs, additional visibility of connected services at the cloud provider, routing tables, running instances, availability zones, etc.

    The Multi-Cloud Network Connect service provides details for network operators so they can observe and act on their multi-cloud network, with focused views for Networking, Performance, Network Security, and Site management. Users can navigate to each section of these dashboards to understand the insights. We can get details about different components like interfaces, data plane, control plane, and the top 10 links and their statistics.
    Fig 5: Image showing MCN Performance dashboard
    Fig 6: Image showing MCN Security dashboard

    For more details on this feature check this MCN article. 

  • Multi-Cloud App Connect Dashboard

    App Connect is an L7 full proxy service using the F5 Global Network to provide apps with effective local connectivity. App Connect dashboards focus on how an app is connected both internally and externally by visualizing traffic ingress to the front end and egress to each service endpoint. This service has also joined the trend by serving rich dashboards focused on application delivery. Application owners can now observe and take action on applications delivered across their multi-cloud network with dashboards focused on applications and performance. The Performance dashboard shows details like HTTP & TCP traffic overview, throughput, top load balancers, etc. The Application dashboard focuses on load balancer health, active alerts, and the list of existing load balancers.
    Fig 7: Image showing App Connect Dashboard
    Fig 8: Image showing App dashboard

Content Delivery Network (CDN) Dashboard

When it comes to content delivery, performance plays a major role in a smooth application experience. With this in mind, the XC console has released a CDN Performance dashboard that features the cache hit ratio, allowing network operators and app owners to optimize the regional delivery of content that can be cached. It also shows existing CDN distributions along with their metrics, such as request count, data transfer, etc.
Fig 9: Image showing CDN dashboard

Note: This is the first overview article in our XC dashboards series; stay tuned for upcoming articles on these newly implemented rich dashboards. 


Dashboards are highly recommended tools for visualizing data in a simple and clear way. In this article, we have provided some insights into the newly enhanced rich security dashboards for important features, which help users identify application concerns and take the necessary actions. 


For more details, refer to the links below: 

  1. Overview of WAAP 
  2. Get started with F5 Distributed Cloud 
  3. Security Dashboards Simulator 
by Janibasha | F5 Employee
Posted in TechnicalArticles Sep 10, 2023 6:00:00 PM

Demo Guide: F5 Distributed Cloud DNS (SaaS Console)

DNS, the Domain Name System, is the mechanism by which humans and machines discover where to connect. It is the universal directory mapping names to addresses, and it is the most prominent feature that every service on the Internet depends on. Keeping it available is critical to keeping our organizations online in the midst of DDoS attacks.

We usually encounter multiple DNS failure scenarios: a single on-premises, CPE-based DNS solution with backup, or a single cloud DNS solution struggling with increasing traffic demands. Also, when we extend traditional DNS to an organization’s websites and applications across different environments, most on-premises DNS solutions don’t scale efficiently to support today’s ever-expanding app footprints.

F5 Distributed Cloud DNS simplifies all these problems by acting as either a primary or secondary nameserver, and provides global security, automatic failover, DDoS protection, TSIG authentication support, and, when used as a secondary DNS, DNSSEC support. With the increase in apps deployed in the cloud, F5 XC DNS helps scale up and provides regional DNS as well. 

It also acts as an intelligent DNS load balancer from F5 that directs application traffic across environments globally. It performs health checks, provides disaster recovery, and automates responses to activities and events to maintain high application performance. In addition, its regional DNS helps redirect traffic according to geographical location, thereby reducing the load on a single DNS server.

Here are the key areas where F5 Distributed Cloud DNS plays a vital role:

  • Failover with Secondary DNS
  • Secure secondary DNS 
  • Primary DNS
  • Powerful DNS Load Balancing and Disaster Recovery
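
The secure secondary DNS scenario above, for example, pairs a customer-hosted primary with XC acting as the secondary, with zone transfers authenticated by TSIG. As a hedged illustration only, a BIND-style fragment on the customer primary might look like the following; the key name, secret, zone, and notify address are placeholder assumptions, and the XC side itself is configured through the SaaS console:

```
// Hypothetical primary-side fragment; values are placeholders, not real keys
key "xc-tsig-key" {
    algorithm hmac-sha256;
    secret "PLACEHOLDER-BASE64-SECRET==";
};

zone "example.com" {
    type primary;
    file "zones/example.com.db";
    // Only TSIG-signed requests may transfer the zone
    allow-transfer { key "xc-tsig-key"; };
    // Placeholder notify target for the XC secondary
    also-notify { 192.0.2.10; };
};
```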

A GitHub repo is available that helps deploy the services for the key features above.

Finally, this demo guide supports customers by giving a clear instruction set and a demo to deploy the services using F5 Distributed Cloud DNS.


by Shajiya_Shaik | F5 Employee
Posted in TechnicalArticles Sep 6, 2023 5:00:00 AM

Testing the security controls for a notional FDX Open Banking deployment


Unlike other Open Banking initiatives that are mandate-driven in a top-down approach, the North-American Open Banking standardisation effort is industry-led, in a bottom-up fashion by the Financial Data Exchange (FDX), a non-profit body. FDX's members are financial institutions, fintechs, payment networks and financial data stakeholders, collaboratively defining the technical standard for financial data sharing, known as FDX API.
As security is a core principle in the development of the FDX API, it's worth examining one of the ways in which F5 customers can secure and test their FDX deployments.


To understand the general architecture of an Open Banking deployment like FDX, it is helpful to visualise the API endpoints and components that play a central role in the standard, versus the back-end functions of typical Financial Institutions (the latter elements displayed as gray in the following diagram):


In typical Open Banking deployments, technical functions can be broadly grouped in Identity & Consent, Access and API management areas. These are core functions of any Open Banking standard, including FDX.

If we are to start adding the Security Controls (green) to the diagram and also show the actors that interact with the Open Banking deployment, the architecture becomes:


It is important to understand that Security Controls like the API Gateway, Web Application and API Protection or Next Generation Firewalls are just functions, rather than instances or infrastructure elements. In some architectures these functions could be implemented by the same instances/devices while in some other architectures they could be separate instances.
To help decide the best architecture for Open Banking deployments, it is worth checking the essential capabilities that these Security Controls should have:

WAAP (Web Application and API Protection)
  • Negative security model / known attack vectors database
  • Positive security model / zero-day attack detection
  • Source reputation scoring
  • Security event logging
  • L7 Denial of Service attack prevention
  • Brute-force and leaked-credential attack protection
  • Logging and SIEM/SOAR integration
  • Bot identification and management
  • Denial of Service Protection
  • Advanced API Security:
      • Adherence to the FDX API OpenAPI spec
      • Discovery of shadow APIs
API Gateway
  • Authentication and authorization
  • Quota management
  • Layer 3-4 Denial of Service attack prevention
  • Prevention of port scanning
  • Anomaly detection
  • Privacy protection for data at-rest
Client-side protection
  • Fraud detection

One possible architecture that could satisfy these requirements would look similar to the one depicted in the following high-level diagram, where NGINX is providing API Gateway functionality while F5 Distributed Cloud provides WAF, Bot Management and DDoS protection.


In this case, just for demo purposes, the notional FDX back-end was deployed as a Kubernetes workload on GKE, the NGINX API Gateway was deployed as an Ingress Controller, and the Distributed Cloud functionality was implemented on F5 Distributed Cloud (XC) Regional Edges. However, there is a great degree of flexibility in deploying these elements on public/private clouds or on-premises.
To learn more about the flexibility of deploying XC WAAP, you can read the article Deploy WAAP Anywhere with F5 Distributed Cloud.
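
As a purely hypothetical sketch of the API Gateway function in this architecture, an NGINX Plus configuration could validate the JWT before proxying to the notional FDX back end. The location path, JWKS file path, and upstream name below are illustrative assumptions, not taken from the demo deployment:

```nginx
# Hypothetical sketch only: validate the JWT presented by the Data Recipient
# before proxying to the notional FDX back end.
location /fdx/ {
    auth_jwt "FDX API";                     # NGINX Plus JWT validation realm
    auth_jwt_key_file /etc/nginx/jwks.json; # JWKS holding the issuer's keys
    proxy_pass http://fdx-backend;          # placeholder upstream name
}
```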

Automated security testing

Once the architectural decisions have been made, the next critical step is testing this deployment (with a focus on testing the Security Controls) and adjusting the security policies. This, of course, should be done continuously throughout the life of the application as it evolves.
The challenge in testing such an environment comes from the fact that the Open Banking API is generally protected against unauthorized access via JSON Web Tokens (JWTs), which are checked for authentication and authorisation at the API Gateway level. "Fixing" the JWT to some static value defeats the purpose of testing the actual configuration that is in (or will be moved to) Production, while generating the JWT automatically to enable scripted testing is fairly complex, as it involves going through all the stages a real user would go through to perform a financial transaction.

An example of the consent journey the end-user and the Data Recipient have to go through to obtain the JWT can be seen in the following diagram:


One solution to this challenge would be to use an API Tester that can perform the same actions as a real end-user: obtain the JWT in a pre-testing stage and feed it as an input to the security testing stages.
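Between the pre-testing stage and the later test stages, it can be useful to inspect the claims carried inside the obtained JWT. The following shell sketch is illustrative only: the token is a made-up sample (header `{"alg":"none"}`, a single `scope` claim), not a real FDX token.

```shell
# Illustrative only: decode the payload segment of a JWT so its claims
# can be inspected by scripted test stages.
TOKEN='eyJhbGciOiJub25lIn0.eyJzY29wZSI6ImFjY291bnRzIn0.'

# Take the middle (payload) segment of header.payload.signature
PAYLOAD=$(printf '%s' "$TOKEN" | cut -d '.' -f 2)

# Restore the '=' padding that base64url encoding strips
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do
  PAYLOAD="${PAYLOAD}="
done

printf '%s' "$PAYLOAD" | base64 -d   # prints {"scope":"accounts"}
```

In a real pipeline the token would of course come from the consent journey above rather than a hard-coded string.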
One such tool was built using the Open Source components described in the diagram below and is available on GitHub.

The API Tester uses Robot Framework as its testing framework, orchestrating the other components. Selenium WebDriver is used to automate the end-user session that authenticates to the Financial Institution and gives the user's consent for a particular type of transaction. The JWT that is obtained is then passed by Robot to the other testing stages, which, for demo purposes, perform functionality tests (ensuring valid calls are allowed) and security tests (ensuring, for example, that known API attacks are blocked).


The API Tester is automatically deployed and run using GitHub Actions and Terraform Cloud. A full pipeline goes through the deployment of the GCP GKE infrastructure required to host the notional FDX back-end and the NGINX Ingress Controller API Gateway, the F5 XC WAAP (Web Application and API Protection), and the API Tester hosted on the F5 XC vk8s infrastructure.
A run is initiated by creating a repository branch; following the deployment and test run, a report is received via email.

Here's the API Tester in action:


F5 XC WAAP and NGINX API Gateway can provide the levels of protection required by the Financial Services industry; this article has focused on a possible security architecture for FDX, the North American standard for Open Banking.
To test the security posture of the FDX Security Controls, a new API Tester framework was needed, and the main challenge it solves is the automated generation of the JWT, following the same journey as a real end-user.
This allows the testing of deployments whose configuration is similar to the one found in Production.

For more information or to get started:

by Valentin_Tobi | F5 Employee
Posted in TechnicalArticles Sep 5, 2023 5:00:00 AM

F5 Distributed Cloud - Service Policy - Header Matching Logic & Processing


Who knows what an iRule is?  iRules have been used by F5 BIG-IP customers for a quarter of a century!  One of the most common use cases for iRules is security decisions.  If you're not coming from a BIG-IP and iRules background, what if I told you that you could apply thousands of combinations of L4-L7 match criteria in order to take action on specific traffic?  This is what a Service Policy provides, similar to iRules: the ability to match traffic and allow, deny, flag, or tune application security policy based on that match.  I am often asked, "Can F5 Distributed Cloud block ____ the same way I do with iRules?", and most commonly the answer is: absolutely, with a Service Policy.  

Story time

Recently, a customer came to me with a challenge: blocking a specific attack based on a combination of headers.  This is a common application security practice, particularly for L7 DDoS attacks, or even Account Takeover (ATO) attempts via credential stuffing or brute-force login.  While F5 Distributed Cloud's Bot Defense or Malicious Users feature sets might be more dynamic tools in the toolbox for these attacks, a Service Policy is great for taking quick action.  It is critical that you clearly identify the match criteria in order to ensure your service policy will not block good traffic.  

Service Policy Logic

As stated earlier, the attack was identified by a specific combination of headers and their values.  The specific headers looked something like the following (taken from my test environment and curl tests):

curl -I --location --request GET 'https://host2.domain.com' \
--header 'User-Agent: GoogleMobile-9.1.76' \
--header 'Content-Type: application/json; charset=UTF-8' \
--header 'Accept-Encoding: gzip, deflate, br' \
--header 'partner-name: GOOGLE' \
--header 'Referer: https://host.domain.com/'

The combination of these headers all had to be present, meaning we needed "and" logic for matching the headers and their values.  Seems pretty simple, but this is where the conversation between the customer and me came into play.  When applying all of the headers to match, as shown below, they were not matching.  Can you guess why?

Figure A: Headers - Flat

The first thought that comes to mind is probably case sensitivity in the values.  However, if we take a closer look specifically at the 'partner-name' header configuration, I've placed a transformation on this specific header.  So 'partner-name' isn't the problem.
Figure B: A transformer is applied to the request traffic attribute values before evaluating for a match.

Give up?  The issue in this Service Policy configuration is the 'Accept-Encoding' header, specifically the ',' {comma} character in the value.  In the F5 Distributed Cloud Service Policy feature, we treat commas as separate headers, each with an individual value.  The reason for this is that a request can have the same header multiple times, or it can have multiple values in a single header.  To keep the parsing consistent, headers with comma-delimited values are separated into multiple headers before matching.
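A quick shell sketch of this comma-splitting behavior (illustrative only; XC's actual parser is internal): a single 'Accept-Encoding: gzip, deflate, br' value is treated as three separate header values before the match is evaluated.

```shell
# Split a comma-delimited header value into individual values,
# trimming the leading space after each comma.
printf '%s' 'gzip, deflate, br' | tr ',' '\n' | sed 's/^ *//'
# prints:
# gzip
# deflate
# br
```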

I thought I could be smart when initially testing this and added multiple values to a single header in the policy.  This will not match: for one, they are not separate headers with individual values, and in addition, multiple values within a single header in the service policy configuration create an "or" logic, while we're looking for "and" logic across all headers and their exact values. 

Figure C: Multiple Values in Single Header
Figure D: Multiple Values within a Single Header - "or" Logic for this header

In order to get a proper match with "and" logic across all headers and their values, we need to apply the same header name multiple times.  It is important to note that the 'Content-Type' header has a ';' {semicolon}, which is not a delimiter in F5 Distributed Cloud service policy logic, so it will match just fine as defined in the policy shown below.

Figure E: Multiple Headers defined, with individual values, will provide "and" logic for all headers and their values.


In these tests, I am going to first provide an exact match to block the traffic.  When we match, we return a 403 response code to the client.  Within the individual Load Balancer objects of F5 Distributed Cloud, you can customize the messaging that accompanies the 403 response code, or any response code for that matter.  For my tests, I'll simply use curl and update the different headers.  After this initial successful block, I'll show a few examples of changing the headers sent with curl.  With the "and" logic, any change to the headers should result in a 200 response code.  With the "or" logic, it will depend on how I change the headers.
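To script checks like these, one approach is to have curl print only the status code (`curl -s -o /dev/null -w '%{http_code}' https://host2.domain.com ...`, using the test host from the earlier example) and interpret the result with a small helper. The helper below is hypothetical, not part of any F5 tooling:

```shell
# Hypothetical helper: interpret the HTTP status code printed by
#   curl -s -o /dev/null -w '%{http_code}' ...
# 403 = blocked by the service policy, 200 = passed through to the origin.
status_check() {
  case "$1" in
    403) echo "blocked by service policy" ;;
    200) echo "passed to origin" ;;
    *)   echo "unexpected code: $1" ;;
  esac
}

status_check 403   # prints: blocked by service policy
status_check 200   # prints: passed to origin
```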

"and" logic

In this testing section, the service policy is configured like Figure E above.

All values are an exact match; with "and" logic, the 403 response code identifies the block from F5 Distributed Cloud.
When removing the 'g' character from gzip, the "and" logic no longer matches, as not every value is exact.  This results in a 200 response code from the origin server and F5 Distributed Cloud.

We've focused on the Accept-Encoding header, but with "and" logic it doesn't matter which header we change: if all headers do not match, we will not block.  In this case, we updated the User-Agent header and received a 200 response code.

"or" logic

In this testing section, the service policy is configured like Figure D above.  

This is an exact match, and the Service Policy blocked the request, sending a 403 response code back to the client.

With "or" logic on the Accept-Encoding header, one of the values must match.  Since I removed the first letter of every value, there was no match, and F5 Distributed Cloud passed the traffic to the origin server.  The origin and F5 Distributed Cloud returned a 200 response code.

When adding the 'g' back to gzip, but leaving all other values missing their first character, we once again get a block at the service policy and a 403 response code.  Again, this is "or" logic, so only one value must match.


A Service Policy is a very powerful engine within F5 Distributed Cloud.  We've only scratched the surface of service policies in this article as it pertains to header matching and logic.  Other match criteria examples are IP Threat Category (Reputation), ASN, HTTP Method, HTTP Path, HTTP Query Parameters, HTTP Headers, Cookies, Arguments, Request Body, and so on.  The combination of these match criteria, and the order of operations of each service policy rule, can make a huge difference in the security posture of your application.  These capabilities at the application layer are critical to the security of your application services.  As F5 Distributed Cloud is your strategic point of application delivery and control, I hope you're able to use service policies to elevate your application security posture.

by MattHarmon | F5 Employee
Posted in TechnicalArticles Aug 29, 2023 5:00:00 AM