Adaptive applications utilize an architectural approach that facilitates rapid and often fully-automated responses to changing conditions—for example, new cyberattacks, updates to security posture, application performance degradations, or conditions across one or more infrastructure environments.
Unlike many of today's apps, which are labor-intensive to secure, deploy, and manage, adaptive apps are enabled by the collection and analysis of live application performance and security telemetry, service management policies, advanced analytic techniques such as machine learning, and automation toolchains.
This example seeks to demonstrate value in two key components of F5's Adaptive Apps vision: helping our customers more rapidly detect and neutralize application security threats and helping to speed deployments of new applications.
In today's interconnected digital landscape, the ability to share application security policies seamlessly across data centers, public clouds, and Software-as-a-Service (SaaS) environments is of paramount importance. As organizations increasingly rely on a hybrid IT infrastructure, where applications and data are distributed across various cloud providers and security platforms, maintaining consistent and robust security measures becomes a challenging task.
Using a consistent and centralized security policy architecture provides the following key benefits:
Consistent Protection: A unified security policy approach guarantees consistent protection for applications and data, regardless of their location. This reduces the risk of security loopholes and ensures a standardized level of security across the entire infrastructure.
Improved Threat Response Efficiency: By sharing application security policies, organizations can respond more efficiently to emerging threats. A centralized approach allows for quicker updates and patches to be applied universally, strengthening the defense against new vulnerabilities.
Regulatory Compliance: Many industries have strict compliance requirements for data protection. Sharing security policies helps organizations meet these regulatory demands across all environments, avoiding compliance issues and potential penalties.
Streamlined Management: Centralizing security policies simplifies the management process. IT teams can focus on maintaining a single set of policies, reducing complexity, and ensuring a more effective and consistent security posture.
Cost-Effective Solutions: Investing in separate security solutions for each platform can be expensive. Sharing policies allows businesses to optimize security expenditure and resource allocation, achieving cost-effectiveness without compromising on protection.
Enhanced Collaboration: A shared security policy fosters collaboration among teams working with different environments. This creates a unified security culture, promoting information sharing and best practices for overall improvement.
Improved Business Agility: A unified security policy approach facilitates smoother transitions between different platforms and environments, supporting the organization's growth and scalability.
By having a consistent security policy framework, businesses can ensure that critical security policies, access controls, and threat prevention strategies are applied uniformly across all their resources. This approach not only streamlines the security management process but also helps fortify the overall defense against cyber threats, safeguard sensitive data, and maintain compliance with industry regulations. Ultimately, the need for sharing application security policies across diverse environments is fundamental in building a resilient and secure digital ecosystem.
In the spirit of enabling a unified security policy framework, this example demonstrates two key use cases.
Specifically, we use F5's Policy Supervisor and Policy Supervisor Conversion Utility to import, convert, replicate, and deploy WAF policies across the F5 security proxy portfolio. The Policy Supervisor tool offers both automated and manual ways to replicate and deploy your WAF policies across the F5 portfolio. Regardless of the use case, the steps are the same, enabling a consistent and simple methodology.
We'll show the following two use cases:
1. Manual BIG-IP AWAF to F5 XC WAAP policy replication & deployment
2. Automated NGINX NAP to F5 XC WAAP policy replication & deployment
The result is a simple, easy way to replicate and deploy WAF application security policies across F5's BIG-IP AWAF, NGINX NAP, and F5 XC WAAP security portfolio.
While the Policy Supervisor supports all of the possible security policy replication & migration paths shown on the left below, this example is focused on demonstrating the two specific paths shown on the right below.
Customers find it challenging, complex, and time-consuming to replicate and deploy application security policies across WAF deployments that span the F5 portfolio (including BIG-IP, NAP, and F5 XC WAAP) within on-prem, cloud, and edge environments.
By enforcing consistent WAAP security policies across multiple clouds and SaaS environments, organizations can establish a robust and standardized security posture, ensuring comprehensive protection, simplified management, and adherence to compliance requirements.
Please refer to https://github.com/f5devcentral/adaptiveapps for detailed instructions and artifacts for deploying this example use case.
Watch the demo video:
In today's digital landscape, where cyber threats constantly evolve, safeguarding an enterprise's web applications is of paramount importance. However, for security engineers tasked with protecting a large enterprise equipped with a substantial deployment of web application firewalls (WAFs), the task of managing distributed security policies across the entire application landscape presents a significant challenge. Ensuring consistency and coherence in both the effectiveness and deployment of these policies is essential, yet it's far from straightforward. In this article and demo, we'll explore a few best practices and tools available to help organizations maintain robust security postures across their entire WAF infrastructure, and how embracing modern approaches like DevSecOps and the F5 Policy Supervisor and Conversion tools can help overcome these challenges.
Storing your WAF policies as code within a secure repository is a DevSecOps best practice that extends beyond consistency and tracking. It's also the first step in making security an integral part of the development process, fostering a culture of security throughout the entire software development and delivery lifecycle. This shift-left approach ensures that security concerns are addressed early in the development process, reducing the risk of vulnerabilities and enhancing collaboration between security, development, and operations teams. It enables automation, version control, and rapid response to evolving threats, ultimately resulting in the delivery of secure applications with speed and quality.
To help facilitate this, the entire F5 security product portfolio supports the ingestion of WAF policy in JSON format. This enables you to store your policies as code in a Git repository and seamlessly reference them during your automation-driven deployments, guaranteeing that every WAF deployment is well-prepared to safeguard your critical applications.
"wafPolicy": {
"class": "WAF_Policy",
"url": "https://raw.githubusercontent.com/knowbase/architectural-octopod/main/awaf/owasp-auto-tune.json",
"enforcementMode": "blocking",
"ignoreChanges": true
}
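As a sketch of how such a policy-as-code reference might be gated in an automation pipeline, the following validates an AS3-style declaration fragment before deployment. The `validate_waf_policy` helper and its specific checks are illustrative assumptions, not part of any F5 tooling.

```python
import json

def validate_waf_policy(fragment: str) -> dict:
    """Hypothetical CI gate: sanity-check a WAF_Policy reference
    stored as code before an automated deployment consumes it."""
    policy = json.loads(fragment)["wafPolicy"]
    assert policy["class"] == "WAF_Policy", "unexpected class"
    assert policy["enforcementMode"] in ("blocking", "transparent")
    # Policies should only be fetched from the repo over TLS
    assert policy["url"].startswith("https://"), "policy URL must use HTTPS"
    return policy

declaration = """
{
  "wafPolicy": {
    "class": "WAF_Policy",
    "url": "https://raw.githubusercontent.com/knowbase/architectural-octopod/main/awaf/owasp-auto-tune.json",
    "enforcementMode": "blocking",
    "ignoreChanges": true
  }
}
"""
policy = validate_waf_policy(declaration)
print(policy["enforcementMode"])  # → blocking
```

A gate like this can run in the same pipeline stage that renders the deployment declaration, failing the build before a malformed or non-blocking policy ever reaches a WAF.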
Considering the sheer number of WAFs in large enterprises, managing distributed policies can easily overwhelm security teams. Coordinating updates, rule changes, and incident response across the entire application security landscape requires efficient policy lifecycle management tools. Using a centralized management system that provides visibility into the security posture of all WAFs and the state of deployed policies can help streamline these operations. The F5 Policy Supervisor was designed to meet this critical need.
The Policy Supervisor allows you to easily create, convert, maintain, and deploy WAF policies across all F5 Application Security platforms. With both an easily navigated UI and a robust API, the Policy Supervisor tool greatly enhances your ability to manage security policies at scale.
In the context of the Policy Supervisor, providers are remote instances that provide WAF services, such as NGINX App Protect (NAP), BIG-IP Advanced WAF (AWAF), or F5 Distributed Cloud Web App and API Security (XC WAAP). The "Providers" section serves as the command center where we onboard all of our WAF instances and gain insight into their status and deployments. For BIG-IP and NGINX we employ agents to perform the onboarding; an agent is a lightweight container that stores secrets in a vault and connects the instances to the SaaS layer. For XC we use an API token, which can easily be generated by navigating to Account > Account Settings > Personal Management > Credentials > Add Credentials in the XC console. Detailed instructions for adding both types of providers are readily accessible during the "Add Provider" workflow.
After successfully onboarding our providers, we can ingest the currently deployed policies and begin managing them on the platform.
The "Policies" section serves as the central hub for overseeing the complete lifecycle of policies onboarded onto the platform. Within this section, we gain access to policy insights, including their current status and the timestamp of their last modification. Selecting a specific policy opens up the "Policy Details" panel, offering a comprehensive suite of options. Here, you can edit, convert, deploy, export, or remove the policy, while also accessing essential information regarding policy-related actions and reports detailing those actions.
The tool additionally features an editor equipped with real-time syntax validation and auto-completion, allowing you to create new or edit existing policies on the fly.
Navigating the policy deployment process within the Policy Supervisor is a seamless and user-friendly experience. To initiate the process, select "Deploy" from the "Policy Details" panel, then select the source and target or targets. The platform first runs the conversion process to ensure the policy aligns with the features supported by the targets. Following this conversion, you'll receive a detailed report on what was and was not converted. Once you've reviewed the conversion results and are satisfied with the outcome, select the endpoints to apply the policy to, and click deploy. That's it.
The F5 Policy Conversion tool allows you to transform JSON or XML formatted policies from an NGINX or BIG-IP into a format compatible with your desired target - any application security product in the F5 portfolio. This user-friendly tool requires no authentication, offering hassle-free access at https://policysupervisor.io/convert.
The interface has an intuitive design, simplifying the process: select your source and target types, upload your JSON or XML formatted policy, and with a simple click, initiate the conversion. Upon completion, the tool provides a comprehensive package that includes a detailed report on the conversion process and your newly adapted policies, ready for deployment onto your chosen target.
Whether you are augmenting an F5 BIG-IP Advanced WAF fleet with F5 XC WAAP at the edge, decomposing a monolithic application and protecting the new microservices with NGINX App Protect, or extending a multi-cloud security strategy with F5 XC WAAP, the Policy Conversion utility can help ensure you are providing consistent and robust protection across each platform.
Managing security policies across a large WAF footprint is a complex undertaking that requires constant vigilance, adaptability, and coordination. Security engineers must strike a delicate balance between safeguarding applications and ensuring their uninterrupted functionality while also staying ahead of evolving threats and maintaining a consistent security posture across the organization. By harnessing the F5 Policy Supervisor and Conversion tools, coupled with DevSecOps principles, organizations can easily deploy and maintain consistent WAF policies throughout the organization's entire application security footprint.
F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)
For further information or to get started:
In a recent conversation, a customer mentioned they figured they had something on the order of 6000 API endpoints in their environment. This struck me as odd, as I am pretty sure they have 1000+ HTTP-based applications running on their current platform. If the 6000 API number is correct, each application has only six endpoints. In reality, most apps will have dozens or hundreds of endpoints... that means there are probably tens of thousands of API endpoints in their environment!
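The arithmetic above can be checked in a couple of lines (the 30-100 endpoints-per-app range is an illustrative assumption, not the customer's actual data):

```python
# Back-of-the-envelope check: if 6000 endpoints really covered ~1000 apps,
# that is only 6 endpoints per app. At a more realistic 30-100 endpoints
# per app, the true count lands in the tens of thousands.
apps = 1000
reported_endpoints = 6000
per_app_if_correct = reported_endpoints // apps
likely_range = (30 * apps, 100 * apps)
print(per_app_if_correct)   # → 6
print(likely_range)         # → (30000, 100000)
```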
But the good news is that you're not using all of them. The further good news is that you can REDUCE your security exposure.
When was the last time someone took things OFF your To-do list?
The answer is to profile your application landscape. Much like the industry did in the early 20-teens with Web Application Security, understanding your attack surface is the key to defining a plan to defend it. This is what we call API Discovery.
By allowing your traffic to be profiled and APIs uncovered, you can begin to understand the scale and scope of your security journey.
You can do this by putting your client traffic through an engine that offers this, like F5's Distributed Cloud (or F5 XC). With F5 XC, you can build a list of the URIs and their metadata and generate a threat assessment and data profile of the traffic it sees.
Interactive view of API calls
This is a fantastic resource if you can push your traffic through an XC Load Balancer, but that isn't always possible.
What are your options when you want to do this "Out of Band"? Out of Band (or OOB) presents challenges, but luckily, F5 has answers.
If we can gather the traffic and make it available to the XC API Discovery process, generating the above graphic for your traffic is easy.
Replaying, or more accurately, "mimicking" the traffic can be done using a log process on the main proxy - BIG-IP or nginx are good examples, but any would work - and then sending that logged traffic to a process that will generate a request and response that traverses an XC Load Balancer.
API Discovery traffic flow
This diagram shows using an iRule to gather the request and response data, which is then sent to a custom logging service. This service uses the data to recreate the request (and response) and sends that through the XC Load Balancer.
Both the iRule and the Logger service are available as open-source code here.
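To illustrate the replay step conceptually, here is a minimal Python sketch that rebuilds a logged request as a new request aimed at an XC Load Balancer. The record fields (`method`, `uri`, `host`, `headers`, `body`), the LB hostname, and the `X-Original-Host` header are assumptions for illustration; the open-source repo contains the actual implementation.

```python
import urllib.request

XC_LB_HOST = "api-discovery.example.com"  # hypothetical XC LB FQDN

def build_replay(record: dict) -> urllib.request.Request:
    """Rebuild a logged request (assumed schema) for replay through
    an XC Load Balancer so API Discovery can profile it."""
    url = f"https://{XC_LB_HOST}{record['uri']}"
    req = urllib.request.Request(
        url,
        data=record.get("body", "").encode() or None,
        method=record["method"],
    )
    for name, value in record.get("headers", {}).items():
        req.add_header(name, value)
    # Carry the original Host so traffic can be attributed to the right app
    req.add_header("X-Original-Host", record["host"])
    return req

logged = {"method": "GET", "uri": "/api/v1/users", "host": "app.internal",
          "headers": {"Accept": "application/json"}}
replay = build_replay(logged)
print(replay.method, replay.full_url)
```

Sending `replay` with `urllib.request.urlopen` (omitted here) would complete the mimicked round trip through the XC Load Balancer.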
If you're interested in deploying this, F5 is here to help, but if you would like to deploy it on your own, here is a suggested architecture:
Deploying the logger as a container on F5 Distributed Cloud's AppStack on a Customer Edge instance allows the traffic to remain within your network enclave. The metadata is pushed to the XC control plane, where it is analyzed, and the API characteristics are recorded.
The analysis provided in the dashboard is invaluable for determining your threat levels and attack surfaces and helping you build a mitigation plan.
From the main dashboard shown here, the operator can see if any sensitive data was exposed (and what type it might be), the threat level assessment and the authorization method. Each can help determine a course of action to protect from data leakage or future breach attempts.
Drilling into these items, the operator is presented with details on the performance of the API (shown below).
endpoint details
To promote sharing of information, all of the data gathered is exportable in Swagger/OpenAPI format:
swagger export
We will publish more on this in the coming weeks, so stay tuned.
Mutual Transport Layer Security (mTLS) is a process that establishes an encrypted, secure TLS connection between two parties and ensures both use X.509 digital certificates to authenticate each other. It helps prevent malicious third parties from imitating genuine applications. This authentication method helps when a server needs to ensure the authenticity and validity of a specific user or device. As SSL became outdated, several companies such as Skype and Cloudflare began using mTLS to secure business servers. Using TLS or other encryption tools without secure authentication leaves connections open to man-in-the-middle attacks. With mTLS, we can give a server an identity that can be cryptographically verified, making your resources more resilient.
Beyond supporting the mTLS process itself, F5 Distributed Cloud WAF can forward client certificate attributes (subject, issuer, root CA, etc.) to the origin server via the x-forwarded-client-cert (XFCC) header, which provides an additional level of security when the origin server needs to authenticate clients across requests from many different sources. The XFCC header carries these attributes and is supported across multiple load balancer types, such as HTTPS with Automatic Certificate and HTTPS with Custom Certificate.
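On the origin side, the XFCC header can be parsed to recover those attributes. The sketch below assumes the common Envoy-style XFCC encoding (semicolon-separated `Key=Value` pairs with optionally double-quoted values); verify the exact layout against the header your load balancer actually emits.

```python
def parse_xfcc(header: str) -> dict:
    """Parse an Envoy-style XFCC header into a dict of attributes.
    Assumes values do not themselves contain semicolons."""
    fields = {}
    for part in header.split(";"):
        if not part:
            continue
        key, _, value = part.partition("=")
        fields[key.strip()] = value.strip().strip('"')
    return fields

# Sample header value (illustrative, not captured from a real deployment)
xfcc = ('Hash=468ed33be74eee6556d90c0149c1309e9ba61d6425303443c0748a02dd8de688;'
        'Subject="CN=client.example.com,O=Example";Issuer="CN=Example Root CA"')
attrs = parse_xfcc(xfcc)
print(attrs["Subject"])  # → CN=client.example.com,O=Example
```

An origin application could use the parsed `Subject` or `Issuer` to make per-client authorization decisions in addition to the mTLS handshake enforced at the load balancer.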
In this demo we use httpbin as an origin server fronted by an F5 XC Load Balancer. Below is the procedure to deploy the httpbin application, create the custom certificates, and configure mTLS step by step with different LB (Load Balancer) types using F5 XC.
Click on apply and enable the mutual TLS, import the root cert info, and add the XFCC header value.
Click on ‘Apply’ and then save the LB configuration with ‘Save and Exit’.
Now, we have created the Load Balancer with mTLS parameters. Let us verify the same with the origin server.
Log in to the F5 Distributed Cloud Console and navigate to the "Web App & API Protection" module.
Go to Load Balancers and click on 'Add HTTP Load Balancer'.
Configure the origin pool by clicking on ‘Add Item’ under Origins. Select the created origin pool for httpbin.
As you can see from the demonstration, F5 Distributed Cloud WAF provides additional security to origin servers by forwarding the client certificate info in the mTLS XFCC header.
For those of you following along with the F5 Hybrid Security Architectures series, welcome back! If this is your first foray into the series and you would like some background, have a look at the intro article. This series uses the F5 Hybrid Security Architectures GitHub repo and CI/CD platform to deploy F5-based hybrid security solutions based on DevSecOps principles. This repo is a community-supported effort to provide not only a demo and workshop, but also a stepping stone for utilizing these practices in your own F5 deployments. If you find any bugs or have any enhancement requests, open an issue or, better yet, contribute!
Here in this example solution, we will be using DevSecOps practices to deploy an AWS Elastic Kubernetes Service (EKS) cluster running the Brewz test web application serviced by F5 NGINX Ingress Controller. To secure our application and APIs, we will deploy F5 Distributed Cloud's Web App and API Protection service as well as F5 BIG-IP Access Policy Manager and Advanced WAF. We will then use F5 Container Ingress Services and IngressLink to tie it all together.
Distributed Cloud WAAP: Available for SaaS-based deployments and provides comprehensive security solutions designed to safeguard web applications and APIs from a wide range of cyber threats.
BIG-IP Access Policy Manager(APM) and Advanced WAF: Available for on-premises / data center and public or private cloud (virtual edition) deployment, for robust, high-performance web application and API security with granular, self-managed controls.
BIG-IP Container Ingress Services: A container integration solution that helps developers and system teams manage Ingress HTTP routing, load-balancing, and application services in container deployments.
F5 IngressLink: Combines BIG-IP, Container Ingress Services (CIS), and NGINX Ingress Controller to deliver unified app services for fast-changing, modern applications in Kubernetes environments.
NGINX Ingress Controller for Kubernetes: A lightweight software solution that helps manage app connectivity at the edge of a Kubernetes cluster by directing requests to the appropriate services and pods.
Workspaces: Create a workspace for each asset in the workflow chosen
| Workflow | Workspaces |
| --- | --- |
| xcbn-cis | infra, bigip-base, bigip-cis, eks, nic, brewz, xc |
Your Terraform Cloud console should resemble the following:
Variable Set: Create a Variable Set with the following values.
IMPORTANT: Ensure sensitive values are appropriately marked.
Your Variable Set should resemble the following:
Fork and Clone Repo: F5 Hybrid Security Architectures
Actions Secrets: Create the following GitHub Actions secrets in your forked repo
Your GitHub Actions Secrets should resemble the following:
Step 1: Check out a branch for the deploy workflow using the following naming convention
xcbn-cis deployment branch: deploy-xcbn-cis
Step 2: Upload the Brewz OAS file to XC
* From the side menu under Manage, navigate to Files->Swagger Files and choose Add Swagger File
* Upload Brewz OAS file from the repo f5-hybrid-security-architectures/brewz/brewz-oas.yaml
Step 3: Rename infra/terraform.tfvars.examples to infra/terraform.tfvars and add the following data
#Global
project_prefix = "Your project identifier"
resource_owner = "You"
#AWS
aws_region = "Your AWS region" ex: us-west-1
azs = "Your AWS availability zones" ex: ["us-west-1a", "us-west-1b"]
#Assets
nic = true
nap = false
bigip = true
bigip-cis = true
Step 4: Rename xc/terraform.tfvars.examples to xc/terraform.tfvars and add the following data
#XC Global
api_url = "https://<Your Tenant>.console.ves.volterra.io/api"
xc_tenant = "Your XC Tenant ID"
xc_namespace = "Your XC namespace"
#XC LB
app_domain = "Your App Domain"
#XC WAF
xc_waf_blocking = true
#XC AI/ML Settings for MUD, APIP - NOTE: Only set if using AI/ML settings from the shared namespace
xc_app_type = []
xc_multi_lb = false
#XC API Protection and Discovery
xc_api_disc = true
xc_api_pro = true
xc_api_spec = ["Path to uploaded API spec"] *See below screen shot for how to obtain this value.
#XC Bot Defense
xc_bot_def = false
#XC DDoS
xc_ddos = false
#XC Malicious User Detection
xc_mud = false
* For Path to API Spec navigate to Manage->Files->Swagger Files, click the three dots next to your OAS, and choose "Copy Latest Version's URL". Paste this into the xc_api_spec in the xc/terraform.tfvars.
Step 5: Modify line 16 in the .gitignore and comment out the *.tfvars line with # and save the file
Step 6: Commit your changes
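Steps 5-6 can be scripted. The demonstration below runs in a scratch directory so it is safe to try anywhere; in your clone, run the same `sed` against the repo's own .gitignore (the exact line number may differ), then `git add` and `git commit` as usual.

```shell
# Comment out the *.tfvars ignore rule so the rendered variable files
# can be committed with the deploy branch.
workdir=$(mktemp -d)
printf '%s\n' '*.tfstate' '*.tfvars' > "$workdir/.gitignore"
sed -i.bak 's/^\*\.tfvars$/# *.tfvars/' "$workdir/.gitignore"
cat "$workdir/.gitignore"
```

After editing the real .gitignore, stage and commit it together with `infra/terraform.tfvars` and `xc/terraform.tfvars` so the pipeline can read your variables.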
Step 1: Push your deploy branch to the forked repo
Step 2: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build
Step 3: Once the pipeline completes, verify your assets were deployed to AWS and F5 XC
Step 4: Check your Terraform Outputs for XC and verify your app is available by navigating to the FQDN
Step 5: Configure F5 APM and Advanced WAF following the guide here.
The F5 XC WAAP platform learns the schema structure of the API by analyzing sampled request data, then reverse-engineering the schema to generate an OpenAPI spec. The platform validates what is deployed versus what is discovered and tags any Shadow APIs that are found. We can then download the learned schema and use it to augment our BIG-IP APM API protection configuration.
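Conceptually, the shadow-API check is a set difference between declared and observed endpoints. XC performs this internally; the sketch below (with made-up paths) only illustrates the idea.

```python
def find_shadow_apis(declared_paths, discovered):
    """Return observed (method, path) pairs absent from the deployed spec."""
    declared = {(m.upper(), p)
                for p, methods in declared_paths.items() for m in methods}
    return sorted(set(discovered) - declared)

# Endpoints declared in the deployed OpenAPI spec (illustrative)
deployed_spec = {"/api/products": ["get"], "/api/checkout": ["post"]}
# Endpoints discovered from sampled live traffic (illustrative)
observed = [("GET", "/api/products"), ("POST", "/api/checkout"),
            ("GET", "/api/admin/debug")]   # undocumented endpoint

print(find_shadow_apis(deployed_spec, observed))
# → [('GET', '/api/admin/debug')]
```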
Step 1: From your deployment branch check out a branch for the destroy workflow using the following naming convention
xcbn-cis destroy branch: destroy-xcbn-cis
Step 2: Push your destroy branch to the forked repo
Step 3: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build
Step 4: Once the pipeline completes, verify your assets were destroyed
In this article we have shown how to utilize the F5 Hybrid Security Architectures GitHub repo and CI/CD pipeline to deploy a tiered security architecture utilizing F5 XC WAAP, F5 BIG-IP, and NGINX Ingress Controller to protect a test API running in AWS EKS. While the code and security policies deployed are generic and not inclusive of all use-cases, they can be used as a steppingstone for deploying F5 based hybrid architectures in your own environments.
Workloads are increasingly deployed across multiple diverse environments and application architectures. Organizations need the ability to protect their essential applications regardless of deployment or architecture circumstances. Equally important is the need to deploy these protections with the same flexibility and speed as the apps they protect. With the F5 WAF portfolio, coupled with DevSecOps principles, organizations can deploy and maintain industry-leading security without sacrificing the time to value of their applications. Not only can Edge and Shift Left principles exist together, but they can also work in harmony to provide a more effective security solution.
F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)
For further information or to get started:
As the modern digital application world keeps evolving and innovating, organizations are faced with an overwhelming amount of data coming from various sources. Navigating this sea of data can be a daunting task, often leading to confusion and inefficiency in decision-making. Making sense of this data and extracting valuable insights is crucial to making the right decisions for protecting applications and boosting their performance. This is where dashboards come to the rescue. Dashboards are powerful visual tools that consolidate complex data sets into user-friendly, interactive displays, offering a comprehensive overview of key metrics, trends, and insights in one place.
By grouping different types of service details into visuals like graphs, charts, tables, and metrics, and displaying those visuals on a single page, dashboards provide valuable insights. They help users review a summary on a regular basis that highlights key issues, security risks, and current business trends. They provide quick, easy-to-understand, real-time insights with analysis, and they can be made interactive through advanced options such as global search and filters so each user can view the data that best suits their needs.
In a nutshell, "Dashboards are like a canvas for your business data, offering a panoramic view of your application data landscape that illuminates the hidden insights driving application security decisions."
In this dashboards overview article, we will walk you through some of the enhanced F5 Distributed Cloud (XC) dashboards and their key insights.
You can explore more about security dashboards in simulator by clicking this link: https://simulator.f5.com/s/xc-dashboards.
App Connect is an L7 full-proxy service that uses the F5 Global Network to provide apps with effective local connectivity. App Connect dashboards focus on how an app is connected both internally and externally by visualizing traffic ingress to the front end and egress to each service endpoint. This service now also serves rich dashboards focused on application delivery: application owners can observe and act on applications delivered across their multi-cloud network with dashboards focused on applications and performance. The Performance dashboard shows details like an HTTP and TCP traffic overview, throughput, top load balancers, etc. The Application dashboard focuses on load balancer health, active alerts, and the list of existing load balancers.
Fig 7: Image showing App Connect Dashboard
Fig 8: Image showing App dashboard
When it comes to content delivery, performance plays a major role in smooth application streaming. With this in mind, the XC console has released a CDN performance dashboard featuring the cache hit ratio, allowing network operators and app owners to optimize the regional delivery of cacheable content. It also shows existing CDN distributions along with their metrics, like request count, data transfer, etc.
Fig 9: Image showing CDN dashboard
Note: This is the first overview article in our XC dashboards series; stay tuned for upcoming articles on these newly implemented rich dashboards.
Dashboards are highly recommended tools for visualizing data in a simple and clear way. In this article, we have provided some insights into the newly enhanced security dashboards for important features, which help users identify application concerns and take the necessary actions.
For more details, refer to the links below:
DNS, the Domain Name System, is how humans and machines discover where to connect. It is the universal directory mapping names to addresses, and every service on the Internet depends on it. Keeping it available is critical to keeping our organizations online in the midst of DDoS attacks.
We often encounter DNS failures in single on-prem, CPE-based DNS solutions with backup, or in a single cloud DNS solution struggling with increasing traffic demands. And when we extend traditional DNS to an organization's websites and applications across different environments, most on-premises DNS solutions don't scale efficiently to support today's ever-expanding app footprints.
F5 Distributed Cloud DNS addresses these problems by acting as either a primary or secondary nameserver and provides global security, automatic failover, DDoS protection, TSIG authentication support, and, when used as a secondary DNS, DNSSEC support. As more apps are deployed in the cloud, F5 XC DNS helps scale up and provides regional DNS as well.
It also acts as an intelligent DNS load balancer from F5, directing application traffic across environments globally. It performs health checks, provides disaster recovery, and automates responses to activities and events to maintain high application performance. In addition, its regional DNS helps redirect traffic according to geographic location, reducing the load on any single DNS server.
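The health-checked, geo-aware steering described above can be sketched as a toy resolver. The region names, pools, and health states below are illustrative assumptions, not XC configuration.

```python
# Toy model of geo-aware DNS steering with health checks and failover.
POOLS = {
    "us": ["198.51.100.10", "198.51.100.11"],
    "eu": ["203.0.113.20"],
}
HEALTH = {"198.51.100.10": False, "198.51.100.11": True, "203.0.113.20": True}

def resolve(region: str) -> str:
    """Answer with a healthy member of the regional pool, failing over
    to any healthy member globally if the whole region is down."""
    healthy = [ip for ip in POOLS.get(region, []) if HEALTH[ip]]
    if not healthy:  # disaster recovery: fall back to all healthy members
        healthy = [ip for ips in POOLS.values() for ip in ips if HEALTH[ip]]
    return healthy[0]

print(resolve("us"))   # → 198.51.100.11 (unhealthy member skipped)
print(resolve("eu"))   # → 203.0.113.20
```

A real deployment would add weighting, TTL tuning, and richer health probes, but the core idea is the same: answers change with member health and client geography.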
Here are the key areas where F5 Distributed Cloud DNS plays a vital role:
A GitHub repo is available that helps deploy the services for the key features above.
Finally, this demo guide supports customers with a clear instruction set for deploying the services using F5 Distributed Cloud DNS.
Unlike other Open Banking initiatives that are mandate-driven in a top-down approach, the North American Open Banking standardization effort is industry-led, in a bottom-up fashion, by the Financial Data Exchange (FDX), a non-profit body. FDX's members are financial institutions, fintechs, payment networks, and financial data stakeholders, collaboratively defining the technical standard for financial data sharing, known as FDX API.
As Security is a core principle followed in development of FDX API, it's worth examining one of the ways in which F5 customers can secure and test their FDX deployments.
To understand the general architecture of an Open Banking deployment like FDX, it is helpful to visualise the API endpoints and components that play a central role in the standard, versus the back-end functions of typical Financial Institutions (the latter elements displayed as gray in the following diagram):
In typical Open Banking deployments, technical functions can be broadly grouped in Identity & Consent, Access and API management areas. These are core functions of any Open Banking standard, including FDX.
If we are to start adding the Security Controls (green) to the diagram and also show the actors that interact with the Open Banking deployment, the architecture becomes:
It is important to understand that Security Controls like the API Gateway, Web Application and API Protection or Next Generation Firewalls are just functions, rather than instances or infrastructure elements. In some architectures these functions could be implemented by the same instances/devices while in some other architectures they could be separate instances.
To help decide the best architecture for Open Banking deployments, it is worth checking the essential capabilities that these Security Controls should have:
- WAAP (Web Application and API Protection)
- API Gateway
- (NG)FW
- IDS/IPS
- Databases
- Client-side protection
One possible architecture that could satisfy these requirements would look similar to the one depicted in the following high-level diagram, where NGINX is providing API Gateway functionality while F5 Distributed Cloud provides WAF, Bot Management and DDoS protection.
In this case, just for demo purposes, the notional FDX backend has been deployed as a Kubernetes workload on GKE and the NGINX API Gateway was deployed as an Ingress Controller, while WAAP functionality was implemented on F5's Distributed Cloud (XC) Regional Edges; however, there is a great degree of flexibility in deploying these elements on public/private clouds or on-premises.
To learn more on the flexibility in deploying XC WAAP, you can read the article Deploy WAAP Anywhere with F5 Distributed Cloud
Once the architectural decisions have been made, the next critical step is testing this deployment (with a focus on Security Controls testing) and adjusting the security policies. This, of course, should be done continuously throughout the life of the application, as it evolves.
The challenge in testing such an environment comes from the fact that the Open Banking API is generally protected against unauthorised access via JSON Web Tokens (JWT), which are checked for authentication and authorisation at the API Gateway level. "Fixing" the JWT to some static value defeats the purpose of testing the actual configuration that is in (or will be moved to) Production, while generating the JWT automatically to enable scripted testing is fairly complex, as it involves going through all the stages a real user would need to go through to perform a financial transaction.
An example of the consent journey an end-user and the Data Recipient have to go through to obtain the JWT can be seen in the following diagram:
One solution to this challenge would be to use an API Tester that can perform the same actions as a real end-user: obtain the JWT in a pre-testing stage and feed it as an input to the security testing stages.
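Conceptually, such a tester runs in two stages: a pre-testing stage that obtains a fresh JWT, and a testing stage that reuses it on every request. A minimal Python sketch follows, assuming a hypothetical OAuth2 client-credentials token endpoint; the URLs and JSON field names are placeholders, not the actual FDX or F5 tester API:

```python
# Two-stage tester sketch: obtain a JWT first, then feed it to the
# security-testing stage. Token URL and field names are placeholders.
import json
import urllib.parse
import urllib.request

def bearer_header(jwt):
    """Authorization header reused by every security-test request."""
    return {"Authorization": "Bearer " + jwt}

def obtain_jwt(token_url, client_id, client_secret):
    """Pre-testing stage: request a fresh token so the gateway's JWT
    checks are exercised against a production-like configuration."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    req = urllib.request.Request(token_url, data=body)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

def run_security_test(api_url, jwt):
    """Testing stage: call the protected endpoint with the live token."""
    req = urllib.request.Request(api_url, headers=bearer_header(jwt))
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Because the token is minted immediately before the test run, the gateway's authentication path is exercised exactly as it would be by a real client, rather than being bypassed with a static credential.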
One such tool was built using the Open Source components described in the diagram below and is available on GitHub.
The API Tester is automatically deployed and run using GitHub Actions and Terraform Cloud. A full pipeline will go through the deployment of the GCP GKE infrastructure required to host the notional FDX back-end and the NGINX Ingress Controller API Gateway, the F5 XC WAAP (Web Application and API Protection), and the API Tester hosted on the F5 XC vk8s infrastructure.
A run is initiated by creating a repository branch; following the deployment and test run, a report is received via email.
Here's the API Tester in action:
F5 XC WAAP and NGINX API Gateway can provide the levels of protection required by the Financial Services industry, with the current article focusing on a possible security architecture for FDX, the North-American standard for Open Banking.
To test the security posture of the FDX Security Controls, a new API Tester framework was needed; the main challenge it solves is the automated generation of the JWT, following the same journey as a real end-user.
This allows the testing of deployments having a configuration similar to the one found in Production.
Who knows what an iRule is? iRules have been used by F5 BIG-IP customers for a quarter of a century! One of the most common use cases for iRules is security decisions. If you're not coming from a BIG-IP and iRules background, what if I told you that you could apply thousands of combinations of L4-L7 match criteria in order to take action on specific traffic? This is what a Service Policy provides, similar to iRules: the ability to match traffic and allow, deny, flag, or tune application security policy based on that match. I'm often asked, "Can F5 Distributed Cloud block ____ the same way I do with iRules?", and most commonly the answer is: absolutely, with a Service Policy.
Recently, I had a customer come to me with a challenge for blocking a specific attack based on a combination of headers. This is a common application security practice, specifically for L7 DDoS attacks, or even Account Take Over (ATO) attempts via Credential Stuffing/Brute Force Login. While F5 Distributed Cloud's Bot Defense or Malicious Users feature sets might be more dynamic tools in the toolbox for these attacks, a Service Policy is great for taking quick action. It is critical that you've clearly identified the match criteria in order to ensure your service policy will not block good traffic.
As stated earlier, the attack was identified by a specific combination of headers and values of these headers. The specific headers looked something like below (taken from my test environment and curl tests):
curl -I --location --request GET 'https://host2.domain.com' \
--header 'User-Agent: GoogleMobile-9.1.76' \
--header 'Content-Type: application/json; charset=UTF-8' \
--header 'Accept-Encoding: gzip, deflate, br' \
--header 'partner-name: GOOGLE' \
--header 'Referer: https://host.domain.com/'
The combination of these headers all had to be present, meaning, we needed an "and" logic for matching the headers and their values. Seems pretty simple, but this is where the conversation between the customer and myself came into play. When applying all of the headers to match as shown below, they were not matching. Can you guess why?
Figure A: Headers - Flat
The first thought that comes to mind is probably case sensitivity in the values. However, if we take a closer look specifically at the 'partner-name' header configuration, I've placed a transformation on this specific header. So the 'partner-name' isn't the problem.
Figure B: A transformer is applied to the request traffic attribute values before evaluating for a match.
Give up? The issue in this Service Policy configuration is the 'Accept-Encoding' header, specifically the ',' (comma) character in the value. In the F5 Distributed Cloud Service Policy feature, we treat comma-delimited values as separate headers, each with its individual value. The reason for this is that a request can have the same header multiple times, or it can have multiple values in a single header. To keep parsing consistent, headers with comma-delimited values are separated into multiple headers before matching.
I thought I could be smart when initially testing this, and added multiple values to a single header. This will not match: for one, because they are not separate headers with values, but also because multiple values within a single header in the service policy configuration create an "or" logic, and we're looking for an "and" logic across all headers and their exact values.
Figure C: Multiple Values in Single Header
Figure D : Multiple Values within a Single Header - "or" Logic for this header
In order to get the proper match with "and" logic across all headers and their values, we need to apply the same header name multiple times. It is important to note that the 'content-type' header has a ';' (semicolon), which is not a delimiter in F5 Distributed Cloud service policy logic, and will match just fine the way it is in the defined policy shown below.
Figure E: Multiple Headers defined, with individual values, will provide "and" logic for all headers, and their values.
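To make the logic concrete, here is a small Python model of the matching behaviour discussed above: "and" across distinct header entries, "or" among multiple values configured for a single entry. This is my own illustration, not F5 XC source code:

```python
# Model of service-policy header matching: "and" across distinct header
# rules, "or" among the allowed values of a single rule. Illustrative
# only, not F5 XC source code.

def header_matches(request_headers, name, allowed_values):
    """True if the request carries header `name` with ANY allowed value ("or")."""
    actual = [v for n, v in request_headers if n == name]
    return any(v in allowed_values for v in actual)

def policy_matches(request_headers, rules):
    """True only if EVERY rule matches ("and" across rules)."""
    return all(header_matches(request_headers, n, vals) for n, vals in rules)

# Figure E style: same header name repeated, one value per rule -> "and".
rules = [
    ("accept-encoding", {"gzip"}),
    ("accept-encoding", {"deflate"}),
    ("partner-name", {"GOOGLE"}),
]
req = [("accept-encoding", "gzip"), ("accept-encoding", "deflate"),
       ("partner-name", "GOOGLE")]
print(policy_matches(req, rules))  # True: every rule satisfied -> block (403)

# Figure D style: one rule, many values -> "or" within that header.
print(header_matches([("accept-encoding", "br")],
                     "accept-encoding", {"gzip", "deflate", "br"}))  # True
```

Repeating the header name with one value per rule is what turns the comparison into an "and" across every value, which is exactly what the customer scenario required.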
In these tests, I am going to first provide an exact match to block the traffic. When we match, we provide a 403 response code back to the client. Within the individual Load Balancer objects of F5 Distributed Cloud, you can customize the messaging that comes along with the 403 response code, or any response code for that matter. For my tests, I'll simply use curl and update the different headers. After this initial successful block, I'll show a few examples of changing the headers sent with curl. For the "and" logic, any change to the headers should result in a 200 response code. For the "or" logic, it'll depend on how I change the headers.
In this testing section, the service policy is configured like Figure E above.
All values are an exact match; with "and" logic, the 403 response code identifies the block from F5 Distributed Cloud.
When removing the 'g' character from gzip, the "and" logic no longer matches, as not every value is exact. This results in a 200 response code being returned from the origin server through F5 Distributed Cloud.
In this testing section, the service policy is configured like Figure D above.
This is an exact match, and the Service Policy blocked the request, sending a 403 response code back to the client
With or logic of the Accept-Encoding header, one of the values must match. Since I removed the first letter of every value, there was not a match, and the F5 Distributed Cloud passed the traffic to the origin server. The origin and the F5 Distributed Cloud returned a 200 response code.
When adding the 'g' back to gzip, but leaving all other values missing their first character, we once again get a block at the service policy, and a 403 response code. Again, this is 'or' logic, so only 1 value must match.
A Service Policy is a very powerful engine within F5 Distributed Cloud. We've only scratched the surface of service policies in this article as it pertains to header matching and logic. Other match criteria examples are IP Threat Category (Reputation), ASN, HTTP Method, HTTP Path, HTTP Query Parameters, HTTP Headers, Cookies, Arguments, Request Body, and so on. The combination of these match criteria, and the order of operations of each service policy rule, can make a huge difference in the security posture of your application. These capabilities within the application layer are critical to the security of your application services. As F5 Distributed Cloud is your strategic point of application delivery and control, I hope you're able to use service policies to elevate your application security posture.
An important and long-standing need for enterprise storage is the ability to recover from disasters through both rapid and easy access to constantly replicated data volumes. Beyond reducing corporate downtime from recovery events, the replicated volumes are also critical for cloning purposes to facilitate items such as research into data trends or to perform advanced analytics on enterprise data.
A modern need exists to quickly replicate data across a wide breadth of sites, leveraging diversity in the major cloud providers such as AWS, Azure, and Google. The ability to simultaneously replicate critical data to several of these hyperscalers addresses a major industry concern: vendor lock-in. Modern data stores must be efficiently and quickly saved to, and acted upon, using whichever cloud provider an enterprise desires. Principal reasons for this hybrid cloud requirement include maximizing return on investment by shopping for attractive price points or more 9's of reliability.
Although major cloud providers may have their own VPN-style solutions to support data replication (for example, Microsoft Azure VPN Gateway deployments), selecting concurrent, differing solutions can quickly become an administrative burden. Each cloud provider offers slightly distinctive networking and security wares. A critical concern is the shortage of advanced skill sets often required to maintain the configuration and diagnostic processes for competing cloud storage solutions. With flux to be expected in staffing, the long-term effort of trying to stitch disparate cloud technologies into one cohesive offering for the enterprise has proven difficult and costly.
This is the precise multi-cloud strategy where F5 Distributed Cloud (XC) can complement industry leading enterprise-grade storage solutions from a major player like NetApp. With F5 Distributed Cloud Network Connect, multiple points of presence of an enterprise, including on-prem data centers and a multitude of cloud properties, are seamlessly tied together through a multi-cloud network (MCN) offering that leverages a 20 Tbps backbone. Service turn up measured in minutes, not days.
An excellent, complementary use of the F5 XC hybrid secure network offering is NetApp’s modern approach to managing enterprise data estates, NetApp BlueXP. This unified, cloud-based control plane from NetApp allows an enterprise to manage volumes both on-prem and in major cloud providers and in turn set up high-value services like data replication. Congruent to the simple workflows F5 XC delivers for secure networking setup, NetApp BlueXP also consists of intuitive workflows. For instance, simply drag one volume onto another volume on a point-and-click working canvas and standard SnapMirror is enacted. F5 XC can underpin the connectivity requirement of a multi-cloud hybrid environment by handling truly seamless and secure network communications.
The first step in demonstrating the F5 and NetApp solutions working in concert to provide efficient disaster recovery of enterprise volumes was to set up F5 XC customer edge (CE) sites within Azure, AWS and On-Prem data center locations. The CE is a security demarcation point, a virtualized or dedicated server appliance, allowing highly controlled access to key enterprise resources from specific locales selected by the enterprise. For instance, a typical CE deployment for MCN purposes is a 2-port device with inside ports permitting selective access to important networks and resources.
Each CE will automatically multi-home to geographically close F5 regional edge (RE) sites; no administrative burden is incurred and no networking command-line workflows need be learned, as CE deployments are wizard-based workflows with automatic encrypted tunnels established. The following screenshot demonstrates, in the red highlighted area, that a sample Azure CE site freshly deployed in the Azure Americas-2 region has automatic encrypted tunnels set up to New York and Washington, DC RE nodes.
Regardless of the site, be it an AWS VPC, an AWS services VPC supporting transit gateway (TGW), Azure VNET or an on-prem location, the net result is always a rapid setup with redundant auto-tunneling to the F5 international fabric provided by the global RE network. Other CE attachment models can be invoked, such as direct site-to-site connectivity that bypasses the RE network, however the focus of this document is the most prevalent approach which harnesses the uptime and bandwidth advantages offered by RE gluing together of customer sites.
With connectivity available between inside interfaces of deployed CEs, standard firewall rules easily added, and service insertion of third-party NGFW technology such as Palo Alto firewall instances supported, the plumbing to efficiently interconnect NetApp volumes for ongoing replication is now in place.
The objective for the F5 XC deployment was to utilize the NetworkConnect module to allow layer 3 connectivity between inside ports of CEs regardless of site type. In other words, connectivity between networked resources at on-prem, AWS, or Azure sites is enabled quickly with a consistent and simple guided workflow. The practical application of this layer-3 style of MCN was the connectivity of NetApp volumes, as depicted in the following diagram.
A widely embraced enterprise-class file storage offering is the industry-leading NetApp ONTAP solution. When deployed on-prem, the solution allows shared file storage, often using NFS or SMB protocols for file storage, frequently with multiple nodes used to create a storage cluster. Although originally hardware appliance-oriented in nature, modern incarnations of on-prem ONTAP solutions can easily and frequently utilize virtualized appliances.
Both NetApp and F5, in keeping with modern control plane industry trends, have moved towards a centralized, portal-based approach to configuration, whether it be storage appliances (NetApp) or multi cloud networking (F5). This SaaS approach to configuration and monitoring means control plane software is always up-to-date and requires no day-to-day management. In the case of NetApp, this modern control plane is instantiated with the BlueXP cloud-based portal.
The sample BlueXP canvas displayed above demonstrates the diversity of data estate entities that can be managed from one workspace, with volumes both on-premises and AWS cloud-based, along with Amazon S3 storage seen.
NetApp offers a widely used cloud-based implementation of file storage, Cloud Volumes ONTAP (CVO) which serves as an excellent repository for replicating traditional on-premises volumes. In the demonstration environment both AWS and Azure were harnessed to quickly set up CVO instances. For BlueXP to establish a workspace involving a managed CVO instance, a “Connector” is deployed in the AWS VPC or Azure VNet. This connector is the entity which facilitates the BlueXP control plane management functions for hybrid-cloud storage.
Upon establishing on premises to AWS and Azure connectivity, enabled by the F5 Secure XC Customer Edge (CE) nodes deployed at sites, a vast and mature range of features are provided to the BlueXP operator.
As highlighted above, a core function of the BlueXP services is replication, in this workspace one can see the on-premises cluster being replicated automatically to an Azure CVO instance.
The result of combining the F5 Distributed Cloud multi-cloud networking support with the NetApp ability to safeguard mission critical enterprise data, anywhere, was found to be a smooth, intuitive set of guided configuration steps. Within an hour, protected inside networks were established in two popular cloud providers, AWS and Azure, as well as in an existing on premises data center. With the connectivity encrypted and standard firewall rules available, including the option to run data flows through inline third-party NGFW instances, the focus upon practical usage of the cloud infrastructure could commence.
A multi-site file storage solution was deployed using the NetApp BlueXP SaaS console, whereby an on premises ONTAP cluster received local files through the NFS protocol. To demonstrate the value of a multi-cloud deployment, the F5 XC NetworkConnect module allowed real-time file replication of the on-prem cluster contents to separate and independent volumes securely located within an AWS VPC and Azure VNet, respectively. Using F5 XC, the target networks within the cloud providers were highly secured, only permitting access from the data center.
The net result is a solution that can accommodate disaster recovery requirements, for instance a clone of the AWS or Azure volumes could be created and utilized for business continuity in the event of data corruption or disk failure on premises. Other use cases would be to clone the cloud-based volumes for research and development purposes, analytics, and further backup purposes that could utilize snapshotting or imaging of the data. The inherent redundancy offered by using multiple, secured cloud instances could be enhanced easily by expanding to other hyperscalers, for instance Google Cloud Platform when business purposes dictate such a configuration is prudent.
A simple and intuitive simulator is available to walk users quickly through the setup of an F5 Distributed Cloud MCN deployment such as the one reflected in this article. The simulator can be found here.
For a complete, comprehensive walk-through of F5 Distributed Cloud Multi-Cloud Networking (MCN), including setup steps, please see this DevCentral article - Multi-Cloud Networking Walkthrough with Distributed Cloud.
As part of release cycle management, F5 Distributed Cloud (F5 XC) continuously releases new features. The July[1] upgrade released two new features in Web Application and API Protection (WAAP).
Let’s dive into them one by one.
Security dashboards capture different types of logging metrics, and sometimes users need these logs for offline analysis. The WAAP Exports feature addresses this by exporting the latest 500 security-related logs in CSV format. Users can export logs from the Events, Incidents, and Requests tabs of the security dashboard.
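Once exported, the CSV lends itself to offline analysis with standard tooling. The Python sketch below counts blocked requests per source IP; the column names (action, src_ip) are hypothetical and will differ from the actual export schema:

```python
# Offline analysis of an exported security-events CSV. The column names
# ("action", "src_ip") are hypothetical, not the actual export schema.
import csv
import io
from collections import Counter

SAMPLE = """action,src_ip
block,198.51.100.7
allow,203.0.113.9
block,198.51.100.7
"""

def top_blocked(csv_text, n=5):
    """Count blocked requests per source IP from the exported rows."""
    rows = csv.DictReader(io.StringIO(csv_text))
    hits = Counter(r["src_ip"] for r in rows if r["action"] == "block")
    return hits.most_common(n)

print(top_blocked(SAMPLE))  # → [('198.51.100.7', 2)]
```

The same pattern extends naturally to grouping by attack signature, path, or time bucket once the real export columns are known.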
Production security dashboards show plenty of logging information to understand the current security posture of apps and APIs for ongoing traffic. Owners can go through them to analyze the traffic and decide whether the ongoing data is malicious or poses threats. This process is time-consuming and needs human expertise in traffic analysis. Users are looking for a top-level overview of how many attacks were seen in a specific period compared to the previous period.
The WAAP Trends feature in the security dashboards of the HTTP load balancer enables users to view the change in metrics (up or down) compared with the previous period. Incoming traffic is analyzed using internal tools to determine the sentiment (positive, negative, or neutral), which is displayed in the UI, saving a lot of time. Users can instantly check the sentiment and, if needed, update the existing configurations to safeguard their applications.
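The period-over-period comparison behind such a trend metric is straightforward to illustrate. The following Python sketch is my own model of the idea, not the internal XC analysis:

```python
# Sketch of a period-over-period trend: percent change of a metric
# versus the previous period, with a simple up/down/flat label.
# Illustrative only; not the internal F5 XC implementation.

def trend(current, previous):
    """Return (percent_change, direction) for a metric between two periods."""
    if previous == 0:
        # No baseline: flat if still zero, otherwise strictly up.
        return (float("inf") if current else 0.0,
                "flat" if current == 0 else "up")
    pct = (current - previous) / previous * 100
    direction = "up" if pct > 0 else "down" if pct < 0 else "flat"
    return (round(pct, 1), direction)

print(trend(120, 100))  # → (20.0, 'up'): 20% more attacks than last period
print(trend(80, 100))   # → (-20.0, 'down')
```

A dashboard only has to render the direction and magnitude; the value to the operator is not having to eyeball two periods of raw logs to reach the same conclusion.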
As I was writing this article, I kept remembering the famous quote "The trend is your friend", which conveys the importance of identifying the current trend in safeguarding your applications.
I hope this article has provided a summary of newly implemented features of WAAP events export and trends which focus on logging and security dashboards.
Stay tuned for more feature articles. For more details, refer to the links below:
F5 Distributed Cloud Services (XC) provide full REST APIs to enable automation of the deployment and management of multi-cloud infrastructure. Organizations looking to implement infrastructure-as-code operations for modern apps, and to distribute and secure multi-cloud deployments, can utilize and adapt the Terraform and Ansible scripts in the many articles on F5 DevCentral that cover automation topics for F5 Distributed Cloud. Typically these scripts automate and help to consistently:
This article focuses on only the Deliver part of the distributed app lifecycle, where using Terraform script with F5 Distributed Cloud Services organizations can easily deploy and configure multi-cloud networking & app connectivity of their distributed applications that span across:
The easiest place to get started with automation of Multi-Cloud Networking (MCN) and Edge Compute scenarios is by cloning the corresponding GitHub repositories from the Demo Guides, which include sample applications and provide opportunities to see automation scripts in action. The Terraform scripts within the following Demo Guides can be used as templates: customize them to your organization's requirements to automate repetitive tasks or the creation of resources, with just a quick update of the variables unique to your environment.
Multi-cloud networking use-cases Demo Guide where you can use Terraform to enable connectivity for multiple clouds and explore using HTTP and TCP load balancers to connect the provided sample application. You can use the provided scripts in the GitHub repositories to deploy the required sample app, and other components representative of a traditional 3-tier app architecture (backend + database + frontend).
Furthermore, the scripts provide flexibility of choosing the target clouds (AWS or Azure or both), which you can adapt to your environment and app topologies based on which clouds the different app services should be deployed to. Use the guide to get familiar with how to update variables for each cloud configuration, so that you can further customize to your environment to help automate and simplify deployment of the networking topologies across Azure and AWS, ultimately saving time and effort.
Edge Compute for Multi-cloud Apps Demo Guide where Terraform scripts help automate deployment of the application infrastructure across AWS (sample app and other components representative of a traditional 3-tier app architecture – backend, database, frontend). The result is a multi-cloud architecture, with components deployed on Microsoft Azure and Amazon AWS.
By adapting the included Terraform script you can easily deploy and securely network app services to create a distributed app model that spans across:
In the process you get familiar with the configuration of TCP and HTTP Load Balancers, create a vK8s that spans multiple locations / clouds, and deploy distributed app services across those locations with the help of the Terraform scripts.
Deploying high-availability configurations Demo Guide is an important resource for getting familiar with and automating High-Availability (HA) configuration of backend resources. In this guide, as an example, you can use a PostgreSQL database HA deployment on a CE (Customer Edge), a common use-case leveraging F5 Distributed Cloud Customer Edge for deploying a backend. First, deploy the AWS Site environment, followed by deployment of a vK8s, and then customize and run the Bitnami Helm chart to configure a multi-node PostgreSQL deployment.
Of course, you can leverage this type of automation with a Helm chart of your choice to configure a different backend resource or database type. Adapt to your environment with a few changes to the script variables, and feel free to combine with scripts from the other two guides to deploy the app(s) and configure networking (MCN) should you choose to automate the entire workflow.
Terraform scripts represent ready-to-use code, which you can easily adapt to your own apps, environments, and services, or extend as needed. The baseline for most scripts is the Volterra Provider, with the required edits and updates of the variables in Terraform. These variables are special elements that allow us to store and pass values to different parts of modules without changing the code in the main configuration file. Variables provide the flexibility to update the settings and parameters of the infrastructure, and they facilitate its configuration and support.
Variables are stored and can be found in .tf files of the respective folders. Using the Deploying high-availability configurations Demo Guide as example, you change the environment variable values related to your app, which you can find in the terraform folder and the application subfolder. Open the var.tf file to update the values:
More detailed information on variables can be found here.
In summary, Demo Guide repositories include Terraform scripts used to help automate different operations, including deployment of the environment required for the sample distributed app, as well as deploying the app itself. You can take a closer look at the Demo Guide use-cases together with their respective Terraform scripts, run a quick test to get familiar with the use-case, and then adapt the scripts to your environment and your applications.
Whether your app has high availability requirements or distributed multi-cloud infrastructure, using Terraform with F5 Distributed Cloud Services can simplify deployment, automate infrastructure on any cloud and save time and effort managing and securing app resources in any cloud or data center.
Edge Compute for Multi-cloud Apps Demo Guide
Terraform scripts & assets for the Edge Compute Demo Guide
https://registry.terraform.io/providers/volterraedge/volterra/latest/docs
To prevent attackers from exploiting mobile apps to launch bots, F5 provides customers with the F5 Distributed Cloud (XC) Mobile SDK, which collects signals for the detection of bots. To gain this protection, the SDK must be integrated into mobile apps, a process F5 explains in clear, step-by-step technical documentation. Now, F5 provides an even easier option: the F5 Distributed Cloud Mobile SDK Integrator, a console app that performs the integration directly into app binaries without any need for coding, which means no need for programmer resources and no integration delays.
The Mobile SDK Integrator supports most iOS and Android native apps. As a console application, it can be tied directly into CI/CD pipelines to support rapid deployments.
While motivations for using SDK Integrator may vary, below are some of the more common reasons:
The work of the SDK Integrator is done through two commands: the first command creates a configuration profile for the SDK injection, and the second performs the injection.
$ python3 ./create_config.py --target-os Android --apiguard-config ./base_configuration_android.json --url-filter "*.domain.com/*/login" --enable-logs --outfile my_app_android_profile.dat
In Step 1, apiguard-config lets the user specify the base configuration to be used in integration. With url-filter we specify the pattern for URLs that require Bot Defense protection; enable-logs allows APIGuard logs to be seen in the console; and outfile specifies the name of this integration profile.
$ java -jar SDK-Integrator.jar --plugin F5-XC-Mobile-SDK-Integrator-Android-plugin-4.1.1-4.dat --plugin my_app_android_profile.dat ./input_app.apk --output ./output_app.apk --keystore ~/my-key.keystore --keyname mykeyname --keypass xyz123 --storepass xyz123
In Step 2, we specify which SDK Integrator plugin and configuration profile should be used. In the same step, we can optionally pass parameters for app signing: keystore, keyname, keypass, and storepass. The output parameter specifies the resulting file name. The resulting .apk or .aab file is a fully integrated app, which can be tested and released.
Injection steps for iOS are similar. The commands are described in greater detail in the SDK Integrator user guides distributed with the SDK Integrator.
In order to thwart potential attackers from capitalizing on mobile apps to launch automated bots, the F5 Distributed Cloud Mobile SDK Integrator seamlessly incorporates the SDK into app binaries, completely bypassing the need for coding and making the process easy and fast.
To add to Nikoolayy1's comment, F5 can generate API Definitions within XC, and we are working on integration for BIG-IP and nginx deployments. This will allow traffic that does not use XC for client traffic to provide the Swagger/OpenAPI files and security assessments available for XC Load Balancers.
Currently, you use a logger on the proxy (BIG-IP or nginx) to gather the request and response data and that is then sent in to XC via a separate service. The advantage here is that you don't need to change the traffic flow of your client traffic.
If you're interested in learning more, please PM me or reach out to your local account team.
As I mentioned in https://community.f5.com/t5/codeshare/generating-irule-logs-emails-and-reports-for-shadow-api/ta-p/313912, only F5 Distributed Cloud (XC) can do this; as of now there is no native option on F5 BIG-IP.
Date: Tuesday, September 19, 2023
Time: 10:00am PT | 1:00pm ET
Speaker: Krista Baum
What's the webinar about?
F5's SaaS services delivery platform, F5 Distributed Cloud, has expanded. We'll answer common customer questions like: How does this SaaS platform work with other F5 products and services? What best practices help securely deliver my applications? And we'll look at how pairing F5 Distributed Cloud Services with your existing security services can help you become an application security hero in your organization.
In this session, we will:
Note: If you can't make this session, please still register and we will send you a link to the on-demand recording.
Date: Thursday, September 7, 2023
Time: 10:00am PT | 1:00pm ET
Speaker: Kyle Twenty, F5 Solutions Engineer II
What's the webinar about?
Whether an application is hosted in one or more locations, is migrating, or has advanced delivery and security needs, it's crucial to provide effective and consistent security. Fortunately, application owners can use the F5 Distributed Cloud Platform and portfolio of services to engage multi-cloud networking for on-premises and cloud-based application deployments to provide consistent security operations.
In this session, we will:
Note: If you can't make this session, please still register and we will send you a link to the on-demand recording.
Date: Wednesday, August 30, 2023
Time: 10:00am PT | 1:00pm ET
Speaker: Andy Conley, F5 Solutions Engineer III
What's the webinar about?
As application development continues to advance, API frameworks must evolve to meet the increased security burden placed upon them. API security includes the rules, protections, and controls used to secure them as well as the essential observability and visibility provided through discovery. Together, API security and discovery accelerate development and delivery of services.
In this session, we will:
Explore F5 Distributed Cloud's API Discovery functionality and its ability to drive learning and visibility of discovered, inventoried, and shadow APIs.
Apply API security protections delivered at the edge and securely deliver API endpoints with visibility into their delivery.
Layer complementary security strategies like Malicious User Detection and Mitigation, Bot Defense strategies, and other Service Policy services to protect APIs.
Note: If you can't make this session, please still register and we will send you a link to the on-demand recording.
F5 Distributed Cloud’s Customer Edge (CE) software is an incredibly powerful solution for Multi-Cloud Networking, Application Delivery, and Application Security. The Customer Edge gives you flexibility on how routing is distributed across a multi-cloud fabric, how client-side and server-side connections are handled, and ultimately the injection of highly effective L4-L7 services into a proxy architecture. Best yet, these powerful data planes are 100% managed from a central control-plane, F5 Distributed Cloud’s Console, giving a single pane of glass for configuration management and observability.
In the image above, we're highlighting some of the common topology types that an enterprise may choose to deploy with F5 Distributed Cloud. You can see the CE is very flexible. You can combine any number of these topology flows depending on your network and application service requirements. VIP and SNAT are common proxy terminologies, and while we can do pure L3 routing via the CE software, the ability to have a geographically dispersed proxy architecture is extremely powerful (flows 3 & 4).
That all sounds wonderful, but… there are a lot of details surrounding the deployment of the CE software. Details matter. The options at hand must be thoroughly examined for the best deployment model that fits your enterprise use case, existing network, cloud architecture(s), performance and scale, day 2 operations, and personnel/team expertise. In this article, I hope to provide an overview of how to attach CEs to your network, how traffic flows from the network to the CEs for the different attachment models, and how CEs can benefit your enterprise.
First, we must understand in which environments a CE can be deployed. Keep in mind that the CE software is the same software deployed in our Regional Edges (REs). The difference is that the REs are the SaaS data plane of F5 Distributed Cloud, which F5 maintains and scales on behalf of enterprises consuming services on the REs, whereas the Customer Edge (CE) software is deployed within the enterprise environment. The CE software can be installed on 4 different platform types:
In this article, we will focus on the first 3 options and leave the Kubernetes attachment for another day, as it is a little different from the other 3. If you're familiar with F5 Distributed Cloud, you may also know that the CEs have two personas: one is Mesh, for the use cases mentioned above, and the other is AppStack, which turns the CE into its own k8s cluster that you can bring workloads to. We will not be focusing on AppStack in this article.
Deployment Options Summary
As you can see, there are 5 ways of getting traffic to a Customer Edge site (cluster) and the individual nodes making up that site/cluster. These 5 deployment models are grouped into 3 attachment types: Layer 3 attached (blue), Layer 2 attached (purple), and externally attached (green). You may have also noticed there are 3 CE nodes in each of the diagrams. If a single node is deployed, there is less to think about in terms of scale and failover, but these attachments can still be utilized. To achieve High Availability when deploying multiple nodes, F5 Distributed Cloud requires 3 nodes. The 3 nodes are required to form a cluster due to the underlying software stack.
This is personally my preferred method. I am a big believer in Equal Cost Multi-Path Routing (ECMP) to establish active/active/active pathing for traffic traversing the F5 Distributed Cloud Customer Edge software. However, not every environment may have routing available, especially dynamic routing via BGP. This may be due to limitations on the existing network, or comfort level of the individuals deploying the software with routing. However, if you are comfortable with routing, and the environment supports routing, this can be a great model for your enterprise. Both models shown above, static and BGP, support the expansion of a cluster via worker nodes. These worker nodes can provide horizontal scale and performance of the site/cluster.
When statically routed, depending on your route/switch fabric, you may not get the desired effect. This could be because of a lack of support for ECMP, or because the route persists even when the network cannot ARP for the next hop. However, setting up static routing is simple, quick, and takes less network expertise to accomplish.
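To make the ECMP point concrete, here is a minimal Python sketch (illustrative only — the hash function and node addresses are invented, and real routers use vendor-specific hashing) of how flow hashing across three equal-cost next hops keeps a given flow pinned to one CE node while spreading different flows across all three:

```python
import hashlib

# Invented next hops: the SLO interface address of each CE node.
NEXT_HOPS = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]

def ecmp_next_hop(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Pick an equal-cost next hop by hashing the flow's 5-tuple.

    The same flow always maps to the same next hop, while different
    flows spread across all three CE nodes.
    """
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(NEXT_HOPS)
    return NEXT_HOPS[index]

# A single flow is stable across calls...
flow = ("198.51.100.7", 51515, "10.10.10.10", 443)
assert ecmp_next_hop(*flow) == ecmp_next_hop(*flow)

# ...while many flows spread across the nodes.
hops = {ecmp_next_hop("198.51.100.7", p, "10.10.10.10", 443) for p in range(2000, 2100)}
print(sorted(hops))
```

This per-flow stickiness is why ECMP gives you active/active/active pathing without breaking long-lived connections.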
In the picture below, you’ll notice we’re using custom VIPs associated to 4 “color” applications/FQDNs. These custom VIPs act as loopbacks to a traditional router as they are locally significant to the F5 Distributed Cloud Customer Edge nodes. The 3 static routes configured in the network target the next hop of each Customer Edge node’s SLO or SLI interface to reach the custom VIP. Once the connection is established to the custom VIP, the software matches the application on criteria higher in the stack, such as port, SNI, or host information.
Layer 3 Attached - Static Routing
This works the same way for the BGP-attached model, except BGP is dynamic in nature. Each custom VIP is injected as a /32 route into the route/switch fabric. If for whatever reason a node becomes unavailable, its custom VIP advertisement is withdrawn from the route/switch fabric.
Layer 3 Attached - BGP Routed
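The withdraw-on-failure behavior can be sketched in a few lines of Python (a toy route table, not a BGP implementation; addresses are invented):

```python
# Toy route table: each CE node advertises the custom VIP as a /32.
rib = {"10.10.10.10/32": {"192.0.2.11", "192.0.2.12", "192.0.2.13"}}

def withdraw(prefix, next_hop):
    """Remove a failed node's advertisement; the prefix disappears
    entirely once its last advertiser is gone."""
    rib[prefix].discard(next_hop)
    if not rib[prefix]:
        del rib[prefix]

# One node fails: its path is withdrawn, two equal-cost paths remain.
withdraw("10.10.10.10/32", "192.0.2.12")
assert rib["10.10.10.10/32"] == {"192.0.2.11", "192.0.2.13"}
print(rib)
```

The fabric converges on its own as advertisements appear and disappear, which is the operational advantage over static routes.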
A layer 2 attached model might be the most common for customers who are familiar with many other network appliances such as firewalls or load balancers, even BIG-IP. Think of traffic groups and floating IPs in BIG-IP. These concepts typically utilize a First Hop Redundancy Protocol (FHRP) known as VRRP. In the F5 Distributed Cloud CE software, VRRP utilizes a VIP as a virtual address that is shared between the 3 nodes. However, the VIP is only active on one of the nodes, which creates an Active/Standby/Standby topology. The active node's MAC address is what is returned during the ARP process for the VIP. If a different node becomes active, the new active node's MAC is associated with the VIP and is updated in the broadcast domain via a process called Gratuitous ARP (GARP).
Today, in F5 Distributed Cloud, if you're using VRRP for your attachment, the VIP becomes active on a node at random. We do not expose any priority settings for the VIP. If using multiple custom VIPs and VRRP, this allows for the potential of each node in the cluster to be active for one or more of the VIPs. Meaning, you could have traffic actively utilizing all the nodes, but each node is only active for a specific VIP (or VIPs) and the subset of apps associated with it.
In our picture below, we again have 4 color applications that are randomly active across each of the 3 nodes. Blue and purple are active on node0, red is active on node1, and green is active on node2. Take a close look at the ARP table and how the MAC addresses of the physical SLO interface map to the custom VIPs. Lastly, worker nodes participate in VRRP, and can be utilized to scale the cluster horizontally.
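The random VIP placement and failover described above can be modeled with a short, purely illustrative Python sketch (this is not how the CE software is implemented; node and VIP names are invented):

```python
import random

NODES = ["node0", "node1", "node2"]
VIPS = ["blue", "purple", "red", "green"]  # one custom VIP per "color" app

def elect_owners(nodes, vips, rng):
    """Each VIP becomes active on one node at random (no priority knob is exposed)."""
    return {vip: rng.choice(nodes) for vip in vips}

def fail_node(owners, nodes, failed, rng):
    """On failure, every VIP owned by the failed node moves to a survivor
    (announced to the broadcast domain via gratuitous ARP in real VRRP)."""
    survivors = [n for n in nodes if n != failed]
    return {vip: (rng.choice(survivors) if owner == failed else owner)
            for vip, owner in owners.items()}

rng = random.Random(7)          # seeded so the sketch is repeatable
owners = elect_owners(NODES, VIPS, rng)
owners = fail_node(owners, NODES, "node0", rng)
assert "node0" not in owners.values()
print(owners)
```

Note that even though every node may carry traffic for some VIP, each individual VIP is Active/Standby/Standby — the key difference from the ECMP model.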
The two external attachments for scaling services have been around for a very long time and have grown in popularity as enterprises have taken their tooling to cloud. When moving tooling from on-prem to cloud, the lack of L2 technologies such as ARP/GARP, forced many enterprises to re-think how traffic is routed to/through the tooling. This tooling includes Firewalls, Next-Generation Firewalls, Proxies, Load Balancers, Web Application Firewalls, API Gateways, Access Proxies and Federation tooling, and so on….
With an external Load Balancer (LB), traffic can be sent to/through the Customer Edge software via an L4 or L7 load balancer. If an L4 LB is chosen, then depending on the LB technology, the source IP will likely be lost. If L7, you can use headers to preserve the source IP information, but if using TLS, you'll need to manage certificates at the L7 LB, which may not be operationally efficient for your organization. We can scale the cluster with worker nodes by adding them to the external LB pool. In this deployment model, custom VIPs are less needed, and the SLO or SLI interfaces can be the target.
External Attachment - In-Path Load-Balancer
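The source-IP trade-off at an L7 LB can be shown concretely. Here is a hedged sketch of the standard X-Forwarded-For technique (hypothetical request handling, not any specific LB product):

```python
def l7_forward(request, client_ip):
    """An L7 LB terminates the client connection, so the backend would
    otherwise see the LB's IP; the original client is preserved by
    appending it to the X-Forwarded-For header."""
    headers = dict(request.get("headers", {}))
    prior = headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    return {**request, "headers": headers}

req = {"path": "/login", "headers": {}}
fwd = l7_forward(req, "198.51.100.7")
print(fwd["headers"]["X-Forwarded-For"])  # 198.51.100.7
```

An L4 pass-through needs no such header, but as noted above, whether the source IP survives depends on the LB technology in use.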
Like the inline LB, we can use an out-of-path LB via DNS. This could be as simple as round-robin A records, or advanced Global Server Load Balancing (GSLB), which incorporates configured intelligence and health checking into the logic of which IP is sent in response to a DNS query. In this model, while health checking is available, traffic flows are still subject to DNS caching and TTLs for failover. As with the inline LB, worker nodes can be used to scale the cluster, and the node interface IPs can be used as the DNS LB targets.
External Attachment - Out-of-Path DNS LB
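The TTL caveat can be illustrated with a toy caching resolver in Python (invented records; real resolvers and GSLB logic are far more involved): a cached answer keeps being served — possibly pointing at a failed node — until the TTL expires.

```python
class CachingResolver:
    """Round-robin A records behind a TTL cache: failover waits on the TTL."""

    def __init__(self, records, ttl):
        self.records, self.ttl = list(records), ttl
        self.i = 0
        self.cache = {}  # name -> (answer, expiry time)

    def resolve(self, name, now):
        answer, expiry = self.cache.get(name, (None, -1))
        if now < expiry:
            return answer        # cached answer persists until expiry
        answer = self.records[self.i % len(self.records)]
        self.i += 1
        self.cache[name] = (answer, now + self.ttl)
        return answer

r = CachingResolver(["192.0.2.11", "192.0.2.12", "192.0.2.13"], ttl=30)
a = r.resolve("app.example.com", now=0)
assert r.resolve("app.example.com", now=10) == a   # still cached: same node
b = r.resolve("app.example.com", now=40)           # TTL expired: next node
assert b != a
print(a, b)
```

This is why short TTLs are common in DNS-based failover designs: they bound how long clients can keep resolving to an unhealthy node.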
The F5 Distributed Cloud Customer Edge software is a flexible component of the platform. The CE takes F5 Distributed Cloud from a pure SaaS solution to a multi-cloud fabric for Application Delivery and Security. Depending on the architecture of an enterprise's different "service-centers," such as data centers and clouds, the Customer Edge software can attach to the network in many ways. Please consult your account team and F5 Distributed Cloud specialist for collaborative details on what may work best for your enterprise's network and use case(s).
Deployment Options Summary - Details
F5 Distributed Cloud Services offers many secure multi-cloud networking features. In the following video, I demonstrate how to connect a secure mesh customer edge (CE) site running on VMware and common hardware. This on-prem CE is joined to a site mesh group of three other CEs, two of which run on the public cloud providers AWS and Azure. Secure Mesh CE is a newly enhanced feature in Distributed Cloud that allows CEs not running in public cloud providers to run on hardware with unique and different configurations. Specifically, it's now possible to deploy site mesh transit networking to all CEs having one, two, or more NICs, with each CE having its own unique physical networking configuration.
See my article on Secure Mesh Site Networking to learn how to set up and configure secure mesh sites.
In addition to secure mesh networking, on-prem CEs can be deployed without app management features, giving organizations the flexibility to conserve deployed resources. Organizations can now choose whether to deploy AppStack CEs, where the CEs can manage and run K8s compute workloads deployed at the site, or use networking-focused CEs, freeing up resources that would otherwise be used managing the apps. Whether deploying an AppStack or Secure Mesh CE, both types support Distributed Cloud's comprehensive set of security features, including DDoS, WAF, API protection, Bot, and Risk management.
Secure MCN deployments include the following capabilities:
In the following video, I introduce the components that make up a Secure MCN deployment, and then walk through configuring the security features and show how to observe app performance and remediate security related incidents.
0-3:32 - Overview of Secure MCN features
3:32-9:20 - Product Demo
Technical Article: Secure Mesh Site Networking
Product Documentation: How-To Create Secure Mesh Sites
Product Information: Distributed Cloud Network Connect
Product Information: Distributed Cloud App Connect
Use this guide with the provided GitHub repository of walk-through steps or Ansible scripts to explore the following scenario: deliver user applications to the edge securely with low latency and minimal compute per request at the same time with F5 Distributed Cloud (XC) Content Delivery Network (CDN) and Web App and API Protection (WAAP).
F5 CDN is built on a secure global private network to distribute apps, APIs, and services regionally and to do so securely by leveraging many of the “out-of-the-box” WAAP features of the XC Platform.
Points of presence (PoPs) on the F5 Global Network make it possible to run workloads close to users to improve application performance. Apps can be deployed, managed, and secured everywhere: across multiple public and private clouds, edge locations, and on-premises environments.
This demo guide explores the capabilities of F5 XC CDN features integrated with WAAP in both a step-by-step guide and by using automation scripts in Ansible. The goal is to deploy a sample app “BuyTime Online”, representative of a typical scenario of an e-Commerce online store. This example combines CDN with WAAP, allowing organizations that use F5 XC to deliver their apps and services globally while protecting their deployments with WAF policies.
The guide offers two ways to complete the configuration:
First, an HTTP Load Balancer is set up with an App Firewall protection for the sample app. Redirect to HTTPS is enabled and an origin pool for the services is created. Blocking mode for the App Firewall is switched on meaning that the Distributed Cloud WAF will take mitigation action on offending traffic.
Afterward, we configure CDN, resulting in a high-performing low-latency content delivery via the Global Network. The resulting CDN-backed app is tested to showcase lowering the load time for the app with CDN, while some of the attempted malicious payloads are blocked by WAAP services in CDN.
In summary, this demo guide supports integrated security and content caching capabilities provided by F5 XC CDN and WAAP. Among the many benefits of this combined solution, there is the capability to use security policies anywhere across platforms and clouds, while ensuring content caching and containerized edge-based workloads for apps running close to users via the Global Network.
Visit the following resources for more information:
Multi-cloud networking (MCN) features of F5 Distributed Cloud help deploy distributed apps across private and public clouds, as well as edge sites. This helps deliver app services with control, security, and flexibility wherever they are; this is especially critical for modern microservices-based apps.
This demo guide showcases the delivery of a sample app with multi-cloud networking across different cloud locations using an HTTP Load Balancer (Layer 7) and a TCP Load Balancer (Layer 3) on the F5 Distributed Cloud Global Network. It uses an MCN deployment of a representative customer app with services in three different clouds: Cloud A, Cloud B, and Cloud C. These can be any combination of clouds (Amazon AWS and Microsoft Azure) and are captured in three respective modules with (A) step-by-step Console-based instructions and (B) a set of Terraform scripts for either public cloud.
A fictitious sample app representative of a typical banking website with customer login, statements, and bank transfers is used to show the solution. The guide considers a scenario when a core banking app (running in a Cloud A) is adding additional banking services needs in other clouds, such as a Refer-a-Friend Widget or a Transactions Module. This would be a common scenario for an M&A acquisition or for services developed by different teams.
MODULE 1: Deployment of Core App & Front-end Portal in Cloud A
The first module outlines using an HTTP Load Balancer to deliver a front-end portal in Cloud A by leveraging Terraform scripts to simplify the deployment. A DNS tool is also used to assist with generating a domain entry and a certificate. This module demonstrates SSL offloading and reducing the web server processing of decrypting SSL traffic. The resulting core app will provide a starting point for our multi-cloud networking as it will contain placeholders for a few features that will be activated in subsequent modules.
MODULE 2: Connection of Refer-a-Friend Widget in Cloud B
The second module shows how to connect the Refer-a-Friend Widget running in Cloud B to the core app deployed in the previous step using an HTTP Load Balancer (Layer 7). Again, Terraform scripts will simplify the deployment of the app services in a different cloud provider. It is recommended to use different providers for Clouds A and B, to simulate true multi-cloud capabilities.
After all the configuration of clouds and HTTP LB is completed, Arcadia DNS Tool is used to update the DNS with the private IP address for the site deployed by F5 Distributed Cloud Services. This enables the Refer-a-Friend Widget on the website.
MODULE 3: Connection of the Transaction Module in Cloud C
Unlike the previous modules that used Layer 7 connectivity, this uses Layer 3 Multi-Cloud Networking via Sites/Global Network. Layer 3 connectivity is used to connect the transaction element to the backend of the app. When Cloud C with the Transaction module is configured, we create and configure a Global Network in the Cloud A site.
In summary, this demo guide showcases that no matter where your applications are, F5 Distributed Cloud Multi-Cloud Networking makes easy work of deployment and delivery of app services on different cloud providers.
For more information, refer to the following sources:
Mobile App Shield is a security technology that integrates directly into mobile applications to provide proactive security against a wide range of attacks, such as tampering, debugging, code injection, code modification and stealing of data from the app. Mobile App Shield is delivered in separate packages for iOS and for Android. Shielding an app with Mobile App Shield is an automated process.
F5 Distributed Cloud (XC) Mobile App Shield contains multiple security features to counter threats found in the Android and iOS ecosystems, which are outlined further below.
In this product demonstration, we'll showcase Mobile App SHIELD, showing how to integrate SHIELD while also highlighting the protection it provides.
Mobile App Shield represents an advanced security technology seamlessly embedded within mobile applications, offering proactive protection against a diverse array of threats and is easily coupled with XC Bot Defense for comprehensive Mobile App Protection for both Android and iOS.
F5 Distributed Cloud (F5 XC) Web Application and API Protection (WAAP) provides a rich set of security configurations to safeguard applications. Each application's configuration differs, so appropriate controls and security measures must be configured to protect applications from data breaches.
Even though your application is currently protected, that doesn't necessarily mean it's bulletproof against future intrusions. We should keep monitoring application event data for new types of attacks that may surface. If new exploits are found, we must update the existing configurations accordingly.
Identifying security attacks and taking the necessary action at the right moment is pivotal to protecting applications. Each minute of delay may result in severe consequences for businesses as well as application data. The Security Analytics "Events" tab populates a large collection of request data, so inspecting each event and then devising security measures is not recommended, as it is inefficient and time-consuming.
WAAP Security Incidents is a new feature that solves this concern by continuously pushing application events to internal AI/ML engines. The "Incidents" tab simplifies the investigation of attacks by grouping thousands of events into a few incidents based on context and common characteristics. These guide customers to quickly examine issues without getting lost in a flood of security events. Incidents deliver valuable insights efficiently, giving application owners time to research and configure preventive solutions before being exploited.
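As a rough illustration of the grouping idea (the actual XC feature uses AI/ML over many more signals), the following Python sketch collapses a flood of events into a handful of incidents by shared context — here, just source IP and attack type:

```python
from collections import defaultdict

def group_into_incidents(events):
    """Group raw security events into incidents by shared context.

    This toy version keys only on (source IP, attack type), to show
    how thousands of events can collapse into a few incidents.
    """
    incidents = defaultdict(list)
    for event in events:
        incidents[(event["src_ip"], event["attack_type"])].append(event)
    return incidents

# Simulated flood: one scripted SQLi scan plus a smaller XSS probe.
events = (
    [{"src_ip": "203.0.113.9", "attack_type": "SQLi", "path": f"/item/{n}"}
     for n in range(500)]
    + [{"src_ip": "198.51.100.4", "attack_type": "XSS", "path": "/search"}] * 30
)
incidents = group_into_incidents(events)
print(len(events), "events ->", len(incidents), "incidents")  # 530 events -> 2 incidents
```

Triage then happens per incident rather than per event, which is the efficiency gain the feature is after.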
This article delves into the basics of WAAP Security Incidents: what the feature is, how it works, and why it matters for identifying security attacks at the critical moment.
Adaptive applications utilize an architectural approach that facilitates rapid and often fully-automated responses to changing conditions—for example, new cyberattacks, updates to security posture, application performance degradations, or conditions across one or more infrastructure environments.
Unlike the current state of many apps today that are labor-intensive to secure, deploy, and manage, adaptive apps are enabled by the collection and analysis of live application performance and security telemetry, service management policies, advanced analytic techniques such as machine learning, and automation toolchains.
Two key components of F5's Adaptive Apps vision are to help our customers speed deployments of new apps and to help our customers more rapidly detect and neutralize application security threats.
In today's digital landscape, the need for modern, simple, consistent, and scalable application security has become increasingly critical. With the increasing number of cyber threats and the potential risks they pose to sensitive data, user privacy, and business operations, organizations must prioritize robust application security measures across all environments including traditional on-premises, public cloud, and edge deployments. As technology advances and applications become more interconnected, the potential risks and vulnerabilities that can be exploited by malicious actors also grow. Therefore, it is essential to prioritize robust and scalable security measures to protect sensitive user data, prevent unauthorized access, and maintain the integrity of applications.
Simplicity is crucial to ensure that security measures are practical, easy to understand, quick to deploy, and easy to maintain. Simplicity is a key requirement for accelerating secure application deployments.
Consistency ensures that security standards and best practices are uniformly applied across all applications, reducing the potential for gaps or vulnerabilities and speeding up secure application deployments. By prioritizing modern, simple, and consistent application security, organizations can safeguard their digital assets, maintain user trust, and mitigate the financial and reputational risks associated with security breaches.
As organizations expand their digital presence, the volume and complexity of applications grow exponentially. This expansion introduces new security challenges and potential vulnerabilities. Modern application security practices must address these challenges by adopting scalable solutions that can accommodate the increasing demands of diverse and dynamic applications.
By prioritizing modern, simple, consistent, and scalable application security, organizations can quickly and effectively protect their digital assets, mitigate the impact of security breaches, and maintain the trust of their users in an ever-changing digital landscape.
This Demo Kit provides an example of a modern application security architecture that meets the requirements of simplicity, consistency, and scalability via use of F5 Distributed Cloud Web Application and API Protection (F5 XC WAAP).
The modern architecture shown here enables you to more easily and quickly deploy a common set of application security capabilities across all deployment environments using F5 XC as a single consistent deployment console. Additionally, given application security does not fit a "one size fits all" model, we show how to enable tiered application security capabilities as appropriate for your low risk, medium risk, and high risk applications.
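To make the tiering idea concrete, here is a hedged Python sketch of mapping risk tiers to bundles of controls; the tier names and control sets are illustrative assumptions, not the Demo Kit's exact configuration:

```python
# Illustrative control bundles per risk tier; a real deployment would
# select services from the F5 XC WAAP catalog per application.
TIER_CONTROLS = {
    "low":    {"waf"},
    "medium": {"waf", "bot_defense"},
    "high":   {"waf", "bot_defense", "api_protection", "malicious_user_mitigation"},
}

def controls_for(app_risk):
    """Return the set of security controls for an app's risk tier."""
    return TIER_CONTROLS[app_risk]

for app, risk in [("brochure-site", "low"), ("store", "medium"), ("banking-api", "high")]:
    print(f"{app}: {sorted(controls_for(risk))}")
```

The point of a single console is that this mapping is defined once and applied uniformly, regardless of where each app is deployed.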
Moreover, with this security architecture, existing BIG-IP AWAF customers have the flexibility to continue using AWAF in their on-prem data centers and/or migrate their AWAF application security capabilities to F5 XC WAAP using the F5 Policy Supervisor.
Specifically we demonstrate the following:
With this modern architecture, our customers can enjoy the following benefits:
Note that application deployments are not limited to traditional on-prem data center environments as shown in this simplified example. Applications deployed within F5 XC REs and CEs (whether on-prem or in public cloud) can also enjoy the same F5 XC WAAP-enabled tiered application security benefits.
Simple and easy to deploy WAAP solution that delivers comprehensive app protection across the F5 portfolio and our customers’ deployment scenarios.
Customers find it difficult, time-consuming, and costly to deploy a consistent security model across their application deployments on-prem, in cloud, and at the edge in an automated fashion.
Customers can easily and quickly deploy secure applications and make better informed remediation decisions regarding their security posture and protect against potential threats in an automated fashion.
Please refer to https://github.com/f5devcentral/adaptiveapps for detailed instructions and artifacts for deploying this demo.
F5 XC WAAP Provides Tiered Application Security in Regional Edge (RE) and Customer Edge (CE)
F5 XC WAAP Provides Tiered Application Security for Partner Traffic in Customer Edge (CE)
Bringing It All Together: F5 XC WAAP Solution Provides Tiered Application Security
Watch the demo video:
An open group to foster discourse around the integration of security, networking, and application management services across public/private cloud and network edge compute services.
| User | Rank |
|---|---|
| Ernesto_Silver1 | Fog |
| Tony_Asante | Fog |
| Keiron_Shepherd | F5 Employee |
| Long_Trinh | Altostratus |
| NCartron | F5 Employee |