F5 Distributed Cloud (XC) Services are SaaS-based security, networking, and application management services that can be deployed across multi-cloud, on-premises, and edge locations. This article shows how to deploy an F5 Distributed Cloud Customer Edge (CE) site in Cisco Application Centric Infrastructure (ACI) so that you can securely connect your applications in a hybrid multi-cloud environment.
An F5 Distributed Cloud CE site can be deployed Layer Three Attached in a Cisco ACI environment using a Cisco ACI L3Out. As a reminder, Layer Three Attached is one of the deployment models for getting traffic to and from a CE site, where the CE can be a single node or a three-node cluster. Both static routing and BGP are supported in the Layer Three Attached deployment model. When a Layer Three Attached CE site is deployed in a Cisco ACI environment using a Cisco ACI L3Out, routes can be exchanged between them via static routing or BGP. In this article, we will focus on BGP peering between a Layer Three Attached CE site and the Cisco ACI fabric.
BGP configuration on XC is simple and takes only a couple of steps to complete:
1) Go to "Multi-Cloud Network Connect" -> "Networking" -> "BGPs".
*Note: The XC homepage is role-based; the "Advanced User" role is required to configure BGP.
2) "Add BGP" to fill out the site-specific info, such as which CE site runs BGP and its BGP AS number, and "Add Peers" to include its BGP peers' info.
*Note: XC supports direct connection for BGP peering IP reachability only.
In this section, we will use an example to show you how to successfully bring up BGP peering between an F5 XC Layer Three Attached CE site and a Cisco ACI fabric so that you can securely connect your application in a hybrid multi-cloud environment.
In our example, the CE is a three-node cluster (Master-0, Master-1 and Master-2) that has a VIP 10.10.122.122/32, with workloads 10.131.111.66 and 10.131.111.77 in the cloud (AWS):
The CE connects to the ACI fabric via a virtual port channel (vPC) that spans two ACI border leaf switches. The CE and the ACI fabric exchange routes as eBGP peers over an ACI L3Out SVI. The CE is eBGP-peered with both ACI border leaf switches, so that if one of them goes down (expectedly or unexpectedly), the CE can continue to exchange routes with the border leaf switch that remains up, and VIP reachability is not affected.
First, let us look at the XC BGP configuration ("Multi-Cloud Network Connect" -> "Networking" -> "BGPs"):
We "Add BGP" for "jy-site2-cluster" with site-specific BGP info along with a total of six eBGP peers (each CE node has two eBGP peers, one to each ACI border leaf switch):
We "Add Item" to specify each of the six eBGP peers' info:
Example reference - ACI BGP configuration:
There are a couple of ways to check the BGP peering status on the F5 Distributed Cloud's Console:
Go to "Multi-Cloud Network Connect" -> "Networking" -> "BGPs" -> "Show Status" from the selected CE site to bring up the "Status Objects" page. The "Status Objects" page provides a summary of the BGP status from each of the CE nodes. In our example, all three CE nodes from "jy-site2-cluster" are cleared with "0 Failed Conditions" (Green):
We can simply click on a CE node UID to further look into the BGP status of the selected CE node with all of its BGP peers. Here, we clicked on the UID of CE node Master-2 (172.18.128.14) and can see it has two eBGP peers: 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2), and both of them are Up:
Here is the BGP status from the other two CE nodes - Master-0 (172.18.128.6) and Master-1 (172.18.128.10):
For reference, here is an example of a CE node with "Failed Conditions" (Red) because one of its BGP peers is down:
Go to "Multi-Cloud Network Connect" -> "Overview" -> "Sites" -> "Tools" -> "Show BGP peers" to bring up the BGP peer status info from all CE nodes of the selected site. Here, we can see the same BGP status of CE node Master-2 (172.18.128.14), which has two eBGP peers: 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2), and both of them are Up:
Here is the output of the other two CE nodes - Master-0 (172.18.128.6) and Master-1 (172.18.128.10):
Example reference - ACI BGP peering status:
To check the BGP routes, both received and advertised, go to "Multi-Cloud Network Connect" -> "Overview" -> "Sites" -> "Tools" -> "Show BGP routes" from the selected CE site:
In our example, we see all three CE nodes (Master-0, Master-1 and Master-2) advertised (exported) 10.10.122.122/32 to both of their BGP peers, 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2), and received (imported) 172.18.188.0/24 from them:
Now, if we check the ACI fabric, we should see that both 172.18.128.11 (ACI border leaf switch 1) and 172.18.128.12 (ACI border leaf switch 2) advertised 172.18.188.0/24 to all three CE nodes and received 10.10.122.122/32 from all three of them (note "|" for multipath in the output):
To view the routing table of a CE node (or all CE nodes at once), we can simply select "Show routes":
Based on the BGP routing table in our example (shown earlier), each CE node should have two equal-cost multi-path (ECMP) routes installed in the routing table for 172.18.188.0/24, one with 172.18.128.11 (ACI border leaf switch 1) and one with 172.18.128.12 (ACI border leaf switch 2) as the next hop, and we do (note "ECMP" for multipath in the output):
Now, if we check the ACI fabric, each ACI border leaf switch should have three ECMP routes installed in the routing table for 10.10.122.122, one with each CE node (172.18.128.6, 172.18.128.10 and 172.18.128.14) as the next hop, and we do:
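To build intuition for the ECMP behavior above, here is a small illustrative sketch of flow-based next-hop selection. This is not the actual hashing algorithm ACI or the CE uses; the 5-tuple hash below is a stand-in to show why a single flow sticks to one CE node while many flows spread across all three next-hops.

```python
import hashlib

# Illustrative sketch of ECMP next-hop selection (NOT the real ACI hash):
# a flow's 5-tuple is hashed so every packet of the same flow takes the
# same path, while different flows spread across all equal-cost next-hops.
NEXT_HOPS = ["172.18.128.6", "172.18.128.10", "172.18.128.14"]  # CE nodes

def pick_next_hop(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}"
    digest = hashlib.sha256(flow.encode()).digest()
    return NEXT_HOPS[int.from_bytes(digest[:4], "big") % len(NEXT_HOPS)]

# The same flow always maps to the same CE node...
assert pick_next_hop("10.0.0.1", "10.10.122.122", 40000, 443) == \
       pick_next_hop("10.0.0.1", "10.10.122.122", 40000, 443)

# ...while many flows spread across all three next-hops.
hops = {pick_next_hop("10.0.0.1", "10.10.122.122", p, 443)
        for p in range(40000, 40100)}
print(sorted(hops))
```

Because the hash is deterministic per flow, a long-lived TCP session to the VIP never reorders across CE nodes, yet aggregate traffic still uses all three paths.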
We can now securely connect our application in a hybrid multi-cloud environment:
*Note: After F5 XC is deployed, we also use F5 XC DNS as our primary nameserver:
To check the requests on the F5 Distributed Cloud's Console, go to "Multi-Cloud Network Connect" -> "Sites" -> "Requests" from the selected CE site:
An F5 Distributed Cloud Customer Edge (CE) site can be deployed using the Layer Three Attached deployment model in a Cisco ACI environment. Both static routing and BGP are supported in this deployment model and can be easily configured on the F5 Distributed Cloud Console with just a few clicks. With an F5 Distributed Cloud CE site deployment, you can securely connect your applications in a hybrid multi-cloud environment quickly and efficiently.
OWASP API Security Top 10 - 2019 has two categories, "Mass Assignment" and "Excessive Data Exposure", which focus on vulnerabilities that stem from manipulation of, or unauthorized access to, an object's properties. For example, consider user information in JSON format: {"UserName": "apisec", "IsAdmin": "False", "role": "testing", "Email": "apisec@f5.com"}. In this object payload, each detail is considered a property, so vulnerabilities around modifying or exposing sensitive properties like Email, role, and IsAdmin fall under these categories.
These risks shed light on the hidden vulnerabilities that can appear when object properties are modified, and they highlight the importance of a security solution that validates user access to functions and objects while also enforcing access control for specific properties within objects. Role-based access, sanitizing user input, and schema-based validation play a crucial role in safeguarding your data from unauthorized access and modification.
Since these two risks are similar, the OWASP community felt they could be brought under one umbrella, and they were merged as "Broken Object Property Level Authorization" (BOPLA) in the newer version of the OWASP API Security Top 10 - 2023.
A Mass Assignment vulnerability occurs when client requests are not restricted from modifying immutable internal object properties. Attackers can take advantage of this by manually crafting requests to escalate user privileges, bypass security mechanisms, or otherwise exploit API endpoints in invalid ways.
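As an illustrative sketch (not F5's mitigation, just the underlying idea), the following Python shows how blindly applying a client payload lets an attacker flip an internal property like IsAdmin, and how an allow-list of mutable fields prevents it. The field names mirror the example object earlier in this section.

```python
# Illustrative sketch of a Mass Assignment flaw and its allow-list fix.
user = {"UserName": "apisec", "IsAdmin": False, "role": "testing",
        "Email": "apisec@f5.com"}

def update_user_vulnerable(user, payload):
    # Vulnerable: blindly copies every client-supplied property,
    # so an attacker can set internal fields like IsAdmin.
    user.update(payload)
    return user

MUTABLE_FIELDS = {"UserName", "Email"}  # properties the client may change

def update_user_safe(user, payload):
    # Safe: only allow-listed properties are applied.
    user.update({k: v for k, v in payload.items() if k in MUTABLE_FIELDS})
    return user

attack = {"Email": "evil@example.com", "IsAdmin": True}
print(update_user_safe(dict(user), attack))        # IsAdmin stays False
print(update_user_vulnerable(dict(user), attack))  # IsAdmin becomes True
```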
For more details on F5 Distributed Cloud mitigation solution, check this link: Mitigation of OWASP API6: 2019 Mass Assignment vulnerability using F5 XC
Application Programming Interfaces (APIs) sometimes lack restrictions and expose sensitive data such as Personally Identifiable Information (PII), Credit Card Numbers (CCN), and Social Security Numbers (SSN). Because of this, they are among the most exploited entry points for gaining access to customer information, and identifying the sensitive information in these huge chunks of API response data is crucial to data safety.
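Continuing the sketch, the usual server-side remedy is to redact sensitive properties before the response leaves the API. The field names below are hypothetical PII keys, not a real schema:

```python
# Illustrative sketch: strip sensitive properties from an API response
# before it leaves the server, instead of relying on the client to hide them.
SENSITIVE_FIELDS = {"ssn", "credit_card", "email"}  # hypothetical PII keys

def redact(record):
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

api_response = {
    "user_id": 42,
    "name": "apisec",
    "email": "apisec@f5.com",           # PII
    "ssn": "078-05-1120",               # PII
    "credit_card": "4111111111111111",  # PII
}

print(redact(api_response))  # only user_id and name survive
```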
For more details on this risk and F5 Distributed Cloud mitigation solution, check this link: Mitigating OWASP API Security Risk: Excessive Data Exposure using F5 XC
Wrapping up, this article covered an overview of the newly added BOPLA category in the OWASP Top 10 - 2023 edition. We have also provided details on each part of this risk, along with reference articles to dig deeper into F5 Distributed Cloud mitigation solutions.
Reference links to get started:
F5 Distributed Cloud (XC) Origin Server Subset rules provide the ability to create match conditions on incoming source traffic to the HTTP load balancer. The match conditions include Country, ASN, Regional Edge (RE), IP address, and client label selectors for subset selection of the destination (origin servers). This enables customized routing based on request information.
Holiday retail sales increase every year, driving a corresponding rise in e-commerce shopping around Thanksgiving, Cyber Monday, and the holiday season. Web traffic has been observed to spike by as much as 38% during this time frame, Black Friday sees 3x the traffic of normal days, and the global holiday season has driven 1.7 billion online visits. Under these circumstances, users in certain locations consume more than 50% of global traffic. An event of this nature requires infrastructure that can easily scale up to match the surge in traffic.
One of the most suitable solutions for this challenge is to identify users' demands and geographical locations and distribute the traffic, adding bandwidth to existing or new servers. Diversifying traffic based on geolocation lets users access the application for their immediate needs, thereby avoiding wait times or outages during this period.
This is achieved using F5 XC Origin Server Subset rules, which redirect traffic based on geolocation.
Below are the steps to redirect traffic and address the scenario described above:
Step 1: Creating a label (key-value pair).
Step 2: Adding labels to one or more Origin Servers.
Step 3: Creating subset rule in Load Balancer.
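The selection logic these steps configure can be approximated with a small simulation. Everything here is illustrative: the label key `region`, the hostnames, and the rule structure are invented for the sketch and are not actual XC configuration.

```python
# Illustrative simulation of origin-server subset selection by geolocation.
ORIGIN_SERVERS = [
    {"host": "origin-us-1.example.com", "labels": {"region": "us"}},
    {"host": "origin-us-2.example.com", "labels": {"region": "us"}},
    {"host": "origin-global.example.com", "labels": {"region": "global"}},
]

SUBSET_RULES = [
    # (match condition on the request, label selector for the subset)
    (lambda req: req["country"] == "US", {"region": "us"}),
    (lambda req: True, {"region": "global"}),  # default rule
]

def select_origins(request):
    # First matching rule wins; its label selector picks the server subset.
    for matches, selector in SUBSET_RULES:
        if matches(request):
            return [s["host"] for s in ORIGIN_SERVERS
                    if all(s["labels"].get(k) == v for k, v in selector.items())]
    return []

print(select_origins({"country": "US"}))  # US-labeled servers
print(select_origins({"country": "DE"}))  # default global subset
```

The "first match wins, then filter by labels" shape is the key idea: labels attach capacity to servers (Steps 1-2), and subset rules map request attributes onto those labels (Step 3).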
This rule redirects traffic from the countries mentioned below to servers other than the one allotted to the United States, thereby providing more bandwidth to users in the United States.
It is observed from the above logs that users from the US geolocation are directed according to the origin server label associated with it.
Users outside the US, meanwhile, are load-balanced to a different origin server, as configured in the origin pool's label configs.
As a result, users in the US experience the full capacity of their allocated servers, which helps avoid outages and bottlenecks.
Note: The same requirement can also be achieved using the RE match condition by adding the necessary REs, as shown below.
F5 XC analyzes traffic based on its origin (Regional Edges, geolocation, IP match, and more) and redirects it according to the Origin Server Subset Rules configuration. This simple and effective technique can meet users' demands quickly and helps solve major issues during peak usage hours on e-commerce sites.
https://roirevolution.com/blog/2022-holiday-ecommerce-stats-trends-predictions/
https://docs.cloud.f5.com/docs/how-to/others/create-known-labels-keys
https://docs.cloud.f5.com/docs/how-to/app-networking/origin-pools
https://docs.cloud.f5.com/docs/how-to/app-networking/http-load-balancer
Attack signatures are the rules and patterns that identify attacks against your web application. When the load balancer in the F5 Distributed Cloud (XC) Console receives a client request, it compares the request against the attack signatures associated with your WAF policy. If a pattern matches, it triggers an "attack signature detected" violation and will either alarm or block, based on the enforcement mode of your WAF policy. A well-scoped WAF policy includes only the attack signatures needed to protect your application. If too many are included, you waste resources keeping up with signatures you don't need; if you don't include enough, you might let an attack compromise your application.
F5 XC WAF supports multiple attack signature states: enable, disable, suppress, auto-suppress, and staging. This article focuses on how F5 XC WAF supports staging: it detects staged attack signatures and reports their details while still allowing the requests through to the application.
A request that triggers a staged signature will not be blocked, but you will see the signature trigger details in the security event. When a new or updated attack signature is automatically placed in staging, you won't know how it will affect your application until you have had some time to test it. After you test the new signatures, you can take them out of staging and apply the appropriate enforcement action to protect your application!
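The staged-versus-enforced behavior can be modeled with a short sketch. The staged signature ID mirrors the one used in this article's demo; the regex patterns and data structures are invented for illustration.

```python
import re

# Illustrative model of staged vs. enforced attack signatures: a staged
# match is logged but the request passes; an enforced match blocks.
SIGNATURES = [
    {"id": 200103281, "pattern": r"<script>", "staged": True},
    {"id": 200002835, "pattern": r"union\s+select", "staged": False},
]

def inspect(request_body, log):
    for sig in SIGNATURES:
        if re.search(sig["pattern"], request_body, re.IGNORECASE):
            log.append({"sig_id": sig["id"], "staged": sig["staged"]})
            if not sig["staged"]:
                return "blocked"  # enforced signature: stop the request
    return "allowed"              # staged hits are logged only

events = []
print(inspect("<script>alert(1)</script>", events))       # staged: allowed, but logged
print(inspect("1 UNION SELECT password FROM users", events))  # enforced: blocked
print(events)
```

Taking a signature out of staging corresponds to flipping its `staged` flag, which is exactly the behavior change observed later in the demo when staging is disabled.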
Here is the step-by-step process of configuring the WAF Staging Signatures and validating them with new and updated signature attacks.
Associate the previously created App Firewall ``waf-sig-staging`` under the LB WAF configuration section.
To verify staged attacks, you need signatures listed in the attack signature DB. In this demo we are using the newly added attack signature (200104860) and updated attack signature (200103281) IDs below.
Now, let's try to access the LB domain with the updated attack signature ID, i.e., 200103281, and verify that the LB dashboard has detected the staged attack signature and reflects the details.
Now try to access the LB domain with the new attack signature by adding the cookie to the request header.
Now, disable staging in the WAF policy ``waf-sig-staging``.
Let's try to access the LB domain with the new attack signature again.
As you can see from the demo, F5 XC WAF supports a staging feature that enhances the testing scope of newly added and updated attack signatures.
In today's fast-paced digital landscape, businesses are constantly seeking ways to enhance their IT infrastructure's performance, scalability, and security while optimizing costs. One solution to meet these demands is the integration of F5 Distributed Cloud (XC) Customer Edge (CE) within the HPE GreenLake platform. This strategic collaboration brings forth a combination of application delivery, security, and flexible consumption models that help organizations in their hybrid and multi-cloud environments.
F5 Distributed Cloud Services are SaaS-based security, networking, and application management services that enable customers to deploy, secure, and operate their applications in a cloud-native environment wherever needed–data center, multi-cloud, or the network or enterprise edge.
HPE GreenLake provides companies with an easy way to use cloud computing services. It lets businesses pay only for the IT infrastructure they need and use. With HPE GreenLake, companies don't have to purchase and manage their own IT hardware and software. Instead, HPE sets up the cloud services and handles maintaining and upgrading the infrastructure. This flexible approach makes it simpler and more affordable for enterprises to leverage the power of the cloud. It also gives companies access to the latest technology from HPE without large upfront investments.
F5 XC CE is an application delivery and security software from F5 that can improve company IT systems in several ways. When businesses use F5 XC CE with HPE GreenLake cloud services, they get a powerful combined solution.
The F5 software helps ensure applications run fast and reliably by optimizing how they are delivered to users. It also strengthens application security against threats.
By implementing F5 XC CE through HPE GreenLake's flexible cloud platform, companies can deploy and manage these benefits faster and more easily. They don't need to purchase and maintain the infrastructure on their own.
Together, F5 XC CE and HPE GreenLake provide companies with an efficient way to boost application performance, enhance security, simplify IT operations, and reduce costs. The integrated solution transforms IT infrastructure into a strategic advantage that aligns with business goals.
F5 XC CE provides traffic management and optimization methods to keep applications running fast and smoothly. The software balances user requests across available servers to avoid overloading any one server. It also optimizes how content is delivered based on application and network conditions.
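As a minimal illustration of the balancing decision described above (server names and connection counts are invented, and real load balancers weigh many more signals), a least-connections picker sends each new request to the server handling the fewest active requests:

```python
# Minimal sketch of a least-connections load-balancing decision.
active = {"app-1": 12, "app-2": 7, "app-3": 9}  # current active requests

def pick_server(active_connections):
    # Choose the server with the fewest active connections.
    return min(active_connections, key=active_connections.get)

server = pick_server(active)
active[server] += 1  # account for the new request
print(server)        # picks "app-2", the least-loaded server
```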
These features maximize application performance and maintain consistent speeds for users even when traffic spikes occur. If demand increases, companies can rapidly scale up their infrastructure through HPE GreenLake's flexible cloud platform. The service allows expanding IT resources on demand to support more users and heavier workloads.
By working together, F5 XC CE's application optimization and HPE GreenLake's scalable cloud infrastructure ensure applications stay speedy and reliable at all usage levels. Companies don't have to sacrifice performance as their needs grow.
F5 XC CE provides powerful application security capabilities that protect companies from cyber threats. It includes features like:
By using F5 XC CE with HPE GreenLake, businesses get robust, layered security for their applications and data.
HPE GreenLake adds extra defenses at the cloud infrastructure level. Together, the solutions create an end-to-end security envelope to safeguard critical systems and information.
Companies can deploy F5 XC CE's security easily and cost-effectively through HPE GreenLake's cloud platform. The service handles the deployment and infrastructure management.
With cyberattacks growing, applications need strong security. F5 XC CE and HPE GreenLake together provide a flexible, comprehensive security environment. Companies can protect their apps, data, and users across cloud, on-premises, and hybrid environments.
Together, the solutions let organizations add or reduce cloud services and F5 capabilities on demand. Companies can scale up seamlessly during busy periods and scale down during slower times.
This flexibility optimizes costs. Businesses don't pay for more than they need. However, they can expand resources instantly to maintain performance and security when workloads increase.
F5 XC CE and HPE GreenLake make managing IT infrastructure easier.
The solutions provide:
With these capabilities, the joint solution minimizes the workload for IT teams. It allows them to spend less time on routine IT management. Instead, they can focus on delivering more business value through strategic initiatives and innovation.
F5 XC CE and HPE GreenLake are designed with hybrid and multi-cloud environments in mind. This compatibility ensures seamless integration between on-premises and cloud-based resources, allowing organizations to embrace cloud-native strategies while preserving investments in existing infrastructure.
Conclusion
Together, F5 XC CE and HPE GreenLake provide a strong IT infrastructure solution.
The key benefits:
- Optimized application performance
- Enhanced security
- Flexible, pay-as-you-go model
- Simplified infrastructure management
This partnership empowers businesses to:
- Meet changing needs
- Protect critical data
- Stay competitive
By combining F5's application expertise with HPE GreenLake's cloud platform, companies can confidently navigate technology challenges.
Explore the demonstration below, which walks through the deployment of F5 Distributed Cloud Customer Edge within HPE GreenLake Central.
Hello everyone,
Following my investigation of the OpenAPI/Swagger options for AWAF/ASM in my earlier post "F5 AWAF/ASM support for wildcard url and parameter... - DevCentral", I still have open questions, and now the same questions for XC: which custom Swagger/OpenAPI options are supported, and is there a list?
I see that XC supports regex for wildcards in the path, as shown below, but what about parameters with wildcard names? Could methods support a wildcard (for example, a custom word like "any"), so that each method does not have to be specified under an HTTP path? Beyond that, it would be nice if parameters could be specified as being in any location, not just query or request body.
Any help will be appreciated.
},
"servers": [
    {
        "url": "/"
    }
],
"paths": {
    "/niki.*": {
        "parameters": [
            {
                "in": "query",
                "name": "userId",
                "schema": {
                    "type": "integer"
                },
                "required": true,
                "description": "Numeric ID of the user to get"
            }
Along with the likes of Splunk and DataDog, we can add another SIEM vendor to the Distributed Cloud (XC) external logging lineup. QRadar has its own native integration drop-down in the Global Log Receiver menu.
We know Distributed Cloud's innate security and performance dashboards are rich with data. Even so, many customers prefer to use their existing SIEM environment to ingest the security events generated by Distributed Cloud. In support of this, a custom F5 XC-specific content pack was created to streamline use within QRadar itself. The content pack is a zip file containing what IBM calls a DSM (Device Support Module), which collects, maps, and parses the security events in JSON format. The F5 XC content pack covers both security and access logs.
The content pack is discoverable on IBM’s X-Force App Exchange under F5 Distributed Cloud.
QRadar is able to collect events forwarded via HTTP or HTTPS. For a deeper technical walkthrough, please see the video I've created.
You did it! With the power and reach of Distributed Cloud combined with the security that NGINX Plus provides, we have been able to easily provide authorization for our example API-based application.
Where could we go from here? Do you remember we deployed these applications to two specific geographical sites? You could very easily extend the reach of this solution to more regions (distributed globally) to provide reliability and low-latency experiences for the end users of this application. Additionally, you can easily attach Distributed Cloud’s award-winning DDoS mitigation, WAF, and Bot mitigation to further protect your applications from attacks and fraudulent activity.
Thanks for taking this journey with me, and I welcome your comments below.
This article wouldn’t have been the same without the efforts of @Fouad_Chmainy, @Matt_Dierick, and Alexis Da Costa. They are the original authors of the distributed design, the Sentence app, and the NGINX Plus OIDC image optimized for Distributed Cloud. Additionally, special thanks to @Cody_Green and @Kevin_Reynolds for inspiration and assistance in the Terraform portion of the solution. Thanks, guys!
Adaptive applications utilize an architectural approach that facilitates rapid and often fully-automated responses to changing conditions—for example, new cyberattacks, updates to security posture, application performance degradations, or conditions across one or more infrastructure environments.
Unlike the current state of many apps today that are labor-intensive to secure, deploy, and manage, adaptive apps are enabled by the collection and analysis of live application performance and security telemetry, service management policies, advanced analytic techniques such as machine learning, and automation toolchains.
This example seeks to demonstrate value in two key components of F5's Adaptive Apps vision: helping our customers more rapidly detect and neutralize application security threats and helping to speed deployments of new applications.
In today's interconnected digital landscape, the ability to share application security policies seamlessly across data centers, public clouds, and Software-as-a-Service (SaaS) environments is of paramount importance. As organizations increasingly rely on a hybrid IT infrastructure, where applications and data are distributed across various cloud providers and security platforms, maintaining consistent and robust security measures becomes a challenging task.
Using a consistent & centralized security policy architecture provides the following key benefits:
Consistent Protection: A unified security policy approach guarantees consistent protection for applications and data, regardless of their location. This reduces the risk of security loopholes and ensures a standardized level of security across the entire infrastructure.
Improved Threat Response Efficiency: By sharing application security policies, organizations can respond more efficiently to emerging threats. A centralized approach allows for quicker updates and patches to be applied universally, strengthening the defense against new vulnerabilities.
Regulatory Compliance: Many industries have strict compliance requirements for data protection. Sharing security policies helps organizations meet these regulatory demands across all environments, avoiding compliance issues and potential penalties.
Streamlined Management: Centralizing security policies simplifies the management process. IT teams can focus on maintaining a single set of policies, reducing complexity, and ensuring a more effective and consistent security posture.
Cost-Effective Solutions: Investing in separate security solutions for each platform can be expensive. Sharing policies allows businesses to optimize security expenditure and resource allocation, achieving cost-effectiveness without compromising on protection.
Enhanced Collaboration: A shared security policy fosters collaboration among teams working with different environments. This creates a unified security culture, promoting information sharing and best practices for overall improvement.
Improved Business Agility: A unified security policy approach facilitates smoother transitions between different platforms and environments, supporting the organization's growth and scalability.
By having a consistent security policy framework, businesses can ensure that critical security policies, access controls, and threat prevention strategies are applied uniformly across all their resources. This approach not only streamlines the security management process but also helps fortify the overall defense against cyber threats, safeguard sensitive data, and maintain compliance with industry regulations. Ultimately, the need for sharing application security policies across diverse environments is fundamental in building a resilient and secure digital ecosystem.
In the spirit of enabling a unified security policy framework, this example shows the following two key use cases:
Specifically, we show how to use F5's Policy Supervisor and Policy Supervisor Conversion Utility to import, convert, replicate, and deploy WAF policies across the F5 security proxy portfolio. Here we will show how the Policy Supervisor tool provides flexibility in offering both automated and manual ways to replicate and deploy your WAF policies across the F5 portfolio. Regardless of the use case, the steps are the same, enabling a consistent and simple methodology.
We'll show the following 2 use cases:
1. Manual BIG-IP AWAF to F5 XC WAAP policy replication & deployment:
2. Automated NGINX NAP to F5 XC WAAP policy replication & deployment:
Simple, easy way to replicate & deploy WAF application security policies across F5's BIG-IP AWAF, NGINX NAP, and F5 XC WAAP security portfolio.
While the Policy Supervisor supports all of the possible security policy replication & migration paths shown on the left below, this example is focused on demonstrating the two specific paths shown on the right below.
Customers find it challenging, complex, and time-consuming to replicate & deploy application security policies across their WAF deployments which span the F5 portfolio (including BIG-IP, NAP, and F5XC WAAP) within on-prem, cloud, and edge environments.
By enforcing consistent WAAP security policies across multiple clouds and SaaS environments, organizations can establish a robust and standardized security posture, ensuring comprehensive protection, simplified management, and adherence to compliance requirements.
Please refer to https://github.com/f5devcentral/adaptiveapps for detailed instructions and artifacts for deploying this example use case.
Watch the demo video:
In today's digital landscape, where cyber threats constantly evolve, safeguarding an enterprise's web applications is of paramount importance. However, for security engineers tasked with protecting a large enterprise equipped with a substantial deployment of web application firewalls (WAFs), the task of managing distributed security policies across the entire application landscape presents a significant challenge. Ensuring consistency and coherence, in both the effectiveness and deployment of these policies is essential, yet it's far from straightforward. In this article and demo, we'll explore a few best practices and tools available to help organizations maintain robust security postures across their entire WAF infrastructure, and how embracing modern approaches like DevSecOps and the F5 Policy Supervisor and Conversion tools can help overcome these challenges.
Storing your WAF policies as code within a secure repository is a DevSecOps best practice that extends beyond consistency and tracking. It's also the first step in making security an integral part of the development process, fostering a culture of security throughout the entire software development and delivery lifecycle. This shift-left approach ensures that security concerns are addressed early in the development process, reducing the risk of vulnerabilities and enhancing collaboration between security, development, and operations teams. It enables automation, version control, and rapid response to evolving threats, ultimately resulting in the delivery of secure applications with speed and quality.
To help facilitate this, the entire F5 security product portfolio supports the ingestion of WAF policy in JSON format. This enables you to store your policies as code in a Git repository and seamlessly reference them during your automation-driven deployments, guaranteeing that every WAF deployment is well-prepared to safeguard your critical applications.
"wafPolicy": {
    "class": "WAF_Policy",
    "url": "https://raw.githubusercontent.com/knowbase/architectural-octopod/main/awaf/owasp-auto-tune.json",
    "enforcementMode": "blocking",
    "ignoreChanges": true
}
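To illustrate the policy-as-code idea, here is a sketch that assembles an abbreviated AS3-style declaration around the `WAF_Policy` snippet above and sanity-checks it before a CI pipeline would push it. The tenant and application names are hypothetical and the declaration is abbreviated; consult the AS3 schema for a complete example.

```python
import json

# Sketch: assembling an abbreviated AS3-style declaration that pulls a WAF
# policy as code from a Git repository. Tenant/app names are hypothetical.
waf_policy = {
    "class": "WAF_Policy",
    "url": "https://raw.githubusercontent.com/knowbase/architectural-octopod/main/awaf/owasp-auto-tune.json",
    "enforcementMode": "blocking",
    "ignoreChanges": True,
}

declaration = {
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "demo_tenant": {
        "class": "Tenant",
        "demo_app": {
            "class": "Application",
            "wafPolicy": waf_policy,
        },
    },
}

# A CI pipeline could lint the policy reference before deploying it.
assert declaration["demo_tenant"]["demo_app"]["wafPolicy"]["enforcementMode"] == "blocking"
print(json.dumps(declaration, indent=2)[:80])
```

Keeping the policy URL in one place means every deployment references the same version-controlled policy, which is the consistency benefit the paragraph above describes.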
Considering the sheer number of WAFs in large enterprises, managing distributed policies can easily overwhelm security teams. Coordinating updates, rule changes, and incident response across the entire application security landscape requires efficient policy lifecycle management tools. Using a centralized management system that provides visibility into the security posture of all WAFs and the state of deployed policies can help streamline these operations. The F5 Policy Supervisor was designed to meet this critical need.
The Policy Supervisor allows you to easily create, convert, maintain, and deploy WAF policies across all F5 Application Security platforms. With both an easily navigated UI and a robust API, the Policy Supervisor tool greatly enhances your ability to manage security policies at scale.
In the context of the Policy Supervisor, providers are remote instances that provide WAF services, such as NGINX App Protect (NAP), BIG-IP Advanced WAF (AWAF), or F5 Distributed Cloud Web App and API Security (XC WAAP). The "Providers" section serves as the command center where we onboard all of our WAF instances and gain insight into their status and deployments. For BIG-IP and NGINX we employ agents to perform the onboarding. An agent is a lightweight container that stores secrets in a vault and connects the instances to the SaaS layer. For XC we use an API token, which can easily be generated by navigating to Account > Account Settings > Personal Management > Credentials > Add Credentials in the XC console. Detailed instructions for adding both types of providers are readily accessible during the "Add Provider" workflow.
After successfully onboarding our providers, we can ingest the currently deployed policies and begin managing them on the platform.
The "Policies" section serves as the central hub for overseeing the complete lifecycle of policies onboarded onto the platform. Within this section, we gain access to policy insights, including their current status and the timestamp of their last modification. Selecting a specific policy opens up the "Policy Details" panel, offering a comprehensive suite of options. Here, you can edit, convert, deploy, export, or remove the policy, while also accessing essential information regarding policy-related actions and reports detailing those actions.
The tool additionally features an editor equipped with real-time syntax validation and auto-completion, allowing you to create new or edit existing policies on the fly.
Navigating the policy deployment process within the Policy Supervisor is a seamless and user-friendly experience. To initiate the process, select "Deploy" from the "Policy Details" panel, then select the source and the target or targets. The platform first runs a conversion to ensure the policy aligns with the features supported by the targets. Following this conversion, you'll receive a detailed report on what was and was not converted. Once you've reviewed the conversion results and are satisfied with the outcome, select the endpoints to apply the policy to and click deploy. That's it; it's that easy.
The F5 Policy Conversion tool allows you to transform JSON or XML formatted policies from an NGINX or BIG-IP into a format compatible with your desired target - any application security product in the F5 portfolio. This user-friendly tool requires no authentication, offering hassle-free access at https://policysupervisor.io/convert.
The interface has an intuitive design, simplifying the process: select your source and target types, upload your JSON or XML formatted policy, and with a simple click, initiate the conversion. Upon completion, the tool provides a comprehensive package that includes a detailed report on the conversion process and your newly adapted policies, ready for deployment onto your chosen target.
Whether you are augmenting an F5 BIG-IP Advanced WAF fleet with F5 XC WAAP at the edge, decomposing a monolithic application and protecting the new microservices with NGINX App Protect, or extending a multi-cloud security strategy, the Policy Conversion utility can help ensure you are providing consistent and robust protection across each platform.
Managing security policies across a large WAF footprint is a complex undertaking that requires constant vigilance, adaptability, and coordination. Security engineers must strike a delicate balance between safeguarding applications and ensuring their uninterrupted functionality while also staying ahead of evolving threats and maintaining a consistent security posture across the organization. By harnessing the F5 Policy Supervisor and Conversion tools, coupled with DevSecOps principles, organizations can easily deploy and maintain consistent WAF policies throughout the organization's entire application security footprint.
F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)
For further information or to get started:
In a recent conversation, a customer mentioned they figured they had something on the order of 6000 API endpoints in their environment. This struck me as odd, as I am pretty sure they have 1000+ HTTP-based applications running on their current platform. If the 6000 API number were correct, that would work out to only six endpoints per application. In reality, most apps have dozens or hundreds of endpoints, which means there are probably tens of thousands of API endpoints in their environment!
But the good news is that you're not using all of them. The further good news is that you can REDUCE your security exposure.
When was the last time someone took things OFF your To-do list?
The answer is to profile your application landscape. Much like the industry did in the early 20-teens with Web Application Security, understanding your attack surface is the key to defining a plan to defend it. This is what we call API Discovery.
By allowing your traffic to be profiled and APIs uncovered, you can begin to understand the scale and scope of your security journey.
You can do this by putting your client traffic through an engine that offers this, like F5's Distributed Cloud (or F5 XC). With F5 XC, you can build a list of the URIs and their metadata and generate a threat assessment and data profile of the traffic it sees.
Interactive view of API calls
This is a fantastic resource if you can push your traffic through an XC Load Balancer, but that isn't always possible.
What are your options when you want to do this "Out of Band"? Out of Band (or OOB) presents challenges, but luckily, F5 has answers.
If we can gather the traffic and make it available to the XC API Discovery process, generating the above graphic for your traffic is easy.
Replaying, or more accurately, "mimicking" the traffic can be done using a log process on the main proxy - BIG-IP or NGINX are good examples, but any would work - and then sending that logged traffic to a process that will generate a request and response that traverses an XC Load Balancer.
API Discovery traffic flow
This diagram shows using an iRule to gather the request and response data, which is then sent to a custom logging service. This service uses the data to recreate the request (and response) and sends that through the XC Load Balancer.
Both the iRule and the Logger service are available as open-source code here.
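To illustrate the idea, here is a minimal sketch of what such a logger/replay service does. This is not the open-source implementation linked above: the log record schema, the `build_replay` helper, and the `XC_LB_URL` value are all illustrative assumptions.

```python
import urllib.request

# Hypothetical XC Load Balancer FQDN fronting the API Discovery process;
# replace with your own. The log record schema below is illustrative only.
XC_LB_URL = "https://api-discovery.example.com"

def build_replay(logged: dict) -> urllib.request.Request:
    """Rebuild an HTTP request from a logged transaction so it can be
    replayed (mimicked) through the XC Load Balancer."""
    url = XC_LB_URL + logged["uri"]
    body = logged.get("body", "").encode() or None
    req = urllib.request.Request(url, data=body, method=logged["method"])
    for name, value in logged.get("headers", {}).items():
        # Drop the original Host header so the request resolves to the XC LB.
        if name.lower() != "host":
            req.add_header(name, value)
    return req

# Example logged record, roughly as a proxy-side iRule might emit it:
record = {"method": "GET", "uri": "/api/v1/users",
          "headers": {"Host": "origin.internal", "Accept": "application/json"}}
replay = build_replay(record)
# urllib.request.urlopen(replay) would then send it through the XC LB,
# letting the API Discovery engine profile traffic it never saw inline.
```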
If you're interested in deploying this, F5 is here to help, but if you would like to deploy it on your own, here is a suggested architecture:
Deploying the logger as a container on F5 Distributed Cloud's AppStack on a Customer Edge instance allows the traffic to remain within your network enclave. The metadata is pushed to the XC control plane, where it is analyzed, and the API characteristics are recorded.
The analysis provided in the dashboard is invaluable for determining your threat levels and attack surfaces and helping you build a mitigation plan.
From the main dashboard shown here, the operator can see if any sensitive data was exposed (and what type it might be), the threat level assessment and the authorization method. Each can help determine a course of action to protect from data leakage or future breach attempts.
Drilling into these items, the operator is presented with details on the performance of the API (shown below).
endpoint details
To promote sharing of information, all of the data gathered is exportable in Swagger/OpenAPI format:
swagger export
We will publish more on this in the coming weeks, so stay tuned.
Mutual Transport Layer Security (mTLS) is a process that establishes an encrypted and secure TLS connection between two parties and requires both to use X.509 digital certificates to authenticate each other. It helps prevent malicious third parties from imitating genuine applications. This authentication method is useful when a server needs to ensure the authenticity and validity of a specific user or device. As older SSL protocols became outdated, companies such as Skype and Cloudflare moved to mTLS to secure business servers. Using TLS or other encryption tools without mutual authentication leaves connections open to man-in-the-middle attacks. With mTLS, a server is presented an identity that can be cryptographically verified, better protecting your resources.
Beyond supporting the mTLS handshake itself, F5 Distributed Cloud WAF can forward the client certificate attributes (subject, issuer, root CA, etc.) to the origin server via the x-forwarded-client-cert (XFCC) header, which provides an additional level of security when the origin server needs to authenticate clients across requests from many different sources. The XFCC header is supported with multiple load balancer types, such as HTTPS with Automatic Certificate and HTTPS with Custom Certificate.
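At the origin, the XFCC header typically arrives as semicolon-separated key=value attributes, with quoted values for fields such as Subject that themselves contain commas (an Envoy-style convention; the exact attribute set depends on configuration). The parser below is a hedged sketch of that assumed format, not production code - a real parser must also handle escaping and multiple certificate entries.

```python
# Illustrative parser for an Envoy-style XFCC header (assumed format:
# semicolon-separated key=value pairs, double quotes around values that
# contain commas or semicolons).
def parse_xfcc(header: str) -> dict:
    attrs, key, buf, quoted = {}, "", [], False
    for ch in header:
        if ch == '"':
            quoted = not quoted           # toggle quoted-value state
        elif ch == "=" and not quoted and not key:
            key, buf = "".join(buf), []   # first '=' separates key from value
        elif ch == ";" and not quoted:
            attrs[key] = "".join(buf)     # end of one attribute
            key, buf = "", []
        else:
            buf.append(ch)
    if key:
        attrs[key] = "".join(buf)
    return attrs

xfcc = 'Hash=468ed33be7;Subject="CN=client.example.com,O=Example Org";URI=spiffe://example.com/client'
parsed = parse_xfcc(xfcc)
print(parsed["Subject"])  # CN=client.example.com,O=Example Org
```

An origin application can use attributes like Subject or Hash to make its own per-client authorization decision on top of the mTLS handshake the load balancer already performed.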
In this demo we use httpbin as the origin server behind an F5 XC Load Balancer. Below is the procedure to deploy the httpbin application, create the custom certificates, and configure mTLS step by step with the different Load Balancer (LB) types in F5 XC.
Log in to the F5 Distributed Cloud Console and navigate to the "Web App & API Protection" module.
Go to Load Balancers and click on 'Add HTTP Load Balancer'.
Configure the origin pool by clicking on 'Add Item' under Origins and select the origin pool created for httpbin.
Click on 'Apply', enable mutual TLS, import the root certificate info, and add the XFCC header value.
Click on 'Apply' and then save the LB configuration with 'Save and Exit'.
Now that we have created the Load Balancer with mTLS parameters, let us verify the same with the origin server.
As you can see from the demonstration, F5 Distributed Cloud WAF provides additional security to origin servers by forwarding the client certificate info in the mTLS XFCC header.
For those of you following along with the F5 Hybrid Security Architectures series, welcome back! If this is your first foray into the series and you would like some background, have a look at the intro article. This series uses the F5 Hybrid Security Architectures GitHub repo and CI/CD platform to deploy F5-based hybrid security solutions built on DevSecOps principles. This repo is a community-supported effort to provide not only a demo and workshop, but also a stepping stone for utilizing these practices in your own F5 deployments. If you find any bugs or have any enhancement requests, open an issue, or better yet, contribute!
Here in this example solution, we will use DevSecOps practices to deploy an AWS Elastic Kubernetes Service (EKS) cluster running the Brewz test web application serviced by F5 NGINX Ingress Controller. To secure our application and APIs, we will deploy F5 Distributed Cloud's Web App and API Protection service as well as F5 BIG-IP Access Policy Manager and Advanced WAF. We will then use F5 Container Ingress Services and IngressLink to tie it all together.
Distributed Cloud WAAP: Available for SaaS-based deployments and provides comprehensive security solutions designed to safeguard web applications and APIs from a wide range of cyber threats.
BIG-IP Access Policy Manager (APM) and Advanced WAF: Available for on-premises / data center and public or private cloud (virtual edition) deployment, for robust, high-performance web application and API security with granular, self-managed controls.
BIG-IP Container Ingress Services: A container integration solution that helps developers and system teams manage Ingress HTTP routing, load-balancing, and application services in container deployments.
F5 IngressLink: Combines BIG-IP, Container Ingress Services (CIS), and NGINX Ingress Controller to deliver unified app services for fast-changing, modern applications in Kubernetes environments.
NGINX Ingress Controller for Kubernetes: A lightweight software solution that helps manage app connectivity at the edge of a Kubernetes cluster by directing requests to the appropriate services and pods.
Workspaces: Create a workspace for each asset in the workflow chosen
Workflow | Workspaces |
xcbn-cis | infra, bigip-base, bigip-cis, eks, nic, brewz, xc |
Your Terraform Cloud console should resemble the following:
Variable Set: Create a Variable Set with the following values.
IMPORTANT: Ensure sensitive values are appropriately marked.
Your Variable Set should resemble the following:
Fork and Clone Repo: F5 Hybrid Security Architectures
Actions Secrets: Create the following GitHub Actions secrets in your forked repo
Your GitHub Actions Secrets should resemble the following:
Step 1: Check out a branch for the deploy workflow using the following naming convention
xcbn-cis deployment branch: deploy-xcbn-cis
Step 2: Upload the Brewz OAS file to XC
* From the side menu under Manage, navigate to Files->Swagger Files and choose Add Swagger File
* Upload Brewz OAS file from the repo f5-hybrid-security-architectures/brewz/brewz-oas.yaml
Step 3: Rename infra/terraform.tfvars.examples to infra/terraform.tfvars and add the following data
#Global
project_prefix = "Your project identifier"
resource_owner = "You"
#AWS
aws_region = "Your AWS region" ex: us-west-1
azs = "Your AWS availability zones" ex: ["us-west-1a", "us-west-1b"]
#Assets
nic = true
nap = false
bigip = true
bigip-cis = true
Step 4: Rename xc/terraform.tfvars.examples to xc/terraform.tfvars and add the following data
#XC Global
api_url = "https://<Your Tenant>.console.ves.volterra.io/api"
xc_tenant = "Your XC Tenant ID"
xc_namespace = "Your XC namespace"
#XC LB
app_domain = "Your App Domain"
#XC WAF
xc_waf_blocking = true
#XC AI/ML Settings for MUD, APIP - NOTE: Only set if using AI/ML settings from the shared namespace
xc_app_type = []
xc_multi_lb = false
#XC API Protection and Discovery
xc_api_disc = true
xc_api_pro = true
xc_api_spec = ["Path to uploaded API spec"] *See below screen shot for how to obtain this value.
#XC Bot Defense
xc_bot_def = false
#XC DDoS
xc_ddos = false
#XC Malicious User Detection
xc_mud = false
* For Path to API Spec navigate to Manage->Files->Swagger Files, click the three dots next to your OAS, and choose "Copy Latest Version's URL". Paste this into the xc_api_spec in the xc/terraform.tfvars.
Step 5: Modify line 16 in the .gitignore and comment out the *.tfvars line with # and save the file
Step 6: Commit your changes
Step 1: Push your deploy branch to the forked repo
Step 2: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build
Step 3: Once the pipeline completes, verify your assets were deployed to AWS and F5 XC
Step 4: Check your Terraform Outputs for XC and verify your app is available by navigating to the FQDN
Step 5: Configure F5 APM and Advanced WAF following the guide here.
The F5 XC WAAP platform learns the schema structure of the API by analyzing sampled request data, then reverse-engineers the schema to generate an OpenAPI spec. The platform validates what is deployed versus what is discovered and tags any shadow APIs that are found. We can then download the learned schema and use it to augment our BIG-IP APM API protection configuration.
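The shadow-API idea can be modeled in a few lines. This is a conceptual sketch only - XC performs this analysis internally - but it shows the core comparison: endpoints observed in live traffic that are absent from the deployed spec are "shadow" endpoints.

```python
# Conceptual model of shadow-API tagging: compare paths declared in the
# deployed OpenAPI spec against endpoints actually observed in traffic.
# Anything live but undeclared is a "shadow" endpoint.
def find_shadow_apis(deployed_spec_paths, discovered_paths):
    deployed = set(deployed_spec_paths)
    return sorted(p for p in discovered_paths if p not in deployed)

deployed = ["/api/inventory", "/api/products", "/api/recommendations"]
discovered = ["/api/inventory", "/api/products", "/api/admin/debug"]
print(find_shadow_apis(deployed, discovered))  # ['/api/admin/debug']
```

A shadow endpoint like the hypothetical `/api/admin/debug` above is exactly the kind of undeclared attack surface the discovery process is meant to surface.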
Step 1: From your deployment branch check out a branch for the destroy workflow using the following naming convention
xcbn-cis destroy branch: destroy-xcbn-cis
Step 2: Push your destroy branch to the forked repo
Step 3: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build
Step 4: Once the pipeline completes, verify your assets were destroyed
In this article we have shown how to utilize the F5 Hybrid Security Architectures GitHub repo and CI/CD pipeline to deploy a tiered security architecture utilizing F5 XC WAAP, F5 BIG-IP, and NGINX Ingress Controller to protect a test API running in AWS EKS. While the code and security policies deployed are generic and not inclusive of all use cases, they can be used as a stepping stone for deploying F5-based hybrid architectures in your own environments.
Workloads are increasingly deployed across multiple diverse environments and application architectures. Organizations need the ability to protect their essential applications regardless of deployment or architecture circumstances. Equally important is the need to deploy these protections with the same flexibility and speed as the apps they protect. With the F5 WAF portfolio, coupled with DevSecOps principles, organizations can deploy and maintain industry-leading security without sacrificing the time to value of their applications. Not only can Edge and Shift Left principles exist together, but they can also work in harmony to provide a more effective security solution.
F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)
For further information or to get started:
As the modern digital application world keeps evolving and innovating, organizations are faced with an overwhelming amount of data coming from various sources. Navigating this sea of data can be a daunting task, often leading to confusion and inefficiency in decision-making. Making sense of this data and extracting valuable insights is crucial to making the right decisions for protecting applications and boosting their performance. This is where dashboards come to the rescue. Dashboards are powerful visual tools that consolidate complex data sets into user-friendly, interactive displays, offering a comprehensive overview of key metrics, trends, and insights in one place.
By grouping different types of service details into visuals like graphs, charts, tables, and metrics, and displaying those visuals on a single page, dashboards provide valuable insights. They help users review a regular summary that highlights key issues, security risks, and current business trends. They provide quick, easy-to-understand, real-time insights and analysis. They can also be made interactive through advanced options such as global search and filters, so each user can view the data that best suits their needs.
In a nutshell, "Dashboards are like a canvas of your business data - offering a panoramic view of your application data landscape that illuminates the hidden insights driving application security decisions."
In this dashboards overview article, we will walk you through some of the enhanced F5 Distributed Cloud (XC) dashboards and their key insights.
You can explore more about security dashboards in simulator by clicking this link: https://simulator.f5.com/s/xc-dashboards.
App Connect is an L7 full-proxy service that uses the F5 Global Network to provide apps with effective local connectivity. App Connect dashboards focus on how an app is connected both internally and externally by visualizing traffic ingress at the front end and egress to each service endpoint. This service has also joined the trend by serving rich dashboards focused on application delivery: application owners can now observe and act on applications delivered across their multi-cloud network. The Performance dashboard shows details like an HTTP and TCP traffic overview, throughput, and top load balancers. The Application dashboard focuses on load balancer health, active alerts, and the list of existing load balancers.
Fig 7: Image showing App Connect Dashboard
Fig 8: Image showing App dashboard
When it comes to content delivery, performance plays a major role in smooth application streaming. With this in mind, the XC console has released a CDN performance dashboard featuring the cache hit ratio, allowing network operators and app owners to optimize the regional delivery of content that can be cached. It also shows existing CDN distributions along with their metrics, like request count, data transfer, etc.
Fig 9: Image showing CDN dashboard
Note: This is our first overview article in the XC dashboards series; stay tuned for our upcoming articles on these newly implemented rich dashboards.
Dashboards are highly recommended tools for visualizing data in a simple and clear way. In this article, we have provided some insights into the newly enhanced security dashboards for important features, which help users identify application concerns and take the necessary actions.
For more details refer below links:
DNS, the Domain Name System, is the mechanism by which humans and machines discover where to connect: the universal directory mapping names to addresses. It is the most fundamental service that everything on the Internet depends on, and keeping it available is critical to keeping an organization online in the midst of DDoS attacks.
We often encounter DNS failure scenarios: a single on-premises, CPE-based DNS solution with only a backup, or a single cloud DNS solution struggling with increasing traffic demands. And when we extend traditional DNS across an organization's websites and applications in different environments, most on-premises DNS solutions don't scale efficiently to support today's ever-expanding app footprints.
F5 Distributed Cloud DNS addresses these problems by acting as either a primary or a secondary nameserver and provides global security, automatic failover, DDoS protection, TSIG authentication support, and, when used as a secondary DNS, DNSSEC support. As more apps are deployed in the cloud, F5 XC DNS scales with them and can provide regional DNS as well.
It also acts as an intelligent DNS load balancer from F5, directing application traffic across environments globally. It performs health checks, provides disaster recovery, and automates responses to activities and events to maintain high application performance. In addition, its regional DNS capability redirects traffic according to geographic location, reducing the load on any single DNS server.
Here are the key areas where F5 Distributed Cloud DNS plays a vital role to solve:
A GitHub repo is available that helps deploy the services for the key features above.
Finally, this demo guide supports customers with a clear instruction set to deploy the services using F5 Distributed Cloud DNS.
Unlike other Open Banking initiatives that are mandate-driven in a top-down approach, the North-American Open Banking standardisation effort is industry-led, in a bottom-up fashion by the Financial Data Exchange (FDX), a non-profit body. FDX's members are financial institutions, fintechs, payment networks and financial data stakeholders, collaboratively defining the technical standard for financial data sharing, known as FDX API.
As Security is a core principle followed in development of FDX API, it's worth examining one of the ways in which F5 customers can secure and test their FDX deployments.
To understand the general architecture of an Open Banking deployment like FDX, it is helpful to visualise the API endpoints and components that play a central role in the standard, versus the back-end functions of typical Financial Institutions (the latter elements displayed as gray in the following diagram):
In typical Open Banking deployments, technical functions can be broadly grouped in Identity & Consent, Access and API management areas. These are core functions of any Open Banking standard, including FDX.
If we are to start adding the Security Controls (green) to the diagram and also show the actors that interact with the Open Banking deployment, the architecture becomes:
It is important to understand that Security Controls like the API Gateway, Web Application and API Protection or Next Generation Firewalls are just functions, rather than instances or infrastructure elements. In some architectures these functions could be implemented by the same instances/devices while in some other architectures they could be separate instances.
To help decide the best architecture for Open Banking deployments, it is worth checking the essential capabilities that these Security Controls should have:
WAAP (Web Application and API Protection)
API Gateway
(NG)FW
IDS/IPS
Databases
Client-side protection
One possible architecture that could satisfy these requirements would look similar to the one depicted in the following high-level diagram, where NGINX is providing API Gateway functionality while F5 Distributed Cloud provides WAF, Bot Management and DDoS protection.
In this case, just for demo purposes, the notional FDX backend has been deployed as a Kubernetes workload on GKE, with the NGINX API Gateway deployed as an Ingress Controller, while the WAF, Bot Management, and DDoS functionality was implemented on F5 Distributed Cloud (XC) Regional Edges. There is, however, a great degree of flexibility in deploying these elements on public/private clouds or on-premises.
To learn more on the flexibility in deploying XC WAAP, you can read the article Deploy WAAP Anywhere with F5 Distributed Cloud
Once the architectural decisions have been made, the next critical step is testing the deployment (with a focus on Security Controls testing) and adjusting the security policies. This, of course, should be done continuously throughout the life of the application as it evolves.
The challenge in testing such an environment comes from the fact that the Open Banking API is generally protected against unauthorized access via JSON Web Tokens (JWT), which are checked for authentication and authorisation at the API Gateway level. "Fixing" the JWT to static values defeats the purpose of testing the actual configuration that is in (or will be moved to) Production, while generating the JWT automatically to enable scripted testing is fairly complex, as it involves going through all the stages a real user would go through to perform a financial transaction.
An example of the consent journey the end-user and the Data Recipient go through to obtain the JWT can be seen in the following diagram:
One solution to this challenge would be to use an API Tester that can perform the same actions as a real end-user: obtain the JWT in a pre-testing stage and feed it as an input to the security testing stages.
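The pattern can be sketched as two stages: a pre-testing stage that obtains the JWT, and test stages that attach it to every request. The code below is a sketch of the pattern only - the consent journey is stubbed, and the host and path are placeholders, not the actual API Tester's code.

```python
import urllib.request

def obtain_jwt() -> str:
    """Pre-testing stage. In the real tester this walks the full consent
    journey (client registration, user authentication, consent grant, token
    exchange); here it is stubbed with an illustrative value."""
    return "eyJhbGciOiJSUzI1NiJ9.stub.signature"

def build_api_request(path: str, token: str) -> urllib.request.Request:
    # Hypothetical FDX API host; every security-test request carries the
    # pre-fetched JWT as a Bearer token, so the gateway's real JWT checks
    # stay enabled during testing.
    req = urllib.request.Request("https://fdx-api.example.com" + path)
    req.add_header("Authorization", "Bearer " + token)
    return req

token = obtain_jwt()                                 # stage 1: fetch JWT once
req = build_api_request("/fdx/v5/accounts", token)   # stage 2: scripted tests
```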
One such tool was built using the Open Source components described in the diagram below and is available on GitHub.
The API Tester is automatically deployed and run using GitHub Actions and Terraform Cloud. A full pipeline goes through the deployment of the GCP GKE infrastructure required to host the notional FDX back-end and the NGINX Ingress Controller API Gateway, the F5 XC WAAP (Web Application and API Protection), and the API Tester hosted on the F5 XC vK8s infrastructure.
A run is initiated by creating a repository branch and, following the deployment and test run, a report is received via email.
Here's the API Tester in action:
F5 XC WAAP and NGINX API Gateway can provide the levels of protection required by the Financial Services industry; this article focused on a possible security architecture for FDX, the North-American standard for Open Banking.
To test the security posture of the FDX Security Controls, a new API Tester framework is needed; the main challenge it solves is the automated generation of the JWT, following the same journey as a real end-user.
This allows the testing of deployments having a configuration similar to the one found in Production.
Who knows what an iRule is? iRules have been used by F5 BIG-IP customers for a quarter of a century, and one of their most common use cases is security decisions. If you're not coming from a BIG-IP and iRules background, what if I told you that you could apply thousands of combinations of L4-L7 match criteria in order to take action on specific traffic? That is what a Service Policy provides, similar to iRules: the ability to match traffic and allow, deny, flag, or tune the application security policy based on that match. I am often asked, "Can F5 Distributed Cloud block ____ the same way I do with iRules?", and most commonly the answer is: absolutely, with a Service Policy.
Recently, a customer came to me with a challenge: blocking a specific attack based on a combination of headers. This is a common application security practice, specifically for L7 DDoS attacks or Account Takeover (ATO) attempts via credential stuffing/brute-force login. While F5 Distributed Cloud's Bot Defense or Malicious Users feature sets might be more dynamic tools in the toolbox for these attacks, a Service Policy is great for taking quick action. It is critical that you clearly identify the match criteria in order to ensure your service policy will not block good traffic.
As stated earlier, the attack was identified by a specific combination of headers and values of these headers. The specific headers looked something like below (taken from my test environment and curl tests):
curl -I --location --request GET 'https://host2.domain.com' \
--header 'User-Agent: GoogleMobile-9.1.76' \
--header 'Content-Type: application/json; charset=UTF-8' \
--header 'Accept-Encoding: gzip, deflate, br' \
--header 'partner-name: GOOGLE' \
--header 'Referer: https://host.domain.com/'
The combination of these headers all had to be present, meaning we needed "and" logic for matching the headers and their values. Seems pretty simple, but this is where the conversation between the customer and me came into play. When applying all of the headers to match as shown below, they were not matching. Can you guess why?
Figure A: Headers - Flat
The first thought that comes to mind is probably case sensitivity in the values. However, if we take a closer look specifically at the 'partner-name' header configuration, I've placed a transformation on this specific header, so 'partner-name' isn't the problem.
Figure B: A transformer is applied to the request traffic attribute values before evaluating for a match.
Give up? The issue in this Service Policy configuration is the 'Accept-Encoding' header, specifically the ',' {comma} character in the value. The F5 Distributed Cloud Service Policy feature treats comma-delimited values as separate headers, each with an individual value. The reason for this is that a request can include the same header multiple times, or it can carry multiple values in a single header. To keep parsing consistent, headers with comma-delimited values are separated into multiple headers before matching.
I thought I could be smart when initially testing this and added multiple values to a single header item in the policy. This will not match: first, because the request's comma-delimited values are parsed into separate headers, and second, because multiple values configured within a single header item in the service policy create "or" logic, while we're looking for "and" logic across all headers and their exact values.
Figure C: Multiple Values in Single Header
Figure D : Multiple Values within a Single Header - "or" Logic for this header
In order to get the proper match with "and" logic across all headers and their values, we need to apply the same header name multiple times. It's important to note that the 'Content-Type' header has a ';' {semi-colon}, which is not a delimiter in F5 Distributed Cloud Service Policy logic, and it will match just fine the way it is in the defined policy shown below.
Figure E: Multiple Headers defined, with individual values, will provide "and" logic for all headers, and their values.
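The two matching modes can be modeled with a rough Python sketch. The helper names here are hypothetical; the real evaluation happens inside the XC Service Policy engine:

```python
def and_match(request_headers, policy_pairs):
    """AND logic: every policy (name, value) pair must appear
    among the request's (already comma-split) header pairs."""
    return all(pair in request_headers for pair in policy_pairs)

def or_match(request_values, policy_values):
    """OR logic: for a multi-value entry on a single policy
    header, any one configured value matching is enough."""
    return any(v in request_values for v in policy_values)

request = [("accept-encoding", "gzip"), ("accept-encoding", "deflate"),
           ("accept-encoding", "br"), ("partner-name", "GOOGLE")]
policy = [("accept-encoding", "gzip"), ("accept-encoding", "deflate"),
          ("accept-encoding", "br")]

print(and_match(request, policy))   # True: all three values are present
# Dropping the 'g' from gzip breaks the "and" match entirely:
print(and_match([("accept-encoding", "zip")] + request[1:], policy))  # False
# With "or" logic, a single surviving value is enough:
print(or_match(["gzip", "eflate", "r"], ["gzip", "deflate", "br"]))   # True
```

This mirrors the test results below: one altered value defeats the "and" policy, while the "or" policy still fires as long as any one value matches.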
In these tests, I am going to first provide an exact match to block the traffic. When we match, we return a 403 response code to the client. Within the individual Load Balancer objects of F5 Distributed Cloud, you can customize the messaging that comes along with the 403 response code, or any response code for that matter. For my tests, I'll simply use curl and update the different headers. After this initial successful block, I'll show a few examples of changing the headers sent with curl. For the "and" logic, any change to the headers should result in a 200 response code. For the "or" logic, it'll depend on how I change the headers.
In this testing section, the service policy is configured like Figure E above.
All values are an exact match; with "and" logic, the 403 response code identifies the block from F5 Distributed Cloud.
When removing the 'g' character from gzip, the "and" logic no longer matches, as not every value is exact. This results in a 200 response code being returned from the origin server through F5 Distributed Cloud.
In this testing section, the service policy is configured like Figure D above.
This is an exact match, and the Service Policy blocked the request, sending a 403 response code back to the client
With "or" logic on the Accept-Encoding header, one of the values must match. Since I removed the first letter of every value, there was no match, and F5 Distributed Cloud passed the traffic to the origin server. The origin and F5 Distributed Cloud returned a 200 response code.
When adding the 'g' back to gzip, but leaving all other values missing their first character, we once again get a block at the service policy and a 403 response code. Again, this is "or" logic, so only one value must match.
A Service Policy is a very powerful engine within F5 Distributed Cloud. We've only scratched the surface of service policies in this article as it pertains to header matching and logic. Other match criteria include IP Threat Category (Reputation), ASN, HTTP Method, HTTP Path, HTTP Query Parameters, HTTP Headers, Cookies, Arguments, Request Body, and so on. The combination of these match criteria, and the order of operations of each service policy rule, can make a huge difference in the security posture of your application. These capabilities within the application layer are critical to the security of your application services. As F5 Distributed Cloud is your strategic point of application delivery and control, I hope you're able to use service policies to elevate your application security posture.
An important and long-standing need for enterprise storage is the ability to recover from disasters through both rapid and easy access to constantly replicated data volumes. Beyond reducing corporate downtime from recovery events, the replicated volumes are also critical for cloning purposes to facilitate items such as research into data trends or to perform advanced analytics on enterprise data.
A modern need exists to quickly replicate data across a wide breadth of sites, with diversity in the major cloud providers to be leveraged, providers such as AWS, Azure, and Google. The ability to simultaneously replicate critical data to several of these hyperscalers addresses a major industry concern: vendor lock-in. Modern data stores must be efficiently and quickly saved to, and acted upon, using whichever cloud provider an enterprise desires. Principal reasons for this hybrid cloud requirement include maximizing return on investment by shopping for attractive price points or more 9's of reliability.
Although major cloud providers may have individual, unique VPN-style solutions to support data replication, for example Microsoft Azure VPN Gateway deployments, selecting concurrent, differing solutions can quickly become an administrative burden. Each cloud provider offers slightly distinctive networking and security wares. A critical concern is the shortage of advanced skill sets often required to maintain configuration and diagnostic processes for competing cloud storage solutions. With flux to be expected in staffing, the long-term cost of trying to stitch disparate cloud technologies into one cohesive offering for the enterprise has been difficult to justify.
This is the precise multi-cloud strategy where F5 Distributed Cloud (XC) can complement industry-leading enterprise-grade storage solutions from a major player like NetApp. With F5 Distributed Cloud Network Connect, multiple points of presence of an enterprise, including on-prem data centers and a multitude of cloud properties, are seamlessly tied together through a multi-cloud network (MCN) offering that leverages a 20 Tbps backbone. Service turn-up is measured in minutes, not days.
An excellent, complementary use of the F5 XC hybrid secure network offering is NetApp’s modern approach to managing enterprise data estates, NetApp BlueXP. This unified, cloud-based control plane from NetApp allows an enterprise to manage volumes both on-prem and in major cloud providers and in turn set up high-value services like data replication. Congruent to the simple workflows F5 XC delivers for secure networking setup, NetApp BlueXP also consists of intuitive workflows. For instance, simply drag one volume onto another volume on a point-and-click working canvas and standard SnapMirror is enacted. F5 XC can underpin the connectivity requirement of a multi-cloud hybrid environment by handling truly seamless and secure network communications.
The first step in demonstrating the F5 and NetApp solutions working in concert to provide efficient disaster recovery of enterprise volumes was to set up F5 XC customer edge (CE) sites within Azure, AWS and On-Prem data center locations. The CE is a security demarcation point, a virtualized or dedicated server appliance, allowing highly controlled access to key enterprise resources from specific locales selected by the enterprise. For instance, a typical CE deployment for MCN purposes is a 2-port device with inside ports permitting selective access to important networks and resources.
Each CE will automatically multi-home to geographically close F5 regional edge (RE) sites; no administrative burden is incurred and no networking command-line workflows need be learned, as CE deployments are wizard-based workflows with automatic encrypted tunnels established. The following screenshot demonstrates in the red highlighted area that a sample Azure CE site freshly deployed in the Azure Americas-2 region has automatic encrypted tunnels set up to New York and Washington, DC RE nodes.
Regardless of the site, be it an AWS VPC, an AWS services VPC supporting transit gateway (TGW), an Azure VNET, or an on-prem location, the net result is always a rapid setup with redundant auto-tunneling to the F5 international fabric provided by the global RE network. Other CE attachment models can be invoked, such as direct site-to-site connectivity that bypasses the RE network; however, the focus of this document is the most prevalent approach, which harnesses the uptime and bandwidth advantages offered by the RE network gluing customer sites together.
With connectivity available between the inside interfaces of deployed CEs, standard firewall rules easily added, and service insertion of third-party NGFW technology such as Palo Alto firewall instances supported, the plumbing to efficiently interconnect NetApp volumes for ongoing replication is now in place.
The objective for the F5 XC deployment was to utilize the NetworkConnect module to allow Layer 3 connectivity between the inside ports of CEs regardless of site type. In other words, connectivity between networked resources at on-prem, AWS, or Azure sites is enabled quickly with a consistent and simple guided workflow. The practical application of this Layer 3 style of MCN was connectivity of NetApp volumes, as depicted in the following diagram.
A widely embraced enterprise-class file storage offering is the industry-leading NetApp ONTAP solution. When deployed on-prem, the solution allows shared file storage, often using NFS or SMB protocols for file storage, frequently with multiple nodes used to create a storage cluster. Although originally hardware appliance-oriented in nature, modern incarnations of on-prem ONTAP solutions can easily and frequently utilize virtualized appliances.
Both NetApp and F5, in keeping with modern control plane industry trends, have moved towards a centralized, portal-based approach to configuration, whether it be storage appliances (NetApp) or multi cloud networking (F5). This SaaS approach to configuration and monitoring means control plane software is always up-to-date and requires no day-to-day management. In the case of NetApp, this modern control plane is instantiated with the BlueXP cloud-based portal.
The sample BlueXP canvas displayed above demonstrates the diversity of data estate entities that can be managed from one workspace, with volumes both on-premises and AWS cloud-based, along with Amazon S3 storage seen.
NetApp offers a widely used cloud-based implementation of file storage, Cloud Volumes ONTAP (CVO) which serves as an excellent repository for replicating traditional on-premises volumes. In the demonstration environment both AWS and Azure were harnessed to quickly set up CVO instances. For BlueXP to establish a workspace involving a managed CVO instance, a “Connector” is deployed in the AWS VPC or Azure VNet. This connector is the entity which facilitates the BlueXP control plane management functions for hybrid-cloud storage.
Upon establishing connectivity from on premises to AWS and Azure, enabled by the F5 XC Customer Edge (CE) nodes deployed at each site, a vast and mature range of features is available to the BlueXP operator.
As highlighted above, a core function of the BlueXP services is replication; in this workspace one can see the on-premises cluster being replicated automatically to an Azure CVO instance.
The result of combining the F5 Distributed Cloud multi-cloud networking support with the NetApp ability to safeguard mission critical enterprise data, anywhere, was found to be a smooth, intuitive set of guided configuration steps. Within an hour, protected inside networks were established in two popular cloud providers, AWS and Azure, as well as in an existing on premises data center. With the connectivity encrypted and standard firewall rules available, including the option to run data flows through inline third-party NGFW instances, the focus upon practical usage of the cloud infrastructure could commence.
A multi-site file storage solution was deployed using the NetApp BlueXP SaaS console, whereby an on premises ONTAP cluster received local files through the NFS protocol. To demonstrate the value of a multi-cloud deployment, the F5 XC NetworkConnect module allowed real-time file replication of the on-prem cluster contents to separate and independent volumes securely located within an AWS VPC and Azure VNet, respectively. Using F5 XC, the target networks within the cloud providers were highly secured, only permitting access from the data center.
The net result is a solution that can accommodate disaster recovery requirements, for instance a clone of the AWS or Azure volumes could be created and utilized for business continuity in the event of data corruption or disk failure on premises. Other use cases would be to clone the cloud-based volumes for research and development purposes, analytics, and further backup purposes that could utilize snapshotting or imaging of the data. The inherent redundancy offered by using multiple, secured cloud instances could be enhanced easily by expanding to other hyperscalers, for instance Google Cloud Platform when business purposes dictate such a configuration is prudent.
A simple and intuitive simulator is available to walk users quickly through the setup of an F5 Distributed Cloud MCN deployment such as the one reflected in this article. The simulator can be found here.
For a complete, comprehensive walk-through of F5 Distributed Cloud Multi-Cloud Networking (MCN), including setup steps, please see this DevCentral article - Multi-Cloud Networking Walkthrough with Distributed Cloud.
As part of its release cycle management, F5 Distributed Cloud (F5 XC) keeps releasing new features. The July upgrade released three new features in the Web Application and API Protection (WAAP) and Security dashboards.
Let’s dive into them one by one.
Security dashboards capture different types of logging metrics, and sometimes users may need these logs to analyze them offline. The WAAP Exports feature addresses this need by exporting the latest 500 security-related logs in CSV format. Users can export logs from the Events, Incidents, and Requests tabs of the security dashboard.
Production security dashboards show plenty of logging information to understand the current security posture of apps and APIs for ongoing traffic. Owners can go through them to analyze the traffic and decide whether ongoing data is malicious and poses any threats. This process is a little time-consuming and needs human expertise in traffic analysis. Users are looking for a top-level overview of how many attacks were seen in a specific period compared to the previous period.
The WAAP Trends feature in the security dashboards of the HTTP load balancer enables users to view the change in metrics (up or down) compared with the previous period. Incoming traffic is analyzed using internal tools to decide the sentiment (positive, negative, or neutral), which is displayed in the UI, thereby saving a lot of time. Users can instantly check the sentiment and, if needed, update the existing configurations to safeguard their applications.
As I was writing this article, I kept remembering the famous quote "the trend is your friend," which conveys the importance of identifying the current trend in safeguarding your applications.
Two new operators (Present and Not Present) have been added as filters on the Security Analytics page. These operators help users easily search and filter through security events and incidents to identify specific violations, event types, and/or application attributes.
The Present operator helps users identify and segregate the events/incidents containing the provided key. Users select a key according to their need from the available list of keys, and Distributed Cloud (XC) internally checks all requests for whether the provided key is Present and filters them accordingly. The filtered data is displayed on the dashboard; other requests are ignored. This granular filtering can accelerate investigations and improve users' ability to respond quickly.
Similarly, Not Present operator identifies and displays the events/incidents in which the mentioned attribute/key is not available.
Here is a basic example which explains the usage of operators:
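As a sketch of the operator behavior in Python (illustrative only; the event fields shown are made up for the example, not actual XC log keys):

```python
def filter_events(events, key, operator):
    """Return events where `key` is Present (exists) or
    Not Present (missing), regardless of its value."""
    if operator == "Present":
        return [e for e in events if key in e]
    if operator == "Not Present":
        return [e for e in events if key not in e]
    raise ValueError(f"unknown operator: {operator}")

events = [
    {"event_type": "waf_sec_event", "signature_id": "200001234"},
    {"event_type": "bot_defense"},
]
# Only the first event carries a signature_id, so each filter
# returns exactly one of the two events:
present = filter_events(events, "signature_id", "Present")
absent = filter_events(events, "signature_id", "Not Present")
```

The key point is that both operators test only for the attribute's existence, never its value.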
In this manner, ease of filtering can be achieved using operators in XC console.
I hope this article has provided a summary of newly implemented features of WAAP events export, trends and new operators which focus on logging and security dashboards.
Stay tuned for more feature articles. For more details, refer to the links below:
F5 Distributed Cloud (XC) services provide full REST APIs to enable automation of the deployment and management of multi-cloud infrastructure. Organizations looking to implement infrastructure-as-code operations for modern apps, or to distribute and secure multi-cloud deployments, can utilize and adapt the Terraform and Ansible scripts in the many articles on F5 DevCentral that cover automation topics for F5 Distributed Cloud. Typically these scripts automate and help to consistently:
This article focuses on only the Deliver part of the distributed app lifecycle, where using Terraform script with F5 Distributed Cloud Services organizations can easily deploy and configure multi-cloud networking & app connectivity of their distributed applications that span across:
The easiest place to get started with automation of Multi-Cloud Networking (MCN) and Edge Compute scenarios is by cloning the corresponding GitHub repositories from the Demo Guides, which include sample applications and provide opportunities to see automation scripts in action. The Terraform scripts within the following Demo Guides can be used as templates and quickly customized to your organization's requirements, automating repetitive tasks or the creation of resources with just a quick update of the variables unique to your environment.
Multi-cloud networking use-cases Demo Guide where you can use Terraform to enable connectivity for multiple clouds and explore using HTTP and TCP load balancers to connect the provided sample application. You can use the provided scripts in the GitHub repositories to deploy the required sample app, and other components representative of a traditional 3-tier app architecture (backend + database + frontend).
Furthermore, the scripts provide flexibility of choosing the target clouds (AWS or Azure or both), which you can adapt to your environment and app topologies based on which clouds the different app services should be deployed to. Use the guide to get familiar with how to update variables for each cloud configuration, so that you can further customize to your environment to help automate and simplify deployment of the networking topologies across Azure and AWS, ultimately saving time and effort.
Edge Compute for Multi-cloud Apps Demo Guide where Terraform scripts help automate deployment of the application infrastructure across AWS (sample app and other components representative of a traditional 3-tier app architecture – backend, database, frontend). The result is a multi-cloud architecture, with components deployed on Microsoft Azure and Amazon AWS.
By adapting the included Terraform script you can easily deploy and securely network app services to create a distributed app model that spans across:
In the process you get familiar with the configuration of TCP and HTTP Load Balancers, create a vK8s that spans multiple locations / clouds, and deploy distributed app services across those locations with the help of the Terraform scripts.
The Deploying high-availability configurations Demo Guide is an important resource for getting familiar with and automating High-Availability (HA) configuration of backend resources. In this guide, as an example, you can use a PostgreSQL database HA deployment on a CE (Customer Edge), which is a common use-case leveraging F5 Distributed Cloud Customer Edge for deploying a backend. First, deploy the AWS Site environment, followed by deployment of a vK8s, and then customize and run a Bitnami Helm chart to configure a multi-node PostgreSQL deployment.
Of course, you can leverage this type of automation with a Helm chart of your choice to configure a different backend resource or database type. Adapt to your environment with a few changes to the script variables, and feel free to combine with scripts from the other two guides to deploy the app(s) and configure networking (MCN) should you choose to automate the entire workflow.
Terraform scripts represent ready-to-use code, which you can easily adapt to your own apps, environments, and services, or extend as needed. The baseline for most scripts is the Volterra Provider with required edits/updates of the variables in Terraform. These variables are special elements that allow us to store and pass values to different parts of the modules without changing the code in the main configuration file. Variables allow the flexibility of updating the settings and parameters of the infrastructure, and they simplify its configuration and support.
Variables are stored and can be found in .tf files of the respective folders. Using the Deploying high-availability configurations Demo Guide as example, you change the environment variable values related to your app, which you can find in the terraform folder and the application subfolder. Open the var.tf file to update the values:
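For illustration, a var.tf file of this kind generally looks like the following. The variable names and defaults shown here are placeholders, not the exact contents of the demo repository:

```hcl
variable "api_url" {
  type        = string
  description = "F5 XC tenant API URL (placeholder value)"
  default     = "https://<tenant>.console.ves.volterra.io/api"
}

variable "aws_region" {
  type        = string
  description = "Region for the AWS Site (placeholder value)"
  default     = "us-west-2"
}
```

Updating these defaults (or overriding them with a .tfvars file or -var flags at plan/apply time) is all that is needed to point the scripts at your own environment.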
More detailed information on variables can be found here.
In summary, Demo Guide repositories include Terraform scripts used to help automate different operations, including deployment of the environment required for the sample distributed app, as well as deploying the app itself. You can take a closer look at the Demo Guide use-cases together with their respective Terraform scripts, run a quick test to get familiar with the use-case, and then adapt the scripts to your environment and your applications.
Whether your app has high availability requirements or distributed multi-cloud infrastructure, using Terraform with F5 Distributed Cloud Services can simplify deployment, automate infrastructure on any cloud and save time and effort managing and securing app resources in any cloud or data center.
Edge Compute for Multi-cloud Apps Demo Guide
Terraform scripts & assets for the Edge Compute Demo Guide
https://registry.terraform.io/providers/volterraedge/volterra/latest/docs
To prevent attackers from exploiting mobile apps to launch bots, F5 provides customers with the F5 Distributed Cloud (XC) Mobile SDK, which collects signals for the detection of bots. To gain this protection, the SDK must be integrated into mobile apps, a process F5 explains in clear step-by-step technical documentation. Now, F5 provides an even easier option, the F5 Distributed Cloud Mobile SDK Integrator, a console app that performs the integration directly into app binaries without any need for coding, which means no need for programmer resources and no integration delays.
The Mobile SDK Integrator supports most iOS and Android native apps. As a console application, it can be tied directly into CI/CD pipelines to support rapid deployments.
While motivations for using SDK Integrator may vary, below are some of the more common reasons:
The work of the SDK Integrator is done through two commands: the first command creates a configuration profile for the SDK injection, and the second performs the injection.
$ python3 ./create_config.py --target-os Android --apiguard-config ./base_configuration_android.json --url-filter "*.domain.com/*/login" --enable-logs --outfile my_app_android_profile.dat
In Step 1, --apiguard-config lets the user specify the base configuration to be used in the integration. With --url-filter we specify the pattern for URLs which require Bot Defense protection, --enable-logs allows APIGuard logs to be seen in the console, and --outfile specifies the name of this integration profile.
$ java -jar SDK-Integrator.jar --plugin F5-XC-Mobile-SDK-Integrator-Android-plugin-4.1.1-4.dat --plugin my_app_android_profile.dat ./input_app.apk --output ./output_app.apk --keystore ~/my-key.keystore --keyname mykeyname --keypass xyz123 --storepass xyz123
In Step 2, we specify which SDK Integrator plugin and configuration profile should be used. In the same step, we can optionally pass parameters for app signing: --keystore, --keyname, --keypass and --storepass. The --output parameter specifies the resulting file name. The resulting .apk or .aab file is a fully integrated app, which can be tested and released.
Injection steps for iOS are similar. The commands are described in greater detail in the SDK Integrator user guides distributed with the SDK Integrator.
To thwart potential attackers from capitalizing on mobile apps to initiate automated bots, the F5 Distributed Cloud Mobile SDK Integrator seamlessly incorporates the SDK into app binaries, completely bypassing the need for coding and making the process easy and fast.
To add to Nikoolayy1's comment, F5 can generate API Definitions within XC, and we are working on integration for BIG-IP and nginx deployments. This will allow traffic that does not use XC for client traffic to provide the Swagger/OpenAPI files and security assessments available for XC Load Balancers.
Currently, you use a logger on the proxy (BIG-IP or nginx) to gather the request and response data and that is then sent in to XC via a separate service. The advantage here is that you don't need to change the traffic flow of your client traffic.
If you're interested in learning more, please PM me or reach out to your local account team.
As I have mentioned in https://community.f5.com/t5/codeshare/generating-irule-logs-emails-and-reports-for-shadow-api/ta-p/313912, only F5 Distributed Cloud (XC) can do this, and as of now there is no native option with F5 BIG-IP.
Date: Tuesday, September 19, 2023
Time: 10:00am PT | 1:00pm ET
Speaker: Krista Baum
What's the webinar about?
F5's SaaS services delivery platform, F5 Distributed Cloud, has expanded. We'll answer common customer questions like: How does this SaaS platform work with other F5 products and services? What best practices help securely deliver my applications? And we'll look at how pairing F5 Distributed Cloud Services with your existing security services can help you become an application security hero in your organization.
In this session, we will:
Note: If you can't make this session, please still register and we'll send you a link to the on-demand recording.
Date: Thursday, September 7, 2023
Time: 10:00am PT | 1:00pm ET
Speaker: Kyle Twenty, F5 Solutions Engineer II
What's the webinar about?
Whether an application is hosted in one or more locations, is migrating, or has advanced delivery and security needs, it's crucial to provide effective and consistent security. Fortunately, application owners can use the F5 Distributed Cloud Platform and portfolio of services to engage multi-cloud networking for on-premises and cloud-based application deployments to provide consistent security operations.
In this session, we will:
Note: If you can't make this session, please still register and we'll send you a link to the on-demand recording.
An open group to foster discourse around the integration of security, networking, and application management services across public/private cloud and network edge compute services.