Moving HTTP Load Balancers Between F5 Distributed Cloud Namespaces — Why It's Harder Than You Think
The Problem

If you have been working with F5 Distributed Cloud (XC) for a while, you have probably run into this: your namespace structure no longer reflects how your teams or applications are organized. Maybe the initial layout was a quick decision during onboarding. Maybe teams have merged, projects have grown, or your naming convention has evolved. Either way, you now want to move a handful of HTTP load balancers from one namespace to another. Simple enough, right? Just change the namespace field and save...

Except you can't. There is no "move" operation on F5 XC - not in the UI, not in the API. Changing the namespace of a load balancer means deleting it in the source and re-creating it in the target. And that is where things get complicated.

Why a Simple Delete-and-Recreate Is Not Enough

On the surface, the API is straightforward: "GET" the config, "DELETE" the object, "POST" it into the new namespace. But a production HTTP load balancer on XC is rarely a standalone object. It sits at the top of a dependency tree that can include origin pools, health checks, TLS certificates, service policies, app firewalls, rate limiters, and more. Every one of those dependencies needs to be handled correctly - or the migration breaks. Here are the main challenges we might run into.

Referential Integrity

F5 XC enforces strict referential integrity. You cannot delete an origin pool that is still referenced by a load balancer. You cannot create a load balancer that references an origin pool that does not exist yet. This means the order of operations matters: delete top-down (LBs first, then dependencies), create bottom-up (dependencies first, then LBs). It also means that if two load balancers share an origin pool, you cannot move them independently. Delete the first LB, try to delete the shared pool, and the API returns a 409 Conflict because the second LB still references it. Both LBs - and all of their shared dependencies - have to be moved together as a single atomic unit.
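The delete-top-down / create-bottom-up ordering described above is essentially a topological walk over the dependency graph. A minimal sketch (hypothetical object names, not the tool's actual code):

```python
# Dependency-ordered migration sketch: delete top-down, create bottom-up.
# Hypothetical objects -- a real XC config would come from the API.
deps = {
    "lb-app1": ["pool-app1"],          # LB references its origin pool
    "pool-app1": ["healthcheck-app1"], # pool references a health check
    "healthcheck-app1": [],
}

def create_order(deps):
    """Return objects ordered so every dependency precedes its dependents."""
    order, seen = [], set()
    def visit(obj):
        if obj in seen:
            return
        seen.add(obj)
        for d in deps.get(obj, []):
            visit(d)  # dependencies first (bottom-up)
        order.append(obj)
    for obj in deps:
        visit(obj)
    return order

creates = create_order(deps)
deletes = list(reversed(creates))  # top-down: LBs before their pools

print(creates)  # ['healthcheck-app1', 'pool-app1', 'lb-app1']
print(deletes)  # ['lb-app1', 'pool-app1', 'healthcheck-app1']
```

The same walk is what makes shared dependencies a problem: a pool referenced by two LBs appears in both of their trees, so both must be deleted before the pool can go.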
New CNAMEs After Every Move

When you delete and re-create an HTTP load balancer, F5 XC assigns a new "host_name" (the CNAME target that your DNS records point to). If the LB uses Let's Encrypt auto-certificates, the ACME challenge CNAME changes too. That means after every move, someone needs to update external DNS records - and until that happens, the application is unreachable or the TLS certificate renewal fails. For tenants using XC-managed DNS zones with "Allow Application Loadbalancer Managed Records" enabled, this is handled automatically. But many customers manage their own DNS, and they need the old and new CNAME values for every moved LB.

Certificates with Non-Portable Private Keys

This one is subtle. When a load balancer uses a manually imported TLS certificate, the private key is stored in one of two forms: blindfolded (encrypted with the Volterra blindfold key) or clear secret. In both cases, the XC API never returns the private key material in its GET response. You get the certificate and metadata, but not the key. That means you cannot extract-and-recreate the certificate in a new namespace via the API. Cross-namespace certificate references (outside of the "shared" namespace) are also not supported. So if an LB in namespace A uses a manually imported certificate stored in namespace A, and you want to move that LB to namespace B, you need to first manually upload the same certificate into namespace B (or into the "shared" namespace) before the migration can proceed.

API Metadata

The XC API returns a "referring_objects" field on every config GET response. In theory, this tells you what other objects reference a given resource - exactly what you need to know before deleting something. In practice, this field can be empty even when active references exist.
The only reliable way to detect all external references is to actively scan: fetch the config of every load balancer in the namespace and check their specs for references to the objects you are about to move.

Cross-Namespace References Are Not Allowed

On F5 XC, an HTTP load balancer can only reference objects in its own namespace or in the "system" or "shared" namespaces. If your origin pool lives in namespace A and you move the LB to namespace B, the origin pool must either come along to namespace B or already exist there. There is no way to have the LB in namespace B point to a pool in namespace A. This means you need to discover the complete transitive dependency tree of every LB, determine which dependencies need to move, detect which are shared between multiple LBs, and batch everything accordingly.

The Tool: XC Namespace Migration

To deal with all of this, (A)I built **xc-ns-mover** — a Python CLI tool that automates the entire process. It has two components:

Scanner - scans all namespaces on your tenant, lists every HTTP load balancer, and writes a CSV report. This gives you the inventory to decide what to move.

Mover - takes a CSV of load balancers, discovers all dependencies, groups LBs that share dependencies into atomic batches, runs a series of pre-flight checks, and then executes the migration - or generates a dry-run report so you can review everything first, or do the job manually (JSON code blocks are available in the report).

What the Mover Does Before Touching Anything

The mover runs a series of pre-flight phases before making any changes:

Discovery and batching - fetches every LB config, walks the dependency tree, and uses a union-find algorithm to cluster LBs with shared dependencies into batches.

External reference scan - for every dependency being moved, checks whether any LB outside the move list references it. If so, that dependency cannot be moved without breaking the external LB, and the batch is blocked.
Conflict detection - lists all existing objects in the target namespace. If a name already exists, the user can skip the object or rename it with a configurable prefix (e.g., "migrated-my-pool"). All internal JSON references are updated automatically.

Certificate pre-flight - identifies certificates with non-portable private keys, then searches the target and "shared" namespaces for a matching certificate by domain/SAN comparison (including wildcard matching per RFC 6125). If a match is found, the LB's certificate reference is automatically rewritten. If not, the batch is blocked until the certificate is manually created.

DNS zone pre-flight - queries the tenant's DNS zones to detect which ones have managed LB records enabled. LBs under managed zones are flagged as "auto-managed" in the report — no manual DNS update needed.

After all checks pass, the actual migration follows a strict order per batch: backup everything, delete top-down, create bottom-up, verify new CNAMEs. If anything fails, automatic rollback kicks in — objects created in the target are deleted, objects deleted from the source are restored from backups.

The Reports

Every run produces an HTML report. The dry-run report shows planned configurations, the full dependency graph, certificate issues, DNS changes required, and any blocking issues — all before a single API call mutates anything. The post-migration report includes old and new CNAME values, a DNS changes table with exactly which records need updating, and full configuration backups of everything that was touched.

Things to Keep in Mind

A few caveats that are worth highlighting:

Brief interruption is unavoidable - The migration deletes and re-creates load balancers. During that window (typically seconds to a few minutes per batch), traffic to affected domains will be impacted. Plan a change window.

Only HTTP load balancers are supported - TCP load balancers and other object types are not handled by this tool.
DNS updates are your responsibility - The report gives you all the values - old CNAME, new CNAME, ACME challenge CNAME - but you need to update your DNS provider.

Always run the dry-run first - The tool enforces this by default: it stores a fingerprint after a dry-run and verifies it before executing. If the config changes, a new dry-run is required.

The project is open source and available on GitHub. It is privately maintained and not "officially supported": https://github.com/de1chk1nd/resources-and-tools/blob/main/tools/xc-ns-mover/README.md If you find bugs or have feature requests, please open a GitHub issue.

Advanced WAF v16.0 - Declarative API
Since v15.1 (in draft), F5® BIG-IP® Advanced WAF™ can import a Declarative WAF policy in JSON format. The F5® BIG-IP® Advanced Web Application Firewall (Advanced WAF) security policies can be deployed using the declarative JSON format, facilitating easy integration into a CI/CD pipeline. The declarative policies are extracted from a source control system, for example Git, and imported into the BIG-IP. Using the provided declarative policy templates, you can modify the necessary parameters, save the JSON file, and import the updated security policy into your BIG-IP devices. The declarative policy copies the content of the template and adds the adjustments and modifications on top of it. The templates therefore allow you to concentrate only on the specific settings that need to be adapted for the specific application that the policy protects. This Declarative WAF JSON policy is similar to the NGINX App Protect policy. You can find more information on the Declarative Policy here:

NAP: https://docs.nginx.com/nginx-app-protect/policy/
Adv. WAF: https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-declarative-security-policy.html

Audience

This guide is written for IT professionals who need to automate their WAF policy and are familiar with Advanced WAF configuration.
These IT professionals can fill a variety of roles:

SecOps deploying and maintaining WAF policy in Advanced WAF
DevOps deploying applications in modern environments and willing to integrate Advanced WAF in their CI/CD pipeline
F5 partners who sell technology or create implementation documentation

This article covers how to PUSH/PULL a declarative WAF policy in Advanced WAF:

With Postman
With AS3

Table of contents:

Upload Policy in BIG-IP
Check the import
Apply the policy
OpenAPI Spec File import
AS3 declaration
CI/CD integration
Find the Policy-ID
Update an existing policy
Video demonstration

First of all, you need a JSON WAF policy, as below:

```json
{
  "policy": {
    "name": "policy-api-arcadia",
    "description": "Arcadia API",
    "template": {
      "name": "POLICY_TEMPLATE_API_SECURITY"
    },
    "enforcementMode": "blocking",
    "server-technologies": [
      { "serverTechnologyName": "MySQL" },
      { "serverTechnologyName": "Unix/Linux" },
      { "serverTechnologyName": "MongoDB" }
    ],
    "signature-settings": {
      "signatureStaging": false
    },
    "policy-builder": {
      "learnOnlyFromNonBotTraffic": false
    }
  }
}
```

1. Upload Policy in BIG-IP

There are 2 options to upload a JSON file into the BIG-IP:

1.1 Either you PUSH the file into the BIG-IP and you IMPORT it, OR
1.2 the BIG-IP PULLs the file from a repository (and the IMPORT is included) <- BEST option

1.1 PUSH JSON file into the BIG-IP

The call is below. As you can notice, it requires a 'Content-Range' header, and the value is 0-(filesize-1)/filesize. In the example below, the file size is 662 bytes. This is not easy to integrate in a CI/CD pipeline, so we created the PULL method instead of the PUSH (in v16.0).

```
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/file-transfer/uploads/policy-api.json' \
--header 'Content-Range: 0-661/662' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--header 'Content-Type: application/json' \
--data-binary '@/C:/Users/user/Desktop/policy-api.json'
```

At this stage, the policy is still a file in the BIG-IP file system.
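The 'Content-Range' value is purely a function of the file size, 0-(filesize-1)/filesize, so it can be computed rather than hard-coded when scripting the upload. A small sketch; the helper name is mine, not part of any F5 tooling:

```python
def content_range(size: int) -> str:
    """Content-Range value the file-transfer upload expects:
    0-(size-1)/size, e.g. '0-661/662' for a 662-byte file."""
    return f"0-{size - 1}/{size}"

# The 662-byte example file from the curl call above:
print(content_range(662))  # 0-661/662

# Typical use when scripting the upload (sketch only, no request sent here):
# headers = {
#     "Content-Range": content_range(len(policy_bytes)),
#     "Content-Type": "application/json",
# }
```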
We need to import it into Adv. WAF. To do so, the next call is required. This call imports the file "policy-api.json" uploaded previously and CREATES the policy /Common/policy-api-arcadia.

```
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/javascript' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "filename": "policy-api.json",
  "policy": {
    "fullPath": "/Common/policy-api-arcadia"
  }
}'
```

1.2 PULL JSON file from a repository

Here, the JSON file is hosted somewhere (in GitLab or GitHub ...), and the BIG-IP will pull it. The call is below. As you can notice, the call refers to the remote repo and the body is a JSON payload. Just change the link value with your JSON policy URL. With one call, the policy is PULLED and IMPORTED.

```
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "fileReference": {
    "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
  }
}'
```

A second version of this call exists, and refers to the fullPath of the policy. This will allow you to update the policy, from a second version of the JSON file, easily. One call for the creation and the update. As you can notice below, we add the "policy":"fullPath" directive. The value of the "fullPath" is the partition and the name of the policy set in the JSON policy file. This method is VERY USEFUL for CI/CD integrations.

```
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "fileReference": {
    "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
  },
  "policy": {
    "fullPath": "/Common/policy-api-arcadia"
  }
}'
```

2. Check the IMPORT

Check if the IMPORT worked. To do so, run the next call.
```
curl --location --request GET 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Authorization: Basic YWRtaW46YWRtaW4='
```

You should see a 200 OK, with the content below (truncated in this example). Please notice the "status":"COMPLETED".

```json
{
  "kind": "tm:asm:tasks:import-policy:import-policy-taskcollectionstate",
  "selfLink": "https://localhost/mgmt/tm/asm/tasks/import-policy?ver=16.0.0",
  "totalItems": 11,
  "items": [
    {
      "isBase64": false,
      "executionStartTime": "2020-07-21T15:50:22Z",
      "status": "COMPLETED",
      "lastUpdateMicros": 1.595346627e+15,
      "getPolicyAttributesOnly": false,
      ...
```

From now on, your policy is imported and created in the BIG-IP. You can assign it to a VS as usual (Imperative call or AS3 call). But in the next section, I will show you how to create a Service with AS3 including the WAF policy.

3. APPLY the policy

As you may know, a WAF policy needs to be applied after each change. This is the call:

```
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/apply-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{"policy":{"fullPath":"/Common/policy-api-arcadia"}}'
```

4. OpenAPI spec file IMPORT

As you know, Adv. WAF supports OpenAPI specs (2.0 and 3.0). Now, with the declarative WAF, we can import the OAS file as well. The BEST solution is to PULL the OAS file from a repo, and in most customer projects it will be the case. In the example below, the OAS file is hosted in SwaggerHub (GitHub for Swagger files), but the file could reside in a private GitLab repo for instance.

The URL of the project is: https://app.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3
The URL of the OAS file is: https://api.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3

This swagger file (OpenAPI 3.0 Spec file) includes all the application URLs and parameters. What's more, it includes the documentation (for NGINX APIm Dev Portal).
Now, it is pretty easy to create a WAF JSON Policy with the API Security template, referring to the OAS file. Below, you can notice the new section "open-api-files" with the link reference to SwaggerHub, and the new template POLICY_TEMPLATE_API_SECURITY. Now, when I upload / import and apply the policy, Adv. WAF will download the OAS file from SwaggerHub and create the policy based on the API_Security template.

```json
{
  "policy": {
    "name": "policy-api-arcadia",
    "description": "Arcadia API",
    "template": {
      "name": "POLICY_TEMPLATE_API_SECURITY"
    },
    "enforcementMode": "blocking",
    "server-technologies": [
      { "serverTechnologyName": "MySQL" },
      { "serverTechnologyName": "Unix/Linux" },
      { "serverTechnologyName": "MongoDB" }
    ],
    "signature-settings": {
      "signatureStaging": false
    },
    "policy-builder": {
      "learnOnlyFromNonBotTraffic": false
    },
    "open-api-files": [
      {
        "link": "https://api.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3"
      }
    ]
  }
}
```

5. AS3 declaration

Now, it is time to learn how we can do all of these steps in one call with AS3 (3.18 minimum).
The documentation is here: https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/declarations/application-security.html?highlight=waf_policy#virtual-service-referencing-an-external-security-policy

With this AS3 declaration, we:

Import the WAF policy from an external repo
Import the Swagger file (if the WAF policy refers to an OAS file) from an external repo
Create the service

```json
{
  "class": "AS3",
  "action": "deploy",
  "persist": true,
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.2.0",
    "id": "Prod_API_AS3",
    "API-Prod": {
      "class": "Tenant",
      "defaultRouteDomain": 0,
      "API": {
        "class": "Application",
        "template": "generic",
        "VS_API": {
          "class": "Service_HTTPS",
          "remark": "Accepts HTTPS/TLS connections on port 443",
          "virtualAddresses": ["10.1.10.27"],
          "redirect80": false,
          "pool": "pool_NGINX_API_AS3",
          "policyWAF": {
            "use": "Arcadia_WAF_API_policy"
          },
          "securityLogProfiles": [{
            "bigip": "/Common/Log all requests"
          }],
          "profileTCP": {
            "egress": "wan",
            "ingress": {
              "use": "TCP_Profile"
            }
          },
          "profileHTTP": {
            "use": "custom_http_profile"
          },
          "serverTLS": {
            "bigip": "/Common/arcadia_client_ssl"
          }
        },
        "Arcadia_WAF_API_policy": {
          "class": "WAF_Policy",
          "url": "http://10.1.20.4/root/as3-waf-api/-/raw/master/policy-api.json",
          "ignoreChanges": true
        },
        "pool_NGINX_API_AS3": {
          "class": "Pool",
          "monitors": ["http"],
          "members": [{
            "servicePort": 8080,
            "serverAddresses": ["10.1.20.9"]
          }]
        },
        "custom_http_profile": {
          "class": "HTTP_Profile",
          "xForwardedFor": true
        },
        "TCP_Profile": {
          "class": "TCP_Profile",
          "idleTimeout": 60
        }
      }
    }
  }
}
```

6. CI/CD integration

As you can notice, it is very easy to create a service with a WAF policy pulled from an external repo. So, it is easy to integrate these calls (or the AS3 call) into a CI/CD pipeline. Below is an Ansible playbook example. This playbook runs the AS3 call above.
That's it :)

```yaml
---
- hosts: bigip
  connection: local
  gather_facts: false
  vars:
    my_admin: "admin"
    my_password: "admin"
    bigip: "10.1.1.12"
  tasks:
    - name: Deploy AS3 WebApp
      uri:
        url: "https://{{ bigip }}/mgmt/shared/appsvcs/declare"
        method: POST
        headers:
          "Content-Type": "application/json"
          "Authorization": "Basic YWRtaW46YWRtaW4="
        body: "{{ lookup('file','as3.json') }}"
        body_format: json
        validate_certs: no
        status_code: 200
```

7. FIND the Policy-ID

When the policy is created, a Policy-ID is assigned. By default, this ID doesn't appear anywhere. Neither in the GUI, nor in the response after the creation. You have to calculate it or ask for it. This ID is required for several actions in a CI/CD pipeline.

7.1 Calculate the Policy-ID

We created this Python script to calculate the Policy-ID. It is a hash of the policy name (including the partition). For the previously created policy named "/Common/policy-api-arcadia", the Policy-ID is "Ar5wrwmFRroUYsMA6DuxlQ". Paste this Python code in a new waf-policy-id.py file, and run the command:

python3 waf-policy-id.py "/Common/policy-api-arcadia"

The outcome will be:

The Policy-ID for /Common/policy-api-arcadia is: Ar5wrwmFRroUYsMA6DuxlQ

```python
#!/usr/bin/env python3
from hashlib import md5
import base64
import sys

pname = sys.argv[1]
policy_id = base64.b64encode(md5(pname.encode()).digest()).decode().replace("=", "")
print("The Policy-ID for", pname, "is:", policy_id)
```

7.2 Retrieve the Policy-ID and fullPath with a REST API call

Make the call below, and you will see in the response all the policy creations. Find yours and collect the policyReference directive. The Policy-ID is in the link value: "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0" You can see as well, at the end of the definition, the "fileReference" referring to the JSON file pulled by the BIG-IP.
And please notice the "fullPath", required if you want to update your policy.

```
curl --location --request GET 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Authorization: Basic YWRtaW46YWRtaW4='
```

```json
{
  "isBase64": false,
  "executionStartTime": "2020-07-22T11:23:42Z",
  "status": "COMPLETED",
  "lastUpdateMicros": 1.595417027e+15,
  "getPolicyAttributesOnly": false,
  "kind": "tm:asm:tasks:import-policy:import-policy-taskstate",
  "selfLink": "https://localhost/mgmt/tm/asm/tasks/import-policy/B45J0ySjSJ9y9fsPZ2JNvA?ver=16.0.0",
  "filename": "",
  "policyReference": {
    "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0",
    "fullPath": "/Common/policy-api-arcadia"
  },
  "endTime": "2020-07-22T11:23:47Z",
  "startTime": "2020-07-22T11:23:42Z",
  "id": "B45J0ySjSJ9y9fsPZ2JNvA",
  "retainInheritanceSettings": false,
  "result": {
    "policyReference": {
      "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0",
      "fullPath": "/Common/policy-api-arcadia"
    },
    "message": "The operation was completed successfully. The security policy name is '/Common/policy-api-arcadia'. "
  },
  "fileReference": {
    "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
  }
},
```

8. UPDATE an existing policy

It is pretty easy to update the WAF policy from a new JSON file version. To do so, collect the "policy" and "fullPath" directives from the previous call (7.2 Retrieve the Policy-ID and fullPath with a REST API call). This is the path of the policy in the BIG-IP. Then run the call below, same as in 1.2 PULL JSON file from a repository, but add the policy and fullPath directives. Don't forget to APPLY this new version of the policy (see 3. APPLY the policy).

```
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "fileReference": {
    "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
  },
  "policy": {
    "fullPath": "/Common/policy-api-arcadia"
  }
}'
```

TIP: this call, above, can be used in place of the FIRST call when we created the policy (1.2 PULL JSON file from a repository). But be careful: the fullPath is the name set in the JSON policy file. The 2 values need to match:

"name": "policy-api-arcadia" in the JSON policy file pulled by the BIG-IP
"policy":"fullPath" in the POST call

9. Video demonstration

In order to help you understand how it looks with the BIG-IP, I created this video covering 4 topics explained in this article:

The JSON WAF policy
Pull the policy from a remote repository
Update the WAF policy with a new version of the declarative JSON file
Deploy a full service with AS3 and Declarative WAF policy

At the end of this video, you will be able to adapt the REST Declarative API calls to your infrastructure, in order to deploy protected services with your CI/CD pipelines. Direct link to the video on the DevCentral YouTube channel: https://youtu.be/EDvVwlwEFRw

F5 API Security: Discovery and Protection
Introduction

APIs are everywhere, accounting for around 83% of all internet traffic today, with API calls growing nearly 300% faster than overall web traffic. Last year, the F5 Office of the CTO estimated that the number of APIs being deployed to production could reach between 500 million and over a billion by 2030. At the time, the notion of over a billion APIs in the wild was overwhelming, made even more concerning by estimates indicating that a significant portion were unmanaged or, in some cases, entirely undocumented. Now, in the era of AI-driven development and automation, that estimate of over a billion APIs may prove to be a significant understatement. According to recent research by IDC on API Sprawl and AI Enablement, "Organizations with GenAI enhanced applications/services in production have roughly 5x more APIs than organizations not yet investing significantly in GenAI". That all makes for a very large and complicated attack surface, and complexity is the enemy of security.

Discovery, Monitoring, and Protection

So, how do we begin securing such a large and complex attack surface? It requires a continuous approach that blends visibility, management, and enforcement. This includes multi-lens Discovery and Learning to detect unknown or shadow APIs, determine authentication status, identify sensitive data, and generate accurate OpenAPI schemas. It also involves Monitoring to establish baselines for endpoint parameters, behaviors, and characteristics, enabling the detection of anomalies. Finally, we must Protect by blocking suspicious requests, applying rate limiting, and enforcing schema validation to prevent misuse. The API Security capabilities of the F5 product portfolio are essential for providing that continuous, defense-in-depth approach to protecting your APIs from DevTime to Runtime.
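The positive-security idea behind schema enforcement (only requests matching the declared contract are allowed; anything undeclared is rejected) can be illustrated with a tiny stand-alone check. The schema and field names here are invented for illustration and are not an F5 API:

```python
# Positive-security sketch: a request body is allowed only if it matches
# the declared contract. Schema and fields are hypothetical examples.
SCHEMA = {
    "required": {"user_id": int, "email": str},
    "optional": {"nickname": str},
}

def validate(body: dict, schema=SCHEMA) -> list:
    """Return a list of violations; an empty list means the body conforms."""
    errors = []
    for field, ftype in schema["required"].items():
        if field not in body:
            errors.append(f"missing required field: {field}")
        elif not isinstance(body[field], ftype):
            errors.append(f"wrong type for {field}")
    allowed = set(schema["required"]) | set(schema["optional"])
    for field, value in body.items():
        if field not in allowed:
            # Positive model: anything not declared is rejected, not ignored.
            errors.append(f"unknown field rejected: {field}")
        elif field in schema["optional"] and not isinstance(value, schema["optional"][field]):
            errors.append(f"wrong type for {field}")
    return errors

print(validate({"user_id": 7, "email": "a@b.c"}))   # [] -> request conforms
print(validate({"user_id": "7", "debug": True}))    # three violations
```

Real enforcement works from a full OpenAPI document and covers paths, methods, parameters, and content types as well, but the allow-by-declaration principle is the same.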
F5 API Security Essentials

Additional Resources

F5 API Security Article Series:
Out of the Shadows: API Discovery
Beyond Rest: Protecting GraphQL

Deploy F5 Distributed Cloud API Discovery and Security:
F5 Distributed Cloud WAAP Terraform Examples GitHub Repo

Deploy F5 Hybrid Architectures API Discovery and Security:
F5 Distributed Cloud Hybrid Security Architectures GitHub Repo

F5 Distributed Cloud Documentation:
F5 Distributed Cloud Terraform Provider Documentation
F5 Distributed Cloud Services API Documentation
F5's API Security Alignment with NIST SP 800-228
Introduction

F5 has positioned itself as a comprehensive API security leader with solutions that directly address the emerging NIST SP 800-228 "Guidelines for API Protection for Cloud-Native Systems." F5's multi-layered approach covers the entire API lifecycle, from development to runtime protection. It is closely aligned with NIST's recommended controls and architectural patterns.

F5's product portfolio comprehensively addresses NIST 800-228 requirements

F5's current API security ecosystem includes BIG-IP Advanced WAF, F5 Distributed Cloud Services, and NGINX Plus. This creates a unified platform that addresses all 22 NIST recommended controls (REC-API-1 through REC-API-22). The company's 2024 acquisition of Wib Security strengthened its pre-runtime protection capabilities, while Heyhack enhanced its penetration testing offerings. These strategic moves demonstrate F5's commitment to comprehensive API security coverage. The F5 Distributed Cloud Services API Security platform is a comprehensive WAAP solution. The platform provides AI-powered API discovery, real-time threat detection, advanced bot protection, web application firewall, DoS/DDoS protection, and automated policy enforcement. This directly supports NIST's focus on continuous monitoring and adaptive security.

Comprehensive mapping to NIST SP 800-228 control framework

F5's solutions address all seven thematic groups outlined in NIST SP 800-228. These "target" objectives include security controls that address the OWASP API Top 10. These mitigations address broken object-level authentication, sensitive information disclosure, input validation, and other security vulnerabilities. If you haven't read the new document, I encourage you to do so. You can find the document here. The following may seem confusing at first, but the REC-API headings map to the NIST document. These are high-level target controls.
You can further group these by thinking of Pre-Runtime Protections (REC-API-1 through REC-API-8) and Runtime Protections (REC-API-9 through REC-API-22). We have done our best to map F5's capabilities at a high level to the target controls below. In a future article, we will provide specific configuration controls mapping to each target level.

API specification and inventory management (REC-API-1 to REC-API-4)

F5's AI/ML-powered API discovery automatically identifies and catalogs API endpoints, including shadow APIs that pose security risks. The platform generates OpenAPI specifications from traffic analysis and maintains a real-time API inventory with risk scoring. The F5 Distributed Cloud Services platform provides external domain crawling and comprehensive API lifecycle tracking, directly addressing NIST's requirements for preventing unauthorized APIs from becoming attack vectors.

(Figure: API Discovery of API Endpoints)

Schema validation and input handling (REC-API-5 to REC-API-8)

F5 implements a positive security model that enforces OpenAPI specifications at runtime. F5 platforms provide granular parameter validation, content-type enforcement, and request size limiting. The platform automatically validates request/response schemas against predefined specifications and uses machine learning to detect schema drift, ensuring continuous compliance with API contracts. In cases when a pre-defined schema is not available, the platform can "learn" through discovery and build an OpenAPI spec that can later be imported into the platform for adding security controls.

Authentication and authorization (REC-API-9 to REC-API-12)

F5's authentication architecture supports OAuth 2.0, OpenID Connect, SAML, and JWT validation with comprehensive scope checking. The F5 Application Delivery and Security platform provides per-request policy enforcement with role-based access control (RBAC) and attribute-based access control (ABAC).
The platform's cryptographic X.509 identity bootstrapping ensures every component receives unique identity credentials, supporting NIST's emphasis on strong authentication mechanisms.

Sensitive data protection (REC-API-13 to REC-API-15)

F5's data classification engine automatically identifies and protects PII, HIPAA, GDPR, and PCI-DSS data types flowing through APIs. The platform implements real-time data flow policies with redaction mechanisms and monitors for potential data exfiltration. The F5 Distributed Cloud Services provide context-aware data protection that goes beyond traditional PII to include business-sensitive information.

(Figure: Sensitive Information Discovery and Redaction)

Access control and request flow (REC-API-16 to REC-API-18)

F5's real-time response capabilities enable immediate blocking of specific keys or users on demand. The platform implements mature token management with hardened API behavior detection for abnormal usage patterns. The behavioral analytics engine continuously monitors API usage patterns to detect compromised credentials and automated attacks.

Rate limiting and abuse prevention (REC-API-19 to REC-API-21)

F5 provides granular rate limiting by user, IP, application ID, method, and field through multiple implementation approaches. The NGINX Plus leaky bucket algorithm ensures smooth traffic management, while BIG-IP APM offers sophisticated quota management with spike arrest capabilities. The platform's L7 DDoS protection uses machine learning to detect and mitigate application-layer attacks accurately.

(Figure: API Endpoint Rate Limiting Settings)

Logging and observability (REC-API-22)

F5's comprehensive logging framework captures all API interactions, authentication events, and data access with contextual information. The platform provides real-time analytics with application performance monitoring, security event correlation, and business intelligence capabilities.
Integration with SIEM platforms like Splunk and Datadog ensures actionable intelligence connects to operational response capabilities.

Implementation of NIST's three API gateway patterns

F5's architecture uniquely supports all three API gateway patterns outlined in NIST SP 800-228:

Centralized gateway pattern

The F5 Distributed Cloud ADN provides a global application delivery network with centralized policy management through a unified SaaS console. This approach ensures consistent security policy enforcement across all endpoints while leveraging F5's global network infrastructure for optimal performance and threat intelligence sharing.

Hybrid gateway pattern

F5's distributed data plane with centralized control represents the optimal balance between centralized management and distributed performance. The F5 Distributed Cloud Customer Edge nodes deployed at customer sites provide local API processing with global policy synchronization. This enables organizations to maintain data sovereignty while benefiting from centralized security management.

Decentralized gateway pattern

The NGINX Plus deployment model enables lightweight API gateways positioned close to applications, perfect for microservices architectures. The NGINX Ingress Controller provides Kubernetes-native API management with per-service gateway deployment in service mesh environments. This ensures policy enforcement occurs as close to individual service instances as possible. In addition, BIG-IP can be deployed to provide API security and many of the same mitigations listed above. This can be beneficial, as most modern enterprises already have F5 BIG-IPs in their environments.

Advanced zero trust and identity-based segmentation

F5's zero trust architecture implements NIST's identity-centric security principles through cryptographic identity. TLS is a cornerstone of F5 technologies, and our platforms are purpose-built for cryptography, including TLS 1.3 and post-quantum algorithms.
mTLS can be used to authenticate both sides of the TLS handshake, and F5's strong authentication and authorization features fit naturally into an API security zero trust design. The continuous verification model ensures no implicit trust based on network location, while least-privilege enforcement provides granular access control based on identity and attributes. F5's integration with enterprise identity providers like Microsoft Entra ID and Okta enables seamless implementation of zero trust principles across existing infrastructure.

Comprehensive pre-runtime and runtime protection

F5's pre-runtime protection includes integration with CI/CD pipelines through the recent Wib Security acquisition, enabling vulnerability detection during development. The platform provides automated security reconnaissance through Heyhack's capabilities and API scanning before production deployment. For runtime protection, F5's behavioral analytics engine establishes baseline API behavior and detects anomalies in real time. Threat intelligence integration protects against coordinated attack campaigns. API endpoint markup automatically identifies and tokenizes dynamic URL components for enhanced protection.

Implementation recommendations

Organizations implementing F5 solutions for NIST SP 800-228 compliance should consider a phased approach, starting with API discovery and inventory management, followed by authentication and authorization controls, and culminating in comprehensive monitoring and analytics. For a purely SaaS solution, Distributed Cloud presents a mature API security offering with cutting-edge capabilities. For enterprises requiring on-premises deployment, BIG-IP Advanced WAF and Access Policy Manager provide the most robust capabilities, with enterprise-grade performance and extensive customization options.
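To make the mTLS point above concrete, here is a minimal sketch of a mutual-TLS client context using Python's standard ssl module. The file paths are placeholders, not F5 configuration:

```python
import ssl

def mtls_client_context(ca_file=None, client_cert=None, client_key=None):
    """Build a TLS 1.3 client context that verifies the server and,
    when a certificate/key pair is supplied, also presents a client
    certificate so the server can authenticate us: mutual TLS."""
    # Verify the server against a CA bundle (ordinary server auth).
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # require TLS 1.3
    if client_cert:
        # The client certificate is what turns one-way TLS into mTLS:
        # the server validates our identity during the handshake.
        ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return ctx

# e.g. ctx = mtls_client_context("ca.pem", "client.crt", "client.key")
# then wrap a socket with ctx or pass it to an HTTPS client.
```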
The hybrid deployment model of SaaS and on-premises often provides the optimal balance of cost, performance, and security for large organizations with complex infrastructure requirements.

Conclusion

F5's API security portfolio represents a mature, comprehensive solution that directly addresses the full spectrum of NIST SP 800-228 requirements. F5's strategic acquisitions, innovative AI integration, and proven enterprise scalability position it as a leading choice for organizations seeking to implement comprehensive API security aligned with emerging federal guidelines. With continued investment in cloud-native capabilities and AI-powered threat detection, F5 is well-positioned to maintain its leadership as API security requirements continue evolving.

From Terra Incognita to API Incognita: What Captain Cook Teaches Us About API Discovery
When I was young, my parents often took me on UK seaside holidays, including a trip to Whitby, which was known for its kippers (smoked herring), jet (a semi-precious stone), and its then-vibrant fishing industry. Whitby was also where Captain Cook, the famous British explorer, learned seamanship. During the 1760s, Cook surveyed the coasts of Newfoundland and Labrador, creating precise charts of harbors, anchorages, and dangerous waters such as Trinity Bay and the Grand Banks. His work, crucial for fishing and navigation, demonstrated exceptional skill and established his reputation, leading to his Pacific expeditions. Cook's charts, featuring detailed observations and navigational hazards, were so accurate that they remained in use well into the 20th century.

This made me think about how similar cartography and API discovery are. API discovery is about mapping and finding unknown and undocumented APIs. When you know which APIs you're actually putting out there, you're much less likely to run into problems that could sink your application rollout like a ship that runs aground, or leave you dead in the water like a vessel that's lost both mast and rudder in stormy seas. This inspired me to demonstrate F5's process for discovering cloud SaaS APIs using a simple REST API, named in honor of Captain Cook and his Newfoundland voyage.

Roughly speaking, this is my architecture: a simple REST API running in AWS, which I call the "Cook API". An overview of the API interface is as follows.
"title": "Captain Cook's Newfoundland Mapping API", "description": "Imaginary Rest API for testing API Discovery", "available_endpoints": [ {"path": "/api/charts", "methods": ["GET", "POST"], "description": "Access and create mapping charts"}, {"path": "/api/charts/<chart_id>", "methods": ["GET"], "description": "Get details of a specific chart"}, {"path": "/api/hazards", "methods": ["GET"], "description": "Get current navigation hazards"}, {"path": "/api/journal", "methods": ["GET"], "description": "Access Captain Cook's expedition journal"}, {"path": "/api/vessels", "methods": ["GET"], "description": "Get information about expedition vessels"}, {"path": "/api/vessels/<vessel_id>", "methods": ["GET"], "description": "Get status of a specific vessel"}, {"path": "/api/resources", "methods": ["GET", "POST"], "description": "Manage expedition resources"} ], I set up a container running on a server in AWS that hosts my REST API. To test my API, I create a partial swagger file that represents only a subset of the APIs that the container is advertising. openapi: 3.0.0 info: title: Captain Cook's Newfoundland Mapping API (Simplified) description: > A simplified version of the API. This documentation shows only the Charts and Vessels endpoints. 
version: 1.0.0 contact: name: API Support email: support@example.com license: name: MIT url: https://opensource.org/licenses/MIT servers: - url: http://localhost:1234 description: Development server - url: http://your-instance-ip description: Instance tags: - name: Charts description: Mapping charts created by Captain Cook - name: Vessels description: Information about expedition vessels paths: /: get: summary: API welcome page and endpoint documentation description: Provides an overview of the available endpoints and API capabilities responses: '200': description: Successful operation content: application/json: schema: type: object properties: title: type: string example: "Captain Cook's Newfoundland Mapping API" description: type: string available_endpoints: type: array items: type: object properties: path: type: string methods: type: array items: type: string description: type: string /api/charts: get: summary: Get all mapping charts description: Retrieves a list of all mapping charts created during the Newfoundland expedition tags: - Charts responses: '200': description: Successful operation content: application/json: schema: type: object properties: status: type: string example: "success" data: type: object additionalProperties: $ref: '#/components/schemas/Chart' post: summary: Create a new chart description: Adds a new mapping chart to the collection tags: - Charts requestBody: required: true content: application/json: schema: $ref: '#/components/schemas/ChartInput' responses: '201': description: Chart created successfully content: application/json: schema: type: object properties: status: type: string example: "success" message: type: string example: "Chart added" id: type: string example: "6" '400': description: Invalid input content: application/json: schema: $ref: '#/components/schemas/Error' /api/charts/{chartId}: get: summary: Get a specific chart description: Retrieves a specific mapping chart by its ID tags: - Charts parameters: - name: chartId in: 
path required: true description: ID of the chart to retrieve schema: type: string responses: '200': description: Successful operation content: application/json: schema: type: object properties: status: type: string example: "success" data: $ref: '#/components/schemas/Chart' '404': description: Chart not found content: application/json: schema: $ref: '#/components/schemas/Error' /api/vessels: get: summary: Get all expedition vessels description: Retrieves information about all vessels involved in the expedition tags: - Vessels responses: '200': description: Successful operation content: application/json: schema: type: object properties: status: type: string example: "success" data: type: array items: $ref: '#/components/schemas/VesselBasic' /api/vessels/{vesselId}: get: summary: Get a specific vessel description: Retrieves detailed information about a specific vessel by its ID tags: - Vessels parameters: - name: vesselId in: path required: true description: ID of the vessel to retrieve schema: type: string responses: '200': description: Successful operation content: application/json: schema: type: object properties: status: type: string example: "success" data: $ref: '#/components/schemas/VesselDetailed' '404': description: Vessel not found content: application/json: schema: $ref: '#/components/schemas/Error' components: schemas: Chart: type: object properties: name: type: string example: "Trinity Bay" completed: type: boolean example: true date: type: string format: date example: "1763-06-14" landmarks: type: integer example: 12 risk_level: type: string enum: [Low, Medium, High] example: "Medium" ChartInput: type: object required: - name properties: name: type: string example: "St. 
Mary Bay" completed: type: boolean example: false date: type: string format: date example: "1767-05-20" landmarks: type: integer example: 7 risk_level: type: string enum: [Low, Medium, High, Unknown] example: "Medium" VesselBasic: type: object properties: id: type: string example: "HMS_Grenville" type: type: string example: "Survey Sloop" crew: type: integer example: 18 status: type: string enum: [Active, In-port, Damaged, Repairs] example: "Active" VesselDetailed: allOf: - $ref: '#/components/schemas/VesselBasic' - type: object properties: current_position: type: string example: "LAT: 48.2342°N, LONG: -53.4891°W" heading: type: string example: "Northeast" weather_conditions: type: string enum: [Favorable, Challenging, Dangerous] example: "Favorable" Error: type: object properties: status: type: string example: "error" message: type: string example: "Resource not found" The process to set up API discovery in F5 Distributed Cloud is very simple. I create an origin pool that points to my upstream REST API. I then create a load balancer in distributed cloud and o Associate the origin pool with the load balancer o Enable API definition and import my partial swagger file as my API inventory. Some Screenshots below. o Enable API Discovery Select Enable from Redirect Traffic Run a shell script that tests my API. Take a break or do something else for the API Discovery capabilities to populate the dashboard. The process of API Discovery to show up in the XC Security Dashboard can take several hours. Results Well, as predicted, API discovery has found that my Swagger file is only representing a subset of my APIs. API discovery has found an additional 4 APIs that were not included in the swagger file. Distributed cloud describes these as “Shadow” APIs, or APIs that you may not have known about. API Discovery has also discovered that sensitive data is being returned by couple of the APIs What Now? 
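As an aside, the shell script mentioned in the setup steps simply exercises every endpoint so that API Discovery sees real traffic. A rough Python equivalent (the base URL is a placeholder, and the endpoint list mirrors the API overview earlier):

```python
import json
import urllib.request

# Endpoints advertised by the Cook API, including those deliberately
# left out of the partial Swagger file. The concrete chart and vessel
# IDs are illustrative placeholders.
ENDPOINTS = [
    "/api/charts",
    "/api/charts/1",
    "/api/hazards",
    "/api/journal",
    "/api/vessels",
    "/api/vessels/HMS_Grenville",
    "/api/resources",
]

def exercise(base_url):
    """GET every endpoint and record the status (or error) per path."""
    results = {}
    for path in ENDPOINTS:
        try:
            with urllib.request.urlopen(base_url + path, timeout=5) as resp:
                results[path] = resp.status
        except Exception as exc:  # record failures too, keep going
            results[path] = str(exc)
    return results

if __name__ == "__main__":
    # Placeholder host: replace with the load balancer's domain.
    print(json.dumps(exercise("http://cook-api.example.com"), indent=2))
```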
If this were a real-world situation, you would have to review what was found, paying special attention to APIs that may be returning sensitive data. Each of these "shadow" APIs could pose a security risk, so you should review each one. The good thing is that we are now using Distributed Cloud, and we can use it to protect our APIs. It is very easy to allow only those APIs that your project team might be using. For the APIs that you are exposing through the platform, you can:

- Implement JWT authentication if none exists and authentication is required.
- Configure rate limiting.
- Add a WAF policy.
- Implement a bot protection policy.
- Continually log and monitor your API traffic.

Also, you should update your API inventory to include the entirety of the APIs that the application provides, and you should expose only the APIs that are being used. All of these things are simple to set up in the F5 Distributed Cloud platform.

Conclusion

You need effective API management and discovery. Detecting "shadow" APIs is crucial to preventing sensitive data exposure. Much like Captain Cook charting unknown territories, the process of uncovering APIs previously hidden in the system reveals a need for precision and vigilance. Cook's expeditions needed detailed maps and careful navigation to avoid hidden dangers. Modern API management needs tools that can accurately map and monitor every endpoint. By embracing this meticulous approach, we can not only safeguard sensitive data but also steer our digital operations toward a more secure and efficient future.

To quote a pirate who happens to be an API security expert and likes a haiku:

Know yer API seas,
Map each endpoint 'fore ye sail—
Blind waters sink ships.

Mitigating OWASP API Security Risk: Security Misconfiguration using F5 BIG-IP
This article covers the basics of security misconfiguration, along with a demo of a CORS misconfiguration use case as an example and how these types of misconfigurations can be effectively mitigated using F5 Advanced WAF.

Enhance your GenAI chatbot with the power of Agentic RAG and F5 platform
Agentic RAG (Retrieval-Augmented Generation) enhances the capabilities of a GenAI chatbot by integrating dynamic knowledge retrieval into its conversational abilities, making it more context-aware and accurate. In this demo, I will demonstrate an autonomous decision-making GenAI chatbot utilizing Agentic RAG. I will explore what Agentic RAG is and why it's crucial in today's AI landscape. I will also discuss how organizations can leverage GPUaaS (GPU as a Service) or AI Factory providers to accelerate their AI strategy. The F5 platform provides robust security features that protect sensitive data while ensuring high availability and performance. It optimizes the chatbot by streamlining traffic management and reducing latency, ensuring smooth interactions even during high demand. This integration ensures the GenAI chatbot is not only smart but also reliable and secure for enterprise use.

How to use F5 Distributed Cloud for API Discovery
In today's digital landscape, APIs are crucial for integrating diverse applications and services by enabling seamless communication and data sharing between systems. API discovery involves finding, exploring, and assessing APIs for their suitability in applications, considering their varied sources and functionalities. F5 Distributed Cloud provides multiple architectures that make it ridiculously easy to discover APIs.