Moving HTTP Load Balancers Between F5 Distributed Cloud Namespaces — Why It's Harder Than You Think
The Problem

If you have been working with F5 Distributed Cloud (XC) for a while, you have probably run into this: your namespace structure no longer reflects how your teams or applications are organized. Maybe the initial layout was a quick decision during onboarding. Maybe teams have merged, projects have grown, or your naming convention has evolved. Either way, you now want to move a handful of HTTP load balancers from one namespace to another. Simple enough, right? Just change the namespace field and save...

Except you can't. There is no "move" operation in F5 XC - not in the UI, not in the API. Changing the namespace of a load balancer means deleting it in the source and re-creating it in the target. And that is where things get complicated.

Why a Simple Delete-and-Recreate Is Not Enough

On the surface, the API is straightforward: "GET" the config, "DELETE" the object, "POST" it into the new namespace. But a production HTTP load balancer on XC is rarely a standalone object. It sits at the top of a dependency tree that can include origin pools, health checks, TLS certificates, service policies, app firewalls, rate limiters, and more. Every one of those dependencies needs to be handled correctly - or the migration breaks. Here are the main challenges you might run into.

Referential Integrity

F5 XC enforces strict referential integrity. You cannot delete an origin pool that is still referenced by a load balancer. You cannot create a load balancer that references an origin pool that does not exist yet. This means the order of operations matters: delete top-down (LBs first, then dependencies), create bottom-up (dependencies first, then LBs). It also means that if two load balancers share an origin pool, you cannot move them independently. Delete the first LB, try to delete the shared pool, and the API returns a 409 Conflict because the second LB still references it. Both LBs - and all of their shared dependencies - have to be moved together as a single atomic unit.
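To make the batching and ordering constraints concrete, here is a minimal Python sketch. The object names are made up for illustration and no real XC API calls are involved - this is not the xc-ns-mover code, just the underlying idea.

```python
# Illustrative sketch only (made-up object names, no real XC API calls):
# LBs that share any dependency are clustered into one atomic batch via
# union-find, and each batch is deleted top-down and re-created bottom-up.

def batch_lbs(deps):
    """Group LBs into batches; two LBs share a batch if they share a dependency."""
    parent = {lb: lb for lb in deps}

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    owner = {}  # dependency name -> first LB seen referencing it
    for lb, ds in deps.items():
        for d in ds:
            if d in owner:
                union(lb, owner[d])  # shared dependency: merge the two groups
            else:
                owner[d] = lb

    batches = {}
    for lb in deps:
        batches.setdefault(find(lb), set()).add(lb)
    return list(batches.values())

deps = {
    "lb-a": ["pool-shared", "pool-a"],
    "lb-b": ["pool-shared"],  # shares pool-shared with lb-a -> same batch
    "lb-c": ["pool-c"],       # independent -> its own batch
}
print(batch_lbs(deps))  # two batches: lb-a and lb-b together, lb-c alone

# Within a batch, ordering avoids 409 Conflicts:
batch = ["lb-a", "lb-b"]
pools = sorted({p for lb in batch for p in deps[lb]})
delete_order = batch + pools  # LBs first, then their now-unreferenced pools
create_order = pools + batch  # pools first, then the LBs that reference them
print(delete_order)  # ['lb-a', 'lb-b', 'pool-a', 'pool-shared']
print(create_order)  # ['pool-a', 'pool-shared', 'lb-a', 'lb-b']
```

Deleting in `delete_order` and re-creating in `create_order` is exactly the "delete top-down, create bottom-up" rule, applied one batch at a time.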
New CNAMEs After Every Move

When you delete and re-create an HTTP load balancer, F5 XC assigns a new "host_name" (the CNAME target that your DNS records point to). If the LB uses Let's Encrypt auto-certificates, the ACME challenge CNAME changes too. That means after every move, someone needs to update external DNS records - and until that happens, the application is unreachable or the TLS certificate renewal fails. For tenants using XC-managed DNS zones with "Allow Application Loadbalancer Managed Records" enabled, this is handled automatically. But many customers manage their own DNS, and they need the old and new CNAME values for every moved LB.

Certificates with Non-Portable Private Keys

This one is subtle. When a load balancer uses a manually imported TLS certificate, the private key is stored in one of two forms: blindfolded (encrypted with the Volterra blindfold key) or as a clear secret. In both cases, the XC API never returns the private key material in its GET response. You get the certificate and metadata, but not the key. That means you cannot extract-and-recreate the certificate in a new namespace via the API. Cross-namespace certificate references (outside of the "shared" namespace) are also not supported. So if an LB in namespace A uses a manually imported certificate stored in namespace A, and you want to move that LB to namespace B, you need to first manually upload the same certificate into namespace B (or into the "shared" namespace) before the migration can proceed.

Unreliable API Metadata

The XC API returns a "referring_objects" field on every config GET response. In theory, this tells you what other objects reference a given resource - exactly what you need to know before deleting something. In practice, this field can be empty even when active references exist.
The only reliable way to detect all external references is to actively scan: fetch the config of every load balancer in the namespace and check their specs for references to the objects you are about to move.

Cross-Namespace References Are Not Allowed

On F5 XC, an HTTP load balancer can only reference objects in its own namespace, or in the "system" or "shared" namespaces. If your origin pool lives in namespace A and you move the LB to namespace B, the origin pool must either come along to namespace B or already exist there. There is no way to have the LB in namespace B point to a pool in namespace A. This means you need to discover the complete transitive dependency tree of every LB, determine which dependencies need to move, detect which are shared between multiple LBs, and batch everything accordingly.

The Tool: XC Namespace Mover

To deal with all of this, (A)I built **xc-ns-mover** — a Python CLI tool that automates the entire process. It has two components:

Scanner - scans all namespaces on your tenant, lists every HTTP load balancer, and writes a CSV report. This gives you the inventory to decide what to move.

Mover - takes a CSV of load balancers, discovers all dependencies, groups LBs that share dependencies into atomic batches, runs a series of pre-flight checks, and then executes the migration - or generates a dry-run report so you can review everything first, or lets you do the job manually (JSON code blocks are available in the report).

What the Mover Does Before Touching Anything

The mover runs five pre-flight phases before making any changes:

Discovery and batching - fetches every LB config, walks the dependency tree, and uses a union-find algorithm to cluster LBs with shared dependencies into batches.

External reference scan - for every dependency being moved, checks whether any LB outside the move list references it. If so, that dependency cannot be moved without breaking the external LB, and the batch is blocked.
Conflict detection - lists all existing objects in the target namespace. If a name already exists, the user can skip the object or rename it with a configurable prefix (e.g., "migrated-my-pool"). All internal JSON references are updated automatically.

Certificate pre-flight - identifies certificates with non-portable private keys, then searches the target and "shared" namespaces for a matching certificate by domain/SAN comparison (including wildcard matching per RFC 6125). If a match is found, the LB's certificate reference is automatically rewritten. If not, the batch is blocked until the certificate is manually created.

DNS zone pre-flight - queries the tenant's DNS zones to detect which ones have managed LB records enabled. LBs under managed zones are flagged as "auto-managed" in the report — no manual DNS update needed.

After all checks pass, the actual migration follows a strict order per batch: back up everything, delete top-down, create bottom-up, verify new CNAMEs. If anything fails, automatic rollback kicks in — objects created in the target are deleted, objects deleted from the source are restored from backups.

The Reports

Every run produces an HTML report. The dry-run report shows planned configurations, the full dependency graph, certificate issues, DNS changes required, and any blocking issues — all before a single API call mutates anything. The post-migration report includes old and new CNAME values, a DNS changes table with exactly which records need updating, and full configuration backups of everything that was touched.

Things to Keep in Mind

A few caveats that are worth highlighting:

Brief interruption is unavoidable - The migration deletes and re-creates load balancers. During that window (typically seconds to a few minutes per batch), traffic to affected domains will be impacted. Plan a change window.

Only HTTP load balancers are supported - TCP load balancers and other object types are not handled by this tool.
DNS updates are your responsibility - The report gives you all the values - old CNAME, new CNAME, ACME challenge CNAME - but you need to update your DNS provider.

Always run the dry-run first - The tool enforces this by default: it stores a fingerprint after a dry-run and verifies it before executing. If the config changes, a new dry-run is required.

The project is open source and available on GitHub. It is privately maintained and not officially supported: https://github.com/de1chk1nd/resources-and-tools/blob/main/tools/xc-ns-mover/README.md If you find bugs or have feature requests, please open a GitHub issue.

API Pre-Authentication
Some time ago I was dealing with legacy web applications being retrofitted with modern authentication. As you can imagine, this opens up a world of pain when the developers of said applications have long since left. In this particular case we had an application migrating to a new API which was secured behind a non-F5 reverse proxy. The difficulty was API authentication. First the user would SSO to access the application, but this was only for web application access. The API used by the application was hosted in a secure environment behind the proxy, and there is no support in the standards for any authentication flows under the hood - I refer to the fetch() and XMLHttpRequest() calls made by the application.

Key: ">" = redirect

Issue Flow

User requests web application > Federated auth > Web application loads in user browser context and starts making API calls, which fail.

I did not have LTM handy to whip up some awesome solution to resolve this, which would have been the ideal case, so I had to figure out how to solve this another way.

The Non-F5 Solution

All authentication requests have to occur at the user level, so the only way in this scenario was to redirect them. They were redirected to a dedicated endpoint on the proxy which automatically triggered an auth redirect for federated authentication. This would see that the user had already logged in and issue the relevant proxy credentials (as a cookie) to the client, who would then return to the proxy with these credentials. The only issue with this flow is that everything about the original redirect from the client is lost during the authentication process. The only thing that remains is the original URL the user requested.

Prototype Solution

User requests web application > Federated auth > Web application loads - API Access? No > Proxy > Federated auth > Proxy > ?
JavaScript Handling

The developers add JavaScript around the API calls: if the return code is XXX, the user is redirected to the proxy with the parameter ?return=<current URL>.

Final Solution

User requests web application > Federated auth > Web application loads - API Access? No > Proxy with return URL > (No Authentication) Federated auth > (Adds Proxy Authentication Cookie into User's Browser) Proxy with return URL > (Redirect user to return URL) Web application loads - API Access (Using Proxy Authentication Cookie)? Yes, proceed as normal.

Since that time there may have been new standards that make this challenge simpler to navigate. Are you aware of any? Moreover, which of the various F5 offerings would have been able to assist us with this issue?

CORS with API calls
Hello, sorry if this is an obvious question -- we're very new to XC. We're using XC with one load balancer with CORS activated. It works fine for web applications, but all API calls (to our internal APIs) are blocked because of a missing Origin header. What is the correct way to handle it? Ask the connecting party to insert Origin headers? Dedicate another load balancer (to be used for APIs only) with CORS disabled? Thank you.

Advanced WAF v16.0 - Declarative API
Since v15.1 (in draft), F5® BIG-IP® Advanced WAF™ can import a Declarative WAF policy in JSON format. The F5® BIG-IP® Advanced Web Application Firewall (Advanced WAF) security policies can be deployed using the declarative JSON format, facilitating easy integration into a CI/CD pipeline. The declarative policies are extracted from a source control system, for example Git, and imported into the BIG-IP. Using the provided declarative policy templates, you can modify the necessary parameters, save the JSON file, and import the updated security policy into your BIG-IP devices. The declarative policy copies the content of the template and adds the adjustments and modifications on top of it. The templates therefore allow you to concentrate only on the specific settings that need to be adapted for the specific application that the policy protects. This Declarative WAF JSON policy is similar to the NGINX App Protect policy.

You can find more information on the Declarative Policy here:

NAP: https://docs.nginx.com/nginx-app-protect/policy/
Adv. WAF: https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-declarative-security-policy.html

Audience

This guide is written for IT professionals who need to automate their WAF policy and are familiar with Advanced WAF configuration.
These IT professionals can fill a variety of roles:

SecOps deploying and maintaining WAF policy in Advanced WAF
DevOps deploying applications in modern environments and willing to integrate Advanced WAF in their CI/CD pipeline
F5 partners who sell technology or create implementation documentation

This article covers how to PUSH/PULL a declarative WAF policy in Advanced WAF:

With Postman
With AS3

Table of contents

Upload Policy in BIG-IP
Check the import
Apply the policy
OpenAPI Spec File import
AS3 declaration
CI/CD integration
Find the Policy-ID
Update an existing policy
Video demonstration

First of all, you need a JSON WAF policy, as below:

{
    "policy": {
        "name": "policy-api-arcadia",
        "description": "Arcadia API",
        "template": {
            "name": "POLICY_TEMPLATE_API_SECURITY"
        },
        "enforcementMode": "blocking",
        "server-technologies": [
            { "serverTechnologyName": "MySQL" },
            { "serverTechnologyName": "Unix/Linux" },
            { "serverTechnologyName": "MongoDB" }
        ],
        "signature-settings": {
            "signatureStaging": false
        },
        "policy-builder": {
            "learnOnlyFromNonBotTraffic": false
        }
    }
}

1. Upload Policy in BIG-IP

There are 2 options to upload a JSON file into the BIG-IP:

1.1 Either you PUSH the file into the BIG-IP and you IMPORT it, OR
1.2 The BIG-IP PULLs the file from a repository (and the IMPORT is included) <- BEST option

1.1 PUSH JSON file into the BIG-IP

The call is below. As you can notice, it requires a 'Content-Range' header, and the value is 0-(filesize-1)/filesize. In the example below, the file size is 662 bytes. This is not easy to integrate in a CI/CD pipeline, so we created the PULL method instead of the PUSH (in v16.0).

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/file-transfer/uploads/policy-api.json' \
--header 'Content-Range: 0-661/662' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--header 'Content-Type: application/json' \
--data-binary '@/C:/Users/user/Desktop/policy-api.json'

At this stage, the policy is still a file in the BIG-IP file system.
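Computing that Content-Range value by hand is error-prone; in a pipeline you would derive it from the file size. A small sketch (illustrative, using the article's 662-byte example):

```python
# Sketch: computing the Content-Range header the PUSH upload requires.
# For a single-chunk upload of an N-byte file, the value is "0-(N-1)/N".
import os

def content_range(size: int) -> str:
    """Content-Range for bytes 0 through size-1 of size total."""
    return f"0-{size - 1}/{size}"

print(content_range(662))  # -> 0-661/662, as in the curl example above

# In a script, derive the size from the file itself, e.g.:
# content_range(os.path.getsize("policy-api.json"))
```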
We need to import it into Adv. WAF. To do so, the next call is required. This call imports the file "policy-api.json" uploaded previously, and CREATES the policy /Common/policy-api-arcadia.

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/javascript' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "filename": "policy-api.json",
  "policy": {
    "fullPath": "/Common/policy-api-arcadia"
  }
}'

1.2 PULL JSON file from a repository

Here, the JSON file is hosted somewhere (in GitLab or GitHub ...), and the BIG-IP will pull it. The call is below. As you can notice, the call refers to the remote repo and the body is a JSON payload. Just change the link value to your JSON policy URL. With one call, the policy is PULLED and IMPORTED.

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "fileReference": {
    "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
  }
}'

A second version of this call exists, and refers to the fullPath of the policy. This will allow you to update the policy easily from a second version of the JSON file - one call for both creation and update. As you can notice below, we add the "policy":"fullPath" directive. The value of the "fullPath" is the partition and the name of the policy set in the JSON policy file. This method is VERY USEFUL for CI/CD integrations.

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "fileReference": {
    "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
  },
  "policy": {
    "fullPath": "/Common/policy-api-arcadia"
  }
}'

2. Check the IMPORT

Check if the IMPORT worked. To do so, run the next call.
curl --location --request GET 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Authorization: Basic YWRtaW46YWRtaW4='

You should see a 200 OK, with the content below (truncated in this example). Please notice the "status": "COMPLETED".

{
    "kind": "tm:asm:tasks:import-policy:import-policy-taskcollectionstate",
    "selfLink": "https://localhost/mgmt/tm/asm/tasks/import-policy?ver=16.0.0",
    "totalItems": 11,
    "items": [
        {
            "isBase64": false,
            "executionStartTime": "2020-07-21T15:50:22Z",
            "status": "COMPLETED",
            "lastUpdateMicros": 1.595346627e+15,
            "getPolicyAttributesOnly": false,
            ...

From now on, your policy is imported and created in the BIG-IP. You can assign it to a VS as usual (imperative call or AS3 call). But in section 5, I will show you how to create a Service with AS3, including the WAF policy.

3. APPLY the policy

As you may know, a WAF policy needs to be applied after each change. This is the call.

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/apply-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{"policy":{"fullPath":"/Common/policy-api-arcadia"}}'

4. OpenAPI spec file IMPORT

As you know, Adv. WAF supports OpenAPI spec files (2.0 and 3.0). Now, with the declarative WAF, we can import the OAS file as well. The BEST solution is to PULL the OAS file from a repo, and in most customer projects, that will be the case. In the example below, the OAS file is hosted in SwaggerHub (GitHub for Swagger files), but the file could reside in a private GitLab repo, for instance.

The URL of the project is: https://app.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3
The URL of the OAS file is: https://api.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3

This swagger file (OpenAPI 3.0 spec file) includes all the application URLs and parameters. What's more, it includes the documentation (for the NGINX APIm Dev Portal).
Now, it is pretty easy to create a WAF JSON policy with the API Security template, referring to the OAS file. Below, you can notice the new section "open-api-files" with the link reference to SwaggerHub, and the template POLICY_TEMPLATE_API_SECURITY. Now, when I upload / import and apply the policy, Adv. WAF will download the OAS file from SwaggerHub and create the policy based on the API Security template.

{
    "policy": {
        "name": "policy-api-arcadia",
        "description": "Arcadia API",
        "template": {
            "name": "POLICY_TEMPLATE_API_SECURITY"
        },
        "enforcementMode": "blocking",
        "server-technologies": [
            { "serverTechnologyName": "MySQL" },
            { "serverTechnologyName": "Unix/Linux" },
            { "serverTechnologyName": "MongoDB" }
        ],
        "signature-settings": {
            "signatureStaging": false
        },
        "policy-builder": {
            "learnOnlyFromNonBotTraffic": false
        },
        "open-api-files": [
            {
                "link": "https://api.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3"
            }
        ]
    }
}

5. AS3 declaration

Now, it is time to learn how we can do all of these steps in one call with AS3 (3.18 minimum).
The documentation is here: https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/declarations/application-security.html?highlight=waf_policy#virtual-service-referencing-an-external-security-policy

With this AS3 declaration, we:

Import the WAF policy from an external repo
Import the Swagger file (if the WAF policy refers to an OAS file) from an external repo
Create the service

{
    "class": "AS3",
    "action": "deploy",
    "persist": true,
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.2.0",
        "id": "Prod_API_AS3",
        "API-Prod": {
            "class": "Tenant",
            "defaultRouteDomain": 0,
            "API": {
                "class": "Application",
                "template": "generic",
                "VS_API": {
                    "class": "Service_HTTPS",
                    "remark": "Accepts HTTPS/TLS connections on port 443",
                    "virtualAddresses": ["10.1.10.27"],
                    "redirect80": false,
                    "pool": "pool_NGINX_API_AS3",
                    "policyWAF": {
                        "use": "Arcadia_WAF_API_policy"
                    },
                    "securityLogProfiles": [{
                        "bigip": "/Common/Log all requests"
                    }],
                    "profileTCP": {
                        "egress": "wan",
                        "ingress": {
                            "use": "TCP_Profile"
                        }
                    },
                    "profileHTTP": {
                        "use": "custom_http_profile"
                    },
                    "serverTLS": {
                        "bigip": "/Common/arcadia_client_ssl"
                    }
                },
                "Arcadia_WAF_API_policy": {
                    "class": "WAF_Policy",
                    "url": "http://10.1.20.4/root/as3-waf-api/-/raw/master/policy-api.json",
                    "ignoreChanges": true
                },
                "pool_NGINX_API_AS3": {
                    "class": "Pool",
                    "monitors": ["http"],
                    "members": [{
                        "servicePort": 8080,
                        "serverAddresses": ["10.1.20.9"]
                    }]
                },
                "custom_http_profile": {
                    "class": "HTTP_Profile",
                    "xForwardedFor": true
                },
                "TCP_Profile": {
                    "class": "TCP_Profile",
                    "idleTimeout": 60
                }
            }
        }
    }
}

6. CI/CD integration

As you can notice, it is very easy to create a service with a WAF policy pulled from an external repo. So, it is easy to integrate these calls (or the AS3 call) into a CI/CD pipeline. Below is an Ansible playbook example. This playbook runs the AS3 call above.
That's it :)

---
- hosts: bigip
  connection: local
  gather_facts: false
  vars:
    my_admin: "admin"
    my_password: "admin"
    bigip: "10.1.1.12"
  tasks:
    - name: Deploy AS3 WebApp
      uri:
        url: "https://{{ bigip }}/mgmt/shared/appsvcs/declare"
        method: POST
        headers:
          "Content-Type": "application/json"
          "Authorization": "Basic YWRtaW46YWRtaW4="
        body: "{{ lookup('file','as3.json') }}"
        body_format: json
        validate_certs: no
        status_code: 200

7. FIND the Policy-ID

When the policy is created, a Policy-ID is assigned. By default, this ID doesn't appear anywhere - neither in the GUI, nor in the response after the creation. You have to calculate it or ask for it. This ID is required for several actions in a CI/CD pipeline.

7.1 Calculate the Policy-ID

We created this Python script to calculate the Policy-ID. It is a hash of the policy name (including the partition). For the previously created policy named "/Common/policy-api-arcadia", the Policy-ID is "Ar5wrwmFRroUYsMA6DuxlQ".

Paste this Python code into a new waf-policy-id.py file, and run the command:

python waf-policy-id.py "/Common/policy-api-arcadia"

The outcome will be:

The Policy-ID for /Common/policy-api-arcadia is: Ar5wrwmFRroUYsMA6DuxlQ

#!/usr/bin/env python3
from hashlib import md5
import base64
import sys

pname = sys.argv[1]
policy_id = base64.b64encode(md5(pname.encode()).digest()).decode().replace("=", "")
print('The Policy-ID for', pname, 'is:', policy_id)

7.2 Retrieve the Policy-ID and fullPath with a REST API call

Make the call below, and you will see in the response all the policy creations. Find yours and collect the policyReference directive. The Policy-ID is in the link value: "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0". You can see as well, at the end of the definition, the "fileReference" referring to the JSON file pulled by the BIG-IP.
And please notice the "fullPath", required if you want to update your policy.

curl --location --request GET 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Authorization: Basic YWRtaW46YWRtaW4='

{
    "isBase64": false,
    "executionStartTime": "2020-07-22T11:23:42Z",
    "status": "COMPLETED",
    "lastUpdateMicros": 1.595417027e+15,
    "getPolicyAttributesOnly": false,
    "kind": "tm:asm:tasks:import-policy:import-policy-taskstate",
    "selfLink": "https://localhost/mgmt/tm/asm/tasks/import-policy/B45J0ySjSJ9y9fsPZ2JNvA?ver=16.0.0",
    "filename": "",
    "policyReference": {
        "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0",
        "fullPath": "/Common/policy-api-arcadia"
    },
    "endTime": "2020-07-22T11:23:47Z",
    "startTime": "2020-07-22T11:23:42Z",
    "id": "B45J0ySjSJ9y9fsPZ2JNvA",
    "retainInheritanceSettings": false,
    "result": {
        "policyReference": {
            "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0",
            "fullPath": "/Common/policy-api-arcadia"
        },
        "message": "The operation was completed successfully. The security policy name is '/Common/policy-api-arcadia'. "
    },
    "fileReference": {
        "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
    }
},

8. UPDATE an existing policy

It is pretty easy to update the WAF policy from a new JSON file version. To do so, collect from the previous call (7.2 Retrieve the Policy-ID and fullPath with a REST API call) the "policy" and "fullPath" directives - this is the path of the policy on the BIG-IP. Then run the call below, same as in 1.2 PULL JSON file from a repository, but add the policy and fullPath directives. Don't forget to APPLY this new version of the policy (see 3. APPLY the policy).
curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{
  "fileReference": {
    "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
  },
  "policy": {
    "fullPath": "/Common/policy-api-arcadia"
  }
}'

TIP: this call can also be used in place of the FIRST call when we created the policy (1.2 PULL JSON file from a repository). But be careful: the fullPath is the name set in the JSON policy file. The two values need to match:

"name": "policy-api-arcadia" in the JSON policy file pulled by the BIG-IP
"policy":"fullPath" in the POST call

9. Video demonstration

In order to help you understand how it looks with the BIG-IP, I created this video covering 4 topics explained in this article:

The JSON WAF policy
Pull the policy from a remote repository
Update the WAF policy with a new version of the declarative JSON file
Deploy a full service with AS3 and Declarative WAF policy

At the end of this video, you will be able to adapt the REST Declarative API calls to your infrastructure, in order to deploy protected services with your CI/CD pipelines.

Direct link to the video on the DevCentral YouTube channel: https://youtu.be/EDvVwlwEFRw

F5 API Security: Discovery and Protection
Introduction

APIs are everywhere, accounting for around 83% of all internet traffic today, with API calls growing nearly 300% faster than overall web traffic. Last year, the F5 Office of the CTO estimated that the number of APIs being deployed to production could reach between 500 million and over a billion by 2030. At the time, the notion of over a billion APIs in the wild was overwhelming, made even more concerning by estimates indicating that a significant portion were unmanaged or, in some cases, entirely undocumented. Now, in the era of AI-driven development and automation, that estimate of over a billion APIs may prove to be a significant understatement. According to recent research by IDC on API Sprawl and AI Enablement, "Organizations with GenAI enhanced applications/services in production have roughly 5x more APIs than organizations not yet investing significantly in GenAI". That all makes for a very large and complicated attack surface, and complexity is the enemy of security.

Discovery, Monitoring, and Protection

So, how do we begin securing such a large and complex attack surface? It requires a continuous approach that blends visibility, management, and enforcement. This includes multi-lens Discovery and Learning to detect unknown or shadow APIs, determine authentication status, identify sensitive data, and generate accurate OpenAPI schemas. It also involves Monitoring to establish baselines for endpoint parameters, behaviors, and characteristics, enabling the detection of anomalies. Finally, we must Protect by blocking suspicious requests, applying rate limiting, and enforcing schema validation to prevent misuse. The API security capabilities of the F5 product portfolio are essential for providing that continuous, defense-in-depth approach to protecting your APIs from DevTime to Runtime.
F5 API Security Essentials

Additional Resources

F5 API Security Article Series:
Out of the Shadows: API Discovery
Beyond Rest: Protecting GraphQL

Deploy F5 Distributed Cloud API Discovery and Security:
F5 Distributed Cloud WAAP Terraform Examples GitHub Repo

Deploy F5 Hybrid Architectures API Discovery and Security:
F5 Distributed Cloud Hybrid Security Architectures GitHub Repo

F5 Distributed Cloud Documentation:
F5 Distributed Cloud Terraform Provider Documentation
F5 Distributed Cloud Services API Documentation
F5's API Security Alignment with NIST SP 800-228
Introduction

F5 has positioned itself as a comprehensive API security leader with solutions that directly address the emerging NIST SP 800-228 "Guidelines for API Protection for Cloud-Native Systems." F5's multi-layered approach covers the entire API lifecycle, from development to runtime protection. It is closely aligned with NIST's recommended controls and architectural patterns.

F5's product portfolio comprehensively addresses NIST 800-228 requirements

F5's current API security ecosystem includes BIG-IP Advanced WAF, F5 Distributed Cloud Services, and NGINX Plus. This creates a unified platform that addresses all 22 NIST recommended controls (REC-API-1 through REC-API-22). The company's 2024 acquisition of Wib Security strengthened its pre-runtime protection capabilities, while Heyhack enhanced its penetration testing offerings. These strategic moves demonstrate F5's commitment to comprehensive API security coverage. The F5 Distributed Cloud Services API Security platform is a comprehensive WAAP solution. The platform provides AI-powered API discovery, real-time threat detection, advanced bot protection, web application firewall, DoS/DDoS protection, and automated policy enforcement. This directly supports NIST's focus on continuous monitoring and adaptive security.

Comprehensive mapping to NIST SP 800-228 control framework

F5's solutions address all seven thematic groups outlined in NIST SP 800-228. These "target" objectives include security controls that address the OWASP API Top 10. These mitigations address broken object-level authentication, sensitive information disclosure, input validation, and other security vulnerabilities. If you haven't read the new document, I encourage you to do so. You can find the document here. The following may seem confusing at first, but the REC-API headings map to the NIST document. These are high-level target controls.
You can further group these by thinking of Pre-Runtime Protections (REC-API-1 through REC-API-8) and Runtime Protections (REC-API-9 through REC-API-22). We have done our best to map F5's capabilities at a high level to the target controls below. In a future article, we will provide specific configuration controls mapping to each target level.

API specification and inventory management (REC-API-1 to REC-API-4)

F5's AI/ML-powered API discovery automatically identifies and catalogs API endpoints, including shadow APIs that pose security risks. The platform generates OpenAPI specifications from traffic analysis and maintains a real-time API inventory with risk scoring. The F5 Distributed Cloud Services platform provides external domain crawling and comprehensive API lifecycle tracking, directly addressing NIST's requirements for preventing unauthorized APIs from becoming attack vectors.

API Discovery of API Endpoints

Schema validation and input handling (REC-API-5 to REC-API-8)

F5 implements a positive security model that enforces OpenAPI specifications at runtime. F5 platforms provide granular parameter validation, content-type enforcement, and request size limiting. The platform automatically validates request/response schemas against predefined specifications and uses machine learning to detect schema drift, ensuring continuous compliance with API contracts. In cases where a predefined schema is not available, the platform can "learn" through discovery and build an OpenAPI spec that can later be imported into the platform for adding security controls.

Authentication and authorization (REC-API-9 to REC-API-12)

F5's authentication architecture supports OAuth 2.0, OpenID Connect, SAML, and JWT validation with comprehensive scope checking. The F5 Application Delivery and Security platform provides per-request policy enforcement with role-based access control (RBAC) and attribute-based access control (ABAC).
The platform's cryptographic X.509 identity bootstrapping ensures every component receives unique identity credentials, supporting NIST's emphasis on strong authentication mechanisms.

Sensitive data protection (REC-API-13 to REC-API-15)

F5's data classification engine automatically identifies and protects PII, HIPAA, GDPR, and PCI-DSS data types flowing through APIs. The platform implements real-time data flow policies with redaction mechanisms and monitors for potential data exfiltration. F5 Distributed Cloud Services provides context-aware data protection that goes beyond traditional PII to include business-sensitive information.

Sensitive Information Discovery and Redaction

Access control and request flow (REC-API-16 to REC-API-18)

F5's real-time response capabilities enable immediate blocking of specific keys or users on demand. The platform implements mature token management with hardened API behavior detection for abnormal usage patterns. The behavioral analytics engine continuously monitors API usage patterns to detect compromised credentials and automated attacks.

Rate limiting and abuse prevention (REC-API-19 to REC-API-21)

F5 provides granular rate limiting by user, IP, application ID, method, and field through multiple implementation approaches. The NGINX Plus leaky bucket algorithm ensures smooth traffic management, while BIG-IP APM offers sophisticated quota management with spike-arrest capabilities. The platform's L7 DDoS protection uses machine learning to detect and mitigate application-layer attacks accurately.

API Endpoint Rate Limiting Settings

Logging and observability (REC-API-22)

F5's comprehensive logging framework captures all API interactions, authentication events, and data access with contextual information. The platform provides real-time analytics with application performance monitoring, security event correlation, and business intelligence capabilities.
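The kind of contextual, machine-readable log record that feeds such pipelines can be sketched with Python's standard logging module. This is illustrative only — the field names here are assumptions for the example, not F5's actual log schema:

```python
import json
import logging
import uuid


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line — the shape SIEMs ingest easily."""

    def format(self, record):
        return json.dumps({
            "ts": record.created,            # epoch timestamp
            "level": record.levelname,
            "event": record.getMessage(),
            # Per-request API context attached via logging's `extra=` hook.
            **getattr(record, "api_ctx", {}),
        })


logger = logging.getLogger("api-audit")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)


def log_api_call(method: str, path: str, user: str, status: int) -> None:
    """Record one API interaction with the context an analyst would need."""
    logger.info("api_request", extra={"api_ctx": {
        "request_id": str(uuid.uuid4()),
        "method": method,
        "path": path,
        "user": user,
        "status": status,
    }})
```

Calling `log_api_call("GET", "/orders", "svc-a", 200)` emits a single JSON line carrying who called what and with what result — the contextual information the section above describes.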
Integration with SIEM platforms like Splunk and Datadog ensures actionable intelligence connects to operational response capabilities.

Implementation of NIST's three API gateway patterns

F5's architecture uniquely supports all three API gateway patterns outlined in NIST SP 800-228:

Centralized gateway pattern

The F5 Distributed Cloud ADN provides a global application delivery network with centralized policy management through a unified SaaS console. This approach ensures consistent security policy enforcement across all endpoints while leveraging F5's global network infrastructure for optimal performance and threat intelligence sharing.

Hybrid gateway pattern

F5's distributed data plane with centralized control represents the optimal balance between centralized management and distributed performance. F5 Distributed Cloud Customer Edge nodes deployed at customer sites provide local API processing with global policy synchronization. This enables organizations to maintain data sovereignty while benefiting from centralized security management.

Decentralized gateway pattern

The NGINX Plus deployment model enables lightweight API gateways positioned close to applications, a natural fit for microservices architectures. The NGINX Ingress Controller provides Kubernetes-native API management with per-service gateway deployment in service mesh environments. This ensures policy enforcement occurs as close to individual service instances as possible.

In addition, BIG-IP can be deployed to provide API security with many of the same mitigations listed above. This can be beneficial, as most modern enterprises already have F5 BIG-IPs in their environments.

Advanced zero trust and identity-based segmentation

F5's zero trust architecture implements NIST's identity-centric security principles through cryptographic identity. TLS is a cornerstone of F5 technologies: our platforms are purpose-built for cryptography, including TLS 1.3 and post-quantum cryptography.
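At the protocol level, the mutual-authentication idea can be sketched with Python's standard ssl module. The file paths are placeholders, and this is an illustrative sketch of the technique, not an F5 configuration:

```python
import ssl


def build_mtls_server_context(certfile: str, keyfile: str,
                              ca_bundle: str) -> ssl.SSLContext:
    """Server-side TLS context that *requires* a valid client certificate,
    so both ends of the handshake are authenticated (mutual TLS)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2      # modern TLS only
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)  # server identity
    ctx.load_verify_locations(cafile=ca_bundle)       # CA that signs client certs
    ctx.verify_mode = ssl.CERT_REQUIRED               # reject clients without certs
    return ctx
```

With `verify_mode = ssl.CERT_REQUIRED`, a handshake from a client that presents no certificate (or one the CA bundle does not vouch for) fails outright — trust is established by identity, not by network location.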
mTLS can be used to authenticate both sides of the TLS handshake, and F5's strong authentication and authorization features fit naturally into an API security zero trust design. The continuous verification model ensures no implicit trust based on network location, while least-privilege enforcement provides granular access control based on identity and attributes. F5's integration with enterprise identity providers like Microsoft Entra ID and Okta enables seamless implementation of zero trust principles across existing infrastructure.

Comprehensive pre-runtime and runtime protection

F5's pre-runtime protection includes integration with CI/CD pipelines through the recent Wib Security acquisition, enabling vulnerability detection during development. The platform provides automated security reconnaissance through Heyhack's capabilities and API scanning before production deployment.

For runtime protection, F5's behavioral analytics engine establishes baseline API behavior and detects anomalies in real time. Threat intelligence integration protects against coordinated attack campaigns. API endpoint markup automatically identifies and tokenizes dynamic URL components for enhanced protection.

Implementation recommendations

Organizations implementing F5 solutions for NIST SP 800-228 compliance should consider a phased approach: start with API discovery and inventory management, follow with authentication and authorization controls, and culminate in comprehensive monitoring and analytics. For a purely SaaS solution, Distributed Cloud presents a mature API security offering with cutting-edge capabilities. For enterprises requiring on-premises deployment, BIG-IP Advanced WAF and Access Policy Manager provide the most robust capabilities, with enterprise-grade performance and extensive customization options.
The hybrid deployment model of SaaS and on-premises often provides the optimal balance of cost, performance, and security for large organizations with complex infrastructure requirements.

Conclusion

F5's API security portfolio represents a mature, comprehensive solution that directly addresses the full spectrum of NIST SP 800-228 requirements. F5's strategic acquisitions, innovative AI integration, and proven enterprise scalability position it as a leading choice for organizations seeking to implement comprehensive API security aligned with emerging federal guidelines. With continued investment in cloud-native capabilities and AI-powered threat detection, F5 is well positioned to maintain its leadership as API security requirements continue evolving.

F5 Big-IQ read-only API username creation
Team,

I seem to be facing some challenges getting an API read-only account created. Can someone help with the correct steps, or a link, to get this account created? There was a comment to create a user with the Auditor role, but that is not working. The additional admin account I tried is also not working; however, the default admin account works. We are using version 17.x.

Regards,
N!