api
NGINX Management Suite API Connectivity Manager - Modern API driven Applications
Introduction

API-based applications benefits

Before we dive into our API gateway use case, let's take one step back and look at why applications are moving to an API-driven model. Some of the benefits of this move:

- Loose coupling: API-based applications can be built and maintained independently, allowing for faster development and deployment cycles.
- Reusability: APIs can be reused across multiple applications, reducing the need to duplicate code and effort.
- Scalability: an API-based architecture allows individual services to be scaled independently, rather than having to scale the entire application.
- Flexibility: APIs allow different client applications, such as web, mobile, and IoT devices, to consume the same services.
- Interoperability: APIs facilitate communication between different systems and platforms, enabling integration with third-party services and data sources.
- Microservices: an API-based architecture lets developers build small, modular services that can be developed, deployed, and scaled independently.

NGINX Management Suite API Connectivity Manager capabilities

NGINX Management Suite API Connectivity Manager adds to API-driven applications a secure approach to authenticating, accessing, and developing them. API Connectivity Manager is used to connect, secure, and govern our APIs. In addition, API Connectivity Manager lets us separate infrastructure lifecycle management from the API lifecycle, giving IT/Ops teams and application developers the ability to work independently.

API Connectivity Manager provides the following features:

- Create and manage isolated workspaces for business units, development teams, and so on, so each team can develop and deploy at its own pace without affecting other teams.
- Create and manage API infrastructure in isolated workspaces.
- Enforce uniform security policies across all workspaces by applying global policies.
- Create Developer Portals that align with your brand, with custom color themes, logos, and favicons.
- Onboard your APIs to an API Gateway cluster and publish your API documentation to the Dev Portal.
- Let teams apply policies to their API proxies to provide custom quality of service for individual applications.
- Onboard API documentation by uploading an OpenAPI spec.
- Publish your API docs to a Dev Portal while keeping your API's backend service private.
- Let users issue API keys or basic authentication credentials for access to your API.
- Send API calls by using the Developer Portal's API Reference documentation.

API Connectivity Manager use case

API Connectivity Manager use case overview

In our case we have three teams:

- Infrastructure team: responsible for setting up the infrastructure, domains, and access policies.
- API team: responsible for setting up the API documentation, QoS, and the gateway for both the production and developer portals.
- Application team: responsible for learning the APIs through the developer portal and consuming them through the production portal.

Authentication in our case is done via two methods:

- API key authentication for API version 1.
- OAuth2 introspection for API version 2.

Note: more authentication methods (such as JSON Web Token assertion) can be used and are included in the following tutorial. A more detailed discussion of API authentication can be found here: Application Programming Interface (API) Authentication types simplified. Additional features like API rate limiting can be applied as well; here's a tutorial to enable that feature.
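To make the two authentication methods above concrete, here is a hedged client-side sketch. The gateway hostname, paths, and the API key header name are placeholders and not values from the lab; use whatever your API Connectivity Manager proxy policy actually defines.

# Illustrative client calls against the two API versions in this scenario.
# GATEWAY, the paths, and the "apikey" header name are assumptions.
import requests

GATEWAY = "https://api.example.com"  # hypothetical API gateway FQDN

# API v1: API key authentication. The header name and the key itself
# depend on the authentication policy applied to the proxy.
r1 = requests.get(
    f"{GATEWAY}/v1/inventory",
    headers={"apikey": "REPLACE_WITH_ISSUED_KEY"},
    timeout=5,
)
print("v1 (API key):", r1.status_code)

# API v2: OAuth2. The client presents a bearer token; the gateway
# introspects it against the authorization server before proxying.
r2 = requests.get(
    f"{GATEWAY}/v2/inventory",
    headers={"Authorization": "Bearer REPLACE_WITH_ACCESS_TOKEN"},
    timeout=5,
)
print("v2 (OAuth2):", r2.status_code)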
API Connectivity Manager traffic flows

In our use case we will have three flows, each illustrated in the article's diagrams:

- Management flow
- Metrics and events collection flow
- Data flow

See also this NGINX tutorial on how to streamline API operations with API Connectivity Manager.

API Connectivity Manager lab & implementation

The steps we are going to follow, with some useful tutorial videos, are highlighted below:

- Set up the backend API application (this step has already been done for you in the lab).
- Set up the API Connectivity Manager infrastructure and policies.
- Enable API key authentication via the following YouTube tutorial: Enable API Key Authentication with API Connectivity Manager.
- Publish APIs and documentation through API Connectivity Manager.
- Test APIs through the API Developer Portal.

The detailed lab guide and the implementation videos:

- Cloud labs detailed guide: https://clouddocs.f5.com/training/community/nginx/html/class10/class10.html
- The UDF lab can be found here as well: https://udf.f5.com/b/ed5ffb71-bcce-47ec-9d9f-307441e4c12c#documentation

Below is a recorded lab walkthrough by our awesome guru Matt_Dierick.

References

- API Connectivity Manager
- NGINX Management Suite
- NGINX Docs API Connectivity Manager
- UDF Lab
Getting started with the F5 Distributed Cloud Web App and API Protection Demo Guide
Overview

Sometimes all it takes to get started learning something new is just a little guidance, for those times when we just don't know where to begin. Providing that direction is the purpose of the F5 Distributed Cloud Web App and API Protection (XC WAAP) demo guide. The guide and accompanying demo videos will run you through sample app deployment, as well as App, API, Bot, and DDoS Protection use cases. We provide the app, the testing tool, and the guidance: everything you need to get started and kick the tires. All you need to bring is an F5 Distributed Cloud account (a trial is sufficient for this demo guide) and a bit of time.

The Scenario

The guide uses a representative customer app scenario: a Star Ratings application which collects user ratings and reviews/comments on various items. The app runs on Kubernetes and can be deployed on any cloud, but for the purposes of the demo guide we will deploy it on F5 XC virtual Kubernetes (vK8s) and protect it with F5 XC WAAP.

The Guide

https://github.com/f5devcentral/xcwaapdemoguide

Video Series

- Part 1: Infrastructure Setup
- Part 2: WAF Configuration
- Part 3: API Protection
- Part 4: Bot & DDoS Detection and Protection

"Nature is a mutable cloud, which is always and never the same." - Ralph Waldo Emerson

We might not wax that philosophically around here, but our heads are in the cloud nonetheless! Join the F5 Distributed Cloud user group today and learn more with your peers and other F5 experts.
Application Programming Interface (API) Authentication types simplified
APIs are a critical part of most modern applications. In this article we will walk through the different authentication types, to help us in future articles covering NGINX API Connectivity Manager authentication and NGINX single sign-on.

Advanced WAF v16.0 - Declarative API
Since v15.1 (in draft), F5® BIG-IP® Advanced WAF™ can import a declarative WAF policy in JSON format. The F5® BIG-IP® Advanced Web Application Firewall (Advanced WAF) security policies can be deployed using the declarative JSON format, facilitating easy integration into a CI/CD pipeline. The declarative policies are extracted from a source control system, for example Git, and imported into the BIG-IP. Using the provided declarative policy templates, you can modify the necessary parameters, save the JSON file, and import the updated security policy into your BIG-IP devices. The declarative policy copies the content of the template and adds the adjustments and modifications on to it. The templates therefore allow you to concentrate only on the specific settings that need to be adapted for the specific application that the policy protects. This declarative WAF JSON policy is similar to the NGINX App Protect policy.

You can find more information on the declarative policy here:

- NAP: https://docs.nginx.com/nginx-app-protect/policy/
- Adv. WAF: https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-declarative-security-policy.html

Audience

This guide is written for IT professionals who need to automate their WAF policy and are familiar with Advanced WAF configuration. These IT professionals can fill a variety of roles:

- SecOps deploying and maintaining WAF policy in Advanced WAF
- DevOps deploying applications in modern environments and willing to integrate Advanced WAF in their CI/CD pipeline
- F5 partners who sell technology or create implementation documentation

This article covers how to PUSH/PULL a declarative WAF policy in Advanced WAF, with Postman and with AS3.

Table of contents

1. Upload Policy in BIG-IP
2. Check the import
3. Apply the policy
4. OpenAPI Spec File import
5. AS3 declaration
6. CI/CD integration
7. Find the Policy-ID
8. Update an existing policy
9. Video demonstration

First of all, you need a JSON WAF policy, as below:

{
  "policy": {
    "name": "policy-api-arcadia",
    "description": "Arcadia API",
    "template": {
      "name": "POLICY_TEMPLATE_API_SECURITY"
    },
    "enforcementMode": "blocking",
    "server-technologies": [
      { "serverTechnologyName": "MySQL" },
      { "serverTechnologyName": "Unix/Linux" },
      { "serverTechnologyName": "MongoDB" }
    ],
    "signature-settings": {
      "signatureStaging": false
    },
    "policy-builder": {
      "learnOnlyFromNonBotTraffic": false
    }
  }
}

1. Upload Policy in BIG-IP

There are 2 options to upload a JSON file into the BIG-IP:

1.1 Either you PUSH the file into the BIG-IP and you IMPORT it, OR
1.2 the BIG-IP PULLs the file from a repository (and the IMPORT is included) <- BEST option

1.1 PUSH JSON file into the BIG-IP

The call is below. As you can notice, it requires a 'Content-Range' header, and the value is 0-(filesize-1)/filesize. In the example below, the file size is 662 bytes. This is not easy to integrate in a CI/CD pipeline, so we created the PULL method instead of the PUSH (in v16.0).

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/file-transfer/uploads/policy-api.json' \
  --header 'Content-Range: 0-661/662' \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data-binary '@/C:/Users/user/Desktop/policy-api.json'
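If you script the PUSH yourself, the Content-Range value can be computed from the file size instead of being hard-coded. A minimal Python sketch, assuming the same lab host and credentials as the curl example (verify=False only because the lab uses a self-signed certificate):

# Hedged sketch: compute "0-(size-1)/size" from the actual file and push it
# to the ASM file-transfer endpoint shown above. Host, credentials and the
# local file path are lab placeholders.
import requests

BIGIP = "https://10.1.1.12"

with open("policy-api.json", "rb") as f:
    data = f.read()
size = len(data)

resp = requests.post(
    f"{BIGIP}/mgmt/tm/asm/file-transfer/uploads/policy-api.json",
    headers={
        "Content-Range": f"0-{size - 1}/{size}",
        "Content-Type": "application/json",
    },
    data=data,
    auth=("admin", "admin"),
    verify=False,  # lab self-signed certificate; do not do this in production
)
resp.raise_for_status()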
At this stage, the policy is still a file in the BIG-IP file system. We need to import it into Adv. WAF. To do so, the next call is required. This call imports the file "policy-api.json" uploaded previously AND creates the policy /Common/policy-api-arcadia:

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
  --header 'Content-Type: application/javascript' \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --data-raw '{
    "filename":"policy-api.json",
    "policy": {
      "fullPath":"/Common/policy-api-arcadia"
    }
  }'

1.2 PULL JSON file from a repository

Here, the JSON file is hosted somewhere (in GitLab or GitHub, for example), and the BIG-IP will pull it. The call is below. As you can notice, the call refers to the remote repo and the body is a JSON payload. Just change the link value to your JSON policy URL. With one call, the policy is PULLED and IMPORTED.

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --data-raw '{
    "fileReference": {
      "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
    }
  }'

A second version of this call exists, and it refers to the fullPath of the policy. This allows you to update the policy easily from a second version of the JSON file: one call for both the creation and the update. As you can notice below, we add the "policy":"fullPath" directive. The value of the "fullPath" is the partition and the name of the policy set in the JSON policy file. This method is VERY USEFUL for CI/CD integrations.

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --data-raw '{
    "fileReference": {
      "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
    },
    "policy": {
      "fullPath":"/Common/policy-api-arcadia"
    }
  }'

2. Check the IMPORT

Check if the IMPORT worked. To do so, run the next call:

curl --location --request GET 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

You should see a 200 OK, with the content below (truncated in this example). Please notice the "status":"COMPLETED".

{
  "kind": "tm:asm:tasks:import-policy:import-policy-taskcollectionstate",
  "selfLink": "https://localhost/mgmt/tm/asm/tasks/import-policy?ver=16.0.0",
  "totalItems": 11,
  "items": [
    {
      "isBase64": false,
      "executionStartTime": "2020-07-21T15:50:22Z",
      "status": "COMPLETED",
      "lastUpdateMicros": 1.595346627e+15,
      "getPolicyAttributesOnly": false,
      ...

From now on, your policy is imported and created in the BIG-IP. You can assign it to a VS as usual (imperative call or AS3 call). In a later section, I will show you how to create a service with AS3 including the WAF policy.

3. APPLY the policy

As you may know, a WAF policy needs to be applied after each change. This is the call:

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/apply-policy/' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --data-raw '{"policy":{"fullPath":"/Common/policy-api-arcadia"}}'
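In a CI/CD job, the PULL import (1.2), the status check (2), and the apply (3) are typically chained together. Below is a hedged Python sketch of that sequence; host, credentials, and repo URL are the lab placeholders used above, and it assumes the POST response carries the task "id", matching the task objects shown later in section 7.2.

# Hedged pipeline step: pull-import the policy, wait for the task to finish,
# then apply it. verify=False matches the lab's self-signed certificate.
import time
import requests

BIGIP = "https://10.1.1.12"
AUTH = ("admin", "admin")
FULLPATH = "/Common/policy-api-arcadia"

# 1) Pull and import the policy from the repository
task = requests.post(
    f"{BIGIP}/mgmt/tm/asm/tasks/import-policy/",
    json={
        "fileReference": {"link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"},
        "policy": {"fullPath": FULLPATH},
    },
    auth=AUTH, verify=False,
).json()

# 2) Poll the import task until it reaches a terminal status
status_url = f"{BIGIP}/mgmt/tm/asm/tasks/import-policy/{task['id']}"
while True:
    status = requests.get(status_url, auth=AUTH, verify=False).json()["status"]
    if status in ("COMPLETED", "FAILURE"):
        break
    time.sleep(2)
assert status == "COMPLETED", "policy import failed"

# 3) Apply the imported policy
requests.post(
    f"{BIGIP}/mgmt/tm/asm/tasks/apply-policy/",
    json={"policy": {"fullPath": FULLPATH}},
    auth=AUTH, verify=False,
).raise_for_status()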
4. OpenAPI spec file IMPORT

As you know, Adv. WAF supports OpenAPI specs (2.0 and 3.0). Now, with the declarative WAF, we can import the OAS file as well. The BEST solution is to PULL the OAS file from a repo, and in most customer projects this will be the case. In the example below, the OAS file is hosted in SwaggerHub (the GitHub for Swagger files), but the file could reside in a private GitLab repo, for instance.

The URL of the project is: https://app.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3
The URL of the OAS file is: https://api.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3

This swagger file (an OpenAPI 3.0 spec file) includes all the application URLs and parameters. What's more, it includes the documentation (for the NGINX APIm Dev Portal). Now, it is pretty easy to create a WAF JSON policy with the API Security template, referring to the OAS file. Below, you can notice the new section "open-api-files" with the link reference to SwaggerHub, and the new template POLICY_TEMPLATE_API_SECURITY. Now, when I upload, import, and apply the policy, Adv. WAF will download the OAS file from SwaggerHub and create the policy based on the API Security template.

{
  "policy": {
    "name": "policy-api-arcadia",
    "description": "Arcadia API",
    "template": {
      "name": "POLICY_TEMPLATE_API_SECURITY"
    },
    "enforcementMode": "blocking",
    "server-technologies": [
      { "serverTechnologyName": "MySQL" },
      { "serverTechnologyName": "Unix/Linux" },
      { "serverTechnologyName": "MongoDB" }
    ],
    "signature-settings": {
      "signatureStaging": false
    },
    "policy-builder": {
      "learnOnlyFromNonBotTraffic": false
    },
    "open-api-files": [
      {
        "link": "https://api.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3"
      }
    ]
  }
}

5. AS3 declaration

Now it is time to learn how we can do all of these steps in one call with AS3 (3.18 minimum). The documentation is here: https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/declarations/application-security.html?highlight=waf_policy#virtual-service-referencing-an-external-security-policy

With this AS3 declaration, we:

- Import the WAF policy from an external repo
- Import the Swagger file (if the WAF policy refers to an OAS file) from an external repo
- Create the service

{
  "class": "AS3",
  "action": "deploy",
  "persist": true,
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.2.0",
    "id": "Prod_API_AS3",
    "API-Prod": {
      "class": "Tenant",
      "defaultRouteDomain": 0,
      "API": {
        "class": "Application",
        "template": "generic",
        "VS_API": {
          "class": "Service_HTTPS",
          "remark": "Accepts HTTPS/TLS connections on port 443",
          "virtualAddresses": ["10.1.10.27"],
          "redirect80": false,
          "pool": "pool_NGINX_API_AS3",
          "policyWAF": {
            "use": "Arcadia_WAF_API_policy"
          },
          "securityLogProfiles": [{
            "bigip": "/Common/Log all requests"
          }],
          "profileTCP": {
            "egress": "wan",
            "ingress": {
              "use": "TCP_Profile"
            }
          },
          "profileHTTP": {
            "use": "custom_http_profile"
          },
          "serverTLS": {
            "bigip": "/Common/arcadia_client_ssl"
          }
        },
        "Arcadia_WAF_API_policy": {
          "class": "WAF_Policy",
          "url": "http://10.1.20.4/root/as3-waf-api/-/raw/master/policy-api.json",
          "ignoreChanges": true
        },
        "pool_NGINX_API_AS3": {
          "class": "Pool",
          "monitors": ["http"],
          "members": [{
            "servicePort": 8080,
            "serverAddresses": ["10.1.20.9"]
          }]
        },
        "custom_http_profile": {
          "class": "HTTP_Profile",
          "xForwardedFor": true
        },
        "TCP_Profile": {
          "class": "TCP_Profile",
          "idleTimeout": 60
        }
      }
    }
  }
}

6. CI/CD integration

As you can notice, it is very easy to create a service with a WAF policy pulled from an external repo, so it is easy to integrate these calls (or the AS3 call) into a CI/CD pipeline. Below is an Ansible playbook example. This playbook runs the AS3 call above.
---
- hosts: bigip
  connection: local
  gather_facts: false
  vars:
    my_admin: "admin"
    my_password: "admin"
    bigip: "10.1.1.12"
  tasks:
    - name: Deploy AS3 WebApp
      uri:
        url: "https://{{ bigip }}/mgmt/shared/appsvcs/declare"
        method: POST
        headers:
          "Content-Type": "application/json"
          "Authorization": "Basic YWRtaW46YWRtaW4="
        body: "{{ lookup('file','as3.json') }}"
        body_format: json
        validate_certs: no
        status_code: 200

That's it :)

7. FIND the Policy-ID

When the policy is created, a Policy-ID is assigned. By default, this ID doesn't appear anywhere: neither in the GUI, nor in the response after the creation. You have to calculate it or ask for it. This ID is required for several actions in a CI/CD pipeline.

7.1 Calculate the Policy-ID

We created this Python script to calculate the Policy-ID. It is a hash of the policy name (including the partition). For the previously created policy named "/Common/policy-api-arcadia", the Policy-ID is "Ar5wrwmFRroUYsMA6DuxlQ".

Paste this Python code in a new waf-policy-id.py file, and run the command:

python3 waf-policy-id.py "/Common/policy-api-arcadia"

The outcome will be:

The Policy-ID for /Common/policy-api-arcadia is: Ar5wrwmFRroUYsMA6DuxlQ

#!/usr/bin/env python3
# The Policy-ID is the base64-encoded MD5 digest of the policy full path,
# with the trailing "=" padding removed.
from hashlib import md5
import base64
import sys

pname = sys.argv[1]
policy_id = base64.b64encode(md5(pname.encode()).digest()).decode().rstrip("=")
print("The Policy-ID for", pname, "is:", policy_id)

7.2 Retrieve the Policy-ID and fullPath with a REST API call

Make the call below, and you will see all the policy creations in the response. Find yours and collect the policyReference directive. The Policy-ID is in the link value:

"link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0"

You can see as well, at the end of the definition, the "fileReference" referring to the JSON file pulled by the BIG-IP. And please notice the "fullPath", required if you want to update your policy.

curl --location --request GET 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
  --header 'Authorization: Basic YWRtaW46YWRtaW4='

One of the entries in the response will look like this:

{
  "isBase64": false,
  "executionStartTime": "2020-07-22T11:23:42Z",
  "status": "COMPLETED",
  "lastUpdateMicros": 1.595417027e+15,
  "getPolicyAttributesOnly": false,
  "kind": "tm:asm:tasks:import-policy:import-policy-taskstate",
  "selfLink": "https://localhost/mgmt/tm/asm/tasks/import-policy/B45J0ySjSJ9y9fsPZ2JNvA?ver=16.0.0",
  "filename": "",
  "policyReference": {
    "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0",
    "fullPath": "/Common/policy-api-arcadia"
  },
  "endTime": "2020-07-22T11:23:47Z",
  "startTime": "2020-07-22T11:23:42Z",
  "id": "B45J0ySjSJ9y9fsPZ2JNvA",
  "retainInheritanceSettings": false,
  "result": {
    "policyReference": {
      "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0",
      "fullPath": "/Common/policy-api-arcadia"
    },
    "message": "The operation was completed successfully. The security policy name is '/Common/policy-api-arcadia'. "
  },
  "fileReference": {
    "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
  }
}
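As an alternative to hashing the name yourself, the same lookup can be done programmatically against the response above. A small sketch, assuming the lab host and credentials:

# Hedged sketch: look the Policy-ID up from the import-task listing by
# matching the policy fullPath in "policyReference".
import requests

BIGIP = "https://10.1.1.12"
AUTH = ("admin", "admin")

items = requests.get(
    f"{BIGIP}/mgmt/tm/asm/tasks/import-policy/",
    auth=AUTH, verify=False,
).json()["items"]

for item in items:
    ref = item.get("policyReference", {})
    if ref.get("fullPath") == "/Common/policy-api-arcadia":
        # link looks like .../mgmt/tm/asm/policies/<Policy-ID>?ver=16.0.0
        policy_id = ref["link"].split("/")[-1].split("?")[0]
        print("Policy-ID:", policy_id)
        break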
8. UPDATE an existing policy

It is pretty easy to update the WAF policy from a new version of the JSON file. To do so, collect the "policy" and "fullPath" directives from the previous call (7.2 Retrieve the Policy-ID and fullPath with a REST API call); this is the path of the policy in the BIG-IP. Then run the call below, the same as 1.2 PULL JSON file from a repository, but with the policy and fullPath directives added. Don't forget to APPLY this new version of the policy afterwards (see 3. APPLY the policy).

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --data-raw '{
    "fileReference": {
      "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json"
    },
    "policy": {
      "fullPath":"/Common/policy-api-arcadia"
    }
  }'

TIP: this call can be used in place of the FIRST call when we created the policy (1.2 PULL JSON file from a repository). But be careful: the fullPath is the name set in the JSON policy file. The 2 values need to match:

- "name": "policy-api-arcadia" in the JSON policy file pulled by the BIG-IP
- "policy":"fullPath" in the POST call

9. Video demonstration

In order to help you understand how this looks on the BIG-IP, I created a video covering 4 topics explained in this article:

- The JSON WAF policy
- Pulling the policy from a remote repository
- Updating the WAF policy with a new version of the declarative JSON file
- Deploying a full service with AS3 and a declarative WAF policy

At the end of this video, you will be able to adapt the REST declarative API calls to your infrastructure, in order to deploy protected services with your CI/CD pipelines. Direct link to the video on the DevCentral YouTube channel: https://youtu.be/EDvVwlwEFRw

Moving HTTP Load Balancers Between F5 Distributed Cloud Namespaces — Why It's Harder Than You Think
The Problem

If you have been working with F5 Distributed Cloud (XC) for a while, you have probably run into this: your namespace structure no longer reflects how your teams or applications are organized. Maybe the initial layout was a quick decision during onboarding. Maybe teams have merged, projects have grown, or your naming convention has evolved. Either way, you now want to move a handful of HTTP load balancers from one namespace to another. Simple enough, right? Just change the namespace field and save... Except you can't. There is no "move" operation on F5 XC - not in the UI, not in the API. Changing the namespace of a load balancer means deleting it in the source and re-creating it in the target. And that is where things get complicated.

Why a Simple Delete-and-Recreate Is Not Enough

On the surface, the API is straightforward: "GET" the config, "DELETE" the object, "POST" it into the new namespace. But a production HTTP load balancer on XC is rarely a standalone object. It sits at the top of a dependency tree that can include origin pools, health checks, TLS certificates, service policies, app firewalls, rate limiters, and more. Every one of those dependencies needs to be handled correctly - or the migration breaks. Here are the main challenges we might run into.

Referential Integrity

F5 XC enforces strict referential integrity. You cannot delete an origin pool that is still referenced by a load balancer. You cannot create a load balancer that references an origin pool that does not exist yet. This means the order of operations matters: delete top-down (LBs first, then dependencies), create bottom-up (dependencies first, then LBs); a small ordering sketch follows at the end of this section. It also means that if two load balancers share an origin pool, you cannot move them independently. Delete the first LB, try to delete the shared pool, and the API returns a 409 Conflict because the second LB still references it. Both LBs - and all of their shared dependencies - have to be moved together as a single atomic unit.

New CNAMEs After Every Move

When you delete and re-create an HTTP load balancer, F5 XC assigns a new "host_name" (the CNAME target that your DNS records point to). If the LB uses Let's Encrypt auto-certificates, the ACME challenge CNAME changes too. That means after every move, someone needs to update external DNS records - and until that happens, the application is unreachable or the TLS certificate renewal fails. For tenants using XC-managed DNS zones with "Allow Application Loadbalancer Managed Records" enabled, this is handled automatically. But many customers manage their own DNS, and they need the old and new CNAME values for every moved LB.

Certificates with Non-Portable Private Keys

This one is subtle. When a load balancer uses a manually imported TLS certificate, the private key is stored in one of several formats: blindfolded (encrypted with the Volterra blindfold key) or clear secret. In both of these cases, the XC API never returns the private key material in its GET response. You get the certificate and metadata, but not the key. That means you cannot extract-and-recreate the certificate in a new namespace via the API. Cross-namespace certificate references (outside of the "shared" namespace) are also not supported. So if an LB in namespace A uses a manually imported certificate stored in namespace A, and you want to move that LB to namespace B, you need to first manually upload the same certificate into namespace B (or into the "shared" namespace) before the migration can proceed.
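To make the ordering rule concrete, here is an illustration-only sketch (Python 3.9+ for graphlib) with made-up object names; it is not code from any F5 tool:

# Creation must happen bottom-up (dependencies first); deletion is the same
# order reversed (load balancers first). All names are hypothetical.
from graphlib import TopologicalSorter

# object -> set of objects it references (its dependencies)
deps = {
    "http_lb_app1": {"origin_pool_a", "app_firewall_x"},
    "http_lb_app2": {"origin_pool_a"},  # shared pool => same atomic unit
    "origin_pool_a": {"healthcheck_a"},
    "healthcheck_a": set(),
    "app_firewall_x": set(),
}

create_order = list(TopologicalSorter(deps).static_order())
delete_order = list(reversed(create_order))
print("create bottom-up:", create_order)  # dependencies first, LBs last
print("delete top-down: ", delete_order)  # LBs first, dependencies last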
API Metadata

The XC API returns a "referring_objects" field on every config GET response. In theory, this tells you what other objects reference a given resource - exactly what you need to know before deleting something. In practice, this field can be empty even when active references exist. The only reliable way to detect all external references is to actively scan: fetch the config of every load balancer in the namespace and check their specs for references to the objects you are about to move.

Cross-Namespace References Are Not Allowed

On F5 XC, an HTTP load balancer can only reference objects in its own namespace, or in the "system" or "shared" namespace. If your origin pool lives in namespace A and you move the LB to namespace B, the origin pool must either come along to namespace B or already exist there. There is no way to have the LB in namespace B point to a pool in namespace A. This means you need to discover the complete transitive dependency tree of every LB, determine which dependencies need to move, detect which are shared between multiple LBs, and batch everything accordingly.

The Tool: XC Namespace Mover

To deal with all of this, (A)I built xc-ns-mover - a Python CLI tool that automates the entire process. It has two components:

- Scanner - scans all namespaces on your tenant, lists every HTTP load balancer, and writes a CSV report. This gives you the inventory to decide what to move.
- Mover - takes a CSV of load balancers, discovers all dependencies, groups LBs that share dependencies into atomic batches, runs a series of pre-flight checks, and then executes the migration - or generates a dry-run report so you can review everything first, or do the job manually (JSON code blocks are available in the report).

What the Mover Does Before Touching Anything

The mover runs a series of pre-flight phases before making any changes:

1. Discovery and batching - fetches every LB config, walks the dependency tree, and uses a union-find algorithm to cluster LBs with shared dependencies into batches (a toy illustration follows at the end of this section).
2. External reference scan - for every dependency being moved, checks whether any LB outside the move list references it. If so, that dependency cannot be moved without breaking the external LB, and the batch is blocked.
3. Conflict detection - lists all existing objects in the target namespace. If a name already exists, the user can skip the object or rename it with a configurable prefix (e.g., "migrated-my-pool"). All internal JSON references are updated automatically.
4. Certificate pre-flight - identifies certificates with non-portable private keys, then searches the target and "shared" namespaces for a matching certificate by domain/SAN comparison (including wildcard matching per RFC 6125). If a match is found, the LB's certificate reference is automatically rewritten. If not, the batch is blocked until the certificate is manually created.
5. DNS zone pre-flight - queries the tenant's DNS zones to detect which ones have managed LB records enabled. LBs under managed zones are flagged as "auto-managed" in the report - no manual DNS update needed.

After all checks pass, the actual migration follows a strict order per batch: back up everything, delete top-down, create bottom-up, verify new CNAMEs. If anything fails, automatic rollback kicks in - objects created in the target are deleted, objects deleted from the source are restored from backups.
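The batching idea from phase 1 is easy to illustrate. This is a toy version with hypothetical object names, not the tool's actual implementation; the point is that any two LBs sharing a dependency collapse into one batch:

# Minimal union-find: LBs that share any dependency end up in the same batch.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

lb_deps = {
    "lb-app1": {"pool-shared", "waf-a"},
    "lb-app2": {"pool-shared"},  # shares pool-shared with lb-app1
    "lb-app3": {"pool-c"},       # independent
}

uf = UnionFind()
for lb, deps in lb_deps.items():
    for dep in deps:
        uf.union(lb, dep)

# Group LBs by the root of their dependency cluster
batches = {}
for lb in lb_deps:
    batches.setdefault(uf.find(lb), []).append(lb)
print(list(batches.values()))  # [['lb-app1', 'lb-app2'], ['lb-app3']]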
The Reports

Every run produces an HTML report. The dry-run report shows planned configurations, the full dependency graph, certificate issues, DNS changes required, and any blocking issues - all before a single API call mutates anything. The post-migration report includes old and new CNAME values, a DNS changes table with exactly which records need updating, and full configuration backups of everything that was touched.

Things to Keep in Mind

A few caveats that are worth highlighting:

- Brief interruption is unavoidable - the migration deletes and re-creates load balancers. During that window (typically seconds to a few minutes per batch), traffic to affected domains will be impacted. Plan a change window.
- Only HTTP load balancers are supported - TCP load balancers and other object types are not handled by this tool.
- DNS updates are your responsibility - the report gives you all the values (old CNAME, new CNAME, ACME challenge CNAME), but you need to update your DNS provider.
- Always run the dry-run first - the tool enforces this by default: it stores a fingerprint after a dry-run and verifies it before executing. If the config changes, a new dry-run is required.

The project is open source and available on GitHub. It is privately maintained and not "officially supported": https://github.com/de1chk1nd/resources-and-tools/blob/main/tools/xc-ns-mover/README.md

If you find bugs or have feature requests, please open a GitHub issue.

Pwned Passwords Check
Problem this snippet solves:

This snippet makes it possible to use Troy Hunt's 'Pwned Passwords' API. By using this API one can check whether the password being used was exposed in earlier data breaches. You can use this information to deny access to highly secure resources, or to force a user to first change their password to one that isn't known to be exposed in earlier data breaches. Or you could choose to just inform the user that it would be wise to change their password.

It's good to note that the password itself will not be shared while using this API. This snippet uses a mathematical property called k-anonymity. For more information about k-anonymity and Troy Hunt's 'Pwned Passwords' API see: https://www.troyhunt.com/ive-just-launched-pwned-passwords-version-2/

This snippet also uses Patt-tom McDonnell's hibp-checker node package.

How to use this snippet:

Prepare the BIG-IP

- Provision the BIG-IP with iRuleLX.
- Create LX Workspace: hibp
- Add iRule: hibp-irule
- Add Extension: hibp-extension
- Add LX Plugin: hibp-plugin -> From Workspace: hibp

Install the node.js hibp-checker module

# cd /var/ilx/workspaces/Common/hibp/extensions/hibp-extension/
# npm install hibp-checker --save
/var/ilx/workspaces/Common/hibp/extensions/hibp-extension
└── hibp-checker@1.0.0

iRule

To make it work, you need to install the iRule on the virtual server that publishes your application with APM authentication.

Access profile

If you already have an existing access profile, you will need to modify it and include some additional configuration in your VPE. If you have no access profile, you can start building your own based on the description we provide below.

Configuring the Visual Policy Editor

The screenshot below shows an example Visual Policy Editor on how you can use the Pwned Passwords snippet.

VA – Force Password Change

This is a Variable Assignment agent that triggers APM to show a Change Password window.

Set variable: session.logon.last.change_password
to Custom Expression: expr { 1 }

VA – Get Password

This is a Variable Assignment agent that copies the password to a session variable that can be read by the hibp iRule.

Set variable: session.custom.hibp.password
to Custom Expression: return [mcget -secure {session.logon.last.password}]

IE - HIBP

This is an iRule Event agent with the ID set to 'hibp'. This will trigger the hibp_irule to come into action.

EA – HIBP Verdict

This is an Empty Action with two branches. The branch named "Not Pwned" contains the following expression:

expr { [mcget -nocache {session.custom.hibp.status} ] == 0 }

MB – Exposed Password

This is a message box that will inform the user that their password was exposed in earlier data breaches and a password change is needed. The message could be something like this:

The password you are using was found in %{session.custom.hibp.status} data breaches. In order to be compliant with our security policy, you must change your password.
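Before wiring everything together, it helps to see what the k-anonymity lookup actually does on the wire. The following is an illustrative, plain-Python version of the range query the hibp-checker package builds on: only the first five characters of the password's SHA-1 hash ever leave your network.

# Illustrative k-anonymity range query against the Pwned Passwords API.
# The API returns "SUFFIX:COUNT" lines for every hash sharing the 5-char
# prefix; we match the remainder of our hash locally.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("Password123"))  # expect a large count for a common password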
hibp_irule

when ACCESS_POLICY_AGENT_EVENT {
    if { [ACCESS::policy agent_id] eq "hibp" } {
        set password [ACCESS::session data get session.custom.hibp.password]
        set failonerror 0
        if { $password eq "" } {
            log local0. "Error: no password set"
            ACCESS::session data set session.custom.hibp.status $failonerror
            return
        }
        set rpc_handle [ILX::init hibp-plugin hibp-extension]
        if {[catch { ILX::call $rpc_handle -timeout 12000 hibpCheck $password } result]} {
            log local0. "hibpCheck failed. ILX failure: $result"
            ACCESS::session data set session.custom.hibp.status $failonerror
            return
        }
        ACCESS::session data set session.custom.hibp.status [expr { $result }]
    }
}

Code:

var f5 = require('f5-nodejs');
const checkPassword = require('hibp-checker');

// Create a new rpc server for listening to TCL iRule calls.
var ilx = new f5.ILXServer();

ilx.addMethod('hibpCheck', function(req, res) {
    var password = req.params()[0];
    var breachCount = checkPassword(password);
    breachCount.then(function(result) {
        return res.reply(result);
    }, function(err) {
        return res.reply(err);
    });
});

// Start listening for ILX::call and ILX::notify events.
ilx.listen();

Tested this on version: 13.0

F5 API Security: Discovery and Protection
Introduction

APIs are everywhere, accounting for around 83% of all internet traffic today, with API calls growing nearly 300% faster than overall web traffic. Last year, the F5 Office of the CTO estimated that the number of APIs being deployed to production could reach between 500 million and over a billion by 2030. At the time, the notion of over a billion APIs in the wild was overwhelming, made even more concerning by estimates indicating that a significant portion were unmanaged or, in some cases, entirely undocumented. Now, in the era of AI-driven development and automation, that estimate of over a billion APIs may prove to be a significant understatement. According to recent research by IDC on API Sprawl and AI Enablement, "Organizations with GenAI enhanced applications/services in production have roughly 5x more APIs than organizations not yet investing significantly in GenAI". That all makes for a very large and complicated attack surface, and complexity is the enemy of security.

Discovery, Monitoring, and Protection

So, how do we begin securing such a large and complex attack surface? It requires a continuous approach that blends visibility, management, and enforcement. This includes multi-lens Discovery and Learning to detect unknown or shadow APIs, determine authentication status, identify sensitive data, and generate accurate OpenAPI schemas. It also involves Monitoring to establish baselines for endpoint parameters, behaviors, and characteristics, enabling the detection of anomalies. Finally, we must Protect by blocking suspicious requests, applying rate limiting, and enforcing schema validation to prevent misuse. The API Security capabilities of the F5 product portfolio are essential for providing that continuous, defense-in-depth approach to protecting your APIs from DevTime to Runtime.

F5 API Security Essentials

Additional Resources

- F5 API Security Article Series: Out of the Shadows: API Discovery; Beyond Rest: Protecting GraphQL
- Deploy F5 Distributed Cloud API Discovery and Security: F5 Distributed Cloud WAAP Terraform Examples GitHub Repo
- Deploy F5 Hybrid Architectures API Discovery and Security: F5 Distributed Cloud Hybrid Security Architectures GitHub Repo
- F5 Distributed Cloud Documentation: F5 Distributed Cloud Terraform Provider Documentation; F5 Distributed Cloud Services API Documentation
Enhance your GenAI chatbot with the power of Agentic RAG and F5 platform
Agentic RAG (Retrieval-Augmented Generation) enhances the capabilities of a GenAI chatbot by integrating dynamic knowledge retrieval into its conversational abilities, making it more context-aware and accurate. In this demo, I will demonstrate an autonomous decision-making GenAI chatbot utilizing Agentic RAG. I will explore what Agentic RAG is and why it's crucial in today's AI landscape. I will also discuss how organizations can leverage GPUaaS (GPU as a Service) or AI Factory providers to accelerate their AI strategy. The F5 platform provides robust security features that protect sensitive data while ensuring high availability and performance. It also optimizes the chatbot by streamlining traffic management and reducing latency, ensuring smooth interactions even during high demand. This integration ensures the GenAI chatbot is not only smart but also reliable and secure for enterprise use.
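Conceptually, the "agentic" part means the model decides per request whether to consult the knowledge base before answering. The sketch below is purely illustrative; llm() and retrieve() are hypothetical placeholders for your model endpoint and vector store, not part of any F5 product.

# Conceptual sketch of an agentic RAG turn: decide, retrieve, then answer.
def llm(prompt: str) -> str:
    ...  # hypothetical call to your LLM of choice

def retrieve(query: str, k: int = 3) -> list[str]:
    ...  # hypothetical similarity search against your knowledge base

def answer(question: str) -> str:
    # 1) Let the model decide whether external knowledge is needed.
    decision = llm(f"Does answering this need a document lookup? yes/no: {question}")
    context = ""
    if decision.strip().lower().startswith("yes"):
        # 2) Retrieve passages and ground the answer in them.
        context = "\n".join(retrieve(question))
    # 3) Generate the final, context-aware response.
    return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")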
