application delivery
VIP is not responding on SYN after enabling other modules like ASM, APM and AFM.
Hi all,

I have an F5 VE running 17.5.1.3 in my lab environment for learning purposes. As a back end I installed the phpauction web page, and the configuration works flawlessly when only the LTM module is enabled, in its simplest form: virtual server on port 80, TCP profile, HTTP profile, pool, Automap.

When I add another module, for example ASM, the VIP stops working even though it is still green/up and no security policy has even been attached to it. Captures show that the SYN is reaching the F5, but I do not get a response from it:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type EN10MB (Ethernet), capture size 65535 bytes
16:24:51.691462 IP 192.168.1.100.64282 > 192.168.2.10.80: Flags [S], seq 5173934, win 65535, options [mss 1260,nop,wscale 8,nop,nop,sackOK], length 0 in slot1/tmm1 lis= port=1.1 trunk=
16:24:51.942738 IP 192.168.1.100.64625 > 192.168.2.10.80: Flags [S], seq 1642892817, win 65535, options [mss 1260,nop,wscale 8,nop,nop,sackOK], length 0 in slot1/tmm0 lis= port=1.1 trunk=

I checked the back-end connection as well, but the F5 is not sending the SYN out to the web server, so it looks like it is blackholing my traffic. When I disable ASM and use only LTM, everything starts to work again. The same issue happens with other modules such as APM or AFM: the VIP stops responding after enabling just that one module.

I tried the following:
- Factory reset the machine.
- Upgrade to 17.5.1.3.
- Enabled RST cause logging (but there isn't any, because there is no response to the SYN in the first place).
- Forced a config reload on the mcpd process.
- Enabled LTM debugging without receiving any logs about the connection.
- Looked into the DoS and bot defense logs to see if traffic is dropped at an earlier point in the chain.
- Enabled tmm debug without getting any relevant logs.
- Changed the VIP from Standard to FastL4.
- Removed the HTTP profile.

I did play around a lot with other modules as well (ASM, APM, AFM, SSLO, DNS), which is why I thought it was a configuration issue at first, but setting the machine back to factory default did not solve it. Is it possible there are some leftovers from my learning path on this machine? Do you know what additional steps I can take to solve this issue?

Thanks.
Best regards,
Mitchel

VIP in HTTPS that redirects to another VIP in HTTPS
Hi,

I have a VIP in HTTPS with a certificate that has an LTM policy attached. In the policy, if the path is /prova, I am trying to redirect to another VIP in HTTPS, but this doesn't work. Usually I redirect calls only to VIPs in HTTP. Is there a solution for using HTTPS on all the VIPs?

Thanks

BIG-IP VE: 40G Throughput from 4x10G physical NICs
Hello F5 Community,

I'm designing a BIG-IP VE deployment and need to achieve 40G throughput from 4x10G physical NICs. After extensive research (including reading K97995640), I've created this flowchart to summarize the options. Can you verify if this understanding is correct?

**My Environment:**
- Physical server: 4x10G NICs
- ESXi 7.0
- BIG-IP VE (Performance LTM license)
- Goal: Maximize throughput for data plane

**Research Findings:**
From F5 K97995640: "Trunking is supported on BIG-IP VE... intended to be used with SR-IOV interfaces but not with the default vmxnet3 driver."

              [Need 40G to F5 VE]
                       │
         ┌─────────────┴─────────────┐
         │                           │
   [F5 controls]              [ESXi controls]
   (F5 does LACP)             (ESXi does LACP)
         │                           │
    Only SR-IOV               Link Aggregation
         │                           │
   ┌─────┴─────┐              ┌──────┴──────┐
   │ 40G per   │              │  40G agg    │
   │   flow    │              │  10G/flow   │
   └───────────┘              └─────────────┘

Distributed Cloud for App Delivery & Security for Hybrid Environments
As enterprises modernize and expand their digital services, they increasingly deploy multiple instances of the same applications across diverse infrastructure environments—such as VMware, OpenShift, and Nutanix—to support distributed teams, regional data sovereignty, redundancy, or environment-specific compliance needs. These application instances often integrate into service chains that span across clouds and data centers, introducing both scale and operational complexity. F5 Distributed Cloud provides a unified solution for secure, consistent application delivery and security across hybrid and multi-cloud environments. It enables organizations to add workloads seamlessly—whether for scaling, redundancy, or localization—without sacrificing visibility, security, or performance.

What's New in the NGINX Plus R36 Native OIDC Module
NGINX Plus R36 is out, and with it we hit a really important milestone for the native `ngx_http_oidc_module`, which now supports a broad set of OpenID Connect (OIDC) features commonly relied on in production environments. In this release, we add:

- Support for OIDC Front-Channel Logout 1.0, enabling proper single sign-out across multiple apps
- Built-in PKCE (Proof Key for Code Exchange) support
- Support for the `client_secret_post` client authentication method at the token endpoint

R35 gave the native module RP-initiated logout and a UserInfo integration; R36 builds on that and closes several important gaps. In this post I'll walk through all the new features in detail, using Microsoft's Entra ID as the concrete example IdP.

Front-Channel Logout: Real Single Sign-Out

Why RP-initiated logout alone isn't enough

Until now, `ngx_http_oidc_module` supported only RP-initiated logout (per OpenID Connect RP-Initiated Logout 1.0). That gave us a standards-compliant "logout button": when the user clicks "Logout" in your app, NGINX Plus sends them to the IdP's logout endpoint, and the IdP tears down its own session. The catch is that RP-initiated logout only reliably logs you out of:

- The current application (the RP that initiated the logout), and
- The IdP session itself

Other applications that share the same IdP session typically stay logged in unless they also have a custom logout flow that goes through the IdP. That's not what most people think of as "single sign-out".

Imagine you borrow your partner's personal laptop, log into a few internal apps that are all protected by NGINX Plus, finish your work, and hit "Logout" in one of them. You really want to be sure you're logged out of all of those apps, not just the one where you pressed the button. That's exactly what front-channel logout is for.

What front-channel logout does

The OpenID Connect Front-Channel Logout 1.0 spec defines a way for the OP (the OpenID Provider) to notify all RPs that share a user's session that the user has logged out. At a high level:

1. The user logs out (either from an app using RP-initiated logout, or directly on the IdP).
2. The OP figures out which RPs are part of that single sign-on session.
3. The OP renders a page with one `<iframe>` per RP, each pointing at the RP's `frontchannel_logout_uri`.
4. Each RP receives a front-channel logout request in its own back-end and clears its local session.

The browser coordinates this via iframes, but the session termination logic lives entirely in NGINX Plus, as the diagram below shows.

Configuring Front-Channel Logout in NGINX Plus

Let's start with the NGINX Plus configuration.
The change is intentionally minimal: you only need to add one directive to your existing `oidc_provider` block:

oidc_provider entra_app1 {
    issuer        https://login.microsoftonline.com/<tenant_id>/v2.0;
    client_id     your_client_id;
    client_secret your_client_secret;

    logout_uri              /logout;
    post_logout_uri         /post_logout/;
    logout_token_hint       on;
    frontchannel_logout_uri /front_logout;   # Enables front-channel logout

    userinfo on;
}

That's all that's required on the NGINX Plus side to enable single logout for this provider:

- `logout_uri` - path in your app that starts RP-initiated logout
- `post_logout_uri` - where the IdP will send the browser after logout
- `logout_token_hint on;` - instructs NGINX Plus to send `id_token_hint` when calling the IdP's logout endpoint
- `frontchannel_logout_uri` - path that will receive front-channel logout requests from the IdP

You'll repeat that pattern for every app/provider block that should participate in single sign-out.

Configuring Front-Channel Logout in Microsoft Entra ID

On the Microsoft Entra ID side, you need to register a Front-channel logout URL for each application. For each app:

1. Go to Microsoft Entra admin center -> App registrations -> Your application -> Authentication.
2. In Front-channel logout URL, enter the URL that corresponds to your NGINX configuration, for example: `https://app1.example.com/front_logout`. This must match the URI you configured with `frontchannel_logout_uri` in the `oidc_provider` configuration.
3. Repeat for `app2.example.com`, `app3.example.com`, and any other RP that should take part in single sign-out.

End-to-End Flow with Three Apps

Assume you have three apps configured the same way:

- https://app1.example.com
- https://app2.example.com
- https://app3.example.com

All of them:

- Use `ngx_http_oidc_module` with the same Microsoft Entra tenant
- Have `frontchannel_logout_uri` configured in NGINX
- Have the same URL registered as Front-channel logout URL in Entra ID

User signs in to multiple apps

The user navigates to `app1.example.com` and gets redirected to Microsoft's Entra ID for authentication. After a successful login, NGINX Plus establishes a local OIDC session, and the user can access app1. They then repeat this process for app2 and app3. At this point, the user has active sessions in all three apps:

1. User clicks `Logout` in app1 -> HTTP GET `https://app1.example.com/logout`
2. NGINX redirects to the Entra logout endpoint -> HTTP GET `https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/logout?...`
3. User confirms logout at Microsoft
4. IdP renders iframes that call all registered `frontchannel_logout_uri` values:
   - GET `https://app1.example.com/front_logout?sid=...`
   - GET `https://app2.example.com/front_logout?sid=...`
   - GET `https://app3.example.com/front_logout?sid=...`
5. `ngx_http_oidc_module` maps these `sid` values to NGINX sessions and deletes them
6. IdP redirects browser back to https://app1.example.com/post_logout/

How NGINX Maps a sid to a Session

So how does the module know which session to terminate when it receives a front-channel logout request like:

GET /front_logout?sid=ec91a1f3-... HTTP/1.1
Host: app2.example.com

The key is the `sid` claim in the ID token. Per the Front-Channel Logout spec, when an OP supports session-based logout it:

- Includes a `sid` claim in ID tokens
- May send `sid` (and `iss`) as query parameters to the `frontchannel_logout_uri`

When `ngx_http_oidc_module` authenticates a user, it:

1. Obtains an ID token from the provider.
2. Extracts the sid claim (if present).
3. Stores that sid alongside the rest of the session data in the module's session store.

Later, when a front-channel logout request arrives:

1. The module inspects the `sid` query parameter.
2. It looks up any active session in its session store that matches this `sid` for the current provider.
3. If it finds a matching active session, it terminates that session (clears cookies, removes data).
4. If there's no match, it ignores the request.

This makes the module resilient to bogus or replayed logout requests: a random `sid` that doesn't match any active session is simply discarded.

Where is the iss Parameter?

If you've studied the Front-Channel Logout spec carefully, you might be wondering: where is `iss` (issuer)? The spec says the OP MAY add `iss` and `sid` query parameters when rendering the logout URI, and if either is included, both MUST be. The reason is that the `sid` value is only guaranteed to be unique per issuer; combining `iss + sid` makes the pair globally unique.

In practice, though, reality is messy. For example, Microsoft Entra ID sends a sid in front-channel logout requests but does not send iss, even though its discovery document advertises `frontchannel_logout_session_supported: true`. This behavior has been reported publicly and has been acknowledged by Microsoft. If `ngx_http_oidc_module` strictly required iss, you simply couldn't use front-channel logout with Entra ID and some other providers. Instead, the module takes a pragmatic approach:

- It does not require iss in the logout request
- It already knows which provider it's dealing with (from the `oidc_provider` context)
- It stores sid values per provider, so sid collisions across providers can't happen inside that context

So while this is technically looser than what the spec recommends for general-purpose RPs, it's safe given how the module scopes sessions, and it makes the feature usable with real-world IdPs.

Cookie-Only Front-Channel Logout (and why you probably don't want it)

Front-channel logout has another mode that doesn't rely on sid at all. The spec allows an OP to call the RP's `frontchannel_logout_uri` without any query parameters and rely entirely on the browser sending the RP's session cookie. The RP then just checks, "do I have a session cookie?" and if yes, logs that user out. ngx_http_oidc_module supports this. However, modern browser behavior makes this approach very fragile:

- Recent browser versions treat cookies without a SameSite attribute as SameSite=Lax.
- Front-channel logout uses iframes, which are third-party / cross-site contexts.
- SameSite=Lax cookies do not get sent on these sub-requests, so your RP will never see its own session cookie in the front-channel iframe request.

To make cookie-only front-channel logout work, your session cookie would need:

Set-Cookie: NGX_OIDC_SESSION=...; SameSite=None; Secure

…and that has some serious downsides:

- SameSite=None opens you up to cross-site request forgery (CSRF).
- The current version of `ngx_http_oidc_module` does not expose a way to set `SameSite=None` on its session cookie directly.
- Even if you tweak cookies at a lower level, you might not want to weaken your CSRF posture just to accommodate this logout variant.

Because of that, the recommended and practical approach is the sid-based mechanism:

- It doesn't rely on third-party cookies.
- It works in modern browsers with strict SameSite behaviors.
- It's easy to reason about and debug.
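Conceptually, the sid bookkeeping described above boils down to something like the following sketch. Python is used here purely as illustrative pseudocode with invented names; the real logic is implemented in C inside `ngx_http_oidc_module` and its session store. It does show, though, why an unknown or replayed sid is harmless:

sessions = {}  # (provider, sid) -> our own session identifier

def on_login(provider: str, id_token_claims: dict, session_id: str) -> None:
    # After the code exchange, remember the IdP "sid" next to our own session.
    sid = id_token_claims.get("sid")
    if sid:
        sessions[(provider, sid)] = session_id

def on_front_channel_logout(provider: str, query: dict) -> None:
    # Handles GET <frontchannel_logout_uri>?sid=... sent from the OP's iframe.
    sid = query.get("sid")
    if sid is None:
        return  # cookie-only variant; not recommended, see above
    session_id = sessions.pop((provider, sid), None)
    if session_id is not None:
        terminate(session_id)  # clear the session cookie, drop stored data
    # an unknown, stale or replayed sid matches nothing and is ignored

def terminate(session_id: str) -> None:
    print(f"session {session_id} terminated")

With that picture in mind, the obvious follow-up question is how much trust we should place in the sid value itself.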
Is Relying on sid Secure Enough?

It's a fair question: if you no longer rely on your own session cookie, how safe is it to accept a logout request based solely on a sid received from the IdP? A few points to keep in mind:

- The spec defines `sid` as an opaque, high-entropy session identifier generated by the IdP. Implementations are expected to use cryptographically strong randomness with enough entropy to prevent guessing or brute force.
- Even if an attacker somehow learned a valid sid and sent a fake front-channel logout request, the worst they can do is log a user out of your application.
- Providers like Microsoft Entra ID treat sid as a session-scoped identifier. New sessions get new sid values, and sessions expire over time.
- `ngx_http_oidc_module` validates that the sid from the logout request matches an active session in its session store for that provider. A random or stale sid that doesn't match anything is ignored.

Taken together, sid-based front-channel logout is a very reasonable trade-off: you get robust single sign-out without weakening cookie security, and the remaining risks are small and easy to understand.

Front-Channel Logout Troubleshooting

If you've wired everything up and single logout still doesn't work as expected, here's a quick checklist.

Confirm that your IdP actually issues front-channel requests

Make sure:

- The provider's discovery document (.well-known/openid-configuration) includes `frontchannel_logout_supported: true`.
- You have configured the Front-channel logout URL for each application in your IdP.

If Entra ID doesn't send requests to your `frontchannel_logout_uri`, the RP will never know that it should log out the user.

Ensure the ID token contains a sid claim

Many IdPs, including Microsoft Entra ID, don't include sid in ID tokens by default, even if they support front-channel logout. For Entra ID you typically need to open your app registration -> go to Token configuration -> click Add optional claim -> select Token type: ID, then select sid and add it. After that, new ID tokens will carry a sid claim, which ngx_http_oidc_module can store and later match on logout.

Check what the IdP actually sends on front-channel logout

If you rely on the sid-based mechanism, inspect the HTTP requests your app receives at frontchannel_logout_uri:

- Do you see sid and iss query parameters?
- Does your provider also advertise `frontchannel_logout_session_supported: true` in the metadata?

If all of the above is in place, front-channel logout should "just work."

PKCE Support in ngx_http_oidc_module

In earlier versions of the `ngx_http_oidc_module`, we did not support PKCE because it is not required for confidential clients such as NGINX, which are able to securely store and transmit a client_secret. However, as the module gained popularity and with the release of the OAuth 2.1 draft specification recommending the use of PKCE for all client types, we decided to add PKCE support to ngx_http_oidc_module.

PKCE is an extension to OAuth 2.0 that adds an additional layer of security to the authorization code flow. The core idea is that the client generates a random code_verifier and derives a code_challenge from it, which is sent with the authorization request. When the client later exchanges the authorization code for tokens, it must send back the original code_verifier. The authorization server validates that the code_verifier matches the previously supplied code_challenge, preventing attacks such as authorization code interception. This is a brief overview of PKCE.
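To make the verifier/challenge relationship concrete, here is a small standalone Python sketch of the S256 derivation defined in RFC 7636. Nothing here is NGINX-specific; the module performs the equivalent steps internally in C:

import base64
import hashlib
import secrets

def make_code_verifier() -> str:
    # 32 random bytes -> a 43-character base64url string (RFC 7636 allows 43-128 chars)
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

def make_code_challenge(verifier: str) -> str:
    # code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), without padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

verifier = make_code_verifier()
challenge = make_code_challenge(verifier)
# The challenge is sent with the authorization request
# (&code_challenge=...&code_challenge_method=S256); the verifier is only
# revealed later, in the token request, which ties the two legs together.
print(verifier, challenge)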
If you'd like to learn more, I recommend reviewing the official RFC 7636 specification: https://datatracker.ietf.org/doc/html/rfc7636.

How is PKCE support implemented in the ngx_http_oidc_module?

The implementation of PKCE support in ngx_http_oidc_module is straightforward and intuitive. Moreover, if your identity provider supports PKCE and includes the parameter `code_challenge_methods_supported = S256` in its OIDC metadata, the module automatically enables PKCE with no configuration changes required. When initiating the authorization flow, the module generates a random code_verifier and derives a code_challenge from it using the S256 method. These parameters are sent with the authorization request. When the module later receives the authorization code, it sends the original code_verifier when requesting tokens, ensuring the authorization code exchange remains secure.

If your identity provider does not support automatic PKCE discovery, you can explicitly enable PKCE in your provider configuration by adding the `pkce on;` directive inside the oidc_provider block. For example:

oidc_provider entra_app2 {
    issuer        https://login.microsoftonline.com/<tenant_id>/v2.0;
    client_id     your_client_id;
    client_secret your_client_secret;

    pkce on;   # <- this directive enables PKCE support
}

That is all you need to do to enable PKCE support in the ngx_http_oidc_module.

client_secret_post Client Authentication

Another important enhancement in the ngx_http_oidc_module is the addition of support for the client_secret_post client authentication method. Previously, the module supported only client_secret_basic, which requires sending the client_id and client_secret in the Authorization header. According to the OAuth 2.0 specification, all providers must support client_secret_basic; however, for some providers, the use of client_secret_basic may be restricted due to security or policy considerations. For this reason, we added support for client_secret_post. This method allows sending the client_id and client_secret in the body of the POST request when exchanging the authorization code for tokens.

To use the client_secret_post method in ngx_http_oidc_module, you don't need to do anything at all - the module automatically determines which method to use based on the identity provider's metadata. If the provider indicates that it supports only client_secret_post, the module will use this method when exchanging authorization codes for tokens. If the provider supports both client_secret_basic and client_secret_post, the module will use client_secret_basic by default. Verifying this is simple - check the value of `token_endpoint_auth_methods_supported` in the provider's OIDC metadata:

$ curl https://login.microsoftonline.com/<tenant_id>/v2.0/.well-known/openid-configuration | jq
{
  ...
  "token_endpoint_auth_methods_supported": [
    "client_secret_post",
    "private_key_jwt",
    "client_secret_basic",
    "self_signed_tls_client_auth"
  ],
  ...
}

In this example, Microsoft Entra ID supports both methods, so the module will use client_secret_basic by default.
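To visualize the difference between the two methods, here is a hedged sketch of the two token-request shapes using Python's requests library. The endpoint path, code and credential values are placeholders; the module builds the equivalent request for you, so this is illustration only:

import requests

# Placeholder values for illustration only
TOKEN_ENDPOINT = "https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token"
common = {
    "grant_type": "authorization_code",
    "code": "<authorization_code>",
    "redirect_uri": "https://app1.example.com/oidc_callback",
}

# client_secret_basic: the credentials travel in the Authorization header
basic = requests.post(TOKEN_ENDPOINT, data=common,
                      auth=("your_client_id", "your_client_secret"))

# client_secret_post: the credentials travel in the POST body instead
post = requests.post(TOKEN_ENDPOINT,
                     data={**common,
                           "client_id": "your_client_id",
                           "client_secret": "your_client_secret"})

In both cases the response carries the same tokens; only the way the client proves its identity changes.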
Wrapping Up

As you can see, in this release, we have significantly expanded the functionality of the ngx_http_oidc_module by adding support for front-channel logout, PKCE, and the client_secret_post client authentication method. These enhancements make the module more flexible and secure, enabling better integration with various OpenID Connect providers and offering a higher level of security for your applications. I hope this overview was useful and informative for you!

See you soon!

TLS handshake failure from BIG-IP to backend – Fatal Alert: Decode Error (Server SSL)
Hello DevCentral Team,

I am troubleshooting a server-side TLS issue where BIG-IP intermittently fails to establish a TLS connection to a backend service.

Observed behavior:
- Client to BIG-IP TLS handshake completes successfully.
- BIG-IP to backend TLS handshake fails.
- Backend responds with a TLS alert: Level Fatal, Description Decode Error.
- Failure occurs very early in the handshake, immediately after ClientHello.

Configuration details (sanitized):
- Backend service listens on HTTPS using TLS 1.2.
- BIG-IP is operating in full-proxy mode.
- The default serverssl profile has been removed.
- A custom Server SSL profile is attached with an explicit server-name configured and server-side SNI enabled.
- No client certificate authentication is required by the backend.

Validation already performed:
- Direct openssl s_client testing from BIG-IP to the backend succeeds.
- TLS version and cipher suites are compatible.
- Backend certificate chain appears valid when tested outside BIG-IP.
- The issue appears specific to BIG-IP initiated server-side TLS.

Questions:
- Can a backend return a fatal decode_error even when BIG-IP sends SNI correctly?
- Are there known cases where certain TLS extensions sent by BIG-IP but not by OpenSSL trigger this error?
- Are there Server SSL settings commonly associated with decode_error responses?
- Any recommended BIG-IP specific debugging steps beyond tcpdump and ssldump?

Thanks in advance for any guidance or similar experiences.

Cisco TACACS+ Config on ISE LTM Pair
I'm trying to add TACACS+ configuration to my ISE LTMs (v17.1.3). We use Active Directory for authentication. The problem is that when I try to create the profile, the "type" dropdown does not show "TACACS+". APM is not provisioned either; not sure if that is needed. I provisioned it on our lab, but no help.

Illegal Metacharacter in Parameter Name in JSON Data
Dears,

Can someone tell what the issue is here? The BIG-IP is reporting the illegal metacharacter "#" in a parameter name, but the highlighted part of the violation doesn't contain the metacharacter # in the first place, and the parameter which BIG-IP displayed in the highlighted part is actually not a parameter. I believe the issue is with the BIG-IP only. Any suggestions here, please? I think the issue is that BIG-IP is not parsing the JSON payload properly.

Agentic AI with F5 BIG-IP v21 using Model Context Protocol and OpenShift
Introduction to Agentic AI

Agentic AI is the capability of extending Large Language Models (LLMs) by adding tools. This allows the LLMs to interoperate with functionality external to the LLM. Examples are the capability to search for a flight or to push code into GitHub. Agentic AI operates proactively, minimising human intervention, making decisions and adapting to perform complex tasks by using tools, data, and the Internet. This is done by basically giving the LLM knowledge of the APIs of GitHub or the flight agency; the reasoning of the LLM then makes use of these APIs. The functionality external to the LLM can run on the local computer or on network MCP servers. This article focuses on network MCP servers, which fit into the F5 AI Reference Architecture components at the insertion point indicated in green below:

Introduction to Model Context Protocol

Model Context Protocol (MCP) is a universal connector between LLMs and tools. Without MCP, the LLM has to be programmed to support the different APIs of the different tools. This is not a scalable model, because it requires a lot of effort to add all tools for a given LLM and for a tool to support several LLMs. Instead, when using MCP, the LLM (or AI application) and the tool only need to support MCP. Without further coding, the LLM is automatically able to use any tool that exposes its functionality through MCP. This is exhibited in the following figure:

MCP example workflow

The next diagram shows the basic MCP workflow using the LibreChat AI application as an example. The flow is as follows:

1. The AI application queries agents (MCP servers) for the tools they provide.
2. The agents return a list of the tools, with a description and the parameters required.
3. When the AI application makes a request to the AI model, it includes in the request information about the tools available.
4. When the AI model finds out it doesn't have built in what is required to fulfil the request, it makes use of the tools. The tools are accessed through the AI application.
5. The AI model composes a result from its local knowledge and the results from the tools.

Out of the workflow above, the most interesting is step 1, which is used to retrieve the information the AI model needs in order to use the tools. Using the mcpLogger iRule provided later in this article, we can see the MCP messages exchanged.

Step 1a:

{
  "method": "tools/list",
  "jsonrpc": "2.0",
  "id": 2
}

Step 1b:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "airport_search",
        "description": "Search for airport codes by name or city.\n\nArgs:\n query: The search term (city name, airport name, or partial code)\n\nReturns:\n List of matching airports with their codes",
        "inputSchema": {
          "properties": { "query": { "type": "string" } },
          "required": [ "query" ],
          "type": "object"
        },
        "outputSchema": {
          "properties": { "result": { "type": "string" } },
          "required": [ "result" ],
          "type": "object",
          "x-fastmcp-wrap-result": 1
        },
        "_meta": { "_fastmcp": { "tags": [] } }
      }
    ]
  }
}

Note from the above that the AI model only requires a description of the tool in human language and a formal declaration of the input and output parameters. That's all! The reasoning of the AI model is what will make good use of the API described through MCP. The AI models will interpret even the error messages.
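As a side note, the airport_search entry above was produced by a FastMCP-based server. The actual server used in the demo is not shown in this article, so the following Python sketch is a hypothetical reconstruction (server name, return value and port are assumptions), but it makes clear where the description and schemas come from: FastMCP derives them from the function's docstring and type hints.

from fastmcp import FastMCP

mcp = FastMCP("Flight Tools")  # server name is an assumption

@mcp.tool()
def airport_search(query: str) -> str:
    """Search for airport codes by name or city.

    Args:
        query: The search term (city name, airport name, or partial code)

    Returns:
        List of matching airports with their codes
    """
    # The real lookup logic is not shown in the article; a canned answer
    # keeps the sketch short.
    return "AMS - Amsterdam Airport Schiphol"

if __name__ == "__main__":
    # Expose the tool over Streamable HTTP so it can sit behind the BIG-IP
    # virtual server. The port is arbitrary; older FastMCP releases call
    # this transport "streamable-http" instead of "http".
    mcp.run(transport="http", host="0.0.0.0", port=8000)

Everything the AI model knows about the tool, including any error text the tool returns, comes from what this server sends back over MCP.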
For example, if the AI model misinterprets the input parameters (typically because of a poor tool descriptor), it might correct itself, if the error message is descriptive enough, and call the tool again with the right parameters. Of course, the MCP protocol is more than this, but the above is enough to understand the basics of how tools are used by LLMs and how the magic works.

F5 BIG-IP and MCP

BIG-IP v21 introduces support for MCP, which is based on JSON-RPC. The MCP protocol has had several iterations. For IP-based communication, the transport of the JSON-RPC messages initially used the HTTP+SSE transport (now considered legacy), but this has been completely replaced by the Streamable HTTP transport. The latter still uses SSE when streaming multiple server messages. Regardless of the MCP version, on the F5 BIG-IP you only need to enable the JSON and SSE profiles in the Virtual Server handling MCP. This is shown next:

By enabling these profiles we automatically get basic protocol validation but, more relevantly, we obtain the ability to handle MCP messages with JSON- and SSE-oriented events and functions. This allows parsing and manipulation of MCP messages but also the capability of doing traffic management (load balancing, rate limiting, etc.). Next you can see the parameters available for these profiles, which allow limiting the size of the various parts of the messages. Defaults are fine for most cases:

Check the following links for information on iRules events and commands available for the JSON and SSE protocols.

MCP and persistence

Session persistence is optional in MCP, but when the server indicates an Mcp-Session-Id it is mandatory for the client. MCP servers require persistence when they keep a context (state) for the MCP dialog. This means that the F5 BIG-IP must support handling this Mcp-Session-Id as well, and it does so by using UIE (Universal) persistence with this header. A sample iRule mcpPersistence is provided in the gitHub repository; the client-side sketch below shows how the header travels.
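Here is a purely illustrative Python sketch of an MCP client over Streamable HTTP; the endpoint URL and the initialize fields are placeholder values, not taken from the demo. The point to notice is that once the server returns an Mcp-Session-Id, every later request carries it, which is exactly what the UIE persistence record keys on:

import requests

MCP_URL = "https://mcp.example.com/mcp"   # placeholder: the BIG-IP virtual server
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

# 1. initialize: a stateful MCP server answers with an Mcp-Session-Id header
init = requests.post(MCP_URL, headers=headers, json={
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {"protocolVersion": "2025-03-26",
               "capabilities": {},
               "clientInfo": {"name": "demo-client", "version": "0.1"}},
})
session_id = init.headers.get("Mcp-Session-Id")

# 2. every subsequent request must carry the same header, so a UIE
#    persistence record keyed on Mcp-Session-Id keeps the whole dialog
#    on the same back-end server
if session_id:
    headers["Mcp-Session-Id"] = session_id

tools = requests.post(MCP_URL, headers=headers, json={
    "jsonrpc": "2.0", "id": 2, "method": "tools/list",
})
print(tools.status_code, session_id)

Because the header is present on every request of the dialog, BIG-IP can persist the whole conversation to the same back-end pod even when the MCP server keeps per-session state.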
Demo and gitHub repository

The video below demonstrates 3 functionalities built on the BIG-IP MCP support; these are:

- Using MCP persistence.
- Getting visibility of MCP traffic by remotely logging the JSON-RPC payloads of the request and response messages using High Speed Logging.
- Controlling which tools are allowed or blocked, and logging the allow/block actions with High Speed Logging.

These functionalities are implemented with iRules available in this GitHub repository and deployed in Red Hat OpenShift using the Container Ingress Services (CIS) controller, which automates the deployment of the configuration using Kubernetes resources. The overall setup is shown next:

In the next embedded video we can see how this is deployed and used.

Conclusion and next steps

F5 BIG-IP v21 introduces support for the MCP protocol and, thanks to F5 CIS, these setups can be automated in your OpenShift cluster using the Kubernetes API. The possibilities of Agentic AI are infinite: thanks to MCP it is possible to extend LLM models to use any tool easily. The tools can be used to query or execute actions. I suggest taking a look at repositories of MCP servers to realize the endless possibilities of Agentic AI:

- https://mcpservers.org/
- https://www.pulsemcp.com/servers
- https://mcpmarket.com/server
- https://mcp.so/
- https://github.com/punkpeye/awesome-mcp-servers