Introducing the F5 Application Study Tool (AST)
In the ever-evolving world of application delivery and security, gaining actionable insights into your infrastructure and applications has become more critical than ever. The Application Study Tool (AST) is designed to help technical teams and administrators leverage the power of open-source telemetry and visualization tools to enhance their monitoring, diagnostics, and analysis workflows.
Regional Edge Resiliency Zones and Virtual Sites

Introduction:
This article is a follow-up to my earlier article, F5 Distributed Cloud: Virtual Sites – Regional Edge (RE). In that article, I talked about how to build custom topologies using Virtual Sites on our SaaS data plane, aka Regional Edges. In this article, we're going to review an update to our Regional Edge architecture. With this new update to Regional Edges, there are some best practices regarding Virtual Sites that I'd like to review.

As F5 has seen continuous growth and utilization of the Distributed Cloud platform, we've needed to expand our capacity, and we have added that capacity through many different methods over the years. One strategic approach to expanding capacity is building new POPs. However, even with new POPs, certain regions of the world have such a high density of connectivity that they will always see higher utilization than other regions. A perfect example is Ashburn, Virginia in the United States. Within the Ashburn POP, with its high density of connectivity and utilization, we could simply "throw compute at it" within common software stacks. That is not what we've decided to do; F5 has chosen to add capacity in a way that provides additional benefits, by introducing what we're calling "Resiliency Zones".

Introduction to Resiliency Zones:
What is a Resiliency Zone? A Resiliency Zone is simply another Regional Edge cluster within the same metropolitan (metro) area. These Resiliency Zones may be within the same POP, or within a common campus of POPs. The Resiliency Zones are made up of dedicated compute structures and have network hardware for the different networks that make up our Regional Edge infrastructure.

So why not follow in AWS's footsteps and call these Availability Zones? While in some cases we may split Resiliency Zones across a campus of data centers, in separate physical buildings, that may not always be the design. It is possible that the Resiliency Zones are within the same facility and split between racks. We didn't feel this level of separation provided a full Availability Zone-like infrastructure as AWS has built out. Remember, F5's services are globally significant, while most cloud providers' services are locally significant to a region and a set of Availability Zones (in AWS's case). While we strive to ensure our services are protected from catastrophic failures, F5 Distributed Cloud's global availability of services allows us to be more condensed in our data center footprint within a single region or metro.

I spoke of "additional benefits" above; let's look at those. With Resiliency Zones, we've created the ability to scale our infrastructure both horizontally and vertically within our POPs. We've also created isolated fault and operational domains. I personally believe the operational domain is most critical. Today, when we do maintenance on a Regional Edge, all traffic to that Regional Edge is rerouted to another POP for service. With Resiliency Zones, while one Regional Edge "Zone" is under maintenance, the other Regional Edge Zone(s) can handle the traffic, keeping the traffic local to the same POP. In some regions of the world, this is critical to maintaining traffic within the same region and country.

What to Expect with Resiliency Zones
Resiliency Zone Visibility:
Now that we have a little background on what Resiliency Zones are, what should you expect and look out for? You will begin to see Regional Edges within Console that have a letter associated with them.
For example, alongside "dc12-ash", the original Regional Edge, you'll see another Regional Edge, "b-dc12-ash". We will not be appending an "a" to the original Regional Edge. As I write this article, the Resiliency Zones have not been released for routing traffic; they will be soon (June 2025). You can, however, see the first Resiliency Zone today if you use all Regional Edges by default: navigate to a Performance Dashboard for a Load Balancer, look at the Origin Servers tab, and sort/filter for dc12-ash, and you'll see both dc12-ash and b-dc12-ash.

Customer Edge Tunnels:
Customer Edge (CE) sites will not terminate their tunnels onto a Resiliency Zone yet. We're working to make sure we have the right rules for tunnel terminations in different POPs. We can also give customers the option to choose whether they want tunnels in the same POP across Resiliency Zones. Once the logic and capabilities are in place, we'll allow CE tunnels to terminate on Resiliency Zone Regional Edges.

Site Selection and Virtual Sites:
A Resiliency Zone should not be chosen as the only site or virtual site available for an origin. We've built safeguards into the UI that will give you an error if you try to assign Resiliency Zone RE sites without the original RE site in the same association. For example, you cannot apply b-dc12-ash to an origin configuration without including dc12-ash. If you're unfamiliar with Virtual Sites on F5's Regional Edge data planes, please refer to the link at the top of this article. When setting up a Virtual Site, we use a site selector label. In my earlier article, I highlight the labels that are associated with each site. The ones we see used most often are Country, Region, and SiteName. If you choose to use SiteName, your Virtual Site will not automatically add the new Resiliency Zone. For example, suppose your site selector uses SiteName in dc12-ash: when b-dc12-ash comes online, it will not be matched and automatically used for additional capacity. Whereas if you used "country in USA" or "region in Ashburn", then dc12-ash and b-dc12-ash would be available to your services right away (see the sketch below).

Best Practices for Virtual Sites:
What is the best practice when it comes to Virtual Sites? I wouldn't be in tech if I didn't say "it depends". It is ultimately up to you to weigh how much control you want against the operational overhead you're willing to carry. Some people may say they don't want to manage their virtual sites every time F5 changes capacity, whether that means adding new Regional Edges in new POPs or adding Resiliency Zones to existing POPs. Others may want to control when traffic starts routing through new capacity and infrastructure to their origins; often this control is to ensure customer-controlled security (firewall rules, network security groups, geo-IP databases, etc.) is approved and allowed. As shown in the graph, the more control you want, the more operations you will maintain. What would I recommend? I would go less granular in how I set up Regional Edge Virtual Sites, because I would want as much compute capacity as close as possible to the clients of my applications. I'd also want attackers, bots, and other traffic that isn't an actual client to have security applied as close as possible to the source. Lastly, as L7 DDoS attacks continue to rise, the more points of presence I can provide and scale for L7 security, the better my chance of mitigating an attack.
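To make the two selector styles concrete, here is a minimal sketch of how the two site selector expressions might look in a Virtual Site object. The exact label keys and values are illustrative assumptions; use the per-site labels your Console actually displays (covered in the Part 1 article) before copying anything:

# Pinned to a single RE by name; a new Resiliency Zone (b-dc12-ash)
# will NOT be matched automatically:
site_selector:
  expressions:
    - "siteName in (dc12-ash)"

# Region-wide; dc12-ash and b-dc12-ash both match as soon as the
# new zone comes online:
site_selector:
  expressions:
    - "region in (Ashburn)"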
To achieve a less granular approach to virtual sites, it is critical to:
Pay attention to our maintenance notices. If we're adding IP prefixes to our allowed firewall/proxy list of IPs, we will send notice well in advance of these new prefixes becoming active.
Update your firewall's security groups, and verify with your geo-IP database provider.
Understand your client-side/downstream/VIP strategy versus your server-side/upstream/origin strategy, and what the different virtual site models might impact.
When in doubt, ask. Ask for help from your F5 account team, or open a support ticket. We're here to help.

Summary:
F5's Distributed Cloud platform needed an additional mechanism for scaling the infrastructure that offers services to its customers. To meet that need, we decided to add capacity through more Regional Edges within a common POP. This strategy offers both F5 and customer operations teams enhanced flexibility. Remember, Resiliency Zones are just another Regional Edge. I hope this article is helpful, and please let me know what you think in the comments below.
Contacting F5 Support

Contacting F5 for support can sometimes feel like an overwhelming and daunting task if you're not prepared. Opening your case with accurate diagnostics and a detailed description of your issue will help us route your case to the appropriate technicians, but where do you start? We'll help you prepare for your first (or tenth) call to support and ensure it's an easy process. Bookmark this page and stop worrying about what you'll need. A clear focus and a calm head during a critical issue are your best advantage.

What You'll Need When Contacting Support
There are several key pieces of information support will ask for regardless of which module you're opening a ticket on. It's best to have most or all of the data ready in case support asks. In our experience, being overprepared is never a bad thing.
Submit a QKView to iHealth - iHealth is support's quickest window into your system health, configuration, and any update requirements. This is the first and most important thing to have when opening a case with F5.
Additional Log Files - The QKView provides up to 5MB of logs, but grabbing additional logs often captures more information.
MD5 checksums for all uploaded files - Generating MD5 checksums ensures F5 support engineers can validate your logs, tcpdumps, and QKViews.
tcpdump - Often, if you're having issues with one or more virtual servers, a tcpdump will expedite understanding the traffic flow. This is something you can do before contacting support, and we at DevCentral always find it to be valuable information. In fact, we spend our evenings capturing traffic for fun.
Explicit details of your issue! - Having detailed information on your failure cannot be stressed enough. You should know the overall status of the system when it failed, the traffic flow, and the symptoms when contacting support. This will expedite your case to the appropriate engineer. Problem vagueness will delay your resolution.
Severity Level - F5 provides categories for you to determine your issue severity:
Sev 1: Site Down - All network traffic has ceased; critical impact to business.
Sev 2: Site At Risk - Primary unit failed; no redundant state. Site/system at risk of going down.
Sev 3: Performance Degraded - Network traffic is partially functional, causing some applications to be unreachable.
Sev 4: General Assistance - Questions regarding configurations. This should be your default severity for troubleshooting non-critical issues or general questions about product functionality.

There are several ways to contact support, and we've found success by opening a web case for all issues, even when calling in for support. This allows you to get files to F5 engineers quicker and expedites your case routing prior to calling in.
F5 Websupport - The websupport interface allows you to create and update cases quickly without having to call support. You will need to register your F5 Support account for websupport, and it's best to do that BEFORE you have an issue. The web support ticket process will require your serial number or parent system ID.
Phone - Standard, Premium, and Premium Plus support plans can call F5 for assistance. Have your serial number or parent system ID ready.
Chat - Currently available for AWS hourly billed customers (BYOL uses standard support). See our AWS wiki for more information. Azure customers are currently BYOL only (bring your own license) and can purchase regular support plans.
Gathering Logs
The default QKView will gather 5MB of recent log activity, but support may ask for additional logs to help diagnose your issue.
Log into the BIG-IP CLI.
Create a tar archive called logfiles.tar.gz in the /var/tmp directory containing all of the files in /var/log by typing the following command:

tar -czpf /var/tmp/logfiles.tar.gz /var/log/*

Create an MD5 checksum of the file so support can validate it.

Generating MD5 Checksums
MD5 checksum files give support a method to validate that your upload is free of problems. It's best to use the md5sum command directly on the BIG-IP, to avoid potential issues introduced by transferring the file off the host prior to running MD5 tools. To create an md5sum:
Log into the BIG-IP CLI.
Using the previously created logfiles.tar.gz from above as an example:

md5sum /var/tmp/logfiles.tar.gz > /var/tmp/logfiles.tar.gz.md5

Use the cat command to validate the contents of the md5 file:

cat /var/tmp/logfiles.tar.gz.md5

You should see a result similar to:

1ebe43a0b0c7800256bfe94cdd079311 /var/tmp/logfiles.tar.gz
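As a final sanity check before uploading, you can also have md5sum verify the archive against the checksum file you just created (assuming both files are still in /var/tmp):

md5sum -c /var/tmp/logfiles.tar.gz.md5
# Expected output: /var/tmp/logfiles.tar.gz: OK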
How to Split DNS with Managed Namespace on F5 Distributed Cloud (XC) Part 2 – TCP & UDP

Re-Introduction
In Part 1, we covered the deployment of the DNS workloads to our Managed Namespace and creating an HTTPS Load Balancer and Origin Pool for DNS over HTTPS. If you missed Part 1, feel free to jump over and give it a read. In Part 2, we will cover creating a TCP and UDP Load Balancer and Origin Pools for standard TCP & UDP DNS.

TCP Origin Pool
First, we need to create an origin pool. On the left menu, under Manage, Load Balancers, click Origin Pools. Let's give our origin pool a name, and add some Origin Servers, so under Origin Servers, click Add Item. In the Origin Server settings, we want to select K8s Service Name of Origin Server on given Sites as our type, and enter our service name, which will be the service name from Part 1 plus our namespace, so "servicename.namespace". For the Site, we select one of the sites we deployed the workload to, and under Select Network on the Site, we want to select vK8s Networks on the Site, then click Apply. Do this for each site we deployed to, so we have several servers in our Origin Pool. In Part 1, our Services defined the targetPort as 5553, so we set Port to 5553 on the origin. This is all we need to configure for our TCP Origin, so click Save and Exit.

TCP Load Balancer
Next, we are going to make a TCP Load Balancer, since it takes fewer steps (and is quicker) than a UDP Load Balancer (today). On the left menu under Manage, Load Balancers, select TCP Load Balancers. Let's set a name for our TCP LB and set our listen port; 53 is a reserved port on Customer Edge Sites, so we need to use something else, so let's use 5553 again. Under origin pools we set the origin that we created previously, and then we get to the important piece, which is Where to Advertise. In Part 1 we advertised to the internet, with some extra steps on how to advertise to an internal network; in this part we will advertise internally. Select Advertise Custom, then click edit configuration. Then under Custom Advertise VIP Configuration, click Add Item. We want to select the Site where we are going to advertise and the network interface on which we will advertise. Click Apply, then Apply again. We don't need to configure anything else, so click Save and Exit.

UDP Load Balancer
For UDP Load Balancers we need to jump to the Load Balancer section again, but instead of a load balancer, we are going to create a Virtual Host. These are not listed in the Distributed Applications tile, so from the top drop-down "Select Service", choose the Load Balancers tile. In the left menu under Manage, we go to Virtual Hosts instead of Load Balancers. The first thing we will configure is an Advertise Policy, so let's select that.

Advertise Policy
Let's give the policy a name, select the location we want to advertise on the Site Local Inside Network, and set the port to 5553. Save and Exit.

Endpoints
Now back to Manage, Virtual Hosts, and Endpoints so we can add an endpoint. Name the endpoint and specify it based on the screenshot below.
Endpoint Specifier: Service Selector Info
Discovery: Kubernetes
Service: Service Name
Service Name: service-name.namespace
Protocol: UDP
Port: 5553
Virtual Site or Site or Network: Site
Reference: Site Name
Network Type: Site Local Service Network
Save and Exit.

Cluster
The Cluster configuration will be simple. From Manage, Virtual Hosts, Clusters, add a Cluster. We just need a name; then, under Origin Servers / Endpoints, select the endpoint we just created. Save and Exit.

Route
The Route configuration will be simple as well. From Manage, Virtual Hosts, Routes, add a Route.
Name the route and under List of Routes click Configure, then Add Item. Leave most settings as they are, and under Actions, choose Destination List, then click Configure. Under Origin Pools and Weights, click Add Item. Under Cluster with Weight and Priority, select the cluster we created previously and leave Weight as null for this configuration. Then click Apply, Apply again, Apply again, Apply again, and finally Save and Exit. Now we can finally create a Virtual Host.

Virtual Host
Under Manage, Virtual Host, select Virtual Host, then click Add Virtual Host. There are a ton of options here, but we only care about a couple. Give the Virtual Host a name.
Proxy Type: UDP Proxy
Advertise Policy: previously created policy

Moment of Truth, Again
Now that we have our services published, we can give them a test. Since they are currently on a non-standard port, and most systems don't let us specify a port in default configurations, we need to test with dig, nslookup, etc. (a dig variant appears at the end of this article).

To test TCP with nslookup:

nslookup -port=5553 -vc google.com 192.168.125.229
Server: 192.168.125.229
Address: 192.168.125.229#5553

Non-authoritative answer:
Name: google.com
Address: 142.251.40.174

To test UDP with nslookup:

nslookup -port=5553 google.com 192.168.125.229
Server: 192.168.125.229
Address: 192.168.125.229#5553

Non-authoritative answer:
Name: google.com
Address: 142.251.40.174

iptables for Non-Standard DNS Ports
If we want to use the non-standard-port TCP/UDP DNS on Linux or macOS, we can use iptables to forward the traffic for us. There isn't a way to set this up in Windows today, but as noted in Part 1, Windows Server 2022 supports encrypted DNS over HTTPS, and it can be pushed as policy through Group Policy as well.

iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j DNAT --to XXXXXXXXXX:5553
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 -j DNAT --to XXXXXXXXXX:5553

Conclusion
I hope this helps with a common use-case we are hearing about every day, and shows how simple it is to deploy workloads into our Managed Namespaces.
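As a postscript to the test section above, dig can exercise the same listeners. This is a hedged equivalent of the nslookup checks, reusing the same illustrative VIP address and port:

# TCP (+tcp forces dig to use TCP) on the non-standard port 5553:
dig +tcp -p 5553 @192.168.125.229 google.com

# UDP (dig's default transport):
dig -p 5553 @192.168.125.229 google.com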
Application Study Tool: Make Grafana Listen on HTTPS

The Application Study Tool (AST) from F5 is a powerful utility for monitoring and observing your BIG-IP ecosystem. Its primary interface is the Grafana dashboard, which provides valuable insights into the performance of your BIG-IPs, the applications delivered, traffic patterns, and potential threats. The default installation instructions are quick and easy to follow, enabling you to achieve observability quickly. However, the Grafana dashboard, by default, can only be accessed via HTTP (unencrypted), not HTTPS. This means that any data sent to the dashboard, including passwords, can potentially be intercepted by anyone sniffing traffic between you and the AST host. (Note that connections between AST and BIG-IPs are always encrypted over HTTPS, so your BIG-IP credentials are secure.) This guide will walk you through configuring Grafana to serve HTTPS, thereby encrypting traffic between your web browser and the AST Grafana dashboard.

Apply or Generate the Certificate
Before encrypting traffic, you'll need a certificate and key. This can be either a CA-signed certificate or a self-signed certificate. Both encrypt traffic in transit, but only CA-signed certificates establish the authenticity of the server endpoint (in this case, Grafana). Many organizations opt for self-signed certificates for internal-only connections where man-in-the-middle attacks are unlikely. However, CA-signed certificates remain the more secure option.

Using a CA-Signed Certificate
If you have a CA-signed certificate available, copy the cert and key files to the ./services/grafana/ directory within the AST installation directory. Make note of the certificate and key file names. (This guide was tested with .crt and .pem extensions, but Grafana also supports other formats.) If you need to generate a CA-signed certificate, you can follow the instructions on the Grafana website for creating a CA certificate using Let's Encrypt: https://grafana.com/docs/grafana/latest/setup-grafana/set-up-https/#obtain-a-signed-certificate-from-letsencrypt

Using a Self-Signed Certificate
If you prefer to use a self-signed certificate, you can generate one using the following commands:

$ sudo openssl genrsa -out services/grafana/grafana.key 2048
$ sudo openssl req -new -key services/grafana/grafana.key -out services/grafana/grafana.csr
(Answer the questions about location, organization, name, email address, etc., as prompted.)
$ sudo openssl x509 -req -days 365 -in services/grafana/grafana.csr -signkey services/grafana/grafana.key -out services/grafana/grafana.crt

Set the correct file permissions after generating the files:

$ sudo chmod 440 services/grafana/grafana.key services/grafana/grafana.crt

Additional documentation on this process is available on Grafana's website: https://grafana.com/docs/grafana/latest/setup-grafana/set-up-https/#generate-a-self-signed-certificate

Configure Grafana to Listen on HTTPS
The next step is to create a configuration file for Grafana, named grafana.ini. Create this file under the ./services/grafana directory (e.g., ~/application-study-tool/services/grafana/grafana.ini). The following is an example configuration; update the values to fit your environment. If your key and certificate files have names other than grafana.key and grafana.crt, modify the cert_key and cert_file paths accordingly. Note that /etc/grafana/ in the example below is the path within the container. This example uses port 3000.
You can configure Grafana to listen on port 443 (the default HTTPS port), but elevated permissions are required in most environments.

[server]
http_addr =
http_port = 3000
domain = mysite.com
root_url = https://subdomain.mysite.com:3000
cert_key = /etc/grafana/grafana.key
cert_file = /etc/grafana/grafana.crt
enforce_domain = False
protocol = https

Find more details on each variable here: https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#server

Configure AST to Point to grafana.ini
To enable the Application Study Tool to recognize the new grafana.ini file, you need to update the Docker Compose configuration. Locate the "grafana" service section in docker-compose.yaml. Comment out the existing provisioning mount line:

# - ./services/grafana/provisioning/:/etc/grafana/provisioning

Then add the following line to mount the updated directory:

- ./services/grafana/:/etc/grafana/

Your updated Grafana service configuration should look like this:

grafana:
  image: grafana/grafana:11.2.0
  container_name: grafana
  restart: unless-stopped
  ports:
    - 3000:3000
  volumes:
    - grafana:/var/lib/grafana
    # - ./services/grafana/provisioning/:/etc/grafana/provisioning
    - ./services/grafana/:/etc/grafana/
  env_file: ".env"
  networks:
    - 7lc_network

Restart AST and Access Grafana via HTTPS
Restart Docker Compose with the following commands:

$ sudo docker compose down
$ sudo docker compose up

That's it! Once restarted, the Grafana dashboard will be available over HTTPS. Browse to https://localhost:3000/ (be sure to include https) to try it out. If you used a self-signed certificate, your browser may display a warning message such as "This site is unsafe" or "This Connection Is Not Private." This is expected behavior for self-signed certificates. Now, all web traffic to your Grafana dashboard will be securely encrypted.
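If you'd rather verify from the command line than a browser, a quick hedged check with curl (the -k flag skips certificate verification, which you'll need for a self-signed cert):

$ curl -kI https://localhost:3000/login
# A 200 response confirms Grafana is now serving over TLS.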
F5 Distributed Cloud - Automatic TLS Certificate Generation - Non-Delegated DNS Zone

F5 Distributed Cloud supports automatic TLS certificate generation and renewal using Let's Encrypt for its HTTP load balancers. This is a quick step-by-step guide using the non-delegated domains option.

1. Configuring HTTP Load Balancer
1.1. Initial Configuration
On the HTTP Load Balancers menu, add an HTTP Load Balancer and configure the desired domain for the application. In this example the domain is demo.f5pslab.com. Select the HTTPS with Automatic Certificate option for the Type of Load Balancer, as shown. Complete the remaining configuration, such as Origin Pool, WAF policies, etc., and click Save and Exit.

1.2. Obtaining Auto Certificate DNS Information
After the HTTP Load Balancer is created, the GUI will display blank information in the Certificate Status column. Click the three-dot menu, then Manage Configuration, and browse to the bottom of the HTTP Load Balancer object configuration, to the Auto Cert Information section. This section displays the DNS record of type CNAME that needs to be created on the customer's DNS, as well as the expected value for the record. In the case above, a DNS record named _acme-challenge.demo.f5pslab.com should be created with a CNAME value of debcb0c54cc3410784c8d284400b84d2.autocerts.ves.volterra.io. Observe that the DNS record is formed by _acme-challenge plus the domain name of the application. Let's Encrypt will query this record in order to verify ownership of the domain. You can find additional information about this process from Let's Encrypt.

2. Configuring DNS
2.1. Configuring CNAME record for the Let's Encrypt ACME challenge
Now it's time to modify our DNS configuration by creating a CNAME record for the target zone, then verifying the correct DNS resolution (command-line checks appear at the end of section 3). First you can observe the CNAME resolution that points to the F5 Distributed Cloud domain. In the screenshot below there is also a TXT record resolution from F5 Distributed Cloud. This TXT record contains the Let's Encrypt ACME challenge response, and Let's Encrypt follows the CNAME to obtain it. Once Let's Encrypt confirms the challenge response, the TLS certificate is issued.

2.2. Configuring DNS CNAME for the Virtual Host
This step is not related to the automatic certificate generation, but as the next step for our configuration we need to configure the application domain with a CNAME pointing to the HTTP Load Balancer in F5 Distributed Cloud. Browse to Manage Configuration in the HTTP Load Balancer and obtain the Host Name for the Load Balancer on the Metadata tab, then adjust the DNS configuration in our DNS provider accordingly.

3. Validating the New Certificate
3.1. Verifying the certificate in the HTTP Load Balancer configuration
Once the TLS certificate is issued, you will notice the Certificate Status column showing Valid. Click the three-dot menu, then Manage Configuration, and browse to the bottom of the HTTP Load Balancer object configuration, to the Auto Cert Information section. The auto-generated TLS certificate details are available in this section. The TLS certificate is valid for 90 days, and it will be renewed automatically by F5 Distributed Cloud.

3.2. Verifying the application in the browser
Finally, access the application in the browser and verify the auto-generated TLS certificate issued by F5 Distributed Cloud.
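The same validation can be done from the command line. A hedged sketch with dig and openssl, reusing the example names from this article:

# Confirm the CNAME delegation of the ACME challenge record:
dig +short CNAME _acme-challenge.demo.f5pslab.com
# expected: debcb0c54cc3410784c8d284400b84d2.autocerts.ves.volterra.io.

# Confirm F5 Distributed Cloud is answering the challenge with a TXT record:
dig +short TXT debcb0c54cc3410784c8d284400b84d2.autocerts.ves.volterra.io

# Once issued, inspect the certificate presented by the load balancer:
openssl s_client -connect demo.f5pslab.com:443 -servername demo.f5pslab.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates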
4. Conclusion
This article demonstrated how quick and easy it is to set up F5 Distributed Cloud to generate your TLS certificates automatically using a non-delegated DNS zone.
Bolster AI Queries - Access to NetApp-Instantiated Corporate Databases via MCP and Distributed Cloud

Since its release in November 2024, Model Context Protocol (MCP), with its ability to enable richer AI outcomes by considering additional data sources, has been a red-hot topic. Unlike Retrieval-Augmented Generation (RAG) solutions, which provide suggested additional datapoints for an AI large language model (LLM) to consider, MCP takes a complementary approach. MCP instead opens up various tools for up-to-date data gathering (think a weather forecast tool, or a real-time search engine query tool) to be considered by the LLM before generating answers.

An interesting use case for MCP is opening up tool access into corporate databases, where tables covering product inventories, pricing details, and suppliers might exist. This allows, as an example, an internally accessible LLM to provide employees with much richer, far more tactical answers. In this article, as a simple example, we will demonstrate how a sample relational database in one city, with tables existing on NetApp storage, can be securely accessed by the HTTP-based transactions of MCP through F5 Distributed Cloud (XC). In this case, there can easily be geographic isolation between the AI tools provided by AI teams and long-established, distributed corporate databases to be leveraged. In this example, the LLMs are controlled from a San Jose office, and the corporate relational databases are elsewhere. In our scenario, we will use PostgreSQL, frequently just called Postgres, in a Seattle-area office, instantiated on a NetApp ONTAP Select appliance. As we discovered, the F5 distributed load balancer, part of the AppConnect offering in XC, ensures that only our AI LLM and its MCP client can reach across the cloud service to interact with the potentially sensitive corporate data housed in Postgres.

MCP Clients and Servers: Co-Located vs. Networked
With the release of the original Anthropic MCP specification in November of 2024, a number of MCP clients entered the market, generating great interest. Claude Desktop, for Windows and Mac, is a popular option. It connects Anthropic cloud-based LLMs to your MCP server-provided tools; the included MCP client component is the glue for this enhancement of LLM responses. Various integrated development environment (IDE) MCP clients also hit the market, solutions like Cursor or Windsurf. Due to its simple graphical user interface and wide embrace, Claude Desktop was the platform used in this exercise.

The MCP protocol fundamentals are discussed in numerous online articles. One quick start can be found in the solution's standard documentation, found here as a sixty-second briefing, with further documentation that goes into the details. This article is about networking in MCP. It focuses on connecting MCP clients and servers, both securely and in a highly efficient environment. The first iteration of MCP in November 2024 specified two methods for communications: stdio (standard input/output) and SSE (Server-Sent Events). The first is still the only natively supported approach in Claude Desktop's community edition, and it expects to find the MCP server component, which offers access to rich tools, somewhere within the same host as Claude Desktop itself. This is a quick and easy way to get going with lab projects, but a truly networked approach, one where an MCP server can support hundreds of MCP clients, is more interesting for production environments (a minimal sketch of the two transports follows).
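To ground the stdio-versus-networked distinction, here is a minimal MCP server sketch assuming the official Python SDK (the mcp package) and its FastMCP helper. The tool is a toy stand-in, not the Postgres server used later in this article; the transport argument is the only thing that changes between a co-located stdio deployment and a networked SSE one:

from mcp.server.fastmcp import FastMCP

# Name the server as it will appear to connecting MCP clients.
mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    # "stdio" serves a co-located client (e.g., Claude Desktop community
    # edition); "sse" listens over HTTP so remote MCP clients can connect.
    mcp.run(transport="sse")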
MCP Remote Access - Server-Sent Events (SSE) and Streamable HTTP
The November MCP release specified a server-sent event approach to binding MCP clients and servers, namely two separate HTTP/S endpoints. The first endpoint, set up in the MCP client-side configuration, receives a GET from the client. The response gives another endpoint for all future client transactions to use, in the form of HTTP POSTs. The interesting aspect is that the first connection is a long-lasting SSE connection after the initial GET. In other words, the server will hold it open, and only when the server arbitrarily decides it has data to update the client with will new communications ("events") occur on this socket, perhaps updating the list of tools available or updating tool usage instructions. The client, on the other hand, will use HTTP POSTs interactively at its discretion. This is conventional HTTP, in that the client remains in charge of transactions, for instance, providing data from the LLM and requesting a tool act upon that data.

An issue with the SSE ("first") connection is that network solutions often do not support extremely long-lasting TCP connections that may not carry frequent data events, and should the connection be reset, the SSE channel is not easily resumed, as connection setups are by design in the direction of client to server. The following protocol trace demonstrates the server-to-client flow of traffic that SSE sees over time. Notice the steady state is simply events, asynchronously generated by the MCP server (double-click to enlarge image).

For a number of reasons, the March update to MCP introduced "Streamable HTTP", which provides more flexibility in networking solutions. For one thing, unlike the original approach, a single API endpoint may now be used, rather than separate endpoints for the initial GET and subsequent POSTs. The network traffic may be simplified to the point where an MCP server-facilitated tool, perhaps something like a simple "calculator" function, sees the MCP server provide a response and close the single connection. Should the MCP server reveal more complex tools, perhaps requiring many seconds to complete a request, a statefulness effect is achieved through a server-provided message identifier value. The MCP client may check in on progress at its discretion, or after a network impairment such as a transient home-office Wi-Fi blip. The rationale for not staying with a hard, across-the-board interpretation of the original November 2024 SSE specification includes the drain on servers supporting potentially hundreds of MCP clients; each persistent long-term connection has a computational cost. Another factor is the memory cost of networking equipment tracking the state of numerous, often dormant, TCP connections.

MCP Support - Distributed Cloud Provides AI Access to Remote Databases
Databases are among the most critical of enterprise resources; just imagine losing access to customer lists or supplier contacts, let alone the thought of non-authorized tools gaining access to these data goldmines. To enable AI LLMs and MCP clients in one site to safely consider the database table contents in another site, towards the goal of answering employee queries, we have harnessed F5 Distributed Cloud. Specifically, using HTTP/HTTPS distributed load balancers, we can project access to a database MCP server across a secure network to an MCP client supporting Claude Desktop. At the time of this writing, the Claude Desktop community edition is limited to local MCP servers, through stdio.
As such, a stdio-to-SSE proxy based in Python was used to allow networked MCP traffic. An MCP server, found here, was utilized to leverage a Postgres database, exposing remote MCP-enabled tools to magnify the efficacy of the AI solution. The following images show some sample tables of an enterprise database located in a Seattle office, including the logically named product and supplier tables, which are linked and provide sample data enabling day-to-day operations of a fictitious boutique apparel distributor. Due to the criticality of databases, many are instantiated not on direct-attached storage but rather on an enterprise-grade NAS or SAN solution. In this example, the Postgres database tables are stored on NetApp ONTAP appliance volumes in the Seattle branch. In our simple example, the product table contains merchandise, including selling price, inventory, and supplier ID values. The supplier ID values are mapped to supplier names in another database table.

Allowing AI to Remotely and Securely Access our Enterprise Data
Our topology for this setup looked like the following. In our case, NFS was used as the protocol to leverage ONTAP. The HTTP load balancer was easy to set up in the F5 Distributed Cloud console. The origin pool in the above topology is the Seattle-area (Redmond, Washington) office, on a Ubuntu server, which used TCP port 8080 locally. The ability to isolate the MCP server/database and the San Jose Claude Desktop in this approach relies upon a customer edge (CE) node being implemented in both the Redmond and San Jose offices; CEs in Distributed Cloud are frequently just called "sites" for simplicity. The HTTP distributed load balancer that publishes the service availability out of the inside interface of the San Jose site (CE node) is shown below. Many domain names can be associated with the service; in this case, Claude Desktop's MCP client will reach out to "ubuntu-mcn-sg-1" on TCP port 8080. XC could easily share this service to different places as the company chooses, including the entire DNS/Internet. But in this case, we only want the service used by our Claude Desktop; as such, we will only make a specific San Jose subnet, reachable from the inside CE interface, a consumption point for the MCP service.

Only one change to the default XC load balancer behavior was made. The MCP server used was built to the original SSE specification. As such, there can be longer idle periods between MCP server event messages intended for transmission to the MCP client. As seen below, we have adjusted the XC load balancer to support idle periods of up to 90 seconds on the SSE connection before shutting down connections. As mentioned earlier, streamable HTTP is becoming the networking approach of the future; stateless and stateful approaches to MCP are coming online which can avoid the need for long-lasting connections.

Illustrated Examples of MCP-enabled AI Securely Leveraging NetApp Enterprise Databases
Using the chat interface of Claude Desktop, we configure the MCP setup in "Settings". Note that using the stdio-to-SSE proxy, we simply need to provide the domain name of the Seattle-area MCP server, the local TCP port to use, and the API endpoint (in this case "/sse"). At this point we are free to use the solution: Claude Desktop, augmented with the toolsets of the discovered MCP server.
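For readers wondering what that client-side setup looks like on disk, the following is a hypothetical claude_desktop_config.json sketch. It assumes the community mcp-proxy package is used to bridge Claude Desktop's stdio transport to the remote SSE endpoint; the server name is invented, and the hostname and port simply reuse this article's examples:

{
  "mcpServers": {
    "postgres-redmond": {
      "command": "uvx",
      "args": ["mcp-proxy", "http://ubuntu-mcn-sg-1:8080/sse"]
    }
  }
}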
In this case, support for querying Postgres database tables becomes realized, and employees now have an AI "speech-to-SQL" experience that leverages their enterprise data. It was not necessary to provide Claude Desktop with hints that the answers would not be found within its trained data; it knew enough to utilize the MCP protocol and then how to act upon the discovered MCP tools. If we examine the packet trace, the SSE channel carries the tools that Claude utilizes above. The first image below shows the tools described in raw ASCII, highlighted in yellow. The ensuing image shows that, when decoded in a JSON viewer, there are six tools listed, with some fields of interest highlighted.

Observations on Networked MCP
MCP is a widely discussed, hot-button topic; new articles are published online weekly. The following simply serves as a high-level overview. When using a tool like Claude Desktop, the MCP client portion is pre-packaged, the result is to supplement AI, and one only needs to be concerned with providing the MCP server component. Sample MCP servers are widely available; two broad and interesting repositories can be found both here and here. Three potential tasks, in no particular order, of an MCP server are:
Providing prompt templates that a client may invoke, allowing users and LLMs to collaborate efficiently. Potential prompts are provided with pre-filled default values.
Allowing access to resources; think of static files or other non-dynamic content.
The most discussed: tool access and tool usage instructions. This can allow an AI to be complemented with up-to-date data (today's high temperatures in Frankfurt) or access to proprietary data such as enterprise sales reports, as just two simple examples.

Let us examine one single annotated MCP tool invocation from the Claude Desktop MCP client (double-click to enlarge). We observe in this tool usage that the MCP client POSTs a request to the /message endpoint and utilizes a "sessionId" value assigned by the MCP server upon the original MCP connection setup (to the /sse endpoint). The tool command is carried as payload ("method":"tools/call") and, in this particular example, references the "query_postgres1" name and a list of arguments to shape the tool's usage. The MCP server, after interaction with the Postgres database, has returned, in JSON format, data taken directly from the sales database, with inventory and pricing fields.

The above packet trace, seen through the open-source Wireshark utility, leverages libpcap files to analyze raw packet traffic for a richer understanding of protocols. Since F5 Distributed Cloud is a full in-line proxy, this is advantageous: the solution itself offers a wealth of capture points for analysis. Every customer edge (CE) node has the built-in ability to generate libpcap files using the built-in tcpdump utility, something NETOPS likely uses regularly. The following shows the simple workflow for generating packet captures. In this case, we are simply trying to capture health-check traffic in the Redmond, WA office that ensures our MCP server is up and responsive. Our health checks use TCP port 80, and we have asked for 20 packets over a maximum of 120 seconds to be captured. All is healthy, as the checks are soliciting 200 OK messages from the server (double-click to enlarge). General, "at a glance" monitoring of MCP traffic is also available in the HTTPS load balancer dashboard.
Here we see a traffic summary over time, including rich details showing the HTTP verb MCP used (GET or POST), the response code, and the valuable latency numbers for each transaction.

Summary of MCP Findings
In the case of Claude Desktop, with its integrated MCP client, very general inquiries led it to automatically rely upon the available MCP tools to generate meaningful answers. Without mentioning MCP in the AI chatbot query, the solution made use of the discovered tools and database tables that were harnessed to answer product questions. The solution was also able to combine knowledge from separate tables, through items like supplier ID columns, to answer user requests correctly with data spread across tables. The MCP server that was used supports both Postgres and MySQL; Postgres was investigated strictly based upon its larger installed base within the modern enterprise. MCP servers also exist for semi-structured and unstructured databases, for example MongoDB.

To create a NetApp-instantiated Postgres deployment, the general steps followed were (a consolidated shell sketch appears at the end of this section):
Install Postgres on Ubuntu. Instructions at https://www.postgresql.org/download/linux/ubuntu/, leading to: sudo apt -y install postgresql
Database management may be easier using the pgAdmin GUI, which can be downloaded at https://www.pgadmin.org/download/pgadmin-4-apt/
Provision the NFS export on the ONTAP appliance; this is where the database contents will live and be secured.
Ensure the NFS mount point is persistent across reboots, via /etc/fstab: nfs_server_ip:/nfs_share_path /mnt/nfs_postgres nfs defaults 0 0
After stopping the initial automatic start of Postgres, change the data directory (/etc/postgresql/<version>/main/postgresql.conf) to use a data_directory of '/mnt/nfs_postgres' (or whatever the mount path is).
Set the permissions of the postgres system user to allow data access, and restart Postgres.

Although this article demonstrates F5 Distributed Cloud HTTPS load balancers for secure, remote MCP communications, one could also use the F5 Distributed Cloud Network Connect module to allow secure layer 3 connectivity between MCP clients and servers. In this approach, the MCP client in San Jose could have reached the distant Seattle MCP server and Postgres solution over the shared cloud global fabric, with the routing table updates set up automatically. One added benefit: for solutions implemented not with streamable HTTP but with the original SSE specification, as in this article, the long-lasting SSE connection would stay established with no timeout adjustment; the solution operates at layer 3 and is not concerned with lengthy layer 4 connections.
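Returning to the deployment list above, here is a consolidated, hedged shell sketch of those steps. The paths and NFS server address come from the article's outline, while the rsync-based data move is an illustrative assumption; adapt everything to your environment:

# Mount the ONTAP NFS export persistently (server/export names illustrative):
sudo mkdir -p /mnt/nfs_postgres
echo "nfs_server_ip:/nfs_share_path /mnt/nfs_postgres nfs defaults 0 0" | sudo tee -a /etc/fstab
sudo mount -a

# Stop Postgres, copy the existing data directory onto the NFS volume,
# and give the postgres system user ownership:
sudo systemctl stop postgresql
sudo rsync -a /var/lib/postgresql/ /mnt/nfs_postgres/
sudo chown -R postgres:postgres /mnt/nfs_postgres

# Point data_directory in /etc/postgresql/<version>/main/postgresql.conf
# at the mount (per the list above), then restart:
sudo systemctl start postgresql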
Boosting BIG-IP AFM Efficiency with BIG-IQ: Technical Use Cases and Integration Guide

Introduction
Security teams depend on BIG-IP's Advanced Firewall Manager (AFM) to deliver robust DDoS protection and granular access control, but managing these protections at scale requires centralized intelligence and streamlined workflows. This is where BIG-IQ comes in: the platform that transforms how BIG-IP AFM is managed across the enterprise. Whether you're looking to centralize firewall rule management, gain visibility into real-time security metrics, or automate backup and restoration of device configurations, BIG-IQ offers the tools to operationalize and optimize BIG-IP AFM deployments. This article shows how to connect BIG-IQ with BIG-IP AFM, and covers system setup, best practices, and the real benefits of using a centralized security management model.

Understanding Components
In this section we go through the main components: BIG-IP AFM, BIG-IQ CM, and BIG-IQ DCD.

BIG-IP AFM
BIG-IP AFM is a full-proxy firewall module designed to protect applications and infrastructure against DDoS attacks and malicious traffic. It provides:
Stateful firewalling
IP intelligence and geolocation enforcement
DoS protection

BIG-IQ CM
This is F5's centralized management and analytics platform that supports:
Centralized device and policy management
Automated backups and version control
Real-time event logging and dashboards

BIG-IQ DCD
This component is responsible for gathering logs from the deployments, i.e., centralized data collection. It handles:
Data storage and processing
Operation in a cluster

BIG-IQ to Transform the BIG-IP AFM Experience
BIG-IQ enhances the way network and security teams work with BIG-IP AFM by providing:
Centralized Policy Management: Define, deploy, and monitor firewall policies from a single interface.
Analytics and Logging: View real-time DDoS and ACL event dashboards.
Automated Backups: Schedule regular configuration backups and quickly restore devices.
Operational Consistency: Prevent misconfiguration with version control and role-based access.

BIG-IQ Deployments
BIG-IQ fits different deployment models, ranging from a simple version without any DCDs (just BIG-IQ CM) up to BIG-IQ CM and DCDs with a separate internal network between cluster members.
A simple version with only BIG-IQ CM to manage configurations, perform device backups, and view stats and analytics without Data Collection Devices.
A version where we need Data Collection Devices. In this version, we have BIG-IQ CM, BIG-IQ DCDs, and a remote storage server for data and backup archives.
A more advanced scenario, where we have separate cluster networks connecting BIG-IQ CM and BIG-IQ DCDs to achieve further segmentation between network flows.

Integration Walkthrough
Installing BIG-IQ Centralized Manager
Deploy the BIG-IQ Virtual Machine (can also be completed via hardware/VE): Use your preferred hypervisor (for example, VMware or Hyper-V) to deploy the BIG-IQ OVA or ISO image. Allocate resources as per the BIG-IQ system requirements.
Initial Configuration and Licensing: Access the BIG-IQ GUI via a web browser using the management IP. Log in with the default credentials and change the password upon first login. Configure network settings, DNS, and NTP. Enter your license key and activate it online, or manually if required.
High Availability (Optional): For an HA setup, deploy a second BIG-IQ instance. Navigate to System > High Availability and follow the prompts to pair the devices.

Setting Up Data Collection Devices (DCDs)
Deploy DCD Virtual Machines: Similar to BIG-IQ, deploy the DCD OVA or ISO images on your hypervisor.
Ensure each DCD has network connectivity to the BIG-IQ manager.
Initial Configuration: Access each DCD via SSH or console. Configure network settings, DNS, and NTP.
Add DCDs to BIG-IQ: In the BIG-IQ GUI, navigate to System > BIG-IQ Data Collection > BIG-IQ Data Collection Devices. Click Add, enter the DCD's IP address, and provide administrative credentials. Repeat for each DCD you wish to add.
Cluster Configuration: Once all DCDs are added, navigate to System > BIG-IQ Data Collection > BIG-IQ Data Collection Cluster. Configure the cluster settings, including replication factors, based on your data retention and performance requirements.

Configuring Data Collection and Retention Policies
Statistics Collection: Navigate to Monitoring > Statistics Collection. Enable statistics collection for the desired BIG-IP devices and modules.
Retention Policies: In the BIG-IQ GUI, go to System > BIG-IQ Data Collection > BIG-IQ Data Collection Cluster. Under Configuration, set data retention periods for different data types (for example, events, alerts, statistics).
Snapshot Schedules: To create snapshots of your DCD data, navigate to System > BIG-IQ Data Collection > BIG-IQ Data Collection Cluster. Under Configuration, select External Storage & Snapshots and define snapshot schedules based on your organization's requirements.

Integrating BIG-IP AFM and BIG-IQ
Discover BIG-IP Devices: Navigate to Devices > BIG-IP Devices. Click Add Device, enter the management IP and credentials, and select the services to manage.
Import and Manage Configurations: After discovery, import configurations and manage services like LTM, ASM, AFM, etc., directly from BIG-IQ.
Monitoring and Alerts: Use the Monitoring section to view real-time statistics, logs, and alerts from managed BIG-IP devices.

Managing BIG-IP AFM from BIG-IQ
In the previous section, we integrated our F5 BIG-IP AFM with BIG-IQ Centralized Manager and enabled logging on the Data Collection Device. Once we integrate and import the configurations, we can see the configurations and dashboards in BIG-IQ CM. The enabled features for BIG-IP AFM are:
Network Firewall
DoS/DDoS protection
IP reputation
Scrubbing center

Enable Logging / Statistics for BIG-IP
From the BIG-IQ dashboard, go to Devices and select the BIG-IP device. Click Enabled / Disabled under the Statistics Collection column, then enable statistics collection and analytics.

Deploying Configurations
BIG-IQ provides a centralized dashboard for both configuring BIG-IP and monitoring. From the Configurations tab, create the new configuration object you need, whether a virtual server, network policy, network configuration, or something else. Once the virtual server is created, we add the virtual server context to attach specific policies.

Dashboard and Monitoring
Head to the Dashboard tab, where we can observe AFM statistics at two main levels: the DDoS protection dashboard and the AFM rule-specific dashboard.
In the DDoS dashboard we can observe different types of information:
Attacks, with filtering across a wide range of attributes.
Network DoS, with filtering on different flow elements.
The ability to add events to the same graph, to highlight system events during specific traffic conditions.
BIG-IQ scheduled reports can also provide daily, weekly, or custom-period reports that are beneficial to both operations and management.
In the AFM-specific dashboard, we can observe:
AFM firewall rule hit counts.
The ability to include IP reputation.
The ability to view event logs in a centralized location.
Conclusion
Integrating BIG-IQ with BIG-IP AFM empowers network security teams with a scalable, centralized approach to firewall management. From simplifying policy deployment and automating backups to delivering deep visibility through logging and analytics, BIG-IQ transforms how AFM is operationalized. For teams managing complex, distributed environments, this integration is not just helpful, it's essential.

Related Content
BIG-IQ Planning and deployment
BIG-IQ Sizing
BIG-IQ Labs
Access Troubleshooting: BIG-IP APM OIDC integration

Introduction
Troubleshooting Access use cases can be challenging due to the interconnected components used to achieve them. Even a simple Active Directory authentication flow can run into the following challenges:
DNS resolution of the configured Domain Controller (DC).
Reachability between F5 and the DC.
Communication ports used.
Domain account privileges.
Looking at non-working Active Directory (AD) authentication as a whole is a complex task, yet checking each component to verify its functionality is much easier, and produces output that guides further troubleshooting actions.

Implementation and Troubleshooting
We discussed the implementation of OpenID Connect over here. Let's discuss how we can troubleshoot issues in an OIDC implementation. Here is a summary of the main points to check for each role:
OAuth Authorization Server: DNS resolution for the authentication destination; routing setup to the authentication system; authentication configurations and settings; scope settings; token signing and settings.
OAuth Client: DNS resolution for the authorization server; routing setup; token settings; authorization attributes and parameters.
OAuth Resource Server: token settings; scope settings.
Looking at the main points, you can see the common areas we need to check while troubleshooting OAuth/OIDC solutions. Below is the troubleshooting approach we follow:
Check the logs. APM logging provides a comprehensive set of logs; the main ones to check are apm, ltm, and tmm.
Check DNS resolution and the DNS resolver settings.
Check the routing setup.
Check the authentication method settings.
Check the OAuth settings and parameters.

Check the Logs
The logs are your true friends when it comes to troubleshooting. We start by creating a debug logging profile under Overview > Event Logs > Settings, then select the target Access Policy to apply the debug profile to.

Case 1: Connection reset after authentication
In this case, the connection sequence is:
The user accesses through F5, acting as Client + RS.
The user is redirected to the OAuth provider for authentication.
The user is redirected back to F5, but the connection resets at this point.
Troubleshooting steps: Check the logs by clicking the session ID from Access > Overview. From the logs below, we can see the logon was successful, but somehow the authorization code wasn't detected. One main reason would be mismatched settings between the Authorization Server and Client configurations. In our setup, I'm using a provider flow type of Hybrid with format code-idtoken.

Local Time: 2024-06-11 06:47:48
Log Message: /Common/oidc_google_t1.app/oidc_google_t1:Common:204adb19: Session variable 'session.logon.last.result' set to '1'
Partition: Common

Local Time: 2024-06-11 06:47:49
Log Message: /Common/oidc_google_t1.app/oidc_google_t1:Common:204adb19: Authorization code not found.
Partition: Common

Checking back on the configuration to validate the needed flow type: adjust the flow type in the provider settings to Authorization Code instead of Hybrid.

Case 2: Expired JWT Keys
In this case, the connection sequence is:
The user accesses through F5, acting as Client + RS.
The user is redirected to the OAuth provider for authentication.
The user is redirected back to F5 with Access Denied.
Troubleshooting steps: Check the logs by clicking the session ID from Access > Overview. From the logs below, we can see the logon was successful, but the received JWT token could not be matched against the configured keys. One main reason can be the need to rediscover the JWT keys.
Local Time: 2024-06-11 06:51:06
Log Message: /Common/oidc_google_t1.app/oidc_google_t1:Common:848f0568: Session variable 'session.oauth.client.last.errMsg' set to 'None of the configured JWK keys match the received JWT token, JWT Header: eyJhbGciOiJSUzI1NiIsImtpZCI6ImMzYWJlNDEzYjIyNjhhZTk3NjQ1OGM4MmMxNTE3OTU0N2U5NzUyN2UiLCJ0eXAiOiJKV1QifQ'
Partition: Common

The action to take is to rediscover the JWT keys if they are automatic, or add the new key manually:
Head to Access > Federation : OAuth Client / Resource Server : Provider.
Select the created provider.
Click Discover to fetch new keys from the provider.
Save and apply the new policy settings.

Case 3: OAuth Client DNS resolver failure
In this case, the connection sequence is:
The user accesses through F5, acting as Client + RS.
The user is redirected to the OAuth provider for authentication.
The user is redirected back to F5 with Access Denied.
Troubleshooting steps: Check the logs by clicking the session ID from Access > Overview. Another reason for this behavior can be a DNS failure when reaching out to the OAuth provider to validate the JWT keys.

Local Time: 2024-06-12 19:36:12
Log Message: /Common/oidc_google_t1.app/oidc_google_t1:Common:fb5d96bc: Session variable 'session.oauth.client.last.errMsg' set to 'HTTP error 503, DNS lookup failed'
Partition: Common

Check the DNS resolver under Network > DNS Resolvers : DNS Resolver List, and validate that the resolver configuration is correct. Also check the route to the DNS server under Network > Routes. Note: the DNS resolver uses TMM traffic routes, not the management-plane system routing.

Case 4: Token Mismatch
In this case, the connection sequence is:
The user accesses through F5, acting as Client + RS.
The user is redirected to the OAuth provider for authentication.
The user is redirected back to F5 with Access Denied.
Troubleshooting steps: Check the logs by clicking the session ID from Access > Overview. The logs show that a Bearer token is received, yet no matching token type is enabled on the client / resource server configuration.

Local Time: 2024-06-21 07:25:12
Log Message: /Common/f5_local_client_rs.app/f5_local_client_rs:Common:c224c941: Session variable 'session.oauth.client./Common/f5_local_client_rs.app/f5_local_client_rs_oauthServer_f5_local_provider.token_type' set to 'Bearer'
Partition: Common

Local Time: 2024-06-21 07:25:12
Log Message: /Common/f5_local_client_rs.app/f5_local_client_rs:Common:c224c941: Session variable 'session.oauth.scope./Common/f5_local_client_rs.app/f5_local_client_rs_oauthServer_f5_local_provider.errMsg' set to 'Token is not active'
Partition: Common

We need to make sure the client and resource server have JWT tokens enabled instead of opaque tokens, and that the proper JWT token is selected.

Case 5: Audience mismatch
In this case, the connection sequence is:
The user accesses through F5, acting as Client + RS.
Case 6: Scope mismatch

In this case the connection sequence is as follows:

User accesses the application through F5 acting as Client + RS.
User receives an authorization error indicating a wrong scope.

Troubleshooting steps:

Check the logs by clicking the session ID from Access > Overview. The scope name is mentioned in the logs; in this case I named it "wrongscope". You will also see that the scope includes the openid string, because openid is enabled. Change the scope to the one configured at the provider side.

Local Time: 2024-06-24 06:20:28
Log Message: /Common/oidc_google_t1.app/oidc_google_t1:Common:edacbe31:/Common/oidc_google_t1.app/oidc_google_t1_act_oauth_client_0_ag: OAuth: Request parameter 'scope=openid wrongscope'
Partition: Common

Case 7: Incorrect JWT signature

In this case the connection sequence is as follows:

User accesses the application through F5 acting as Client + RS.
User is redirected to the OAuth provider for authentication.
User is redirected back to F5 with Access denied.

Troubleshooting steps:

Check the logs by clicking the session ID from Access > Overview. The logs show the received token is reported as not active:

Local Time: 2024-06-21 07:25:12
Log Message: /Common/f5_local_client_rs.app/f5_local_client_rs:Common:c224c941: Session variable 'session.oauth.scope./Common/f5_local_client_rs.app/f5_local_client_rs_oauthServer_f5_local_provider.errMsg' set to 'Token is not active'
Partition: Common

When trying to renew the JWT key, the GUI shows this error:

An error occurred: Error in processing URL https://accounts.google.com/.well-known/openid-configuration. The message is - javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

At this step we need to validate the CA bundle in use, and decide whether we need to allow trusting expired or self-signed certificates for JWT key discovery.

General issues

In addition to the cases listed above, there are some general issues:

DNS failure at the client side: the client cannot reach the F5 virtual server or the OAuth provider to obtain authentication information. In this case, verify the DNS configuration and network setup on the client machine.
Validate that the HTTP / SSL / TCP profiles at the virtual server are correctly configured.
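For the PKIX error in Case 7, it helps to see exactly which certificate chain the provider presents and whether the discovery document is reachable at all. A sketch from the BIG-IP bash prompt, with the caveat that these tools use the management-plane routing, so success here does not prove TMM-side reachability:

# Fetch the provider metadata that the Discover button retrieves
curl -v https://accounts.google.com/.well-known/openid-configuration

# Show the certificate chain the provider presents, to identify which CA
# certificate the configured CA bundle must contain
echo | openssl s_client -connect accounts.google.com:443 -showcerts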
Related Content

DNS Resolver Overview
BIG-IP APM deployments using OAuth/OIDC with Microsoft Azure AD may fail to authenticate
OAuth and OpenID Connect - Made easy with Access Guided Configurations templates
Request and validate OAuth / OIDC tokens with APM
F5 APM OIDC with Azure Entra AD
Configuring an OAuth setup using one BIG-IP APM system as an OAuth authorization server and another as the OAuth client

A Simple One-way Generic MRF Implementation to load balance syslog message

The BIG-IP Generic Message Protocol implements a protocol filter compatible with MRF (Message Routing Framework). MRF is designed to implement the most complex use cases, but it can be daunting if you need to create a simple configuration. This article provides a simple baseline for understanding the relationships of the MRF components and how they can be combined in a simple one-way implementation. A production implementation will in most cases be more complex.

The following virtual server, profiles and iRule load balance a one-way stream of newline-delimited messages (in this case syslog) to a pool of message consumers. The messages are parsed and distributed with simple message-based load balancing (MLB). Return traffic will not be returned to the client with this configuration.

To implement this we need these configuration objects:

Virtual Server - Accepts incoming traffic and configures the Generic Protocol.
Generic Protocol - Defines message parsing.
Generic Router - Configures message routing and points to the Generic Route.
Generic Route - Points to a Generic Peer.
Generic Peer - Specifies the LTM pool and points to the Generic Transport Config.
Generic Transport Config - Defines the server-side protocol and server-side iRule.
iRule - Defines the message peers (connections in the message streams).

In this case we have a single client sending messages to a virtual server, which then distributes them to 3 pool members. Each message is sent to one pool member only. This can only be configured from the CLI, and the official F5 recommendation is to not make any changes to the virtual server in the web GUI. This was tested with BIG-IP 12.1.3.5 and 14.1.2.6.

Here is the virtual server with a tcp profile, the required protocol and routing profiles, and an iRule to set up the connection peer on the client side:

ltm virtual /Common/mrftest_simple {
    destination /Common/10.10.20.201:515
    ip-protocol tcp
    mask 255.255.255.255
    profiles {
        /Common/simple_syslog_protocol { }
        /Common/simple_syslog_router { }
        /Common/tcp { }
    }
    rules {
        /Common/mrf_simple
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
}

The first profile is the protocol. The only difference from the default protocol (genericmsg) is that the field no-response must be set to yes for a one-way stream. Otherwise the server side will allocate buffers for return traffic, which will cause severe free-memory depletion.

ltm message-routing generic protocol simple_syslog_protocol {
    app-service none
    defaults-from genericmsg
    description none
    disable-parser no
    max-egress-buffer 32768
    max-message-size 32768
    message-terminator %0a
    no-response yes
}

The Generic Router profile points to a Generic Route:

ltm message-routing generic router simple_syslog_router {
    app-service none
    defaults-from messagerouter
    description none
    ignore-client-port no
    max-pending-bytes 23768
    max-pending-messages 64
    mirror disabled
    mirrored-message-sweeper-interval 1000
    routes {
        simple_syslog_route
    }
    traffic-group traffic-group-1
    use-local-connection yes
}

The Generic Route points to the Generic Peer:

ltm message-routing generic route simple_syslog_route {
    peers {
        simple_syslog_peer
    }
}
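Because this setup is CLI-only, it is worth listing each object back as you create it to catch typos before moving on. A minimal sketch using the object names from this article (the peer and transport-config stanzas appear next):

tmsh list ltm message-routing generic protocol simple_syslog_protocol
tmsh list ltm message-routing generic router simple_syslog_router
tmsh list ltm message-routing generic route simple_syslog_route
tmsh list ltm message-routing generic peer simple_syslog_peer
tmsh list ltm message-routing generic transport-config simple_syslog_tcp_tc
tmsh list ltm virtual mrftest_simple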
The Generic Peer configures the server pool and points to the Generic Transport Config. Note that the pool is configured here rather than in the virtual server, where it is more commonly configured.

ltm message-routing generic peer simple_syslog_peer {
    pool mrfpool
    transport-config simple_syslog_tcp_tc
}

The Generic Transport Config also has the Generic Protocol configured, along with the iRule that sets up the server-side peers.

ltm message-routing generic transport-config simple_syslog_tcp_tc {
    ip-protocol tcp
    profiles {
        simple_syslog_protocol { }
        tcp { }
    }
    rules {
        mrf_simple
    }
}

An iRule must be configured on both the Virtual Server and the Generic Transport Config; it must be linked as a rule resource in both places.

ltm rule /Common/mrf_simple {
    when CLIENT_ACCEPTED {
        GENERICMESSAGE::peer name "[IP::local_addr]:[TCP::local_port]_[IP::remote_addr]:[TCP::remote_port]"
    }
    when SERVER_CONNECTED {
        GENERICMESSAGE::peer name "[IP::local_addr]:[TCP::local_port]_[IP::remote_addr]:[TCP::remote_port]"
    }
}

This example comes from a use case where a single syslog client was load balanced across multiple syslog server pool members. Messages are parsed on the newline (0x0a) character as configured in the generic protocol, but this can easily be adapted to other message types.
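To exercise the finished configuration end to end, you can send a few newline-delimited test messages at the virtual server from any Linux host. A sketch assuming nc (netcat) is installed, using the virtual address from this article:

# Each newline-terminated line is parsed as one message (message-terminator %0a)
# and delivered to exactly one pool member
for i in 1 2 3; do
    printf '<134>Jun  1 12:00:00 host1 app: test message %s\n' "$i"
done | nc 10.10.20.201 515

Watching the receive logs on the three pool members should then show each message arriving at only one member.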