SFP Port LEDs Blinking Yellow
Hi, I upgraded the F5 OS to version 1.8 and the tenant software to 17.5.1.3. The upgrade went smoothly, and both the Active and Standby devices handled traffic successfully afterward. However, I have noticed that the SFP port LEDs on both the Primary and Secondary devices are blinking yellow. Both devices appear to be operating normally, but I would like to confirm whether this is expected behavior. Could the yellow blinking indicate a speed mismatch, or should the LEDs be green under normal conditions?
tcl logic in SAML Attribute value field possible?

Hi. We're running BIG-IP as a SAML IdP. Can I somehow use tcl logic in a SAML attribute? I'm talking about Access ›› Federation : SAML Identity Provider : Local IdP Services, editing an object, under SAML Attributes. Based on what's in the memberOf attribute, I need to issue as a value either an empty string or "SpecificValue". I am familiar with the %{session.variable} construct, but I don't want to clutter the session with more variables if I can avoid it, as that impacts all sessions using our IdP (30 or so federated services on the same VIP and AP). I tried these two approaches:

%{ set result {} ; if { [mcget {session.ad.last.attr.memberOf}] contains {| CN=SpecificGroup,OU=Resource groups,OU=Groups,DC=Domain,DC=com |}} { set result {SpecificValue} } ; return $result }

expr { set result {} ; if { [mcget {session.ad.last.attr.memberOf}] contains {| CN=SpecificGroup,OU=Resource groups,OU=Groups,DC=Domain,DC=com |}} { set result {SpecificValue} } ; return $result }

Expected result: an issued claim with the value "" or "SpecificValue". Actual result: an issued claim with the above code as the literal value.

As I mentioned, we've set it up using one VIP that hosts 30 or so services. We're running 16.1.3.1. They all use the same SSO configuration, and there's an iRule triggered at ACCESS_POLICY_AGENT_EVENT that does some magic to extract the issuer and suchlike, which helps with decisions later in the Access Policy. It also populates a few session variables under the session.custom namespace for use in the Access Policy. Additional session variables are populated in the Access Policy itself, such as the resolved manager and their email address. I have looked briefly at the ASSERT::saml functions, but even if it were possible to manipulate the assertion that way, I want to keep this setup as streamlined as possible, with as few new "special cases" in an iRule as I can manage. So while I appreciate pointers along that route as well, I would first of all like to know whether there is a way to do it natively in the SAML attribute value field, and whether there are any options I have not yet explored.
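For reference, the session-variable route I'm trying to avoid would look something like the sketch below, with the SAML attribute value field then set to %{session.custom.samlAttrValue}. The variable name is just an example, not something we use today:

```tcl
when ACCESS_POLICY_AGENT_EVENT {
    # Sketch only: compute the attribute value once per session and stash it
    # in a custom session variable for the SAML attribute field to reference.
    # The variable name session.custom.samlAttrValue is hypothetical.
    if { [ACCESS::session data get "session.ad.last.attr.memberOf"] contains \
         "CN=SpecificGroup,OU=Resource groups,OU=Groups,DC=Domain,DC=com" } {
        ACCESS::session data set "session.custom.samlAttrValue" "SpecificValue"
    } else {
        ACCESS::session data set "session.custom.samlAttrValue" ""
    }
}
```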
XC - Web Application Firewall - Exclude FQDN but log security events

Hello all, I have an LB with many FQDNs, and the LB has a blocking WAF policy. I want to add a new application with another FQDN to the same LB. During application onboarding, I want to first review security events and only then enforce the policy, to avoid false positives. I have two options:

1. Add the application to the LB and then define a rule to skip WAF processing for the application. But in this case I will not see security events. Can I enable logs for such a configuration, for the purpose of building the WAF exclusion rules?

2. Create a new LB, configure the application there, and move the application to the prod LB afterwards.

I prefer option 1, because with option 2 I have to trigger a Jenkins job to apply a new Terraform config, which will destroy resources, and then execute another job to recreate the resources on the production LB. This leads to a short outage, and because of that outage I have to follow a change process, which I would like to avoid. Thank you.
BIG-IP Next for Kubernetes Nvidia DPU deployment walkthrough

Introduction

Modern AI factories—hyperscale environments powering everything from generative AI to autonomous systems—are pushing the limits of traditional infrastructure. As these facilities process exabytes of data and demand near-real-time communication between thousands of GPUs, legacy CPUs struggle to balance application logic with infrastructure tasks like networking, encryption, and storage management. Data Processing Units (DPUs) are purpose-built accelerators that offload these housekeeping tasks, freeing CPUs and GPUs to focus on what they do best. DPUs are specialized system-on-chip (SoC) devices designed to handle data-centric operations such as network virtualization, storage processing, and security enforcement. By decoupling infrastructure management from computational workloads, DPUs reduce latency, lower operational costs, and enable AI factories to scale horizontally.

BIG-IP Next for Kubernetes and Nvidia DPU

Given F5's aim to deliver and secure every app, BIG-IP Next needs to be deployable at multiple levels, a crucial one being the edge and the DPU. Installing F5 BIG-IP Next for Kubernetes on an Nvidia DPU requires Nvidia's DOCA framework.

What's DOCA? NVIDIA DOCA is a software development kit for NVIDIA BlueField DPUs. BlueField provides data center infrastructure-on-a-chip, optimized for high-performance enterprise and cloud computing. DOCA is the key to unlocking the potential of the NVIDIA BlueField data processing unit (DPU) to offload, accelerate, and isolate data center workloads. With DOCA, developers can program the data center infrastructure of tomorrow by creating software-defined, cloud-native, GPU-accelerated services with zero-trust protection.

Now, let's explore the BIG-IP Next for Kubernetes components. The solution has two main parts: the Data Plane, a Traffic Management Micro-kernel (TMM), and the Control Plane. The Control Plane watches over the Kubernetes cluster and updates the TMM's configuration. The Data Plane (TMM) manages network traffic both entering and leaving the Kubernetes cluster and proxies that traffic to the applications running in the cluster. The TMM runs on the BlueField-3 Data Processing Unit (DPU) node, using the DPU's resources to handle traffic and freeing up the host CPU for applications. The Control Plane can run on the CPU or on other nodes in the Kubernetes cluster, ensuring the DPU stays dedicated to processing traffic.

Use-case examples

Some great use cases were recently published by F5's team, based on conversations and work from the field:

- Protecting MCP servers with F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
- LLM routing with dynamic load balancing with F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
- F5 optimizes GPUs for distributed AI inferencing with NVIDIA Dynamo and KV cache integration
Deployment walk-through

In our demo, we go through the configurations for BIG-IP Next for Kubernetes. Main BIG-IP Next for Kubernetes features covered:

- L4 ingress flow (a generic sketch illustrating this idea follows the related links below)
- HTTP/HTTPS ingress flow
- Egress flow
- BGP integration
- Logging and troubleshooting (Qkview, iHealth)

You can find a quick walk-through via BIG-IP Next for Kubernetes - walk-through.

Related Content

- BIG-IP Next for Kubernetes - walk-through
- BIG-IP Next for Kubernetes
- BIG-IP Next for Kubernetes and Nvidia DPU-3 walkthrough
- BIG-IP Next for Kubernetes
- F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
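To give a flavor of what an L4 ingress mapping looks like, here is a generic Kubernetes Gateway API sketch. This is illustrative only: BIG-IP Next for Kubernetes ships its own gateway class and resource model, and the class, route, and Service names below are placeholders rather than the product's actual values:

```yaml
# Generic Gateway API illustration only. BIG-IP Next for Kubernetes has its
# own gateway class and resources; every name below is a placeholder.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: l4-ingress
spec:
  gatewayClassName: example-gateway-class   # placeholder
  listeners:
    - name: tcp-8080
      protocol: TCP
      port: 8080
      allowedRoutes:
        kinds:
          - kind: TCPRoute
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: tcp-app-route
spec:
  parentRefs:
    - name: l4-ingress
      sectionName: tcp-8080
  rules:
    - backendRefs:
        - name: example-app        # placeholder backend Service
          port: 8080
```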
F5 Distributed Cloud Telemetry (Metrics) - Prometheus
Scope

This article walks through the process of collecting metrics from F5 Distributed Cloud's (XC) Service Graph API and exposing them in a format that Prometheus can scrape. Prometheus then scrapes these metrics, which can be visualized in Grafana.

Introduction

Metrics are essential for gaining real-time insight into service performance and behaviour. F5 Distributed Cloud (XC) provides a Service Graph API that captures service-to-service communication data across your infrastructure. Prometheus, a leading open-source monitoring system, can scrape and store time-series metrics — and when paired with Grafana, offers powerful visualization capabilities. This article shows how to integrate a custom Python-based exporter that transforms Service Graph API data into Prometheus-compatible metrics. These metrics are then scraped by Prometheus and visualized in Grafana, all running in Docker for easy deployment.

Prerequisites

- Access to an F5 Distributed Cloud (XC) SaaS tenant
- A VM with Python 3 installed
- A running Prometheus instance (if not, see the "Configuring Prometheus" section below)
- A running Grafana instance (if not, see the "Configuring Grafana" section below)

Note: In this demo, a single AWS VM with Python installed hosts the exporter (port 8888), plus Prometheus (host port 9090) and Grafana (port 3000) running as Docker instances.

Architecture Overview

F5 XC API → Python Exporter → Prometheus → Grafana

Building the Python Exporter

To collect metrics from the F5 Distributed Cloud (XC) Service Graph API and expose them in a format Prometheus understands, we created a lightweight Python exporter using Flask. This exporter acts as a transformation layer — it fetches service graph data, parses it, and exposes it through a /metrics endpoint that Prometheus can scrape.

Code link: exporter.py

Key Functions of the Exporter

- Uses the XC-provided .p12 file for authentication: To authenticate API requests to F5 Distributed Cloud (XC), the exporter uses a client certificate packaged in a .p12 file. This file must be manually downloaded from the F5 XC console (steps) and stored on the VM where the Python script runs. The script expects the full path to the .p12 file and its associated password to be specified in the configuration section.
- Fetches Service Graph metrics: The script pulls service-level metrics such as request rates, error rates, throughput, and latency from the XC API. It supports both aggregated and individual load balancer views.
- Processes and structures the data: The exporter parses the raw API response to extract the latest metric values and converts them into the Prometheus exposition format. Each metric is labelled (e.g., by vhost and direction) for flexibility in Grafana queries.
- Exposes a /metrics endpoint: A Flask web server runs on port 8888, serving the /metrics endpoint. Prometheus periodically scrapes this endpoint to ingest the latest metrics.
- Handles multiple metric types: Traffic metrics and health scores are handled and formatted individually. Each metric includes a descriptive name, type declaration, and optional labels for fine-grained monitoring and visualization.

Running the Exporter

python3 exporter.py > python.log 2>&1 &

This command runs exporter.py with Python 3 in the background and redirects all standard output and error messages to python.log for easier debugging.
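As a rough sketch of the exporter's shape, consider the following. The real logic lives in the linked exporter.py; the metric name comes from this article, but the API path, response fields, and PEM conversion here are simplified assumptions:

```python
# Minimal exporter sketch (not the article's exporter.py). Assumes the .p12
# has been converted to a PEM cert/key pair, e.g.:
#   openssl pkcs12 -in tenant.p12 -out cert.pem -clcerts -nokeys
#   openssl pkcs12 -in tenant.p12 -out key.pem  -nocerts -nodes
import requests
from flask import Flask, Response

app = Flask(__name__)

TENANT_API = "https://<tenant>.console.ves.volterra.io/api"  # placeholder
CERT = ("cert.pem", "key.pem")

@app.route("/metrics")
def metrics():
    # Hypothetical endpoint path and JSON shape, for illustration only.
    resp = requests.get(f"{TENANT_API}/data/namespaces/default/graph/service",
                        cert=CERT, timeout=10)
    data = resp.json()
    lines = [
        "# HELP f5xc_downstream_http_request_rate Downstream HTTP request rate",
        "# TYPE f5xc_downstream_http_request_rate gauge",
    ]
    for node in data.get("nodes", []):          # assumed response shape
        vhost = node.get("id", "unknown")
        value = node.get("request_rate", 0)
        lines.append(
            f'f5xc_downstream_http_request_rate{{vhost="{vhost}"}} {value}'
        )
    return Response("\n".join(lines) + "\n", mimetype="text/plain")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8888)
```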
Configuring Prometheus

docker run -d --name=prometheus --network=host -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus:latest

Prometheus runs as a Docker instance in host network mode (port 9090) with the configuration in prometheus.yml, scraping the /metrics endpoint exposed by the Python Flask exporter on port 8888 every 60 seconds (a sample prometheus.yml sketch appears at the end of this article).

Configuring Grafana

docker run -d --name=grafana -p 3000:3000 grafana/grafana:latest

The private IP of the Prometheus Docker instance, along with its port (9090), is used as the data source in the Grafana configuration. Once Prometheus is configured under Grafana Data sources, follow these steps:

1. Navigate to the Explore menu.
2. Select "Prometheus" in the data source picker.
3. Choose the appropriate metric, in this case "f5xc_downstream_http_request_rate".
4. Select the desired time range and click "Run query".
5. Observe that the metrics graph is displayed.

Note: Some requests need to be generated for metrics to be visible in Grafana. A broader, high-level view of all metrics can be accessed by navigating to "Drilldown" and selecting "Metrics", providing a comprehensive snapshot across services.

Conclusion

F5 Distributed Cloud's (F5 XC) Service Graph API provides deep visibility into service-to-service communication, and when paired with Prometheus and Grafana, it enables powerful, real-time monitoring without vendor lock-in. This integration highlights F5 XC's alignment with open-source ecosystems, allowing users to build flexible and scalable observability pipelines. The custom Python exporter bridges the gap between the XC API and Prometheus, offering a lightweight and adaptable solution for transforming and exposing metrics. With Grafana dashboards on top, teams can gain instant insight into service health and performance. This open approach empowers operations teams to respond faster, optimize more effectively, and evolve their observability practices with confidence and control.
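As referenced above, a minimal prometheus.yml consistent with this setup might look like the following (the job name is arbitrary; the target assumes the exporter runs on the same host):

```yaml
# Sample sketch matching the setup above; the job name is arbitrary.
global:
  scrape_interval: 60s
scrape_configs:
  - job_name: f5xc-exporter
    static_configs:
      - targets: ["localhost:8888"]   # Flask exporter on the same VM
```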
10 Settings to Lock Down your BIG-IP

EDITORS NOTE, Oct 16, 2025: This article was originally written in 2012 and may contain guidance that is out of date. Please see the new Security Best Practices for F5 Products article, updated in Oct 2025.

Earlier this year, F5 notified its customers about a severe vulnerability in F5 products. This vulnerability had to do with SSH keys; you may have heard it called "the SSH key issue", documented as CVE-2012-1493. The severity of this vulnerability cannot be overstated. F5 has gone above and beyond its normal process for customer notification, but there is evidence that there are still BIG-IP devices with the exposed SSH keys accessible from the internet. There are several options available to reduce your organization's exposure to this issue. Here are 10 mitigation techniques that you can implement today to secure your F5 infrastructure.

1. Install the hotfix. Do it. Do it now. The hotfix is non-invasive and requires little testing, since it has no impact on the F5 data processing functionality. It simply edits the authorized key file to remove access for the offending key.

Control Network Access to the F5

2. Audit your BIG-IP management ports and Self-IPs. Of course you should pay special attention to public addresses (non-RFC-1918), but don't forget that even private addresses can be vulnerable to internal threats such as malware, malicious employees, and rogue wireless access points. By default, Self-IPs have many ports open; lock these down to just the ones that you know you need.

3. If you absolutely need to have routable addresses on your Self-IPs, at least lock down access to the networks that need it. To lock down SSH and the GUI for a Self-IP from a specific network:

(tmos)# modify /sys sshd allow replace-all-with { 192.168.2.* }
(tmos)# modify /sys httpd allow replace-all-with { 192.168.2.* }
(tmos)# save /sys config

4. By definition, machines within the network DMZ are at higher risk. If a DMZ machine is compromised, a hacker can use it as a jumping point to penetrate deeper into the network. Use access controls to restrict access to and from the DMZ. See Solution 13309 for more information about restricting access to the management interface.

Lock down User Access with Appliance Mode

F5's iHealth system consistently reports that many systems have default passwords for the root and admin accounts and weak passwords for the other users. After controlling access to the management interfaces (see above), this is the most critical part of securing your F5 infrastructure. Here are three easy steps to lock down user access on the BIG-IP.

5. The Appliance Mode license option is simple: when enabled, Appliance Mode locks down the root user and removes the Unix bash shell as a command-line option. Permitting root login is a historical artifact that many F5 power users cherish, but when root logs in, you don't know who that user really is, do you? This can be an audit issue if there's a penetration or other funny business. If you are okay with locking down root but find that you cannot live without bash, you can split the difference by setting this db variable to true:

(tmos)# modify /sys db systemauth.disablerootlogin value true
(tmos)# save /sys config

6. Next, if you haven't done this already, configure the BIG-IP for remote authentication against, say, the enterprise Active Directory repository. Make this happen from the System > Users > Authentication screen and ensure that the default role is Application Editor or less. You can use the /auth remote-role command to provide somewhat granular authorization to each user group:

(tmos)# help /auth remote-role
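For example, a sketch mapping a hypothetical AD group to the Application Editor role; the entry name and group DN are placeholders for your own directory, not values from this article:

(tmos)# create /auth remote-role role-info AppEditors { attribute "memberOf=CN=BIGIP-AppEditors,OU=Groups,DC=example,DC=com" console disabled line-order 1 role application-editor user-partition All }
(tmos)# save /sys config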
7. Ensure that the oft-forgotten 'admin' user has no terminal access.

(tmos)# modify /sys auth user admin shell none
(tmos)# save /sys config

With steps 5-7, you have significantly hardened the BIG-IP device. Neither of the special accounts, root and admin, will be able to log in to the shell, and that should eliminate both the SSH key issue and the automated brute-force risk.

Keep Up to Date on Security News, Hotfixes and Patches

8. If you haven't done so already, subscribe to the F5 security alert mailing list at f5.com/about-us/preferences. This will ensure that you receive timely security notices.

9. Check your configuration against F5's heuristic system, iHealth. When you upload your diagnostics to iHealth, it will inform you of any missing or suggested security upgrades. Using iHealth is easy: generating the support file is as simple as pressing a couple of buttons in the GUI. Then point your browser at ihealth.f5.com, log in, and upload the support file. iHealth will tell you what else to look at to help you lock down your system.

There you have it, nine steps to lock down a BIG-IP and keep on top of infrastructure security… Wait, what, I promised you 10?

10. Follow me (@dholmesf5) and @f5security on Twitter. There, that was easy.

If you take anything away from this blog post (and congratulations for getting this far), it is: be sure you install the SSH key hotfix and protect your management interfaces. And then, fun aside, remember that securing the infrastructure really is serious business.