big-ip
F5 BIG-IP Multi-Site Dashboard
Code is community submitted, community supported, and recognized as ‘Use At Your Own Risk’. A comprehensive real-time monitoring dashboard for F5 BIG-IP Application Delivery Controllers featuring multi-site support, DNS hostname resolution, member state tracking, and advanced filtering capabilities. The 170KB modular JavaScript application runs entirely in your browser and is served directly from the F5's high-speed operational dataplane. One or more sites operate as Dashboard Front-Ends serving the dashboard interface (HTML, JavaScript, CSS) via iFiles, while other sites operate as API Hosts providing pool data through optimized JSON-based dashboard API calls. This provides unified visibility across multiple sites from a single interface without requiring even a read-only account on any of the BIG-IPs, allowing you to switch between locations and see consistent pool, member, and health status data with almost no latency and very little overhead. Think of it as an extension of the F5 GUI: near real-time state tracking, DNS hostname resolution (if configured), advanced search/filtering, and the ability to see exactly what changed and when. It gives application and operations teams direct visibility into application pool state without waiting for answers from F5 engineers, eliminating the organizational bottleneck that slows down troubleshooting when every minute counts. https://github.com/hauptem/F5-Multisite-Dashboard
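As a rough illustration of the Front-End role, the sketch below shows how an iRule can serve static content from an iFile on the dataplane. It is a minimal, hypothetical example, not code from the project; the iFile name and URI are assumptions.

when HTTP_REQUEST {
    # Serve dashboard HTML stored as an iFile (the name "dashboard_html" is illustrative)
    if { [HTTP::path] eq "/dashboard" } {
        HTTP::respond 200 content [ifile get dashboard_html] "Content-Type" "text/html"
    }
}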
Remove alerts showing r-series LCD/Dashboard.
Code is community submitted, community supported, and recognized as ‘Use At Your Own Risk’. Especially on rSeries R5000 and higher platforms running F5OS 1.8.0 or later, you may see unexpected "interface down" alerts. You can clear them with this one-liner in rSeries bash. Hope it helps :p

[root@appliance-1 ~]# date;docker exec system_manager /confd/scripts/f5_confd_run_cmd 'show system alarms alarm state | csv'
Mon Sep 1 14:32:24 JST 2025
# show system alarms alarm state | csv
ID,RESOURCE,SEVERITY,TEXT,TIME CREATED
263169,interface-1.0,WARNING,Interface down,2025-06-11 02:24:43.512626724 UTC
263169,interface-10.0,WARNING,Interface down,2025-06-11 02:24:43.514048068 UTC
263169,interface-2.0,WARNING,Interface down,2025-06-11 02:24:43.517935094 UTC
263169,interface-5.0,WARNING,Interface down,2025-06-11 02:24:43.526179871 UTC
263169,interface-6.0,WARNING,Interface down,2025-06-11 02:24:43.528668180 UTC
263169,interface-7.0,WARNING,Interface down,2025-06-11 02:24:43.530864483 UTC
263169,interface-8.0,WARNING,Interface down,2025-06-11 02:24:43.533197062 UTC
263169,interface-9.0,WARNING,Interface down,2025-06-11 02:24:43.535438297 UTC

[root@appliance-1 ~]# date;docker exec system_manager /confd/scripts/f5_confd_run_cmd 'show system alarms alarm state | csv'|grep 263169|cut -f 2 -d ,|xargs -I{} docker exec alert-service /confd/test/sendAlert -i 263169 -r clear-all -s {}
Mon Sep 1 14:33:27 JST 2025
Alert Sent
Alert Sent
Alert Sent
Alert Sent
Alert Sent
Alert Sent
Alert Sent
Alert Sent

[root@appliance-1 ~]# date;docker exec system_manager /confd/scripts/f5_confd_run_cmd 'show system alarms alarm state | csv'
Mon Sep 1 14:33:54 JST 2025
# show system alarms alarm state | csv
% No entries found. <---------- REMOVED.
[root@appliance-1 ~]#

Ideas came from:
https://cdn.f5.com/product/bugtracker/ID1644293.html
https://my.f5.com/manage/s/article/K000150155
Can't change sync type or failover after tenant upgrade.
I made a mistake that I didn't think would matter in the end, but here's what I did. I had previously upgraded this tenant pair to 17.1.3. Everything was fine, and I intended to install on another pair, but I installed on the other boot location of one I had already upgraded. I didn't think this was an issue, as I would simply not activate that boot location. However, I couldn't force the Active member to Standby; the option was greyed out. I thought that maybe I should boot to that new location, in case something needed to complete before I could fail over between the members. That made it worse, because I then couldn't change the sync type back to Automatic with Incremental Sync. So naturally I booted back to the previous partition because it seemed at least better, but now I seem to be digging a hole I can't get out of. Where it stands now:
- The pair is set to sync type "Manual with Incremental Sync"
- Member1 is standby and says "Not All Devices Synced"
- Member2 is active and says "Changes Pending"
- On the standby Member1, I can change the sync type, but I haven't.
- On the active Member2, I can't change the sync type or force it to standby.
I have a ticket open, but as this is a live system I'm pursuing all avenues.
How to log HTTP/2 reset_stream
Hello, we are currently in discussions to prepare for HTTP/2 DDoS attacks. What we would like to do is log the client's IP address (either the local or remote address) whenever an HTTP/2 RESET_STREAM is received. Is there any way to achieve this? Would it be possible to implement using an iRule? Thank you.
VIPTest: Rapid Application Testing for F5 Environments
VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.
10 Settings to Lock Down your BIG-IP
EDITOR'S NOTE, Oct 16, 2025: This article was originally written in 2012 and may contain guidance that is out of date. Please see the new Security Best Practices for F5 Products article, updated in Oct 2025.

Earlier this year, F5 notified its customers about a severe vulnerability in F5 products. This vulnerability had to do with SSH keys; you may have heard it called “the SSH key issue”, documented as CVE-2012-1493. The severity of this vulnerability cannot be overstated. F5 has gone above and beyond its normal process for customer notification, but there is evidence that there are still BIG-IP devices with the exposed SSH keys accessible from the internet. There are several options available to reduce your organization’s exposure to this issue. Here are 10 mitigation techniques that you can implement today to secure your F5 infrastructure.

1. Install the hotfix. Do it. Do it now. The hotfix is non-invasive and requires little testing since it has no impact on the F5 data processing functionality. It simply edits the authorized key file to remove access for the offending key.

Control Network Access to the F5

2. Audit your BIG-IP management ports and Self-IPs. Of course you should pay special attention to public addresses (non-RFC-1918), but don’t forget that even private addresses can be vulnerable to internal threats such as malware, malicious employees, and rogue wireless access points. By default, Self-IPs have many ports open; lock these down to just the ones that you know you need.

3. If you absolutely need to have routable addresses on your Self-IPs, at least lock down access to the networks that need it. To lock down SSH and the GUI for a Self-IP from a specific network:
(tmos)# modify /sys sshd allow replace-all-with { 192.168.2.* }
(tmos)# modify /sys httpd allow replace-all-with { 192.168.2.* }
(tmos)# save /sys config

4. By definition, machines within the network DMZ are at higher risk. If a DMZ machine is compromised, a hacker can use it as a jumping point to penetrate deeper into the network. Use access controls to restrict access to and from the DMZ. See Solution 13309 for more information about restricting access to the management interface.

Lock down User Access with Appliance Mode

F5’s iHealth system consistently reports that many systems have default passwords for the root and admin accounts and weak passwords for the other users. After controlling access to the management interfaces (see above), this is the most critical part of securing your F5 infrastructure. Here are three easy steps to lock down user access on the BIG-IP.

5. The Appliance Mode license option is simple. When enabled, Appliance Mode locks down the root user and removes the Unix bash shell as a command-line option. Permitting root login is a historical artifact that many F5 power users cherish, but when root logs in, you don’t know who that user really is, do you? This can be an audit issue if there’s a penetration or other funny business. If you are okay with locking down root but find that you cannot live without bash, then you can split the difference by just setting this db variable to true:
(tmos)# modify /sys db systemauth.disablerootlogin value true
(tmos)# save /sys config

6. Next, if you haven’t done this already, configure the BIG-IP for remote authentication against, say, the enterprise Active Directory repository. Make this happen from the System > Users > Authentication screen and ensure that the default role is Application Editor or less. You can use the /auth remote-role command to provide somewhat granular authorization to each user group:
(tmos)# help /auth remote-role
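As a purely illustrative sketch of that approach, a remote-role mapping might look something like the following; the role-info name, group DN, line order, and role shown here are assumptions for the example rather than values from the article, so adapt them to your directory:
(tmos)# modify /auth remote-role role-info add { ldap-app-editors { attribute "memberOf=cn=f5-app-editors,ou=groups,dc=example,dc=com" console disabled line-order 1 role application-editor user-partition All } }
(tmos)# save /sys config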
7. Ensure that the oft-forgotten ‘admin’ user has no terminal access:
(tmos)# modify /sys auth user admin shell none
(tmos)# save /sys config

With steps 5-7, you have significantly hardened the BIG-IP device. Neither of the special accounts, root and admin, will be able to log in to the shell, and that should eliminate both the SSH key issue and the automated brute-force risk.

Keep Up to Date on Security News, Hotfixes and Patches

8. If you haven’t done so already, subscribe to the F5 security alert mailing list at f5.com/about-us/preferences. This will ensure that you receive timely security notices.

9. Check your configuration against F5’s heuristic system, iHealth. When you upload your diagnostics to iHealth, it will inform you of any missing or suggested security upgrades. Using iHealth is easy: generating the support file is as simple as pressing a couple of buttons in the GUI. Then point your browser at ihealth.f5.com, log in, and upload the support file. iHealth will tell you what else to look at to help you lock down your system.

There you have it, nine steps to lock down a BIG-IP and keep on top of infrastructure security… Wait, what, I promised you 10?

10. Follow me (@dholmesf5) and @f5security on Twitter. There, that was easy.

If you take anything away from this blog post (and congratulations for getting this far), it is this: be sure you install the SSH key hotfix and protect your management interfaces. And then, fun aside, remember that securing the infrastructure really is serious business.
Modernizing F5 Platforms with Ansible
I’ve been meaning to publish this article for some time now. Over the past few months, I’ve been building Ansible automation that I believe will help customers modernize their F5 infrastructure. This is especially true for those looking to migrate from legacy BIG-IP hardware to next-generation platforms like VELOS and rSeries. As I explored tools like F5 Journeys and traditional CLI-based migration methods, I noticed a significant amount of manual pre-work was still required. This includes:
- Ensuring the Master Key used to encrypt the UCS archive is preserved and securely handled
- Storing the UCS, Master Key, and information assets on a backup host
- Pre-configuring all VLANs and properly tagging them on the VELOS partition before deploying a Tenant OS

To streamline this, I created an Ansible Playbook with supporting roles tailored for Red Hat Ansible Automation Platform. It’s built to perform a lift-and-shift migration of an F5 BIG-IP configuration from one device to another, with optional OS upgrades included. In the demo video below, you’ll see an automated migration of an F5 i10800 running 15.1.10 to a VELOS BX110 Tenant OS running 17.5.0, demonstrating a smooth, hands-free modernization process.

Currently Working
VELOS:
- Controller/Partition running F5OS-C 1.8.1, which allows the Tenant Management IP to be in a different VLAN
- Migrates a standalone F5 BIG-IP i10800 to a VELOS BX110 Tenant OS
- VLAN'ed source tenant required (doesn’t support non-VLAN tenants)
rSeries:
- Shares the MGMT IP subnet with the Chassis Partition
- Migrates a standalone F5 BIG-IP i10800 to an R5000 Tenant OS
- VLAN'ed source tenant required (doesn’t support non-VLAN tenants)

Handles:
- Configuration and crypto backup
- UCS creation, transfer, and validation
- F5OS system VLAN creation and association to the Tenant (does not manage interface-to-VLAN mapping)
- F5OS Tenant provisioning and deployment
- Inline OS upgrades during the migration

Roadmap / What's Next
- Expanding testing to include Viprion/iSeries (vCMP) tenants
- Supporting hardware-to-virtual platform migrations
- Adding functionality for HA (High Availability) environments

Watch the Demo Video
View the Source Code on GitHub: https://github.com/f5devcentral/f5-bd-ansible-platform-modernization
This project is built for the community, so feel free to take it, fork it, and expand it. Let’s make F5 platform modernization as seamless and automated as possible.
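To give a feel for the UCS backup step the playbook automates, here is a minimal, hypothetical Ansible task; the UCS name, local path, and variable names are assumptions for illustration, and the real roles live in the linked repository:

- name: Create and download a UCS archive from the source BIG-IP (illustrative only)
  f5networks.f5_modules.bigip_ucs_fetch:
    src: migration_backup.ucs          # UCS name to create on the BIG-IP (hypothetical)
    dest: /tmp/migration_backup.ucs    # local path on the backup host (hypothetical)
    provider:
      server: "{{ source_bigip_host }}"
      user: "{{ bigip_admin_user }}"
      password: "{{ bigip_admin_password }}"
      validate_certs: false
  delegate_to: localhost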
F5 upgrades
We are upgrading F5 tenants from 17.1 to 17.5. We have two rSeries pairs, one at each data center (e.g. main and colo). Within each data center they run in HA active/standby, and the four units are in a GSLB group. Each host has one tenant. During the upgrade process, I disabled GTM sync on the F5 that is going to be upgraded. Is that recommended? I plan on moving traffic to the active box at the colo from the other data center (main), and I won't be making any config changes. After the applications move to this side, the LTM pools show up on this side and global availability will have the upgraded side up. I just want to make sure: if GTM sync is disabled, do we need to leave it disabled and sync after all four F5s are upgraded? And during this process, can we make changes to LTM pools within the data center? Thank you
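For reference, GSLB configuration synchronization is typically inspected and toggled from tmsh along these lines; treat this as an illustrative sketch and confirm against your version's documentation before relying on it during the upgrade:
(tmos)# list /gtm global-settings general synchronization
(tmos)# modify /gtm global-settings general synchronization no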
A Simple One-way Generic MRF Implementation to load balance syslog message
The BIG-IP Generic Message Protocol implements a protocol filter compatible with MRF (Message Routing Framework). MRF is designed to implement the most complex use cases, but it can be daunting if you need to create a simple configuration. This article provides a simple baseline for understanding the relationships of the MRF components and how they can be combined for a simple one-way implementation. A production implementation will in most cases be more complex. The following virtual server, profiles, and iRule load balance a one-way stream of newline-delimited messages (in this case syslog) to a pool of message consumers. The messages will be parsed and distributed with a simple MLB protocol. Return traffic will not be returned to the client with this configuration.

To implement this we will need these configuration objects:
- Virtual Server - Accepts incoming traffic and configures the Generic Protocol
- Generic Protocol - Defines message parsing
- Generic Router - Configures message routing and points to the Generic Route
- Generic Route - Points to a Generic Peer
- Generic Peer - Defines the LTM pool members and points to the Generic Transport Config
- Generic Transport Config - Defines the server-side protocol and server-side iRule
- iRule - Defines the message peers (connections in the message streams)

In this case we have a single client sending messages to a virtual server, which then distributes them to 3 pool members. Each message will be sent to one pool member only. This can only be configured from the CLI, and the official F5 recommendation is to not make any changes to the virtual server in the web GUI. This was tested with BIG-IP 12.1.3.5 and 14.1.2.6.

Here is the virtual server with a tcp profile and the required protocol and routing profiles, along with an iRule to set up the connection peer on the client side.

ltm virtual /Common/mrftest_simple {
    destination /Common/10.10.20.201:515
    ip-protocol tcp
    mask 255.255.255.255
    profiles {
        /Common/simple_syslog_protocol { }
        /Common/simple_syslog_router { }
        /Common/tcp { }
    }
    rules {
        /Common/mrf_simple
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
}

The first profile is the protocol. The only difference from the default protocol (genericmsg) is that the no-response field must be set to yes if this is a one-way stream. Otherwise the server side will allocate buffers for return traffic, which will cause severe free-memory depletion.

ltm message-routing generic protocol simple_syslog_protocol {
    app-service none
    defaults-from genericmsg
    description none
    disable-parser no
    max-egress-buffer 32768
    max-message-size 32768
    message-terminator %0a
    no-response yes
}

The Generic Router profile points to a generic route.

ltm message-routing generic router simple_syslog_router {
    app-service none
    defaults-from messagerouter
    description none
    ignore-client-port no
    max-pending-bytes 23768
    max-pending-messages 64
    mirror disabled
    mirrored-message-sweeper-interval 1000
    routes {
        simple_syslog_route
    }
    traffic-group traffic-group-1
    use-local-connection yes
}

The Generic Route points to the Generic Peer:

ltm message-routing generic route simple_syslog_route {
    peers {
        simple_syslog_peer
    }
}

The Generic Peer configures the server pool and points to the Generic Transport Config. Note that the pool is configured here instead of in the more common location, the virtual server.
ltm message-routing generic peer simple_syslog_peer {
    pool mrfpool
    transport-config simple_syslog_tcp_tc
}

The Generic Transport Config also has the Generic Protocol configured, along with the iRule to set up the server-side peers.

ltm message-routing generic transport-config simple_syslog_tcp_tc {
    ip-protocol tcp
    profiles {
        simple_syslog_protocol { }
        tcp { }
    }
    rules {
        mrf_simple
    }
}

An iRule must be configured on both the Virtual Server and the Generic Transport Config. This iRule must be linked as a profile in both the virtual server and the generic transport configuration.

ltm rule /Common/mrf_simple {
    when CLIENT_ACCEPTED {
        GENERICMESSAGE::peer name "[IP::local_addr]:[TCP::local_port]_[IP::remote_addr]:[TCP::remote_port]"
    }
    when SERVER_CONNECTED {
        GENERICMESSAGE::peer name "[IP::local_addr]:[TCP::local_port]_[IP::remote_addr]:[TCP::remote_port]"
    }
}

This example is from a use case where a single syslog client was load balanced to multiple syslog server pool members. Messages are parsed on the newline (0x0a) character as configured in the generic protocol, but this can easily be adapted to other message types.