Modernizing F5 Platforms with Ansible
I’ve been meaning to publish this article for some time now. Over the past few months, I’ve been building Ansible automation that I believe will help customers modernize their F5 infrastructure. This is especially true for those looking to migrate from legacy BIG-IP hardware to next-generation platforms like VELOS and rSeries. As I explored tools like F5 Journeys and traditional CLI-based migration methods, I noticed that a significant amount of manual pre-work was still required. This includes:

- Ensuring the Master Key used to encrypt the UCS archive is preserved and securely handled
- Storing the UCS, Master Key, and information assets on a backup host
- Pre-configuring all VLANs and properly tagging them on the VELOS partition before deploying a Tenant OS

To streamline this, I created an Ansible Playbook with supporting roles tailored for Red Hat Ansible Automation Platform. It’s built to perform a lift-and-shift migration of an F5 BIG-IP configuration from one device to another—with optional OS upgrades included. (A minimal sketch of the backup pre-work the playbook automates appears at the end of this article.) In the demo video below, you’ll see an automated migration of an F5 i10800 running 15.1.10 to a VELOS BX110 Tenant OS running 17.5.0—demonstrating a smooth, hands-free modernization process.

Currently Working

VELOS
- VELOS Controller/Partition running F5OS-C 1.8.1, which allows the Tenant Management IP to be in a different VLAN
- Migrates a standalone F5 BIG-IP i10800 to a VELOS BX110 Tenant OS
- VLAN'ed source tenant required (doesn’t support non-VLAN tenants)

rSeries
- Management IP shares the same subnet as the Chassis Partition
- Migrates a standalone F5 BIG-IP i10800 to an R5000 Tenant OS
- VLAN'ed source tenant required (doesn’t support non-VLAN tenants)

Handles:
- Configuration and crypto backup
- UCS creation, transfer, and validation
- F5OS system VLAN creation and association to the tenant (does not manage interface-to-VLAN mapping)
- F5OS tenant provisioning and deployment
- Inline OS upgrades during the migration

Roadmap / What's Next
- Expanding testing to include Viprion/iSeries (vCMP) tenants
- Supporting hardware-to-virtual platform migrations
- Adding functionality for HA (High Availability) environments

Watch the Demo Video

View the Source Code on GitHub: https://github.com/f5devcentral/f5-bd-ansible-platform-modernization

This project is built for the community—so feel free to take it, fork it, and expand it. Let’s make F5 platform modernization as seamless and automated as possible.
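As promised above, here is a minimal sketch of the UCS backup pre-work, assuming the f5networks.f5_modules Ansible collection is installed. The hostnames, credential variables, and file paths are placeholders, and the actual playbook in the repository is considerably more involved.

```yaml
---
# Minimal sketch only: archive the source BIG-IP's configuration (UCS) and
# capture the master key before migration. All names and paths are placeholders.
- name: Pre-migration backup of source BIG-IP
  hosts: bigip_source
  connection: local
  gather_facts: false
  tasks:
    - name: Create a UCS archive on the BIG-IP and download it to the backup host
      f5networks.f5_modules.bigip_ucs_fetch:
        src: pre_migration_backup.ucs
        dest: /var/backups/pre_migration_backup.ucs
        create_on_missing: true
        provider:
          server: "{{ inventory_hostname }}"
          user: "{{ bigip_user }}"
          password: "{{ bigip_password }}"
          validate_certs: false

    - name: Record the master key used to encrypt the UCS
      # f5mku -K displays the master key on BIG-IP; verify the exact
      # procedure for your version before relying on this step.
      f5networks.f5_modules.bigip_command:
        commands:
          - run /util bash -c "f5mku -K"
        provider:
          server: "{{ inventory_hostname }}"
          user: "{{ bigip_user }}"
          password: "{{ bigip_password }}"
          validate_certs: false
      register: master_key
```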
F5 upgrades

We are upgrading F5 tenants from 17.1 to 17.5. We have two rSeries pairs, one at each data center (e.g., main and colo). Within each data center they are in HA active/standby, and the four units are in a GSLB group. Each host has one tenant.

During the upgrade process, I disabled GTM sync on the F5 that is going to be upgraded. Is that recommended? I plan on moving traffic to the active box at the colo from the other data center (main), and I won't be making any config changes. After the applications move to this side, the LTM pools show up on this side and global availability will mark the upgraded side up.

I just want to make sure: if GTM sync is disabled, do we need to leave it disabled and sync after all four F5s are upgraded? And during this process, can we make changes to LTM pools within a data center?

Thank you
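For reference, toggling GSLB configuration sync from tmsh generally looks like the sketch below (a hedged example against default settings; confirm against F5's upgrade guidance for your version before relying on it):

```
# Check the current GSLB synchronization state
tmsh list gtm global-settings general synchronization

# Disable GSLB config sync on the unit about to be upgraded
tmsh modify gtm global-settings general synchronization no

# Re-enable sync once all four units are upgraded
tmsh modify gtm global-settings general synchronization yes
```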
F5 BIG-IP Multi-Site Dashboard

A comprehensive real-time monitoring dashboard for F5 BIG-IP Application Delivery Controllers featuring multi-site support, DNS hostname resolution, member state tracking, and advanced filtering capabilities. A 170 KB modular JavaScript application runs entirely in your browser, served directly from the F5's high-speed operational dataplane. One or more sites operate as Dashboard Front-Ends serving the dashboard interface (HTML, JavaScript, CSS) via iFiles (see the sketch at the end of this post), while other sites operate as API Hosts providing pool data through optimized JSON-based dashboard API calls. This provides unified visibility across multiple sites from a single interface without requiring even a read-only account on any of the BIG-IPs, allowing you to switch between locations and see consistent pool, member, and health status data with almost no latency and very little overhead.

Think of it as an extension of the F5 GUI: near real-time state tracking, DNS hostname resolution (if configured), advanced search/filtering, and the ability to see exactly what changed and when. It gives application teams and operations teams direct visibility into application pool state without needing to wait for answers from F5 engineers, eliminating the organizational bottleneck that slows down troubleshooting when every minute counts.

https://github.com/hauptem/F5-Multisite-Dashboard
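The repository contains the actual implementation; purely as an illustration of the iFile-serving pattern it relies on, here is a minimal sketch (the iFile name and URI are assumptions, and the file must first be imported and defined as an LTM iFile):

```
# Minimal sketch: answer dashboard requests directly from BIG-IP using an
# iFile, so no backend server is involved. "dashboard_html" is a placeholder
# for an iFile previously imported and defined under ltm ifile.
when HTTP_REQUEST {
    if { [HTTP::path] eq "/dashboard" } {
        HTTP::respond 200 content [ifile get dashboard_html] "Content-Type" "text/html"
    }
}
```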
A Simple One-way Generic MRF Implementation to Load Balance Syslog Messages

The BIG-IP Generic Message Protocol implements a protocol filter compatible with MRF (Message Routing Framework). MRF is designed to implement the most complex use cases, but it can be daunting if you need to create a simple configuration. This article provides a simple baseline for understanding the relationships of the MRF components and how they can be combined for a simple one-way implementation. A production implementation will in most cases be more complex.

The following virtual server, profiles, and iRule load balance a one-way stream of newline-delimited messages (in this case syslog) to a pool of message consumers. The messages will be parsed and distributed with a simple message load balancing (MLB) protocol. Return traffic will not be returned to the client with this configuration.

To implement this we will need these configuration objects:

- Virtual Server - accepts incoming traffic and references the Generic Protocol
- Generic Protocol - defines message parsing
- Generic Router - configures message routing and points to the Generic Route
- Generic Route - points to a Generic Peer
- Generic Peer - defines an LTM pool of members and points to the Generic Transport Config
- Generic Transport Config - defines the server-side protocol and server-side iRule
- iRule - defines the message peers (connections in the message streams)

In this case we have a single client sending messages to a virtual server, which are then distributed to 3 pool members. Each message will be sent to one pool member only.

This can only be configured from the CLI, and the official F5 recommendation is to not make any changes to the virtual server in the web GUI. This was tested with BIG-IP 12.1.3.5 and 14.1.2.6.

Here is the virtual server with a TCP profile and the required protocol and routing profiles, along with an iRule to set up the connection peer on the client side:

```
ltm virtual /Common/mrftest_simple {
    destination /Common/10.10.20.201:515
    ip-protocol tcp
    mask 255.255.255.255
    profiles {
        /Common/simple_syslog_protocol { }
        /Common/simple_syslog_router { }
        /Common/tcp { }
    }
    rules {
        /Common/mrf_simple
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
}
```

The first profile is the protocol. The only difference from the default protocol (genericmsg) is that the field no-response must be set to yes if this is a one-way stream. Otherwise the server side will allocate buffers for return traffic that will cause severe free-memory depletion.

```
ltm message-routing generic protocol simple_syslog_protocol {
    app-service none
    defaults-from genericmsg
    description none
    disable-parser no
    max-egress-buffer 32768
    max-message-size 32768
    message-terminator %0a
    no-response yes
}
```

The Generic Router profile points to a Generic Route:

```
ltm message-routing generic router simple_syslog_router {
    app-service none
    defaults-from messagerouter
    description none
    ignore-client-port no
    max-pending-bytes 23768
    max-pending-messages 64
    mirror disabled
    mirrored-message-sweeper-interval 1000
    routes {
        simple_syslog_route
    }
    traffic-group traffic-group-1
    use-local-connection yes
}
```

The Generic Route points to the Generic Peer:

```
ltm message-routing generic route simple_syslog_route {
    peers {
        simple_syslog_peer
    }
}
```

The Generic Peer configures the server pool and points to the Generic Transport Config. Note the pool is configured here instead of the more common configuration in the virtual server.
```
ltm message-routing generic peer simple_syslog_peer {
    pool mrfpool
    transport-config simple_syslog_tcp_tc
}
```

The Generic Transport Config also has the Generic Protocol configured, along with the iRule to set up the server-side peers.

```
ltm message-routing generic transport-config simple_syslog_tcp_tc {
    ip-protocol tcp
    profiles {
        simple_syslog_protocol { }
        tcp { }
    }
    rules {
        mrf_simple
    }
}
```

An iRule must be configured on both the Virtual Server and the Generic Transport Config. This iRule must be linked as a profile in both the virtual server and generic transport configuration.

```
ltm rule /Common/mrf_simple {
    when CLIENT_ACCEPTED {
        GENERICMESSAGE::peer name "[IP::local_addr]:[TCP::local_port]_[IP::remote_addr]:[TCP::remote_port]"
    }
    when SERVER_CONNECTED {
        GENERICMESSAGE::peer name "[IP::local_addr]:[TCP::local_port]_[IP::remote_addr]:[TCP::remote_port]"
    }
}
```

This example is from a use case where a single syslog client was load balanced to multiple syslog server pool members. Messages are parsed on the newline (0x0a) character as configured in the generic protocol, but this can easily be adapted to other message types.
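To exercise a setup like this, something as simple as the following can generate newline-delimited test messages from a client host (addresses match the example virtual server above; adjust for your environment):

```bash
# Send a few newline-terminated syslog-style messages to the virtual server.
# Each message should be routed to a single pool member.
for i in 1 2 3; do
    echo "<134>Oct 11 22:14:0${i} host01 app: test message ${i}"
done | nc 10.10.20.201 515
```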
OWASP Automated Threats - Credential Stuffing (OAT-008)

Introduction:

In this OWASP Automated Threat article we'll be highlighting OAT-008 Credential Stuffing with some basic threat information as well as a recorded demo to dive deeper into the concepts. In our demo we'll show how Credential Stuffing works with automation tools to validate lists of stolen credentials, leading to manual Account Takeover and Fraud. We'll wrap it up by highlighting F5 Bot Defense to show how we solve this problem for our customers.

Credential Stuffing Description:

Lists of authentication credentials stolen from elsewhere are tested against the application’s authentication mechanisms to identify whether users have re-used the same login credentials. The stolen usernames (often email addresses) and password pairs could have been sourced directly from another application by the attacker, purchased in a criminal marketplace, or obtained from publicly available breach data dumps. Unlike OAT-007 Credential Cracking, Credential Stuffing does not involve any brute-forcing or guessing of values; instead, credentials used in other applications are tested for validity.

Likelihood & Severity

Credential stuffing is one of the most common techniques used to take over user accounts. It is dangerous to both consumers and enterprises because of the ripple effects of these breaches.

Anatomy of Attack

1. The attacker acquires usernames and passwords from a website breach, phishing attack, or password dump site.
2. The attacker uses automated tools to test the stolen credentials against many websites (for instance, social media sites, online marketplaces, or web apps).
3. If the login is successful, the attacker knows they have a set of valid credentials.

Now the attacker knows they have access to an account. Potential next steps include:

- Draining stolen accounts of stored value or making purchases.
- Accessing sensitive information such as credit card numbers, private messages, pictures, or documents.
- Using the account to send phishing messages or spam.
- Selling known-valid credentials to one or more of the compromised sites for other attackers to use.

OAT-008 Threat Summary

- OWASP Automated Threat (OAT) Identity Number: OAT-008
- Threat Event Name: Credential Stuffing
- Summary Defining Characteristics: Mass login attempts used to verify the validity of stolen username/password pairs.

OAT-008 Attack Demographics:

- Sectors Targeted: Entertainment, Financial, Government, Retail, Social Networking
- Parties Affected: Many Users, Application Owner
- Data Commonly Misused: Authentication Credentials
- Other Names and Examples: Account Checker Attack, Account Checking, Account Takeover, Login Stuffing, Password List Attack, Password Re-use, Use of Stolen Credentials
- Possible Symptoms: Sequential login attempts with different credentials from the same HTTP client (based on IP, User Agent, device, fingerprint, patterns in HTTP headers, etc.); high number of failed login attempts; increased customer complaints of account hijacking through the help center or social media outlets

Credential Stuffing Demo:

In this demo we will be showing how attackers leverage automation tools with increasing sophistication to execute credential stuffing against the sign-in page of a web application. We'll then have a look at the same attack with F5 Distributed Cloud Bot Defense protecting the application.

In Conclusion:

A common truism in the security industry says that there are two types of companies—those that have been breached, and those that just don’t know it yet.
As of 2022, we should be updating that to something like “There are two types of companies—those that acknowledge the threat of credential stuffing and those that will be its victims.” Credential stuffing will be a threat so long as we require users to log in to accounts online. The most comprehensive way to prevent credential stuffing is to use an anti-automation platform.

OWASP Links

- OWASP Automated Threats to Web Applications Home Page
- OWASP Automated Threats Identification Chart
- OWASP Automated Threats to Web Applications Handbook

F5 Related Content

- Deploy Bot Defense on any Edge with F5 Distributed Cloud (SaaS Console, Automation)
- F5 Bot Defense Solutions
- F5 Labs "I Was a Human CAPTCHA Solver"
- The OWASP Automated Threats Project
- OWASP Automated Threats - CAPTCHA Defeat (OAT-009)
- How Attacks Evolve From Bots to Fraud Part: 1
- How Attacks Evolve From Bots to Fraud Part: 2
- F5 Distributed Cloud Bot Defense
- F5 Labs 2021 Credential Stuffing Report
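As a rough illustration of the "high number of failed login attempts" symptom in the demographics list above, the iRule sketch below counts login POSTs per client IP. This is a visibility aid only, not a defense: the login path, threshold, and window are assumptions, and real credential stuffing campaigns rotate IPs and user agents, which is exactly why a dedicated anti-automation platform is recommended.

```
# Detection sketch only: count POSTs to an assumed login URI per client IP
# in the session table and log when a threshold is crossed within 5 minutes.
when HTTP_REQUEST {
    if { [HTTP::method] eq "POST" && [HTTP::path] eq "/login" } {
        set key "login_attempts_[IP::client_addr]"
        set count [table incr $key]
        table timeout $key 300
        if { $count > 20 } {
            log local0. "Possible credential stuffing from [IP::client_addr]: $count login POSTs in window"
        }
    }
}
```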
S3 Traffic Optimization with F5 BIG-IP & NetApp StorageGRID

The S3 protocol is seeing tremendous growth, often in projects involving AI where storage is a critical component, ranging from model training to inference with RAG. Model training projects usually need large, readily available datasets to keep GPUs operating efficiently. Additionally, storing sizable checkpoints—detailed snapshots of the model—is essential for these tasks. These are vital for resuming training after interruptions that could otherwise imperil weeks- or months-long projects.

Regardless of the AI ecosystem, S3 increasingly comes into play in some manner. For example, it is common for one tier of the storage involved, perhaps an outer tier, to be tasked with network loads to accrue knowledge sources. This data acquisition role is normally achieved today using S3 protocol transactions.

Why does an F5 BIG-IP, an industry-leading ADC solution, work so well for optimizing S3 flows directed at an S3 storage solution such as StorageGRID? An interesting aspect of S3 is just how easily extensible it is, to a degree other protocols may not be. Take, for example, routing protocols like OSPF or BGP-4, which are governed by RFCs controlled and published by the IETF (Internet Engineering Task Force). Unwavering compliance with RFC specifications is often non-negotiable with customers. Similarly, storage protocols are often governed by SNIA (Storage Networking Industry Association), and extensibility may not have been top of mind when standards were initially released.

S3, as opposed to these examples, is proactively steered by Amazon. In fact, S3 (Simple Storage Service) was one of the earliest AWS services, launched in 2006. When referring to “S3” today, the reference is often to the set of S3 API commands that the industry has adopted in general, not specifically the storage service of AWS. Amazon’s documentation for S3, the starting point of which is found here, is easily digested, not clinical sets of clauses and archaic “must” or “should” directives. An entire category of user-defined metadata is made available, and encourages unbounded extensibility:

“User-defined metadata is metadata that you can choose to set at the time that you upload an object. This user-defined metadata is a set of name-value pairs”

Why S3’s Flexibility Amplifies the Possibilities of BIG-IP and StorageGRID

The extensibility baked into S3 is tailor-made for the sophisticated and unique capabilities of BIG-IP. Specifically, BIG-IP ships with what has become an industry standard in data plane programmability: F5 iRules. For years, iRules have let IT administrators define specific, real-time actions based on the content and characteristics of network traffic. This enables them to do advanced tasks like content steering, protocol manipulation, customized persistence rules, and security policy enforcement.

StorageGRID from NetApp allows a site of clustered storage nodes to use the S3 protocol to both ingest content and deliver requested objects. Automatic backend synchronization allows any node to be offered up as a target by a server load balancer like BIG-IP. This allows overall storage node utilization to be optimized across the node set and scaled performance to reach the highest S3 API bandwidth levels, all while offering high availability to S3 API consumers. Should any node go offline, or be taken out of service for maintenance, laser-precise health checks from BIG-IP will remove that node from the pool used by F5, and customer S3 traffic will flow unimpeded.
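Such a health check can be as simple as a scripted HTTPS probe. The monitor below is a hedged sketch: the send string, receive pattern, and timings are illustrative, and StorageGRID's dedicated health endpoints may offer a better probe target.

```
# Sketch of an HTTPS monitor probing each StorageGRID node's S3 listener.
# Any well-formed HTTP response proves the S3 service is answering; even an
# unauthenticated request typically returns an HTTP error from a live node.
ltm monitor https /Common/storagegrid_s3_monitor {
    defaults-from /Common/https
    interval 5
    timeout 16
    send "GET / HTTP/1.1\r\nHost: s3.example.com\r\nConnection: close\r\n\r\n"
    recv "HTTP/1.1"
}
```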
A previous article, available here, dove into the setup details and value delivered out of the box by BIG-IP for S3 clusters such as NetApp StorageGRID.

Sample BIG-IP and StorageGRID Configuration – S3 Storage QoS Steering

One potential use case for BIG-IP is to expedite S3 writes such that the initial transaction is directed to the best storage node. Consider a workload that requires perhaps the latest in SSD technology, reflected by the lowest latency and highest IOPS. This will drive the S3 write (or read) towards the fastest S3 transactional completion time possible. A user can add this nuance, the need for this differentiated service level, easily as S3 metadata. The BIG-IP will make this custom field actionable, directing traffic to the backend storage node type required. As per normal StorageGRID behavior, any node can handle subsequent reads due to the backend synchronization that occurs after an S3 write.

Here is a sample scenario where storage nodes have been grouped into QoS levels (gold, silver, and bronze) to reflect the performance of the media in use. To demonstrate this use case, a lab environment was configured with three pools. This simulated a production solution where each pool could have differing hardware vintages and performance characteristics.

To understand the use of S3 metadata fields, one may look at a simple packet trace of an S3 download (GET) directed towards a StorageGRID solution. Out of the box, a few header fields will indicate that the traffic is S3. By disabling TLS, a packet analyzer on the client, such as Wireshark, can display the User-Agent field value. Often S3 traffic will originate with specific S3 graphical utilities like S3 Browser, Cyberduck, or FileZilla. The indicator of S3 is also found in the presence of common HTTP X-headers; in the example we observe x-amz-content-sha256 and x-amz-date.

The guidance for adding one’s own headers is to preface the new headers with “x-amz-meta-”. In this exercise, we have chosen to include the header “x-amz-meta-object-storage-qos” and have BIG-IP search out the values gold, silver, and bronze to select the appropriate storage node pool. Traffic without this header, including S3 control plane actions, will be proxied by default to the bronze pool.

To exercise the lab setup, we will use s3cmd, a free command-line utility available for Linux and macOS with about 60 command-line options; s3cmd easily allows for header inclusions. Here is our example, which will move a 1-megabyte file using S3 into a StorageGRID-hosted bucket on the “silver” pool:

```
s3cmd put large1megabytefile.txt s3://mybucket001/large1megabytefile.txt \
    --host=10.150.92.75:443 --no-check-certificate \
    --add-header="x-amz-meta-object-storage-qos:silver"
```

BIG-IP Setup and Validation of QoS Steering

The setup of a virtual server on BIG-IP is straightforward in our case. S3 traffic will be accepted on TCP port 443 of the server address 10.150.92.75, and independent TLS sessions will face both the client and the backend storage nodes. Three pools have been defined, one per QoS tier. An iRule can be written from scratch or, alternatively, can easily be created using AI. The F5 AI Assistant, a chat interface found within the Distributed Cloud console, is extensively trained on iRules and was used to create the QoS-steering iRule.
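The iRule itself appears only as a screenshot in the source article; the sketch below reconstructs the behavior described, with pool names (pool_gold, pool_silver, pool_bronze) assumed rather than taken from the original:

```
# Reconstructed sketch: steer S3 requests to a pool based on the custom
# x-amz-meta-object-storage-qos header. Requests without the header,
# including S3 control plane calls, fall through to the bronze pool.
when HTTP_REQUEST {
    switch [string tolower [HTTP::header value "x-amz-meta-object-storage-qos"]] {
        "gold"   { pool pool_gold }
        "silver" { pool pool_silver }
        default  { pool pool_bronze }
    }
}
```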
Upon issuing the client’s s3cmd command to upload the 1-megabyte file with the “silver” QoS requirement, we immediately observe traffic to the silver pool, having just zeroed all counters. Converting the proxied 8.4 million bits to megabytes confirms that the 1,024 KB file indeed was sent to the correct pool. Other S3 control plane interactions make up the small amount of traffic proxied to the bronze (spinning disks) pool.

BIG-IP and XML Tag Monitoring for Security Purposes

Beyond HTTP header metadata, S3 user consoles can be harnessed to apply content tags to StorageGRID buckets and to objects within buckets. In the following simple example, a bucket “bucketaugust002” has been tied to various XML user-defined tags, such as “corporate-viewer-permissions-level: restricted”.

When an S3 client lists objects within a bucket tagged as restricted, it may be prudent to log this access. One approach would be iRules, but it’s not the optimal path, as the entirety of the HTTP response payload would need to be scoured for bucket listings; iRules could then apply REGEX scanning for “restricted”, as one simple example. A better approach is to use the Data Guard feature of the BIG-IP Advanced WAF module. Data Guard is built into the TMM (Traffic Management Microkernel) of BIG-IP at great optimization, whereas the TCL engine of iRules is not. Thus, iRules are fine when the exact offset of a pattern is known, and really good with request and response header manipulation, but for launching scans throughout payloads Data Guard is even better.

Within Advanced WAF, one need simply build a policy and, under the “Advanced” option, put in Data Guard strings of interest in a REGEX format; in our example, (?:^|\W)restricted(?:$|\W). Although blocking is available as an enforcement mode, in our use case only alerting was sought. As an S3 Browser user explores the contents of a bucket flagged as “restricted”, a corresponding alert log entry is seen by an authorized BIG-IP administrator.

Other use cases for Data Guard and S3 include scanning of textual data objects being retrieved, specifically for the presence of fields that might be classified as PII, such as credit card numbers, US Social Security numbers, or any other custom value an organization is interested in. A multitude of online resources provide REGEX expressions matching a variety of potentially sensitive information mandated through international standards. Occurrences within retrieved data, such as data formats matching drivers’ licenses, national health care values, and passport IDs, are all quickly keyed in on. The ability of Data Guard to obfuscate sensitive information, beyond blocking or alerting, is a core reason to use BIG-IP in line with your data.

BIG-IP Advanced Traffic Management - Preserve Customer Performance

The BIG-IP offers a myriad of safeguards and enforcement options around high-rate traffic management. The risks today include any one traffic source maliciously or inadvertently overwhelming the StorageGRID cluster with unreasonable loads. The possible protections are touched upon in this section, starting with some simple features enabled per virtual server with a few clicks. A BIG-IP deployment can be as simple as one virtual server directing all traffic to a backend cluster of StorageGRID nodes.
However, just as easily, it could be another extreme: a virtual server entry reserved for one company, one department, or even one specific high-value client machine.

Several options exist at the virtual server level. A virtual server may have a cap set on how many connections are permitted, normally TCP port 443-based for S3. This is a primitive but effective safeguard against denial-of-service attacks that attempt to overwhelm with bulk connection loads. An eviction policy can be enabled that closes connections exhibiting negative characteristics, like clients who become unresponsive. Lastly, connection rate safeguards are also available and can be applied with consideration of source IP addresses; this can counteract singular rogue clients in larger deployments.

Another set of advanced features serves to protect the StorageGRID infrastructure by clamping down on the total traffic that the BIG-IP will forward to a cluster. These features are interesting because many apply not just at the per-virtual-server level; some controls can be applied to the entirety of traffic being sent on particular VLANs or network interfaces that might interconnect to backend pools. BIG-IP’s advanced features include the aptly named Bandwidth Controllers, which come in two varieties: static and dynamic.

Static is likely the correct choice for limiting the total bandwidth a given virtual server can direct at the backend pool of storage nodes. The setup is trivial: just visit the “Acceleration” tab of BIG-IP and create a static bandwidth controller of the magnitude desired. At this point, nothing further is required beyond simply choosing the policy from the virtual server's advanced configuration screen (a tmsh equivalent is sketched at the end of this section).

For even more agile behavior, one might choose to use a dynamic bandwidth controller. The options now extend to individual per-user traffic metering, illustrating the degree to which traffic management can be imposed by BIG-IP with dynamic controllers. With ancillary BIG-IP modules like Access Policy Manager (APM), traffic can be tied back to individual authenticated users by integration with common Identity Provider (IdP) solutions like Microsoft Active Directory (AD) servers or the SSO framework standard SAML 2.0; others exist as well. With LTM by itself, however, we can consider users as sessions consisting of source IP and source (ephemeral) TCP port pairs, and can apply dynamic bandwidth controls upon this definition of a user.

The resulting dynamic bandwidth solution can be tied to any virtual server with the following few lines of a simple iRule:

```
when CLIENT_ACCEPTED {
    set mycookie "[IP::remote_addr]:[TCP::remote_port]"
    BWC::policy attach No_user_to_exceed_5Mbps $mycookie
}
```
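For the static case described above, the GUI steps have a tmsh equivalent along these lines (a hedged sketch; the policy name, the 2 Gbps rate, and the virtual server name s3_vip are illustrative):

```
# Create a static bandwidth controller capping aggregate throughput,
# then attach it to the S3 virtual server. Names and rate are placeholders.
tmsh create net bwc policy /Common/s3_2gbps_cap max-rate 2000000000
tmsh modify ltm virtual /Common/s3_vip bwc-policy /Common/s3_2gbps_cap
```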
Summary

The NetApp StorageGRID multi-node, S3-compatible object storage solution pairs well with a high-performance server load balancer, making the F5 BIG-IP a good fit. Various features within the BIG-IP toolset can open up a broad set of StorageGRID’s practical use cases. As documented, a BIG-IP iRule can isolate any HTTP-level S3 header, including customized user metadata fields. This S3 metadata then becomes actionable; the resulting options are bounded only by what can be imagined. One potential option, selecting a specific pool of storage nodes based upon a signaled QoS value, was successfully tested.

Other use cases include using the advanced WAF capabilities of BIG-IP, such as the Data Guard feature, to analyze real-time response content and alert upon XML tags, or potentially obfuscate or entirely block transfers containing sensitive data fields. A rich set of traffic safeguards was also discussed, including per-virtual-server connection limits and advanced bandwidth controls. When combined with optimizations discussed in other articles, such as the OneConnect profile that can drastically suppress the number of concurrent TCP sessions handled by StorageGRID nodes, one quickly sees that the attainable performance and security improvements make the suggested architecture compelling.
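For completeness, enabling the OneConnect behavior mentioned above is typically a two-line change (a hedged sketch; the profile and virtual server names are placeholders, and the appropriate source mask depends on your SNAT design):

```
# Create a OneConnect profile so BIG-IP reuses idle server-side TCP
# connections to StorageGRID nodes across many client requests, then
# attach it to the S3 virtual server. Names here are placeholders.
tmsh create ltm profile one-connect /Common/s3_oneconnect source-mask 0.0.0.0
tmsh modify ltm virtual /Common/s3_vip profiles add { /Common/s3_oneconnect }
```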