application delivery
F5 upgrades
We are upgrading F5 tenants from 17.1 to 17.5. We have two rSeries pairs, one pair at each data center (e.g., main and colo). Within each data center the pair runs HA active/standby, and all four units are in a GSLB sync group. Each host runs one tenant. During the upgrade process I disabled GTM sync on the F5 that is about to be upgraded. Is that recommended? I plan to move traffic to the active box at the colo from the main data center, and I won't be making any config changes. After the applications move to this side, the LTM pools show up here and global-availability load balancing will mark the upgraded side up. I just want to confirm: if GTM sync is disabled, do we need to leave it disabled on all units and only re-sync after all four F5s are upgraded? And during this process, can we still make changes to LTM pools within a data center? Thank you
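For reference, toggling GSLB synchronization from the shell looks roughly like this — a sketch using the stock gtm global-settings options; verify the option names on your version before relying on it:

```bash
# Check the current GSLB synchronization state
tmsh list gtm global-settings general synchronization

# Disable GSLB config sync on the unit about to be upgraded
tmsh modify gtm global-settings general synchronization no

# Re-enable once all four units are on 17.5
tmsh modify gtm global-settings general synchronization yes
```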
DNSSEC deployment
Steps to configure DNSSEC:
1. Create the KSK and ZSK.
2. Create the DNSSEC zone.
3. Register the DS record with the registrar.
I have GTM config sync between my DC and DRC devices and it is intermittent — does anybody know what the problem could be? Also, how do I roll over the KSK automatically, and do I then have to register the DS record with the registrar again as in step 3?
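A rough tmsh sketch of steps 1–2 above — the key names and algorithm are hypothetical, and the exact attributes should be checked against your version's `ltm dns dnssec` schema:

```bash
# Step 1: create a key-signing key and a zone-signing key (names are examples)
tmsh create ltm dns dnssec key example.com-ksk key-type ksk algorithm rsasha256
tmsh create ltm dns dnssec key example.com-zsk key-type zsk algorithm rsasha256

# Step 2: create the DNSSEC zone and attach both keys
tmsh create ltm dns dnssec zone example.com keys add { example.com-ksk example.com-zsk }

# Step 3 is external: export the DS record derived from the KSK
# (visible in the GUI under the zone) and submit it to the registrar.
```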
R2600: how to deal with VLANs in a tenant's partitions
Hey all, I'm in the process of replacing an old 4200 with a new rSeries r2600 for a customer. The current config on the 4200 consists of several partitions, and in the tenant to be deployed we want to keep using partitions. When deploying a tenant on the r2600, I can select which VLANs I want to pass through to the tenant, and they then automatically show up in /Common of the tenant. So far it all works nicely. However, I need some of these VLANs to end up in a specific partition. How do I go about that? I can manually create these VLANs under the desired partition, but I don't think that's possible if they already exist in /Common. So, catch-22? I won't return to the client's site for a while, and I currently don't have access to a home lab or anything of the like, so I was hoping to get some clarity on this to better prepare myself. Regards,
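One pattern worth verifying in a lab before the site visit: objects in /Common are referenceable from other partitions, so objects in the target partition may simply be able to point at the tenant-delivered VLAN rather than recreating it. A sketch with hypothetical names:

```bash
# Create a self IP inside the application partition that references
# the VLAN the F5OS layer delivered into /Common (names are examples)
tmsh create net self /CustomerA/self-vlan10 address 10.1.10.5/24 vlan /Common/vlan10 allow-service none
```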
f5 r5600 appliance issue with adding trunk to vlan
So I get this error when I'm trying to add a trunk to the HA VLAN that I created: ERROR: Unable to find interface object for configured trunk member 3.0. What does this mean? When I look under the interfaces for the tenant (17.5.0), I do not see the actual interfaces, which are supposed to be 1.0, 2.0, 3.0 ... 10.0; instead I only see 0.1, 0.2, 0.3 ... 0.6. Is that the reason why it shows the error? Also, why does it even show 0.1, 0.2 ... 0.6 instead of 1.0, 2.0, 3.0 ... 10.0? This makes no sense to me. Thank You
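A likely explanation, hedged: on rSeries the physical ports (1.0–10.0) belong to the F5OS platform layer, and a tenant only sees host-mapped virtual interfaces (0.x), so a trunk member of 3.0 cannot resolve inside the tenant — LAGs over physical ports are built at the F5OS layer and handed to the tenant as VLANs. You can confirm what the tenant actually sees from its own shell:

```bash
# Inside the tenant: list the interfaces the guest was actually given.
# rSeries tenants see host-mapped names (0.x), never the physical 1.0-10.0 ports.
tmsh list net interface
```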
F5 CNF/BNK issue with DNS Express tmm scaling and zone notifications
I saw an interesting issue with DNS Express on Next for Kubernetes while playing in a test environment with two TMM pods in the same namespace. DNS zone mirroring is done by the zxfrd pod, and you need to create a listener ("F5BigDnsApp"), as shown in https://clouddocs.f5.com/cnfs/robin/latest/cnf-dnsexpress.html#create-a-dns-zone-to-answer-dns-queries, for the optional NOTIFY that feeds zone updates through the TMM to the zxfrd pod.

The issue happens when you have two or more TMMs: the "F5BigDnsApp", which acts like a virtual server/listener, causes an ARP conflict on the internal VLANs, because the two TMMs on two different Kubernetes/OpenShift nodes advertise the same IP address at layer 2. This shows up in "kubectl logs" ("oc logs" on OpenShift) on the TMM pods, which mention the duplicate ARP detected. Interestingly, the same does not happen for the normal listener on the external VLAN (the one that accepts and answers client DNS queries); I think ARP is suppressed by default for the external listener, which can live on two or more TMMs because BGP ECMP is used by design to distribute traffic to the TMMs.

I see four possible solutions:
1. Be able to control ARP on the "F5BigDnsApp" CRD for internal as well as external VLANs (with BGP ECMP then used on the server side too).
2. Be able to pin the "F5BigDnsApp" to just one TMM even when there are more.
3. Allow the listener IP address to be outside the internal self-IP range. However, as I see with "kubectl logs" on the ingress controller (f5ing-tmm-pod-manager), the config is then not pushed to the TMM — "configview" from the debug sidecar container on the TMM pods shows no listener at all. The manager logs suggest this is because the listener IP is not within the self-IP range on the internal VLAN. This may be a system limitation, or nobody thought of this use case; on BIG-IP it is supported to have a VIP outside the self-IP range precisely so that it is not advertised via ARP.
4. The solution that works at the moment: run the TMMs in different namespaces on different Kubernetes nodes, with anti-affinity rules that place each TMM on a different node even across namespaces, by matching a configured label (see the example below). Maybe the current intended design is one zxfrd pod with one TMM pod per namespace, but then auto-scaling may not work, since autoscale would create a new TMM pod in the same namespace.

Example:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: tmm
      # Match Pods in any namespace that have this label
      namespaceSelector: {}  # empty selector = all namespaces
      topologyKey: "kubernetes.io/hostname"
```

It should also be checked whether the zxfrd pod can push the DNS zone into the RAM of more than one TMM pod — maybe only one-to-one is currently supported. Interesting stuff that I just wanted to share from testing things out 😄
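For anyone trying to reproduce this, the duplicate-ARP symptom is visible straight from the pod logs — a sketch, with namespace, label, and container name as placeholders to adjust for your deployment:

```bash
# Grep each TMM pod's logs for the duplicate-ARP messages described above
# (namespace "cnf-dns", label "app=f5-tmm", and container "tmm" are assumptions)
kubectl get pods -n cnf-dns -l app=f5-tmm -o name | while read -r pod; do
  kubectl logs -n cnf-dns "$pod" -c tmm | grep -i "duplicate"
done
```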
F5 Kubernetes CNF/BNK GSLB functionality?
Hello everyone, is there F5 CNF/BNK GSLB functionality? I see the containers gslb-engine (probably the main GTM/DNS module) and gslb-probe-agent (probably the big3d in a container/pod), but no CR/CRD definitions for them. Can this data be shared between F5 TMMs in different clusters (something like DNS sync groups), or used for probing normal F5 BIG-IP devices (not in Kubernetes)?
https://clouddocs.f5.com/cnfs/robin/latest/cnf-software-install.html
https://clouddocs.f5.com/cnfs/robin/latest/intro.html
Dealing with iRule $variables for HTTP2 workload while HTTP MRF Router is enabled
Hi Folks,

I'd like to start a discussion on how to deal with iRule $variables that traverse independent iRule events, in combination with HTTP2 workload and the HTTP MRF Router enabled (e.g. HTTP2 full-proxy mode, or optionally HTTP2 gateway mode). In detail, I'd like to discuss the observations and some solutions for doing these things within an iRule:

- Analyse a received HTTP2 request
- Switch pools based on the received HTTP2 request
- Switch server-side SSL profiles based on the HTTP2 request
- Inject an SNI value into the server-side CLIENTHELLO based on a free-text value (e.g. [HTTP::host])

For HTTP/1.x workload this was basically bread-and-butter stuff and didn't require me to understand any rocket science. But with HTTP2, those rather simple tasks have somehow changed...

Some background information on HTTP2 iRules

In HTTP2, a single TCP connection is used to carry multiple HTTP2 streams in parallel:

- Client-Side TCP Connection
  - HTTP2 stream ID 1 (Request for /)
  - HTTP2 stream ID 3 (Request for /favicon.ico)
  - HTTP2 stream ID 5 (Request for /background.jpg)
  - ...

To handle such parallel HTTP2 streams on a single TCP connection, LTM got some interesting changes for HTTP2 workloads, which affect the way $variables are separated and used within iRules. The iRule snippet below illustrates the change...

```tcl
when CLIENT_ACCEPTED {
    set var_conn 1
}
when HTTP_REQUEST {
    incr var_conn
    log local0.debug $var_conn
}
```

In the traditional HTTP/1.x world, the number stored in the $var_conn variable would have been increased with every keep-alive'd HTTP request sent through the same TCP connection. But this is not the case for HTTP2 workload with the HTTP MRF Router enabled: the snippet above will always log "2", independently of the total number of HTTP2 streams already sent over the very same TCP connection.

When the HTTP MRF Router is enabled, each individual HTTP2 stream creates its own "child" $variable environment, using the TCP connection environment as a carbon copy to inherit its $variables. The $variables within a given HTTP2 stream stay isolated for the entire duration of that HTTP2 stream (including its response) and won't mix with the $variables of other HTTP2 streams or of the underlying TCP connection. So far so good...

Info: The text above is only valid for HTTP2 workload with the HTTP MRF Router enabled. HTTP2 gateway mode without the HTTP MRF Router behaves slightly differently: an individual HTTP2 stream is assigned to a child environment shared by a bunch of concurrently active HTTP2 streams. The observations and problems outlined below are not valid for scenarios where the HTTP MRF Router is disabled — without the HTTP MRF Router you won't run into the issues discussed below...

The "awkwardness" of the HTTP MRF Router

A challenging scenario (which has caused me sleepless nights) is shown below...

```tcl
when CLIENT_ACCEPTED {
    set conn_var 1
}
when HTTP_REQUEST {
    set ssl_profile "MyCustomSSLProfile"
}
when SERVER_CONNECTED {
    if { [info exists ssl_profile] } then {
        catch { SSL::profile $ssl_profile }
    } else {
        log local0.debug "No SSL Profile | Vars: [info vars]"
    }
}
```

The log output will always be "No SSL Profile | Vars: conn_var". The SERVER_CONNECTED event somehow does not have access to the $ssl_profile variable which was just set by the HTTP_REQUEST event.
This implicitly means that the SERVER_CONNECTED event is not executed in the carbon-copied $variable environment of the HTTP2 stream which triggered the LB decision to open a new connection. Let's try to figure out in which $variable environment the SERVER_CONNECTED event is executed...

```tcl
when CLIENT_ACCEPTED {
    set x "Hello"
}
when HTTP_REQUEST {
    set logstring ""
    if { [info exists x] } then { lappend logstring $x }
    if { [info exists y] } then { lappend logstring $y }
    log local0.debug $logstring
}
when SERVER_CONNECTED {
    set y "World"
}
```

The log output of the iRule above will be "Hello" on the first HTTP2 stream request, and "Hello World" on consecutive HTTP2 streams received after the SERVER_CONNECTED event has been executed. I immediately thought: "Okay, we must be back on the TCP connection layer environment during the SERVER_CONNECTED event!?" The first HTTP2 stream gets a carbon copy of the TCP connection environment after CLIENT_ACCEPTED was executed with only $x set; the SERVER_CONNECTED event then adds $y to the TCP connection environment; and subsequent HTTP2 streams get a carbon copy of the TCP connection environment with both $x and $y set. Sounds somehow reasonable... but then I tried this...

```tcl
when HTTP_REQUEST {
    if { [info exists y] } then {
        log local0.debug $y
    } else {
        log local0.debug "???"
    }
}
when SERVER_CONNECTED {
    set y "Hello World"
}
```

The log output will be "???" on the initial HTTP2 stream (as expected). But after SERVER_CONNECTED has been executed, the log will still output "???" on every subsequent HTTP2 stream (duh! wait? what?). This behavior basically means that the SERVER_CONNECTED event is — in contrast to what I initially thought — not executed in the original $variable environment of the underlying TCP connection.

At this point I can only assume what is happening behind the scenes: the SERVER_CONNECTED event also runs in a carbon-copy environment, into which the original $variable environment of our TCP connection gets copied/ref-linked (but only if any $variables were set), and any changes made in the carbon-copy environment get copied/ref-linked back to the original $variable environment of our TCP connection (but again only if any $variables were initially set). With enough imagination this sounds at least explainable... but seriously... is it really working like that?

Note: At least SERVER_CONNECTED and all SERVERSSL_* related events are affected by this behavior. I did not test whether other events are affected too.

Question 1: If someone has insights into what is happening here, or other creative ideas to explain the outcome of my observations, I would be more than happy to get some feedback. It's driving me nuts to get fooled by the iRules I've shown...

Let's discuss some solutions...

Disclaimer: Don't use any of the iRule code below in your production environment unless you understand the impact. I crashed my dev environment more than once while exploring possibilities. Right now it's way too experimental to recommend anything for a production environment. And please don't blame me if you missed reading this warning... 😉

I have already reached the point where I've simply accepted the fact that you can't pass a regular $variable between a given HTTP2 stream and the SERVER_CONNECTED or SERVERSSL_* events. But I still need to get my job done, so I started to explore possibilities to interact between an HTTP2 stream and the server-side events.
My favorite solution so far exports an $array() variable from the HTTP2 stream during LB_SELECTED into a [table -subtable] stored on the local TMM core. The SERVER_CONNECTED event then looks up the [table -subtable] and restores the $array() variable.

```tcl
when CLIENT_ACCEPTED {
    #################################################
    # Define a connection-specific label
    set conn(label) "[IP::client_addr]|[TCP::client_port]|[IP::local_addr]:[TCP::local_port]"
}
when HTTP_REQUEST {
    #################################################
    # Clean vars from previous requests
    unset -nocomplain temp conf
    #################################################
    # Define values for export
    set conf(SSL_Profile) "/Common/serverssl"
    set conf(SNI_Value) [HTTP::host]
}
when LB_SELECTED {
    #################################################
    # Export conf() to local TMM subtable
    if { [info exists conf] } then {
        if { [catch {
            #################################################
            # Try to export conf() to local TMM subtable
            table set -subtable $static::local_tmm_subtable \
                "$conn(label)|[HTTP2::stream]" \
                [array get conf] \
                indef \
                30
        }] } then {
            #################################################
            # Discover subtable on local TMM core (once after reboot)
            set tmm(table_iterations) [expr { [TMM::cmp_count] * 7 }]
            for { set tmm(x) 0 } { $tmm(x) < $tmm(table_iterations) } { incr tmm(x) } {
                set tmm(start_timestamp) [clock clicks]
                table lookup -subtable "tmm_local_$tmm(x)" [clock clicks]
                set tmm(stop_timestamp) [clock clicks]
                set tmm_times([expr { $tmm(stop_timestamp) - $tmm(start_timestamp) }]) $tmm(x)
            }
            set static::local_tmm_subtable "tmm_local_$tmm_times([lindex [lsort -increasing -integer [array names tmm_times]] 0])"
            #################################################
            # Restart export of conf() to local TMM subtable
            table set -subtable $static::local_tmm_subtable \
                "$conn(label)|[HTTP2::stream]" \
                [array get conf] \
                indef \
                30
        }
    }
}
when SERVER_CONNECTED {
    #################################################
    # Import conf() from local TMM subtable
    clientside {
        catch {
            array set conf [table lookup \
                -subtable $static::local_tmm_subtable \
                "$conn(label)|[HTTP2::stream]"]
        }
    }
    #################################################
    # Apply the server-side SSL profile from conf()
    if { [info exists conf(SSL_Profile)] } then {
        catch { SSL::profile $conf(SSL_Profile) }
    } else {
        SSL::profile disable
    }
}
when SERVERSSL_CLIENTHELLO_SEND {
    #################################################
    # Inject SNI value based on conf() variable
    if { [info exists conf(SNI_Value)] } then {
        SSL::extensions insert [binary format SSScSa* \
            0 \
            [expr { [set temp(sni_length) [string length $conf(SNI_Value)]] + 5 }] \
            [expr { $temp(sni_length) + 3 }] \
            0 \
            $temp(sni_length) \
            $conf(SNI_Value)]
    }
}
```

Besides the slightly awkward approach of storing things in a [table -subtable] to interact between the iRule events, an error message is raised every time you save the iRule:

Dec 15 22:17:44 kw-f5-dev.itacs.de warning mcpd[5551]: 01071859:4: Warning generated : /Common/HTTP2_FrontEnd:167: warning: [The following errors were not caught before. Please correct the script in order to avoid future disruption. "{badEventContext {command is not valid in current event context (SERVER_CONNECTED)} {5213 13}}"5105 132][clientside {

During iRule execution, however, it seems to be absolutely fine to call the [clientside] command during the SERVER_CONNECTED event to access the [HTTP2::stream] ID which triggered the LB_SELECTED event.
Question 2: Do you know other approaches to deal with the outlined HTTP MRF Router awkwardness? Do you have any doubts that the approach above will run stably? Do you have any tips on how to improve the code? Should I be concerned that MCPD complains about the syntax, or should I simply wrap [clientside] into an [eval $cmd] to trick MCPD?

I know the post got basically "tl;dr" long, but this problem bothers me quite a lot. A customer is already waiting for a stable solution... 😞

Cheers, Kai
F5 BIG-IP Multi-Site Dashboard
A comprehensive real-time monitoring dashboard for F5 BIG-IP Application Delivery Controllers, featuring multi-site support, DNS hostname resolution, member state tracking, and advanced filtering capabilities. A 170 KB modular JavaScript application runs entirely in your browser, served directly from the F5's high-speed operational dataplane. One or more sites operate as dashboard front-ends serving the dashboard interface (HTML, JavaScript, CSS) via iFiles, while other sites operate as API hosts providing pool data through optimized JSON-based dashboard API calls. This provides unified visibility across multiple sites from a single interface without requiring even a read-only account on any of the BIG-IPs, allowing you to switch between locations and see consistent pool, member, and health status data with almost no latency and very little overhead.

Think of it as an extension of the F5 GUI: near real-time state tracking, DNS hostname resolution (if configured), advanced search/filtering, and the ability to see exactly what changed and when. It gives application teams and operations teams direct visibility into application pool state without waiting for answers from F5 engineers, eliminating the organizational bottleneck that slows down troubleshooting when every minute counts.

https://github.com/hauptem/F5-Multisite-Dashboard
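For context, the iFile mechanism the front-end relies on looks roughly like this — a sketch with hypothetical file names; see the repository for the actual deployment steps:

```bash
# Import the dashboard HTML into the system file store (path/name are examples)
tmsh create sys file ifile dashboard-html source-path file:/var/tmp/dashboard.html

# Expose it to iRules, which can then serve it with [ifile get "dashboard-html"]
tmsh create ltm ifile dashboard-html file-name dashboard-html
```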
F5 MCP (Model Context Protocol) Server
This project is an MCP (Model Context Protocol) server designed to interact with F5 devices using the iControl REST API. It provides a set of tools to manage F5 objects such as virtual servers (VIPs), pools, iRules, and profiles. The server is implemented using the FastMCP framework and exposes functionality for creating, updating, listing, and deleting F5 objects.
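For a sense of what such tools wrap, this is the style of iControl REST call involved — the endpoint is the standard LTM virtual-server collection; the host and credentials are placeholders:

```bash
# List virtual servers via iControl REST (the kind of call an MCP tool would wrap)
curl -sku admin:changeme https://bigip.example.com/mgmt/tm/ltm/virtual
```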
XC Distributed Cloud and how to keep the source IP from changing with Customer Edges (CE)!
The best solution will always be for the application to stop tracking users based on something as primitive as an IP address. Sometimes the issue lies in the load balancer or ADC behind the XC RE: if persistence there is based on the source IP address and the device is a BIG-IP, change it to cookie- or universal-based persistence, or SSL-session-based if the load balancer does no decryption and works purely at the TCP/UDP layer. Since an XC Regional Edge (RE) has many IP addresses it can use to connect to origin servers, adding a CE for legacy apps is a good option to keep the source IP from changing across the same client's HTTP requests during a session/transaction.

Before going through this article, I recommend reading the links below:
F5 Distributed Cloud – CE High Availability Options: A Comparative Exploration | DevCentral
F5 Distributed Cloud - Customer Edge | F5 Distributed Cloud Technical Knowledge
Create Two Node HA Infrastructure for Load Balancing Using Virtual Sites with Customer Edges | F5 Distributed Cloud Technical Knowledge

RE to CE cluster of 3 nodes

The new SNAT prefix option under the origin pool ensures that no matter which CE connects to the origin pool, the same IP address is seen by the origin. Be careful: if you configure more than a single /32 IP, the client may again get a different IP address each time. This SNAT option can also lead to "inet port exhaustion" (as it is called on F5 BIG-IP) if there are too many connections to the origin server — be careful, as the SNAT option was added primarily for this use case. There was an older option called "LB source IP persistence", but better not to use it, as it was not as optimized and clean as this one.

RE to 2 CE nodes in a virtual site

The same SNAT pool option is not allowed for a virtual site made of two standalone CEs. For this we can use the ring hash algorithm. Why does this work? Well, as Kayvan explained to me, the hashing of the origin takes the CE name into account, so the same origin under two different CEs gets the same ring hash, and the same client source IP address is sent to the same CE to access the origin server. This will not work for a single 3-node CE cluster, as all three nodes have the same name. I have seen 503 errors when ring hash is enabled under the HTTP LB, so enable it only under the XC route object and the origin pool attached to it!

CE hosted HTTP LB with Advertise policy

In XC with CEs you can do HA with a 3-node CE cluster using layer 2 HA based on VRRP and ARP, or layer 3 persistence based on BGP, which works for a 3-node CE cluster or 2 CEs in a virtual site, with control options like weight, AS-path prepend, or local preference at the router level. For layer 2, I will just mention that you need to allow the VRRP multicast address 224.0.0.18 if you are migrating from BIG-IP HA, and that XC selects one CE to hold the active IP, which is visible in the XC logs; at the moment this selection for some reason cannot be controlled. If a CE can't reach the origin servers in the origin pool, it should stop advertising the HTTP LB IP address via BGP. For these options, Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment is a great example, as it shows ECMP BGP; with the BGP attributes you can easily select one CE to be active and process connections, so that just one IP address is seen by the origin server. When a CE gets traffic, by default it prefers to send it to the origin itself, as "Local Preferred" is enabled by default under the origin pool.
In public clouds like AWS/Azure, a cloud-native LB is simply placed in front of the 3-node CE cluster, and the solution there is simple: just configure persistence on that LB. Public clouds do not support ARP, so forget about layer 2 and work with the native LB that load-balances between the CEs 😉

Hope you enjoyed this article!