Converting a BIG-IP Maintenance Page iRule to Distributed Cloud using App Stack
If you are familiar with BIG-IP, you are probably also familiar with its flexible and robust iRule functionality. In fact, I would argue that iRules make BIG-IP the Swiss Army knife that it is: if there is ever a need for advanced traffic manipulation, you can usually come up with an iRule to solve the problem. F5 Distributed Cloud (XC) has its own suite of tools to help in this regard. If you need to do some sort of traffic manipulation or routing, you can usually handle that with Service Policies or simply with Routes. Even with these features, however, there are going to be some cases where iRule functionality from the BIG-IP cannot be reproduced directly in XC. When this happens, we switch to using App Stack, which is XC's version of a Swiss Army knife.

In this article, I want to walk through an example of how you can leverage XC's App Stack for a specific iRule conversion use case: displaying a custom maintenance page when all pool members are down. For reference, here is the iRule:

```
when LB_FAILED {
    if { [active_members [LB::server pool]] == 0 } {
        if { [string tolower [HTTP::host]] contains "example.com" } {
            if { [HTTP::uri] ends_with "SystemMaintenance.jpg" } {
                HTTP::respond 200 content [ifile get "SystemMaintenance.jpg"] "Content-Type" "image/jpg"
            } else {
                HTTP::respond 200 content {<!DOCTYPE html>
<html lang="en">
<head>
    <title>System Maintenance</title>
    <style type="text/css">
        .base { font-family: 'Tahoma'; font-size: large; }
    </style>
</head>
<body>
    <br>
    <center><img alt="sad" height="200" src="SystemMaintenance.jpg" width="200" /></center><br>
    <center><span class="base">This application is currently under system maintenance.</span></center>
    <br>
    <center><span class="base">All services will be back online in a few minutes.</span></center>
</body>
</html>}
            }
        }
    }
}
```

When dissecting this iRule, you can see we have to solve for the following:

- Trigger the maintenance page when all pool members are down
- Serve local files (images, CSS, etc.)
- Display the static HTML page

So, how do we do this? Well, App Stack allows us to deploy and host a container in Distributed Cloud. So we can easily create a simple container (using NGINX for bonus points!) that holds all of these images, stylesheets, HTML files, etc., and adjust our pools so that traffic falls back to this container when required. Let's deep-dive into the step-by-step process.

Step by Step Walk-through: Container Creation

First, we have to create our container. I'm not going to go too deep into how to create a container in this article, but I will highlight the main steps I took. To start, I simply extracted the HTML from the iRule above and saved all the required files (images, stylesheets, etc.) in one directory. Since I am adding NGINX to the container, I must also create and include an nginx.conf file in this directory. Below is my configuration:

```
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid       /tmp/nginx.pid;

events {
    worker_connections 1024;
}

http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path       /tmp/proxy_temp_path;
    fastcgi_temp_path     /tmp/fastcgi_temp;
    uwsgi_temp_path       /tmp/uwsgi_temp;
    scgi_temp_path        /tmp/scgi_temp;

    include /etc/nginx/mime.types;

    server {
        listen 8080;

        location / {
            root  /usr/share/nginx/html/;
            index index.html;
        }

        location ~* \.(js|jpg|png|css)$ {
            root /usr/share/nginx/html/;
        }
    }

    sendfile          on;
    keepalive_timeout 65;
}
```

There really isn't much to the NGINX configuration for this example, but keep in mind that you can expand on it and make it much more robust for other use cases.
(One note about the configuration above: you will see /tmp paths mentioned. These are required since our container will run as a non-root user. For more information, see the NGINX documentation here: https://hub.docker.com/_/nginx)

Finally, I included a Dockerfile with my requirements for NGINX, exposing port 8080. Once that was all set, I built my container and pushed it to Docker Hub as a private repository.

App Stack Deployment

Now that we have the container created and uploaded to Docker Hub, we are ready to bring it into XC. Start by opening the F5 XC Console and navigating to the Distributed Apps tile. Navigate to Applications -> Container Registries, then click Add Container Registry. Here we just have to add a name for the Container Registry, our Docker Hub username, "docker.io" for the Server FQDN, and then Blindfold our password for Docker Hub (Blindfold is XC's secret-encryption feature). After saving, we are ready to configure our workload.

To do so, navigate over to Applications -> Virtual K8s. I already had a Virtual Site and Virtual K8s created, but you'll need to create those if you don't already have them. For your reference, here are links to a walk-through on each:

Virtual Site Creation: https://docs.cloud.f5.com/docs/how-to/fleets-vsites/create-virtual-site
Virtual K8s Creation: https://docs.cloud.f5.com/docs/how-to/app-management/create-vk8s-obj

Select your Virtual K8s cluster. After selecting your cluster, navigate to the Workloads tab. Under Workloads, click Add VK8s Workload. Give your workload a name and then change the Type of Workload to Service instead of Simple Service. Your configuration should look something like below:

You'll notice we now have to configure the Service. Click Configure. The first step is to tell XC which container we want to deploy for this service. Under Containers, select Add Item:

Give the container a name, and then input your Image Name. The format for the image name is "registry/image:tagname". If you leave the tag name blank, it defaults to "latest". Under the Select Container Registry drop-down, select Private Registry. This brings up another drop-down where we select the container registry we created earlier. Your configuration should end up looking similar to below:

For this simple use case, we can skip the Configuration Parameters and move to our Deploy Options. Here, we have some flexibility on where we want to deploy our workload. You can choose All Regional Edges (F5 PoPs), specific REs, or even custom CEs and Virtual Sites. In my basic example, I chose Regional Edge Sites and picked the ny8-nyc RE for now:

Next, we have to configure where we want to advertise this workload. We have the option to keep it internal and only advertise in the vK8s cluster, or we could advertise this workload directly on the Internet. Since we only want this maintenance page to be seen when the pool members are all down, we are going to keep this to Advertise In Cluster. After selecting the advertisement, we have to configure our Port Information. Click Configure. Under the advertisement configuration, you'll see we are simply choosing our ports. If you toggle "Show Advanced fields", you can see we have some flexibility on the port we want to advertise and the actual target port for the container. In my case, I am going to use 8080 for both, but you may want a different combination (i.e. 80:8080). Click Apply once finished.
Now that we have the ports defined, we can simply hit Apply on the Service configuration and Save and Exit the workload to kick off the deployment. We should now see our new maintenance-page workload in the list. You'll notice that after refreshing a couple of times, the Running/Completed Pods and Total Pods fields are populated with the number of REs/CEs you chose to deploy the workload to. After a few minutes, the number of Running/Completed Pods should match your Total Pods. This indicates that the workload is ready to be used by our application. (Note: you can click on the pod numbers in this list to see a more detailed status of the pods. This helps when troubleshooting.)

Pool Creation

With our workload live and advertised in the cluster, it is time to create our pool. In the top left of the platform, use Select Service to change to Multi-Cloud App Connect:

Under Multi-Cloud App Connect, navigate to Manage -> Load Balancers -> Origin Pools and select Add Origin Pool. Here, we'll give our origin pool a name and then go directly to Origin Servers. Under Origin Servers, click Add Item. Change the Type of the Origin Server to K8s Service Name of Origin Server on given Sites. Under Service Name, we have to use the format "servicename.namespace:cluster-id" to point to our workload. In my case, it was "maintenance-page.bohanson:bohanson-test" since I had the following:

Service Name: maintenance-page
Namespace: bohanson
VK8s Cluster: bohanson-test

Under Site or Virtual Site, I chose the Virtual Site I had already created. The last step is to change the network to vK8s Networks on Site and click Apply. The result should look like the below:

We now need to change our Origin Server port to the port we defined in the workload advertisement configuration. In my case, I chose port 8080. The rest of the origin server configuration is up to you, but I chose to include a simple HTTP health check to monitor the service. Once the configuration is finished, click Save and Exit. The final pool configuration should look like this:

Application Deployment

With our maintenance container up and running and our pool all set, it is time to finally deploy our solution. In this case, we can select any existing Load Balancer configuration where we want to add the maintenance page. You could also create a new Load Balancer from scratch, of course, but for this example I am deploying to an existing configuration. Under Manage -> Load Balancers, find the load balancer of your choosing and then select Manage Configuration. Once in the Load Balancer view, select Edit Configuration in the top right. To deploy the solution, we just need to navigate to our Origins section and add our new maintenance pool. Select Add Item.

At this point, you may be thinking, "Well, that is great, but how am I going to get the pool to show only when all other pool members are down?" That is the beauty of the F5 Distributed Cloud pool configuration. We have two options we can set when adding a pool: Weight and Priority. Both options are pretty self-explanatory if you have used a load balancer before, but what is interesting here is what happens when you give them a value of zero. Giving a pool a weight of zero disables the pool.
For a maintenance pool use case, that could be helpful, since we could manually go into the Load Balancer configuration during a maintenance window, disable the main pool, and bring up the maintenance pool until our change window closes, at which point we would reverse the weights and bring the main pool back online. That ALMOST solves our iRule use case, but it would be manual.

Alternatively, we can give a pool a Priority of zero. Doing so means that all other pools take priority and are used unless they go down. In the event of the main pool going down, the load balancer falls back to the lowest-priority pool (zero). Now that is more like it! This means we can set our maintenance pool to a Priority of zero, and it will automatically be used when all of our other pool members go down, which completely fulfills the original iRule requirement. So in our configuration, let's add our new maintenance pool and set:

Weight: 1
Priority: 0

After clicking Save, the final pool configuration should look something like this:

Testing

To test, we can simply switch the health check on the main pool to something that will fail. In my case, I just changed the expected status code on the health check to something arbitrary that I knew would fail, but this could be different in your case. After changing the health check, we can navigate to our application in a browser and see our maintenance page dynamically appear! Changing the health check on the main pool back to a working one dynamically turns the maintenance page off as well:

Summary

This is just one example of how you can use App Stack to convert some of the more advanced/dynamic iRules over to F5 Distributed Cloud. I only used a basic NGINX configuration in this example, but you can start to see how leveraging NGINX in App Stack gives us even more flexibility. Hopefully this helps!
ASM Policy in "Blocking" Mode, Switch to "Transparent" for Some IPs

I have a policy that I need to switch to Blocking, but the business wants a phased approach: only the testing team should be in Blocking, while the rest of the business (a different IP range) remains in Transparent. I need to keep the same policy so that I can prove that everything is running fine. Is there a method to do that? I was thinking about an iRule but don't know how. I know how to disable ASM with an iRule, but that's something I don't want because I need to keep the learning suggestions.

Bye St.
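One commonly discussed approach is to leave the policy in Blocking mode and selectively override the verdict per source IP in an iRule, which keeps the single policy (and its learning suggestions) intact. Below is a minimal sketch, assuming a data group named tester_ips holding the test team's addresses (the name is hypothetical) and "Trigger ASM iRule Events" enabled on the policy:

```
when ASM_REQUEST_DONE {
    # Testers stay in real Blocking mode; everyone else is effectively transparent.
    if { [ASM::status] equals "blocked" } {
        if { not [class match [IP::client_addr] equals tester_ips] } {
            # Override the block for non-testers; the violation is still
            # logged and learning suggestions are still generated.
            ASM::unblock
        }
    }
}
```

The design point here is that ASM::unblock only reverses the enforcement action for that one request, so the policy itself never leaves Blocking mode and the event log still reflects what would have been blocked.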
F5 WAF/ASM: Block Users That Trigger Too Many Violations by Source IP/Device ID Using the Correlation Logs

Hello to all,

I was thinking of using the iRule table command to record when a user IP/device ID triggers too many violations within a time period, so that it gets blocked for some time. I see that the F5 ASM correlation logs trigger incidents, but there is not much info on whether they can be used in iRules or to block user IP addresses/device IDs.

https://support.f5.com/csp/article/K92532922
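The table-based idea from the question can be sketched without involving the correlation logs at all. The following is a rough, hedged sketch keyed on source IP only (a device-ID key would need its own extraction); the subtable names, the 10-violation threshold, and the 5-minute/10-minute timeouts are arbitrary, and "Trigger ASM iRule Events" must be enabled on the policy:

```
when HTTP_REQUEST {
    # Drop traffic from sources that earned a temporary ban earlier.
    if { [table lookup -subtable asm_banned [IP::client_addr]] ne "" } {
        drop
        return
    }
}

when ASM_REQUEST_DONE {
    if { [ASM::violation count] > 0 } {
        # Count violations per source IP inside a rolling 5-minute window.
        set cnt [table incr -subtable asm_viol [IP::client_addr]]
        table timeout -subtable asm_viol [IP::client_addr] 300
        if { $cnt >= 10 } {
            # Ban the source for 10 minutes; the entry expires on its own.
            table set -subtable asm_banned [IP::client_addr] 1 600
        }
    }
}
```

Because the ban is just a table entry with a timeout, it lifts itself automatically; nothing in the ASM policy configuration needs to change.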
Big-IP Next 20.2.0-2.375.1+0.0.43 iRule Count Problem

I have a very simple iRule to show the problem:

```
when HTTP_REQUEST {
    set Client_IP [IP::client_addr]
    if { ($Client_IP starts_with "x.x.x.x") && ([HTTP::uri] equals "/seed") } {
        table set -subtable TABLE "key1" "value1" 30
        table set -subtable TABLE "key2" "value2" 15
        table set -subtable TABLE "key3" "value3" 45
        HTTP::respond 200 content "Done"
        TCP::close
        return
    }

    set key_value  "key1"
    set key_value2 "key2"
    set key_value3 "key3"
    set count [table keys -subtable TABLE -count]

    HTTP::respond 200 content "
    Remaining timeout / defined timeout for ${key_value} => [table lookup -notouch -subtable TABLE ${key_value}] [table timeout -subtable TABLE -remaining ${key_value}]/[table timeout -subtable TABLE ${key_value}]
    Remaining timeout / defined timeout for ${key_value2} => [table lookup -notouch -subtable TABLE ${key_value2}] [table timeout -subtable TABLE -remaining ${key_value2}]/[table timeout -subtable TABLE ${key_value2}]
    Remaining timeout / defined timeout for ${key_value3} => [table lookup -notouch -subtable TABLE ${key_value3}] [table timeout -subtable TABLE -remaining ${key_value3}]/[table timeout -subtable TABLE ${key_value3}]
    Count TABLE ${count}"
}
```

It looks like `table keys -subtable <tablename> -count` doesn't work properly:

```
Remaining timeout / defined timeout for key1 => value1 27/30
Remaining timeout / defined timeout for key2 => value2 12/15
Remaining timeout / defined timeout for key3 => value3 42/45
Count TABLE 0
```

My expected output would be 3 (since none of the entries has timed out), not 0. Can someone check if I am correct? Or tell me how I can count the non-expired entries in a table.
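If the `-count` flag really does misbehave on that build, one possible workaround (an untested sketch, using the same subtable name as the example above) is to pull the key list and count it yourself:

```
# Workaround sketch: count non-expired entries by listing the keys
# instead of relying on -count. -notouch keeps the listing from
# resetting each entry's timeout.
set count [llength [table keys -subtable TABLE -notouch]]
```

`table keys` only returns entries that have not expired, so the length of that list should match the expected value of 3 in the scenario above.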
Uri-based Client Cert Authentication Question

Hi, I need to configure a virtual server with selective client cert authentication based on URI. When the user selects cert auth, the URI changes to /myweb/secure/; F5 should then request a client cert, renegotiate SSL, and insert the client cert into an HTTP header so the back-end server can read it. There is quite a lot of info and posts about this feature, which I've read. I've configured the VS, SSL profile (client), and iRule, but I just can't make this work.

SSL profile (client):
- renegotiation: enabled
- client authentication, client certificate: ignore
- frequency: once
- trusted certificate authorities & advertised cert: bundle of client cert CA

iRule:

```
when CLIENTSSL_CLIENTCERT {
    HTTP::release
    if { [SSL::cert count] < 1 } {
        reject
    }
}

when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/myweb/secure/" } {
        if { [SSL::cert count] == 0 } {
            HTTP::collect
            SSL::authenticate always
            SSL::authenticate depth 9
            SSL::cert mode require
            SSL::renegotiate
        }
    }
}

when HTTP_REQUEST_SEND {
    clientside {
        if { [SSL::cert count] > 0 } {
            HTTP::header insert "x-clientcert" [X509::whole [SSL::cert 0]]
        }
    }
}
```

I'm not sure whether the /myweb/secure/ path is ever accessible, since there is no browser pop-up requesting the client certificate. I really can't figure this out; any hints would be most appreciated. Thanks a lot for your time and help.
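For comparison, the pattern usually shown on DevCentral for this use case releases the held request from CLIENTSSL_HANDSHAKE rather than CLIENTSSL_CLIENTCERT. A hedged sketch along those lines, keeping the URI and profile settings from the question (your HTTP_REQUEST_SEND header insertion can stay as-is):

```
when HTTP_REQUEST {
    # No cert yet and a protected URI: hold the request and renegotiate.
    if { ([HTTP::uri] starts_with "/myweb/secure/") && ([SSL::cert count] == 0) } {
        HTTP::collect
        SSL::authenticate always
        SSL::authenticate depth 9
        SSL::cert mode require
        SSL::renegotiate
    }
}

when CLIENTSSL_HANDSHAKE {
    # Fires when the renegotiated handshake completes; release the
    # collected request only once a client cert is actually present.
    if { [SSL::cert count] > 0 } {
        HTTP::release
    }
}
```

The key difference from the question's iRule is where HTTP::release lives: releasing from CLIENTSSL_HANDSHAKE ensures the held request resumes only after the renegotiation has fully completed.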
Fingerprinting TLS Clients with JA4 on F5 BIG-IP

JA4+ is a set of simple network fingerprints that are both human and machine readable, designed to facilitate more effective threat-hunting and analysis. In this article you will learn how you can use F5 iRules to generate JA4 TLS fingerprints.
Block IP Addresses With Data Group And Log Requests On ASM Event Log

Problem this snippet solves:

This is an iRule that blocks IP addresses that are not allowed in your organization. Instead of adding each IP address under Security ›› Application Security : IP Addresses : IP Address Exceptions, you can create a data group and use a simple iRule to block hundreds of addresses. It also raises a custom violation so that each blocked request is flagged in the ASM event log.

First, you will need to create a data group under Local Traffic ›› iRules : Data Group List and add your disallowed IP addresses to the list. If you have hundreds of IPs that you want to block, you can do it in tmsh with this command:

```
tmsh modify ltm data-group internal <Data-Group-Name> { records add { <IP-ADDRESS> } }
```

Now we are ready to create the iRule under Local Traffic ›› iRules : iRule List. Last, create the violation under Security ›› Options : Application Security : Advanced Configuration : Violations List: Create -> Name: Illegal_IP_Address -> Type: Access Violation -> Severity: Critical -> Update. Don't forget to enable "Trigger ASM iRule Events" in Normal Mode.

How to use this snippet:

Code:

```
when HTTP_REQUEST {
    set reqBlock 0
    # <Data-Group-Name> is the data group created above
    if { [class match [IP::remote_addr] equals <Data-Group-Name>] } {
        set reqBlock 1
        # log local0. "HTTP_REQUEST [IP::client_addr]"
    }
}

when ASM_REQUEST_DONE {
    if { $reqBlock == 1 } {
        ASM::raise "Illegal_IP_Address"
        # log local0. "ASM_REQUEST_DONE [IP::client_addr]"
    }
}
```

Tested this on version: 13.0
DNS Query Name Parsing iRule

Problem this snippet solves:

This iRule will extract the DNS query name (QNAME) in the absence of a DNS profile being applied to a virtual server.

How to use this snippet:

This is a shameless rip from an old DevCentral post, DNS Hostname Parsing iRule, that, to the best of my knowledge, never made it to a Code Share. To use this code, simply apply it to a UDP virtual server that processes DNS traffic. (No DNS profile necessary.)

Code:

```
when FLOW_INIT {
    # extract QNAME from QUESTION header
    # ${i} is a sanity check so this logic won't spin on invalid QNAMEs
    set i 0
    # initialize our position in the QNAME parsing and the text QNAME
    set offset 12
    set length 1
    set endlength 1
    set name ""

    while {${length} > 0 && ${i} < 10} {
        # length contains the current label length
        binary scan [string range [DATAGRAM::udp payload] ${offset} ${offset}] c foo
        # make the length an unsigned integer
        set length [expr {${foo} & 0xff}]
        if {${length} > 0} {
            # grab a label and append it to our text QNAME
            append name [string range [DATAGRAM::udp payload] [expr {${offset} + 1}] [expr {${offset} + ${length}}]]
            # Watch the DNS QNAME get built during the loop. Remove the following line for production use.
            log local0.info "BUILDING DNS NAME: [IP::client_addr] queried ${name} offset ${offset} length ${length}"
            # advance to the next label
            set offset [expr {${offset} + ${length} + 1}]
            # endlength contains the next label length
            binary scan [string range [DATAGRAM::udp payload] ${offset} ${offset}] c foo
            # make the length an unsigned integer
            set endlength [expr {${foo} & 0xff}]
            if { ${endlength} > 0 } {
                # put a dot between labels like a normal DNS name
                append name "."
            }
            incr i
        }
    }
    # /extract QNAME from QUESTION header

    # Input the required action here, where "${name}" is the variable that is reviewed for decision making.
    # A sample action would be a pool statement. The below log statement should be removed for production use.
    log local0.info "FINAL DNS NAME: [IP::client_addr] queried ${name}"
}
```

Tested this on version: 12.1
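As the closing comments note, the usual follow-up is a routing decision on `${name}`. A hypothetical sketch of that decision, dropped in where the final log statement sits (both pool names and the domain are placeholders, and it is worth verifying that the `pool` command is permitted in FLOW_INIT on your version):

```
    # Hypothetical routing decision based on the parsed QNAME
    if { ${name} ends_with "internal.example.com" } {
        pool internal_dns_pool
    } else {
        pool external_dns_pool
    }
```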
GTM Return LDNS IP to Client

Problem this snippet solves:

We do a lot of our load balancing based on topology rules, so it's often very useful to know where a DNS request is actually coming from, rather than just the client's IP and the DNS servers they have configured, especially if they're behind an ADSL router doing NAT or some other similar setup. This rule simply returns the IP address of the LDNS that ultimately made the query to the GTM device in the response to a lookup for the WideIP using the rule, and also logs the response and perceived location.

Code:

```
rule "DNS_debug" partition "Common" {
    when DNS_REQUEST {
        host [IP::client_addr]
        log local0.err "Debug address : [IP::client_addr] [whereis [IP::client_addr]]"
    }
}
```
iRule for Logging a HTTP::header

Hello, we have a VS that services multiple FQDNs and chooses the pool with an LTM policy that checks the incoming HTTP host from the client and associates it with a pool. I have a working iRule that looks like:

```
when HTTP_REQUEST_RELEASE {
    foreach x [HTTP::header names] {
        log local0. "$x: [HTTP::header $x]"
    }
}
```

The problem is, this returns all the headers for every HTTP request on this VS, for ALL the FQDNs, when I really only need it for one specific FQDN. I know there is a Host field in the header that carries the FQDN. Is there a way to modify my iRule above to only log the headers for requests whose Host contains, say, webserverA.com?

Note: I tried to do the logging in the LTM policy when it chooses the WebserverA pool, but while it says it accepts TCL, I don't know what to put in there.
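One way to scope the logging is to wrap the existing loop in a Host check. A sketch, reusing the host string from the question (lower-cased so the comparison is case-insensitive):

```
when HTTP_REQUEST_RELEASE {
    # Only dump headers for the one FQDN of interest
    if { [string tolower [HTTP::host]] contains "webservera.com" } {
        foreach x [HTTP::header names] {
            log local0. "[HTTP::host]: $x: [HTTP::header $x]"
        }
    }
}
```

Prefixing each log line with the Host value also makes it easy to grep the results when the same VS later needs logging for a second FQDN.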