iCall - All New Event-Based Automation System
The community has long requested the ability to effect change to the BIG-IP configuration from some external factor, be it an iRules trigger, a process or system failure event, or even monitor results. Well, rest easy, folks: among the many features arriving with BIG-IP version 11.4 is iCall, a completely new event-based, granular internal automation system. iCall gives you comprehensive control over BIG-IP configuration, leveraging the TMSH control plane and seamlessly integrating the data plane as well.

Components

The iCall system has three components: events, handlers, and scripts. At a high level, an event is "the message": some named object that has context (key/value pairs), scope (pool, virtual, etc.), origin (daemon, iRules), and a timestamp. Events occur when specific, configurable, pre-defined conditions are met. A handler initiates a script and is the decision mechanism for event data. There are three types of handlers:

- Triggered: reacts to a specific event
- Periodic: reacts to a timer
- Perpetual: runs under the control of a daemon

Finally, there are scripts. Scripts perform the action that results from an event and its handler. The scripts are TMSH Tcl scripts organized under the /sys icall section of the system.

Flow

A basic flow for an iCall configuration starts with an event, followed by a handler kicking off a script. A more complex example might start with a periodic handler that kicks off a script that generates an event, which another handler picks up to run yet another script. These flows are shown in the image below.

A Brief Example

We'll release a few tech tips on the development aspect of iCall in the coming weeks, but in the interim, here's a prime use case. Often when an event occurs, an operator wants to grab a tcpdump of the interesting traffic during that event, but human reaction time isn't quick enough. Enter iCall! First, configure an alert in /config/user_alert.conf for a pool member down:

```
alert local-http-10-2-80-1-80-DOWN "Pool /Common/my_pool member /Common/10.2.80.1:80 monitor status down" {
    exec command="tmsh generate sys icall event tcpdump context { { name ip value 10.2.80.1 } { name port value 80 } { name vlan value internal } { name count value 20 } }"
}
```

You'll need one of these stanzas for each pool member you want to monitor in this way (a generator sketch for these follows at the end of this post). Next, create the iCall script:

```
modify script tcpdump {
    app-service none
    definition {
        set date [clock format [clock seconds] -format "%Y%m%d%H%M%S"]
        foreach var { ip port count vlan } {
            set $var $EVENT::context($var)
        }
        exec tcpdump -ni $vlan -s0 -w /var/tmp/${ip}_${port}-${date}.pcap -c $count host $ip and port $port
    }
    description none
    events none
}
```

Finally, create the iCall handler to trigger the script:

```
sys icall handler triggered tcpdump {
    script tcpdump
    subscriptions {
        tcpdump {
            event-name tcpdump
        }
    }
}
```

Ready. Set. Go! That's one example of a triggered handler. We have many more examples of perpetual and periodic handlers in the codeshare, with several use cases ready for your immediate use and testing. Get ready to jump aboard the iCall automation/orchestration train!
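Since you'll need one alert stanza per monitored pool member, a short generator can keep /config/user_alert.conf from becoming tedious to maintain. Here is a minimal sketch in Python (not part of the original article); the member list, pool, vlan, and count values are illustrative assumptions:

```python
# Hedged sketch: emit one user_alert.conf stanza per pool member.
# The values below are hypothetical; adjust to your environment.
members = [("10.2.80.1", 80), ("10.2.80.2", 80)]
pool = "/Common/my_pool"
vlan = "internal"
count = 20

TEMPLATE = '''alert local-http-{tag}-DOWN "Pool {pool} member /Common/{ip}:{port} monitor status down" {{
    exec command="tmsh generate sys icall event tcpdump context {{ {{ name ip value {ip} }} {{ name port value {port} }} {{ name vlan value {vlan} }} {{ name count value {count} }} }}"
}}'''

for ip, port in members:
    # Build the alert name tag, e.g. "10-2-80-1-80" for 10.2.80.1:80.
    tag = "{}-{}".format(ip.replace(".", "-"), port)
    print(TEMPLATE.format(tag=tag, pool=pool, ip=ip, port=port,
                          vlan=vlan, count=count))
```

Appending the output to /config/user_alert.conf (and reloading the alert daemon as appropriate for your version) gives you one stanza per member without hand-editing each entry.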
expandSubcollections and filtering in F5 SDK

Hello, I need to use the F5 BIG-IP SDK to pull all the firewall policies from a partition along with their respective expanded rules. I wrote the following code:

```python
policies = f5_manager.mgmt.tm.security.firewall.policy_s.get_collection(
    requests_params={
        "params": f"$filter=partition eq {partition_name}&expandSubcollections=true"
    },
)
```

If I use only expandSubcollections=true or only the filter, it works as expected. However, when I try to combine them, it doesn't expand the results anymore. Has anyone encountered this before? Thanks!
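Until the combined query behaves, one workaround (a hedged sketch, not from this thread) is to filter server-side and then expand each policy's rules with a follow-up request per policy. This assumes the policy objects expose a rules_s subcollection, following the usual f5-common-python naming pattern; verify against your SDK version before relying on it:

```python
# Hedged sketch: filter the collection server-side, then pull each
# policy's rules subcollection client-side (one extra request each).
policies = f5_manager.mgmt.tm.security.firewall.policy_s.get_collection(
    requests_params={
        "params": f"$filter=partition eq {partition_name}"
    },
)

expanded = []
for policy in policies:
    # rules_s is assumed here to be the rules subcollection accessor,
    # per the SDK's <name>_s convention; check your version.
    rules = policy.rules_s.get_collection()
    expanded.append((policy, rules))
```

This trades the single expanded call for N+1 requests, so it is only attractive when the number of policies per partition is modest.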
BIG-IP LTM VE: Transfer your iRules in style with the iRule Editor

The new LTM VE has opened up the possibilities for writing, testing, and deploying iRules in a big way. It's easier than ever to get a test environment set up in which you can break things, er, develop to your heart's content. This is fantastic news for us iRulers who want to be doing the newest, coolest stuff without having to worry about breaking a production system. That's all well and good, but what the heck do you do to get all of your current stuff onto your test system? There are several options, ranging from copy and paste (shudder) to actual config copies and the like, which all work fine. Assuming all you're looking for is to transfer over your iRules, like me, the easiest way I've found is to use the iRule editor's export and import features. It makes it literally a few clicks and super easy to get back up and running in the new environment.

First, log into your existing LTM system with your iRule editor (you are using the editor, right? Of course you are... just making sure). You'll see a screen something like this (right) with a list of a bagillionty iRules on the left and their cool, color-coded awesomeness on the right. You can go through and select iRules and start moving them manually, but there's really no need. All you need to do is go up to the File -> Archive -> Export option and let it do its magic. All it's doing is saving text files to your local system to archive off all of your iRuley goodness. Once that's done, you can spin up your new LTM VE and get logged in via the iRule editor over there. Connect via the iRule editor and go to File -> Archive -> Import, shown below.

Once you choose the import option you'll start seeing your iRules popping up in the left-hand column, just like you're used to. This will take a minute depending on how many iRules you have archived (okay, so I may have more than a few iRules in my collection...) but it's generally pretty snappy. One important thing to note at this point, however, is that all of your iRules are bolded with an asterisk next to them. This means they are not saved in their current state on the LTM. If you exit at this point, you'll still be iRuleless, and no one wants that. Luckily Joe thought of that when building the iRule editor, so all you need to do is select File -> Save All, and you'll be most of the way home. I say most of the way because there will undoubtedly be some errors that you'll need to clean up. These will be config-based errors, like pools that used to exist on your old system and don't now, etc. You can either go create the pools in the config or comment out those lines. I tend to keep my iRules as config-agnostic as possible while testing things, so there aren't a ton of these, but some of them always crop up. The editor makes these easy to spot and fix, though. The name of the iRule that's having a problem will stay bolded, and any errors in that particular code will be called out (assuming you have that feature turned on), so you can pretty quickly spot them and fix them. This entire process took me about 15 minutes, including cleaning up the code in certain iRules to at least save properly on the new system, and I have a bunch of iRules, so that's a pretty generous estimate. It really is quick, easy, and painless to get your code onto an LTM VE and get hacking, er, coding. An added side benefit, but a cool one, is that you now have your iRules backed up locally.
Not only does this mean you're double-plus sure that they won't be lost, but it means the next time you want to deploy them somewhere, all you have to do is import from the editor. So if you haven't yet, go download your BIG-IP LTM VE and get started. I can't recommend it enough. Also make sure to check out some of the really handy DC content that shows you how to tweak it for more interfaces, or Joe's supremely helpful guide on how to use a single VM to run an entire client/LTM/server setup. Wicked cool stuff. Happy iRuling. #Colin
iRule checking multiple "active_members" help

Hi All, I'm currently building on top of a couple of iRules I developed a while ago for our maintenance page. I'm extending the behaviour to also trigger a site failure page if everything fails (a non-controlled take-down). To do this I need to check the state of TWO pools. I would have assumed that this would be quite trivial with a simple logic operator... however, I get the following error when attempting to apply the iRule to a virtual server (so it has already passed the first set of validation when saving the iRule initially):

01070151:3: Rule [/Common/OopsPage2] error: Unable to find pool (Pool2) referenced at line 3: [active_members "Pool2"]

I've tried a couple of variations with different parenthesis combinations:

```
if { [active_members "Pool1"] < 1 and [active_members "Pool2"] > 0 } {
```

and

```
if { ([active_members "Pool1"] < 1) and ([active_members "Pool2"] > 0) } {
```

Can someone spot the rookie mistake that I'm making? This works if I specify a single pool to check. I am developing on LTM 11.1.0... I might also try this on 10.2.3. Thanks in advance for your help... Regards, Patrick
11.4 iapp namespace

Hi, I'm developing some iApp templates based on f5.http. I need to let the user decide whether a specific pool member is enabled or disabled when the iApp is deployed. I have already added a choice field in the presentation section to enable or disable the member:

```
table members {
    editchoice addr display "large" tcl {
        package require iapp 1.0.0
        return [iapp::get_items ltm node]
    }
    string port display "small" required default "80" validator "PortNumber"
    string connection_limit display "small" required default "0" validator "NonNegativeNumber"
    optional ( lb_method == "ratio-member" || lb_method == "ratio-node" || lb_method == "ratio-session" ||
               lb_method == "ratio-least-connections-member" || lb_method == "ratio-least-connections-node" ||
               lb_method == "dynamic-ratio-member" || lb_method == "dynamic-ratio-node" ) {
        string ratio default "1" validator "NonNegativeNumber" display "small"
    }
    optional ( options.advanced == "yes" && use_pga == "yes" ) {
        string priority default "0" required validator "NonNegativeNumber" display "small"
    }
    optional ( options.advanced == "yes" ) {
        choice state display "xlarge" default "enabled"
    }
}
```

The pool is configured with this statement in the template:

```
array set pool_arr {
    1,0 { [iapp::conf create ltm pool ${app}_pool \
            [iapp::substa pool_ramp_pga_arr($advanced,$do_slow_ramp,$do_pga)] \
            [iapp::substa pool_lb_queue_arr($advanced)] \
            [iapp::substa monitor_arr($new_pool,$new_monitor,$advanced)] \
            [iapp::pool_members $::pool__members]] \
          [iapp::conf modify ltm pool ${app}_pool \
          ] }
    0,0 { [expr { $::net__server_mode ne "tunnel" ? \
            $::pool__pool_to_use : $::pool__pool_to_use_wom }] }
    * { none translate-address disabled }
}
```

As the pool members are configured with the iapp::pool_members routine, it would be best if this configured the state of the member too. I haven't found the source of this routine, so I don't know whether it is capable of doing this. Is there any documentation on the iapp:: namespace and its source code? If the routine is not capable of setting the state, any ideas on how to configure the member state besides iterating over the $::pool__members variable? Greetings, Eric
Protecting against DDoS attack

Dear Community, I need help from application security experts and seasoned web developers. We are getting DDoS attacks via the following requests. This attack is targeting our SMS gateway, resulting in the triggering of thousands of SMSs. Please advise what kind of protections we can introduce at the application level / application code level to protect against this DDoS attack.

DDoS request sample:

```
POST xyz.com/api/otp/asdf HTTP/1.1
Host: xyz.com
Content-Length: 32
Sec-Ch-Ua: " Not A;Brand";v="99", "Chromium";v="90"
Accept: application/json, text/plain, */*
Authorization: ***********
Accept-Language: ar
Sec-Ch-Ua-Mobile: ?0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36
Content-Type: application/json
Origin: http://abc.com
Sec-Fetch-Site: same-site
Sec-Fetch-Mode: cors
Sec-Fetch-Dest: empty
Referer: http://abc.com
Accept-Encoding: gzip, deflate
Connection: close

{"mobileNumber":"123456789"}
```

Warm Regards
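The thread itself doesn't prescribe a fix, but one standard application-level control for an OTP endpoint is a per-source rate limit enforced before any SMS is triggered. Here is a minimal, framework-neutral sketch in Python; the thresholds and function names are illustrative assumptions, not a drop-in implementation:

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds: at most 3 OTP requests per source IP
# per 10-minute sliding window.
WINDOW_SECONDS = 600
MAX_REQUESTS = 3

_recent = defaultdict(deque)  # source IP -> timestamps of recent requests

def allow_otp_request(source_ip: str) -> bool:
    """Return True if this IP may trigger another SMS right now."""
    now = time.time()
    window = _recent[source_ip]
    # Evict timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # over the limit: reject before sending any SMS
    window.append(now)
    return True
```

In the endpoint handler, a rejected request should return 429 (Too Many Requests) without ever touching the SMS gateway; pairing this with a CAPTCHA or similar challenge on the request path raises the attacker's cost further.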
Preventing DDoS attacks on SMS URL

Dear Community, I am facing DDoS attacks on one of our applications. The attacker is sending hundreds of requests to a URL, which is consuming all of our SMS quota. The attack originates from multiple IPs. Please advise how I can protect this application API from this kind of DDoS attack at the application code level. I need help from application security experts and web developers. https://abc.com is the frontend and xyz.com is the backend API.

Sample of the DDoS request:

```
POST /asdf/service/sendmobilecode HTTP/1.1
Host: xyz.com
Authorization: ***********
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36
Content-Type: application/json
Origin: https://abc.com
Referer: https://abc.com/

{"number":"91234567890"}
```

Kind Regards
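Because this attack is distributed across many IPs, per-IP limits alone won't stop it; keying the limit on the resource actually being burned (the destination phone number and the SMS quota) tends to work better. A minimal sketch in Python, with an illustrative cooldown value that is not from the thread:

```python
import time

# Hypothetical policy: at most one SMS per destination number
# per 5 minutes, no matter which client IP asked for it.
COOLDOWN_SECONDS = 300

_last_sent = {}  # phone number -> timestamp of last SMS sent

def may_send_sms(number: str) -> bool:
    """Return True if an SMS may be sent to this number right now."""
    now = time.time()
    last = _last_sent.get(number)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False  # repeat request inside cooldown: drop it
    _last_sent[number] = now
    return True
```

A global daily cap on total SMS sends is a useful backstop as well, so a novel attack pattern exhausts a fixed budget rather than the whole quota.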
Programmability in the Network: Canary Deployments

#devops The canary deployment pattern is another means of enabling continuous delivery. Deployment patterns (or as I like to call them of late, devops patterns) are good examples of how devops can put into place systems and tools that enable continuous delivery to be, well, continuous. The goal of these patterns is, for the most part, to make sure operations can smoothly move features, functions, releases, or applications into production. We've previously looked at the Blue-Green deployment pattern, and today we're going to look at a variation: canary deployments.

Canary deployments are applicable when you're running a cluster of servers. In other words, you've got lots and lots of (probably active right now, while you're considering pushing that next release) users. What you don't want is to do the traditional "we're sorry, we're down for maintenance, here's a picture of a funny squirrel to amuse you while you wait" maintenance page. You want to be able to roll out the new release without disruption. Yeah, that's quite the ask, isn't it?

The canary deployment pattern is an incremental upgrade methodology. First, the build is pushed to a small set of servers to which only a select group of users are directed. If that goes well, the release is pushed to a larger set of servers with a limited set of users. Finally, if that goes well, the release is pushed out to all servers and all users. If issues occur at any stage, the release is halted: it goes no further. Hence the naming of the pattern, after the miner's canary, used because "its demise provided a warning of dangerous levels of toxic gases".

The trick to implementing this pattern is twofold: first, being able to group the servers used in each step into discrete pools, and second, being able to direct specific sets of users to the appropriate pools. Both capabilities require the ability to execute some logic to perform user-based load balancing. Nolio, in its first Devops Best Practices video, implements canary deployments by manipulating the pools of servers at the load balancing tier, removing them to upgrade and then reinserting them for testing before moving on to the next phase. If your load balancing solution is programmable, there's no need to actually remove them, as you can simply insert logic to remove them from being selected until they've been upgraded. You can also then insert the logic that determines which users are directed to which pool of servers. If the load balancing platform is really programmable, you can even extend that determination to querying a database to determine a user's inclusion in certain groups, such as those you might use to perform A/B testing. Such logic might base the decision on IP address (not the best option, but an option) or, later, when you're actually rolling out to a percentage of users, you can write logic that randomly selects users based on location or their user name (like sharding, only in reverse) or pretty much anything you can think of. You can even split that further if you're rolling out an update to an API used by both mobile and traditional clients, to catch both, neither, or specific types in an orderly fashion, so you can test methodically, because you want to test methodically when you're using live users as test subjects.

The beauty of this pattern is that it allows continuous delivery. Users are never disrupted (if you do it right) and the upgrade occurs in a safely staged, incremental fashion.
That enables you to back out quickly if necessary, because you do have a back button plan, right? Right?
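To make the user-selection logic concrete, here is a minimal, platform-neutral sketch in Python of the bucketing idea described above; on a programmable load balancer the same decision would live in its native scripting layer, and the pool names and percentage here are illustrative assumptions:

```python
import hashlib

CANARY_PERCENT = 5  # start with 5% of users; raise as confidence grows

def pick_pool(username: str) -> str:
    """Deterministically route a user to the canary or stable pool."""
    # Hashing the user name keeps each user in the same bucket on
    # every request, so nobody flaps between release versions.
    digest = hashlib.sha256(username.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100  # 0..99
    return "canary_pool" if bucket < CANARY_PERCENT else "stable_pool"

# Example: the same user always gets the same answer.
for user in ("alice", "bob", "carol"):
    print(user, "->", pick_pool(user))
```

The determinism is the design point: a user moved to the canary stays on it for the whole test, which keeps sessions coherent and makes any observed errors attributable to the new release.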
How to develop a second factor authentication plugin/extension?

Very new to BIG-IP. I am trying to port an extension for second-factor authentication written for PingFederate. There, I have to create a jar and deploy it in PF. Then I can log in as admin and configure it as a policy: log in using AD; on success, trigger my plugin, which does the OTP, and then allow access to the resource. How do I do something similar in BIG-IP? Is APM > AAA Servers the right way to do this?
iControl REST 101: Modifying Objects

So far in this series we've shown you how to connect to iControl REST via cURL, how to list objects on the device (which is probably the most used command for most APIs, including ours), and how to add objects. So you're currently able to get a system up and running and configure new services on your BIG-IP. What if, however, you want to modify things that are already running? For that, you need a modify command. Enter PUT.

Remember that in REST-based architectures, the API is able to determine what type of action you're performing, and thereby the arguments and structure it should expect to parse coming in, based on the type of HTTP(S) transaction. For us, a GET is a list, a POST is an add, and to modify things, you use PUT. This allows the system to understand that you're going to reference an object that already exists and modify some of its contents. It is important that the API back end can tell the difference. If you just tried to do another POST with only the arguments you wanted to change, it would fail, because you wouldn't meet the minimum requirements. Without a PUT, you'd have to do a list on the object, save all the configuration options already on the system for that object, modify just the one you wanted to change in your memory structure, delete the object on the system, and then POST all of that data, including the one or two modified fields, back to the device. That's way too much work. Instead, let's just learn how to modify, shall we?

Fortunately it's simple. All you need to do is format your cURL request with a PUT and the data you want to modify, and you're all set. First let's list the object we want to modify, so you can see what it looks like as it sits on the box currently. For that we go back to our list command:

```
curl -sk -u admin:admin https://dev.rest.box/mgmt/tm/net/self/cw_test2 | jq .
```

So that's what the object looks like. Now what if we want to make that existing self IP address available on more than one port? We can use the "allowService" flag to do this. By setting it to "all" we'll allow that IP to answer on any port, rather than just a specific one, thereby opening up our config a bit. So we have the object we want to modify and the attribute we want to modify on that object. All we need now is to send the command to do the work. Fortunately this is pretty easy, as I mentioned before. You use the same cURL structure as always, with the -sk and user:pass supplied. This will look nearly identical to the add command, with the changes being that we use the "PUT" method instead of "POST" and we supply only the one item we're modifying about the object. It looks like this:

```
curl -sk -u admin:admin https://dev.rest.box/mgmt/tm/net/self/cw_test2 -H "Content-Type: application/json" -X PUT -d "{\"allowService\":\"all\"}" | jq .
```

Notice that I'm still piping the output to jq here. This is because the response from the API when running most commands, such as POST and PUT, is actually quite useful. Running the above command returns the entire, new output of the object, including the piece that got modified. In this case it's the sixth line of output, including the brace lines. What used to be the line defining the vlan is now "allowService": "all", just like we wanted.

So there you have it: modifying objects on the BIG-IP remotely doesn't get much easier than that. Armed with this knowledge, on top of what we've covered in previous installments, you'll be able to tackle 90% of the challenges you might encounter.
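If you are scripting against the API rather than shelling out to cURL, the same PUT is straightforward with Python's requests library. A minimal sketch using the article's example host, credentials, and object, with certificate verification disabled to mirror cURL's -k flag (lab use only):

```python
import requests

# Mirror the cURL call: PUT just the one changed attribute.
url = "https://dev.rest.box/mgmt/tm/net/self/cw_test2"

resp = requests.put(
    url,
    auth=("admin", "admin"),
    json={"allowService": "all"},  # only the attribute being modified
    verify=False,                  # cURL -k equivalent; lab use only
)
resp.raise_for_status()
print(resp.json())  # full, updated representation of the object
```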
For our next and penultimate edition of this series, we'll cover the delete command, which should round out the methods you'll need for the general application of the new iControl REST API.