BIG-IP
What's New with F5 Certification?
Big changes are coming to F5 Certification, and we’re here to break it all down! On this episode of DevCentral Connects, Jason welcomes back Dr. Ken Salchow to discuss the Education Services portal, the changes to the certification exam structure, and how this shift will benefit you. Join us live Feb 13th at 8am Pacific with your questions!

Syn-Flood protection in F5 LTM BIG-IP 17.1.1.3
Hi guys, sorry, maybe I have not been clear. I've been searching for information about SYN-flood protection on the F5 LTM. I know the feature exists (I saw "syn-flood protection not active" in the CLI output), but I could not find much information about it. I searched the techdocs.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-system-syn-flood-attacks-13-0-0/1.html page, but it seems these f5.com pages are no longer there. Can anyone explain how to activate this feature or share an exhaustive link? The device is BIG-IP 17.1.1.3 Build 0.0.5 Point Release 3. Thank you. B.R. Mario

F5 BIG-IP deployment with Red Hat OpenShift - keeping client IP addresses and egress flows
Controlling the egress traffic in OpenShift allows the BIG-IP to be used for several use cases: keeping the source IP of the ingress clients, providing highly scalable SNAT for egress flows, and providing security functionality for egress flows.

External Monitor Information and Templates
This article is written by, and published on behalf of, DevCentral MVP Leonardo Souza. --- Introduction External monitors have been in use on F5 devices for a long time, and you can find a lot of code and articles about them here on DevCentral. However, most of those articles and code samples are very old and outdated. So, I think it is a good time to update them with new information and new templates. In this article, I will provide some information about external monitors, templates you can use to build your own external monitor script, and how you set up an external monitor. External Monitor An external monitor is a script that the F5 device runs to determine the status of a pool member (note: you can’t assign an external monitor to a node). The script indicates success by outputting something to standard output, and failure by outputting nothing. An external monitor should be your last resort, used only when no built-in monitor can perform the check. Built-in monitors are generally faster and easier to support, as they do not require programming skills. External monitors run on Linux, while some built-in monitors can run on Linux or TMM. Linux is the kernel that CentOS uses, and CentOS is the base system that F5 uses, also known as the host system. TMM is the kernel that F5 uses on TMOS, which is the F5 system. Running a built-in monitor in TMM is faster than in Linux. TMM monitors were introduced in version 13.1.0; for more information, read this solution: https://support.f5.com/csp/article/K11323537 Configuration Steps Note: Instructions are for version 15.1.0 but should be similar in other versions, and you only need to set up this configuration on one device if you have an HA pair, as the configuration is shared during config sync. Let’s create a simple external monitor to ping a pool member. You can do this via tmsh (a sketch of the tmsh equivalent follows the configuration listing below), but it is simpler via the GUI. Download the following file: https://raw.githubusercontent.com/leonardobdes/External_Monitor_Templates/master/bash_external_monitor_template.sh In the GUI go to System > File Management > External Monitor Program File List > Click Import. Select the file, name the script “simple_script”, and import it. That will import the file to the system and make it available in the GUI. Now create a new monitor that points to the script file. In the GUI go to Local Traffic > Monitor > Click Create. Give it the name “simple_monitor” and select External as the type. In External Program select simple_script, then click Finished. Now apply the new monitor to a pool member or pool. Here is the full configuration created: sys file external-monitor simple_script { checksum SHA1:817:01e135449c710e21dd090fdd688d940e319f97f5 create-time 2020-04-23:17:45:25 created-by admin last-update-time 2020-04-23:17:45:25 mode 33261 revision 1 size 817 source-path none updated-by admin } ltm monitor external simple_monitor { defaults-from external interval 5 run /Common/simple_script time-until-up 0 timeout 16 } ltm pool pool_asm { members { LABServerPHP:http { address 172.16.0.14 session monitor-enabled state up } test_server:http { address 172.16.0.19 session monitor-enabled state down } } monitor simple_monitor } The first part is the script imported to the system. Next is the monitor that was created. Lastly, a pool using that monitor, also showing that one pool member is up and another one is down.
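For reference, here is a minimal tmsh sketch of the same steps. It assumes the downloaded template was first copied to /var/tmp on the BIG-IP; the commands are illustrative, and the exact attribute syntax may vary slightly by version.

# Import the downloaded template as an external monitor program file (local path is an assumption)
tmsh create sys file external-monitor simple_script source-path file:/var/tmp/bash_external_monitor_template.sh
# Create the external monitor that runs the imported script (same values as the listing above)
tmsh create ltm monitor external simple_monitor defaults-from external run /Common/simple_script interval 5 timeout 16
# Assign the monitor to the pool
tmsh modify ltm pool pool_asm monitor simple_monitor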
Templates You can write the script used by the external monitor in any language supported by the F5 devices. I created a template for each of the following languages: Bash Perl Python Tcl For the templates go to this codeshare link: External Monitor Templates | DevCentral In my speed tests, I got these times with the templates: Bash - 0.014 seconds Perl - 0.020 seconds Tcl - 0.021 seconds Python - 0.084 seconds As you can see, Bash is definitely the fastest. Perl and Tcl are very close, and Python is the slowest. However, you may need something that works very well in Python, so it might be your only option. Bash Template I will explain the Bash template, but the logic is the same for the other programming languages. Each script has 2 versions, one with debug and one without, as indicated in the filename. I will explain the one with debug. This tells the system what to use to run the script. #!/bin/bash The script will be called with 2 arguments: the first is the IP, and the second the port. The IP is passed in IPv6 format, for example “::ffff:172.16.0.14”, so you need to remove “::ffff:”. Old scripts, and also the sample_monitor script that comes with the system, use "sed 's/::ffff://'", which is very slow compared with the approach below. ip=${1:7} port=$2 This will create a log line in the file /var/log/ltm. logger -p local0.notice "$MON_TMPL_NAME - PID: $$ Name: $NODE_NAME IP: $ip Port: $port" This is where you add your code to test the pool member. This example is just a ping against the IP, and “-c1” means a single packet. The script then saves the result to know if it failed or not. Next, it will log another line to /var/log/ltm. ping -c1 $ip &> /dev/null result=$? logger -p local0.notice "$MON_TMPL_NAME - PID: $$ Result: $result" Lastly, the script checks if the result was successful. If successful, it sends "up" to standard output. [ $result -eq 0 ] && echo "up" The log lines will be similar to these ones. Apr 24 13:59:15 LABBIGIP1.lab.local notice logger[5178]: /Common/simple_monitor - PID: 5177 Name: /Common/test_server IP: 172.16.0.19 Port: 80 Apr 24 13:59:18 LABBIGIP1.lab.local notice logger[5201]: /Common/simple_monitor - PID: 5200 Name: /Common/LABServerPHP IP: 172.16.0.14 Port: 80 Apr 24 13:59:18 LABBIGIP1.lab.local notice logger[5203]: /Common/simple_monitor - PID: 5200 Result: 0 Apr 24 13:59:18 LABBIGIP1.lab.local notice logger[5209]: /Common/simple_monitor - PID: 5177 Result: 1 Note that the monitoring tests can run in parallel, so the lines will not be sequential. Use the PID (process ID) number to match the lines. Old scripts, and also the sample_monitor script that comes with the system, have multiple lines of code to check if the same monitor is still running, and if so, the new process kills the old one before performing the monitoring. I tested this behavior in versions 11.3.0 and 15.1.0, and it is not necessary: before starting the new process for the script, the system kills the old one, so any code in the script to handle that is unnecessary, as it will never be used. I assume this was necessary in old versions, which I never tested, but it is not necessary in currently supported versions. Arguments and Variables The external monitor has multiple settings, including the ones you will find in any other monitor, like interval and timeout. However, it has 2 extra settings: arguments and variables. Arguments are passed when calling the script, for example: bash simple_script.sh IP port argument3 Variables are set as environment variables.
You can then access that variable in the script. Be aware that the system will add the variables from the monitor first, and then add some extra variables. The environment variables for version 11.3.0 are, with values as an example: MON_TMPL_NAME=/Common/bash_external_monitor_template ARGS_I=2 3 abc X=b PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/contrib/bin:/usr/local/bin:/usr/contrib/sbin:/usr/local/sbin:/usr/libexec NODE_PORT=80 PWD=/var/run SHLVL=1 NODE_IP=::ffff:172.16.0.19 NODE_NAME=/Common/test_server RUN_I=/Common/bash_external_monitor_template _=/bin/env ARGS_I will include the arguments you set in the monitor configuration; in this case, I have “2 3 abc”. X is the variable I added in the monitor configuration, with value “b”. That means you can use those variables in your script. Also, if you set a variable such as $PATH in the monitor configuration, it will get reset by the system, as the system variables are set after the monitor configuration variables. Let’s expand on that with examples. Here is a monitor configuration with one argument and one variable. ltm monitor external simple_monitor { args 2 defaults-from external interval 5 run /Common/simple_script time-until-up 0 timeout 16 user-defined debug 1 } Here is the part of the script that changed: ping -c${3} $ip &> /dev/null result=$? [[ $debug -eq 1 ]] && logger -p local0.notice "$MON_TMPL_NAME - PID: $$ Result: $result Ping: ${3}" Now the script uses the third argument for the ping command and the debug variable to decide whether to log. Let’s see the logs created: Apr 24 12:18:04 LABBIGIP1.lab.local notice logger[8179]: /Common/simple_monitor - PID: 8178 Name: /Common/test_server IP: 172.16.0.19 Port: 80 Apr 24 12:18:06 LABBIGIP1.lab.local notice logger[8196]: /Common/simple_monitor - PID: 8195 Name: /Common/LABServerPHP IP: 172.16.0.14 Port: 80 Apr 24 12:18:07 LABBIGIP1.lab.local notice logger[8206]: /Common/simple_monitor - PID: 8178 Result: 1 Ping: 2 Apr 24 12:18:07 LABBIGIP1.lab.local notice logger[8208]: /Common/simple_monitor - PID: 8195 Result: 0 Ping: 2 The ping is done with 2 packets, and the second log line is still logged. Testing the Script External scripts are saved in this location: /config/filestore/files_d/Common_d/external_monitor_d That is for the Common partition; replace Common with the name of the partition if you import the script in another partition. The file will have an ID number (100479), and also a version number that changes when you update the file (19): :Common:simple_script_100479_19 That means you can go to that directory and run the script from there, and troubleshoot any errors. Don’t forget that when testing this way, you will not have the environment variables the system sets up; you will have the environment variables from the shell where you run the command. Let’s test the simple_monitor discussed above. First, create the environment variables: export debug=1 MON_TMPL_NAME='/Common/simple_monitor' NODE_NAME='/Common/test_server' Run the script: ./\:Common\:simple_script_100479_22 '::ffff:172.16.0.14' 80 2 You will get the following log lines: Apr 24 16:03:48 LABBIGIP1.lab.local notice root[12206]: /Common/simple_monitor - PID: 12205 Name: /Common/test_server IP: 172.16.0.14 Port: 80 Apr 24 16:03:49 LABBIGIP1.lab.local notice root[12224]: /Common/simple_monitor - PID: 12205 Result: 0 Ping: 2 You need to escape the “:” with \.
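As an alternative to exporting the variables, they can be set inline for a single test run; a small sketch using the same example values:

# One-off test run with the variables scoped to this command only
debug=1 MON_TMPL_NAME='/Common/simple_monitor' NODE_NAME='/Common/test_server' ./\:Common\:simple_script_100479_22 '::ffff:172.16.0.14' 80 2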
After you do the tests, delete the environment variables created: unset debug MON_TMPL_NAME NODE_NAME Conclusion I hope this article provides you with all the information you need to create your own external monitor. Once you understand how it works, it is very simple to use. Just don’t forget: there is no “echo down”. Only output something when the monitor check was successful, and make sure you suppress any other command output.

Help F5 Transform the BIG-IP Administrator Certification
Many of you received a copy of the BIG-IP Administrator Certification beta exam email announcement earlier this week. We hope you can carve out some time to participate in the beta exams. For anyone who missed this F5 Certified message sent to candidates earlier this week, you can check it out below. If you’re seeing this for the first time, it probably means you’re not a part of the F5 Certified community yet. Now is an ideal time to join! The How do I enroll in the F5 Certified Professionals program? article will guide you through the process. And finally, for those of you who wrote and reviewed items for the new BIG-IP Administrator exams— THANK YOU! You did an incredible job. Thanks to your contributions, we can confidently say that the certifications our candidates attain are in line with the high standards and integrity on which our certification program was established. Please let me know if you have any questions or if you would like to volunteer for additional certification exam development activities. We always need SMEs! Cheers! Heidi The F5 Certification team is excited to announce some exciting changes in our program and invite you to help us transform it by participating in the BIG-IP Administrator beta exams. When considering what changes needed to be made, we asked our candidate community, “What could we do to increase the value of being F5 Certified for you?” Your feedback was clear. You value the F5 BIG-IP Administrator certification, but you want updated, more relevant exams. You want the exams to be easier to take while still providing the same quality items that legitimately test the knowledge, skills and abilities of those who achieve certification. You want the same level of quality and integrity in the program with more options to maintain your certifications. We listened, and are excited to share what has changed, and provide you with a glimpse into what will be changing in the future, in the F5 Certified Program Updates article. Here is how you can help us with a vital step in the transformation. Before we can publish the final version of the new BIG-IP Administrator certification exams, we need you to take the beta versions of these exams. Here are the necessary steps and helpful information in the FAQ: F5 Certified Administrator, BIG-IP BETA exams article, to get you started. Existing candidates, login to the new Education Services Portal. If you are new to the program, login using your F5 SSO credentials to complete registration. For detailed login instructions, see How to Log Into the Education Services Portal article. ALL candidates are eligible to take the BIG-IP Administrator beta exams. Even if you have achieved a higher level of F5 certification, you can participate! We want your input. Schedule the BIG-IP Administrator beta exams by following the instructions in the How to Schedule a Beta Exam article. The beta exams are live today through February 28, 2025. Each beta exam is 60-minutes with up to 60 items. The beta exams are delivered exclusively online at Certiverse. 
The cost is $20 USD for each exam with promo code F5CABBETA. There are five beta exams: BIG-IP Administration Install, Initial Configuration, and Upgrade (F5CAB1-B) BIG-IP Administration Data Plane Concepts (F5CAB2-B) BIG-IP Administration Data Plane Configuration (F5CAB3-B) BIG-IP Administration Control Plane Administration (F5CAB4-B) BIG-IP Administration Support and Troubleshooting (F5CAB5-B) To prepare for all five of the beta exams, refer to the Certified Administrator, BIG-IP Certification blueprint. The beta exams will be scored AFTER the beta period closes. Candidates who have passed all five exams will achieve the Certified Administrator, BIG-IP certification. For more information about these beta exams, see the FAQ: F5 Certified Administrator, BIG-IP BETA exams article. Complete all five of the beta exams and provide us with the data and feedback necessary to create the final version of the BIG-IP Administrator exams. Thank you for being a valued member of the F5 Certified Community! Please email us at support@mail.education.com with any questions or feedback.

Export Virtual Server Configuration in CSV - tmsh cli script
Problem this snippet solves: This is a simple cli script used to collect all the virtual server names, their VIP details, pool names, pool members, profiles, iRules, and the persistence associated with each, across all partitions. A sample output would look like the one below. One can customize the code to extract other available fields too, and the same logic can be used to pull information from profile stats, certificates, etc. Update: 5th Oct 2020 Added pool members capture in the code. After the Pool-Name column, a Pool-Members column will be found. If a pool does not have members, the error text field not present: "members" will be shown in the respective Pool-Members column. If a pool itself is not bound to the VS, then Pool-Name and Pool-Members will have none in the respective columns. Update: 21st Jan 2021 Added logic to look for multiple partitions & collect configs. Update: 12th Feb 2021 Added logic to add persistence to the sheet. Update: 26th May 2021 Added logic to add state & status to the sheet. Update: 24th Oct 2023 Added logic to add hostname, Pool Status, Total-Connections & Current-Connections. Note: The codeshare has multiple versions; use the latest version alone. The other versions are kept so end users can understand & compare, helping them modify the script to their own requirements. Hope it helps. How to use this snippet: Log in to the LTM, create your script by running the below command, and paste the code provided in the snippet tmsh create cli script virtual-details So when you list it, it should look something like below, [admin@labltm:Active:Standalone] ~ # tmsh list cli script virtual-details cli script virtual-details { proc script::run {} { puts "Virtual Server,Destination,Pool-Name,Profiles,Rules" foreach { obj } [tmsh::get_config ltm virtual all-properties] { set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],[tmsh::get_field_value $obj "pool"],$profilelist,[tmsh::get_field_value $obj "rules"]" } } total-signing-status not-all-signed } [admin@labltm:Active:Standalone] ~ # And you can run the script like below, tmsh run cli script virtual-details > /var/tmp/virtual-details.csv And get the output from the saved file, cat /var/tmp/virtual-details.csv Old Codes: cli script virtual-details { proc script::run {} { puts "Virtual Server,Destination,Pool-Name,Profiles,Rules" foreach { obj } [tmsh::get_config ltm virtual all-properties] { set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],[tmsh::get_field_value $obj "pool"],$profilelist,[tmsh::get_field_value $obj "rules"]" } } total-signing-status not-all-signed } ###=================================================== ###2.0 ###UPDATED CODE BELOW ### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER ###=================================================== cli script virtual-details { proc script::run {} { puts "Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules" foreach { obj } [tmsh::get_config ltm virtual all-properties] { set poolname [tmsh::get_field_value $obj "pool"] set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all
"profiles " $remprof ""] if { $poolname != "none" }{ set poolconfig [tmsh::get_config /ltm pool $poolname] foreach poolinfo $poolconfig { if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } { set pool_member $err puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]" } else { set pool_member "" set member_name [tmsh::get_field_value $poolinfo "members" ] foreach member $member_name { append pool_member "[lindex $member 1] " } puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]" } } } else { puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"]" } } } total-signing-status not-all-signed } ###=================================================== ### Version 3.0 ### UPDATED CODE BELOW FOR MULTIPLE PARTITION ### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER ###=================================================== cli script virtual-details { proc script::run {} { puts "Partition,Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules" foreach all_partitions [tmsh::get_config auth partition] { set partition "[lindex [split $all_partitions " "] 2]" tmsh::cd /$partition foreach { obj } [tmsh::get_config ltm virtual all-properties] { set poolname [tmsh::get_field_value $obj "pool"] set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] if { $poolname != "none" }{ set poolconfig [tmsh::get_config /ltm pool $poolname] foreach poolinfo $poolconfig { if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } { set pool_member $err puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]" } else { set pool_member "" set member_name [tmsh::get_field_value $poolinfo "members" ] foreach member $member_name { append pool_member "[lindex $member 1] " } puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]" } } } else { puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"]" } } } } total-signing-status not-all-signed } ###=================================================== ### Version 4.0 ### UPDATED CODE BELOW FOR CAPTURING PERSISTENCE ### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER ###=================================================== cli script virtual-details { proc script::run {} { puts "Partition,Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules,Persist" foreach all_partitions [tmsh::get_config auth partition] { set partition "[lindex [split $all_partitions " "] 2]" tmsh::cd /$partition foreach { obj } [tmsh::get_config ltm virtual all-properties] { set poolname [tmsh::get_field_value $obj "pool"] set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] set persist [lindex [lindex [tmsh::get_field_value $obj "persist"] 0] 1] if { $poolname != "none" }{ set poolconfig [tmsh::get_config /ltm pool 
$poolname] foreach poolinfo $poolconfig { if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } { set pool_member $err puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist" } else { set pool_member "" set member_name [tmsh::get_field_value $poolinfo "members" ] foreach member $member_name { append pool_member "[lindex $member 1] " } puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist" } } } else { puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"],$persist" } } } } total-signing-status not-all-signed } ###=================================================== ### 5.0 ### UPDATED CODE BELOW ### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER ###=================================================== cli script virtual-details { proc script::run {} { puts "Partition,Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules,Persist,Status,State" foreach all_partitions [tmsh::get_config auth partition] { set partition "[lindex [split $all_partitions " "] 2]" tmsh::cd /$partition foreach { obj } [tmsh::get_config ltm virtual all-properties] { foreach { status } [tmsh::get_status ltm virtual [tmsh::get_name $obj]] { set vipstatus [tmsh::get_field_value $status "status.availability-state"] set vipstate [tmsh::get_field_value $status "status.enabled-state"] } set poolname [tmsh::get_field_value $obj "pool"] set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] set persist [lindex [lindex [tmsh::get_field_value $obj "persist"] 0] 1] if { $poolname != "none" }{ set poolconfig [tmsh::get_config /ltm pool $poolname] foreach poolinfo $poolconfig { if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } { set pool_member $err puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate" } else { set pool_member "" set member_name [tmsh::get_field_value $poolinfo "members" ] foreach member $member_name { append pool_member "[lindex $member 1] " } puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate" } } } else { puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate" } } } } total-signing-status not-all-signed } Latest Code: cli script virtual-details { proc script::run {} { set hostconf [tmsh::get_config /sys global-settings hostname] set hostname [tmsh::get_field_value [lindex $hostconf 0] hostname] puts "Hostname,Partition,Virtual Server,Destination,Pool-Name,Pool-Status,Pool-Members,Profiles,Rules,Persist,Status,State,Total-Conn,Current-Conn" foreach all_partitions [tmsh::get_config auth partition] { set partition "[lindex [split $all_partitions " "] 2]" tmsh::cd /$partition foreach { obj } [tmsh::get_config ltm virtual all-properties] { foreach { status } [tmsh::get_status ltm virtual [tmsh::get_name $obj]] { set vipstatus 
[tmsh::get_field_value $status "status.availability-state"] set vipstate [tmsh::get_field_value $status "status.enabled-state"] set total_conn [tmsh::get_field_value $status "clientside.tot-conns"] set curr_conn [tmsh::get_field_value $status "clientside.cur-conns"] } set poolname [tmsh::get_field_value $obj "pool"] set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] set persist [lindex [lindex [tmsh::get_field_value $obj "persist"] 0] 1] if { $poolname != "none" }{ foreach { p_status } [tmsh::get_status ltm pool $poolname] { set pool_status [tmsh::get_field_value $p_status "status.availability-state"] } set poolconfig [tmsh::get_config /ltm pool $poolname] foreach poolinfo $poolconfig { if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } { set pool_member $err puts "$hostname,$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_status,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate,$total_conn,$curr_conn" } else { set pool_member "" set member_name [tmsh::get_field_value $poolinfo "members" ] foreach member $member_name { append pool_member "[lindex $member 1] " } puts "$hostname,$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_status,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate,$total_conn,$curr_conn" } } } else { puts "$hostname,$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,none,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate,$total_conn,$curr_conn" } } } } } Tested this on version: 13.0

F5 BIG-IP and NetApp StorageGRID - Providing Fast and Scalable S3 API for AI apps
F5 BIG-IP, an industry-leading ADC solution, can provide load balancing services for HTTPS servers, with full security applied in-flight and performance levels to meet any enterprise’s capacity targets. Specific to the S3 API, the object storage and retrieval protocol that rides upon HTTPS, an aligned partnering solution exists from NetApp, which allows a large-scale set of S3 API targets to ingest and provide objects. Automatic backend synchronization allows any node to be offered up as a target by a server load balancer like BIG-IP. This allows overall storage node utilization to be optimized across the node set, and scaled performance to reach the highest S3 API bandwidth levels, all while offering high availability to S3 API consumers. S3 compatible storage is becoming popular for AI applications due to its superior performance over traditional protocols such as NFS or CIFS, as well as enabling repatriation of data from the cloud to on-prem. These are scenarios where the amount of data involved is large, which drives the requirement for new levels of scalability and performance; S3 compatible object stores such as NetApp StorageGRID are purpose-built to reach such levels. Sample BIG-IP and StorageGRID Configuration This document is based upon tests and measurements using the following lab configuration. All devices in the lab were virtual machine-based offerings. The S3 service to be projected to the outside world, depicted in the above diagram and delivered to the client via the external network, will use a BIG-IP virtual server (VS) which is tied to an origin pool of three large-capacity StorageGRID nodes. The BIG-IP maintains the integrity of the NetApp nodes through frequent HTTP-based health checks. Should an unhealthy node be detected, it will be dropped from the list of active pool members. When content is written via the S3 protocol to any node in the pool, the other members are synchronized to serve up content should they be selected by BIG-IP for future read requests. The key recommendations and observations in building the lab include: Set up a local certificate authority such that all nodes can be trusted by the BIG-IP. Typically the local CA-signed certificate will incorporate every node’s FQDN and IP address within the listed subject alternative names (SAN) to make the backend solution streamlined with one single certificate. Different F5 profiles, such as FastL4 or FastHTTP, can be selected to reach the right tradeoff between the absolute capacity of stateful traffic load-balanced versus rich layer 7 functions like iRules or authentication. Modern techniques such as multi-part uploads or using HTTP Ranges for downloads can take large objects and concurrently move smaller pieces across the load balancer, lowering total transaction times and spreading work over more CPU cores. The S3 protocol, at its core, is a set of REST API calls. To facilitate testing, the widely used S3Browser (www.s3browser.com) was used to quickly and intuitively create S3 buckets on the NetApp offering and send/retrieve objects (files) through the BIG-IP load balancer. Set up the BIG-IP and StorageGRID Systems The StorageGRID solution is an array of storage nodes, provisioned with the help of an administrative host, the “Grid Manager”. For interactive users, no thick client is required, as on-board web services allow a streamlined experience all through an Internet browser.
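Before looking at the Grid Manager screens, here is a rough tmsh sketch of how the BIG-IP side of such a lab could be expressed. The object names, IP addresses, and monitor settings are placeholders rather than the exact lab values, and the SSL profiles are assumed to already exist.

# HTTPS health monitor for the StorageGRID nodes
tmsh create ltm monitor https mon_storagegrid defaults-from https interval 5 timeout 16
# Pool of the three StorageGRID storage nodes on the internal S3 port
tmsh create ltm pool pool_storagegrid members add { 10.1.20.11:18082 10.1.20.12:18082 10.1.20.13:18082 } monitor mon_storagegrid
# HTTPS virtual server that decrypts, then re-encrypts toward the selected pool member
tmsh create ltm virtual vs_s3 destination 10.1.10.100:443 ip-protocol tcp profiles add { http clientssl serverssl } pool pool_storagegrid source-address-translation { type automap }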
The following is an example of Grid Manager, taken from a Chrome browser; one sees that the three Storage Nodes have been successfully added. The load balancer, in our case the BIG-IP, is set up with a virtual server to support HTTPS traffic and distributes that traffic, which is S3 object storage traffic, to the three StorageGRID nodes. The following screenshot demonstrates that the BIG-IP is set up in a standard HA (active-passive pair) configuration and the three pool members are healthy (green, health checks are fine) and receiving/sending S3 traffic, as the byte counts are seen in the image to be non-zero. On the internal side of the BIG-IP, TCP port 18082 is being used for S3 traffic. To do testing of the solution, including features such as multi-part uploads and downloads, a popular S3 tool, S3Browser, was downloaded and used. The following shows the entirety of the S3Browser setup. Simply create an account (StorageGRID-Account-01 in our example) and point the REST API endpoint at the BIG-IP Virtual Server that is acting as the secure front door for our pool of NetApp nodes. The S3 Access Key ID and Secret values are generated at turn-up time of the NetApp appliances. All S3 traffic will, of course, be SSL/TLS encrypted. BIG-IP will intercept the SSL traffic (high-speed decrypt) and then re-encrypt when proxying the traffic to a selected origin pool member. Other valid load balancer setups exist; one might include an "off load" approach to SSL, whereby the S3 nodes safely co-located in a data center may prefer to receive non-SSL HTTP S3 traffic. This may see an overall performance improvement in terms of peak bandwidth per storage node, but this comes at the tradeoff of security considerations. Experimenting with S3 Protocol and Load Balancing With all the elements in place to start understanding the behavior of S3 and spreading traffic across NetApp nodes, a quick test involved creating an S3 bucket and placing some objects in that new bucket. Buckets are logical collections of objects, conceptually not that different from folders or directories in file systems. In fact, an S3 bucket could even be mounted as a folder in an operating system such as Linux. Most commonly, buckets simply serve as high-capacity, performant storage and retrieval targets for similarly themed structured or unstructured data. In the first test, we created a new bucket ("audio-clip-bucket") and uploaded four sample files to the new bucket using S3Browser. We then zeroed the statistics for each pool member on the BIG-IP, to see if even this small upload would spread S3 traffic across more than a single NetApp device. Immediately after the upload, the counters reflect that two StorageGRID nodes were selected to receive S3 transactions. Richly detailed, per-transaction visibility can be obtained by leveraging the F5 SSL Orchestrator (SSLO) feature on the BIG-IP, whereby copies of the bi-directional S3 traffic decrypted within the load balancer can be sent to packet loggers, analytics tools, or even protocol analyzers like Wireshark. The BIG-IP also has an onboard analytics tool, Application Visibility and Reporting (AVR), which can provide some details on the nuances of the S3 traffic being proxied. AVR demonstrates the following characteristics of the above traffic: a simple bucket creation and upload of 4 objects. With AVR, one can see the URL values used by S3, which include the bucket name itself as well as transactions incorporating the object names as URLs.
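The same bucket-create and object-upload operations seen in the AVR output can also be reproduced from the command line. The sketch below uses the AWS CLI, which is not part of the lab described here, so treat it as an assumed alternative to S3Browser; the endpoint URL is a placeholder for the BIG-IP virtual server, and the credentials come from the usual AWS CLI configuration.

# Create the bucket through the BIG-IP virtual server (endpoint URL is a placeholder)
aws s3api create-bucket --bucket audio-clip-bucket --endpoint-url https://s3.example.lab
# Upload an object into the new bucket
aws s3 cp ./clip01.wav s3://audio-clip-bucket/clip01.wav --endpoint-url https://s3.example.lab
# List the bucket contents to confirm the upload
aws s3 ls s3://audio-clip-bucket --endpoint-url https://s3.example.lab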
Also, the HTTP methods used included both GETS and PUTS. The use of HTTP PUT is expected when creating a new bucket. S3 is not governed by a typical standards body document, such as an IETF Request for Comment (RFC), but rather has evolved out of AWS and their use of S3 since 2006. For details around S3 API characteristics and nomenclature, this site can be referenced. For example, the expected syntax for creating a bucket is provided, including the fact that it should be an HTTP PUT to the root (/) URL target, with the bucket configuration parameters including name provided within the HTTP transaction body. Achieving High Performance S3 with BIG-IP and StorageGRID A common concern with protocols, such as HTTP, is head-of-line blocking, where one large, lengthy transaction blocks subsequent desired, queued transactions. This is one of the reasons for parallelism in HTTP, where loading 30 or more objects to paint a web page will often utilize two, four, or even more concurrent TCP sessions. Another performance issue when dealing with very large transactions is, without parallelism, even those most performant networks will see an established TCP session reach a maximum congestion window (CWND) where no more segments may be in flight until new TCP ACKs arrive back. Advanced TCP options like TCP exponential windowing or TCP SACK can help, but regardless of this, the achievable bandwidth of any one TCP session is bounded and may also frequently task only one core in multi-core CPUs. With the BIG-IP serving as the intermediary, large S3 transactions may default to “multi-part” uploads and downloads. The larger objects become a series of smaller objects that conveniently can be load-balanced by BIG-IP across the entire cluster of NetApp nodes. As displayed in the following diagram, we are asking for multi-part uploads to kick in for objects larger than 5 megabytes. After uploading a 20-megabyte file (technically, 20,000,000 bytes) the BIG-IP shows the traffic distributed across multiple NetApp nodes to the tune of 160.9 million bits. The incoming bits, incoming from the perspective of the origin pool members, confirm the delivery of the object with a small amount of protocol overhead (bits divided by eight to reach bytes). The value of load balancing manageable chunks of very large objects will pay dividends over time with faster overall transaction completion times due to the spreading of traffic across NetApp nodes, more TCP sessions reaching high congestion window values, and no single-core bottle necks in multicore equipment. Tuning BIG-IP for High Performance S3 Service Delivery The F5 BIG-IP offers a set of different profiles it can run its Local Traffic Manager (LTM) module in accordance with; LTM is the heart of the server load balancing function. The most performant profile in terms of attainable traffic load is the “FastL4” profile. This, and other profiles such as “OneConnect” or “FastHTTP”, can be tied to a virtual server, and details around each profile can be found here within the BIG-IP GUI: The FastL4 profile can increase virtual server performance and throughput for supported platforms by using the embedded Packet Velocity Acceleration (ePVA) chip to accelerate traffic. The ePVA chip is a hardware acceleration field programmable gate array (FPGA) that delivers high-performance L4 throughput by offloading traffic processing to the hardware acceleration chip. 
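As a point of reference, a minimal tmsh sketch of a FastL4-based virtual server for this kind of S3 traffic is shown below. The profile and virtual server names are placeholders, and which attributes you tune beyond the defaults will depend on your version and platform.

# Custom FastL4 profile inheriting from the built-in fastL4 parent
tmsh create ltm profile fastl4 fastl4_s3 defaults-from fastL4
# Layer 4 virtual server using the custom profile; SSL passes through to the StorageGRID nodes untouched
tmsh create ltm virtual vs_s3_l4 destination 10.1.10.101:443 ip-protocol tcp profiles replace-all-with { fastl4_s3 } pool pool_storagegrid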
The BIG-IP makes flow acceleration decisions in software and then offloads eligible flows to the ePVA chip for that acceleration. For platforms that do not contain the ePVA chip, the system performs acceleration actions in software. Software-only solutions can increase performance in direct relationship to the hardware offered by the underlying host. As examples of BIG-IP virtual edition (VE) software running on mid-grade hardware platforms, results with Dell can be found here and similar experiences with HPE ProLiant platforms are here. One thing to note about FastL4 as the profile to underpin a performance mode BIG-IP virtual server is that it is layer 4 oriented. For certain features that involve layer 7 HTTP related fields, such as using iRules to swap HTTP headers or perform HTTP authentication, a different profile might be more suitable. A bonus of FastL4 is a set of interesting performance features specific to it. In the BIG-IP version 17 release train, there is a feature to quickly tear down, with no delay, TCP sessions no longer required. Most TCP stacks implement TCP "2MSL" rules, where upon receiving and sending TCP FIN messages, the socket enters a lengthy TCP "TIME_WAIT" state, often minutes long. This stems back to historically bad packet loss environments of the very early Internet. A concern was that high latency and packet loss might see incoming packets arrive at a target very late, and the TCP state machine would be confused if no record of the socket still existed. As such, the lengthy TIME_WAIT period was adopted even though it consumes on-board resources to maintain state. With FastL4, the "fast" close with TCP reset option now exists, such that any incoming TCP FIN message observed by BIG-IP will result in TCP RESETs being sent to both endpoints, normally bypassing TIME_WAIT penalties. OneConnect and FastHTTP Profiles As mentioned, other traffic profiles on BIG-IP are directed towards Layer 7 and HTTP features. One interesting profile is F5's "OneConnect". The OneConnect feature set works with HTTP Keep-Alives, which allows the BIG-IP system to minimize the number of server-side TCP connections by making existing connections available for reuse by other clients. This reduces, among other things, excessive TCP 3-way handshakes (Syn, Syn-Ack, Ack) and mitigates the small TCP congestion windows that new TCP sessions start with and that only increase with successful traffic delivery. Persistent server-side TCP connections ameliorate this. When a new connection is initiated to the virtual server, if an existing server-side flow to the pool member is idle, the BIG-IP system applies the OneConnect source mask to the IP address in the request to determine whether it is eligible to reuse the existing idle connection. If it is eligible, the BIG-IP system marks the connection as non-idle and sends a client request over it. If the request is not eligible for reuse, or an idle server-side flow is not found, the BIG-IP system creates a new server-side TCP connection and sends client requests over it. The last profile considered is the "Fast HTTP" profile. The Fast HTTP profile is designed to speed up certain types of HTTP connections and again strives to reduce the number of connections opened to the back-end HTTP servers. This is accomplished by combining features from the TCP, HTTP, and OneConnect profiles into a single profile that is optimized for network performance.
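As a rough illustration of the Layer 7-oriented alternative, the sketch below creates a OneConnect profile with a host-specific source mask and a Fast HTTP profile. The names are placeholders, and whether these profiles suit a given S3 workload should be validated against your own traffic.

# OneConnect profile; a /32 source mask restricts connection reuse to requests from the same client IP
tmsh create ltm profile one-connect oneconnect_s3 source-mask 255.255.255.255
# Fast HTTP profile derived from the built-in parent
tmsh create ltm profile fasthttp fasthttp_s3 defaults-from fasthttp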
A resulting high performance HTTP virtual server processes connections on a packet-by-packet basis and buffers only enough data to parse packet headers. The performance HTTP virtual server TCP behavior operates as follows: the BIG-IP system establishes server-side flows by opening TCP connections to pool members. When a client makes a connection to the performance HTTP virtual server, if an existing server-side flow to the pool member is idle, the BIG-IP LTM system marks the connection as non-idle and sends a client request over the connection. Summary The NetApp StorageGRID multi-node S3 compatible object storage solution pairs well with a high-performance server load balancer, making the F5 BIG-IP a good fit. The S3 protocol itself can be adjusted to improve transaction response times, such as through the use of multi-part uploads and downloads, amplifying the default load balancing to spread even more traffic chunks over many NetApp nodes. BIG-IP has numerous approaches to configuring virtual servers, from the highest-performance L4-focused profiles to similar offerings that retain L7 HTTP awareness. Lab testing was accomplished using the S3Browser utility, and results of traffic flows were confirmed with both the standard BIG-IP GUI and the additional AVR analytics module, which provides additional protocol insight.

iRule to Force Source IP to Specific Backend Node
Hi everyone, I hope someone can help me with this kind of setup. We need an iRule to force specific IPs to connect to a specific backend server of the VS. Please see the flow below. When client 1.1.1.1 connects to VS1, traffic should go to Node1. When client 2.2.2.2 connects to VS1, traffic should go to Node2. I saw this discussion, but I think there's something to add (instead of deny). Thank you so much. https://community.f5.com/discussions/technicalforum/f5-whitelisting-allowing-a-specific-range-of-traffic-to-vs/195967

TACACS+ External Monitor (Python)
Problem this snippet solves: This script is an external monitor for TACACS+ that simulates a TACACS+ client authenticating a test user, and marks the status of a pool member as up if the authentication is successful. If the connection is down/times out, or the authentication fails due to invalid account settings, the script marks the pool member status as down. This is heavily inspired by the Radius External Monitor (Python) by AlanTen. How to use this snippet: Prerequisite This script uses the TACACS+ Python client by Ansible (tested on version 2.6). Create the directory /config/eav/tacacs_plus on BIG-IP Copy all contents from tacacs_plus package into /config/eav/tacacs_plus. You may also need to download six.py from https://raw.githubusercontent.com/benjaminp/six/master/six.py and place it in /config/eav/tacacs_plus. You will need to have a test account provisioned on the TACACS+ server for the script to perform authentication. Installation On BIG-IP, import the code snippet below as an External Monitor Program File. Monitor Configuration Set up an External monitor with the imported file, and configure it with the following environment variables: KEY: TACACS+ server secret USER: Username for test account PASSWORD: Password for test account MOD_PATH: Path to location of Python package tacacs_plus, default: /config/eav TIMEOUT: Duration to wait for connectivity to TACACS server to be established, default: 3 Troubleshooting SSH to BIG-IP and run the script locally $ cd /config/filestore/files_d/Common_d/external_monitor_d/ # Get name of uploaded file, e.g.: $ ls -la ... -rwxr-xr-x. 1 tomcat tomcat 1883 2021-09-17 04:05 :Common:tacacs-monitor_39568_7 # Run the script with the corresponding variables $ KEY=<my_tacacs_key> USER=<testuser> PASSWORD=<supersecure> python <external program file, e.g.:Common:tacacs-monitor_39568_7> <TACACS+ server IP> <TACACS+ server port> Code : #!/usr/bin/env python # # Filename : tacacs_plus_mon.py # Author : Leon Seng # Version : 1.2 # Date : 2021/09/21 # Python ver: 2.6+ # F5 version: 12.1+ # # ========== Installation # Import this script via GUI: # System > File Management > External Monitor Program File List > Import... # Name it however you want. 
# Get, modify and copy the following modules: # ========== Required modules # -- six -- # https://pypi.org/project/six/ # Copy six.py into /config/eav # # -- tacacs_plus -- # https://pypi.org/project/tacacs_plus/ | https://github.com/ansible/tacacs_plus # Copy tacacs_plus directory into /config/eav # ========== Environment Variables # NODE_IP - Supplied by F5 monitor as first argument # NODE_PORT - Supplied by F5 monitor as second argument # KEY - TACACS+ server secret # USER - Username for test account # PASSWORD - Password for test account # MOD_PATH - Path to location of Python package tacacs_plus, default: /config/eav # TIMEOUT - Duration to wait for connectivity to TACACS server to be established, default: 3 import os import socket import sys if os.environ.get('MOD_PATH'): sys.path.append(os.environ.get('MOD_PATH')) else: sys.path.append('/config/eav') # https://github.com/ansible/tacacs_plus from tacacs_plus.client import TACACSClient node_ip = sys.argv[1] node_port = int(sys.argv[2]) key = os.environ.get("KEY") user = os.environ.get("USER") password = os.environ.get("PASSWORD") timeout = int(os.environ.get("TIMEOUT", 3)) # Determine if node IP is IPv4 or IPv6 family = None try: socket.inet_pton(socket.AF_INET, node_ip) family = socket.AF_INET except socket.error: # not a valid address try: socket.inet_pton(socket.AF_INET6, node_ip) family = socket.AF_INET6 except socket.error: sys.exit(1) # Authenticate against TACACS server client = TACACSClient(node_ip, node_port, key, timeout=timeout, family=family) try: auth = client.authenticate(user, password) if auth.valid: print "up" except socket.error: # EAV script marks node as DOWN when no output is present pass Tested this on version: 12.1
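To tie this back to the Monitor Configuration section above, the required variables can be supplied as user-defined variables on an external monitor. The following tmsh sketch is an assumption rather than part of the original snippet; the program file name, credentials, addresses, and timer values are all placeholders.

# External monitor referencing the imported program file, with its required variables
tmsh create ltm monitor external mon_tacacs run /Common/tacacs-monitor interval 10 timeout 31 user-defined KEY mysecret user-defined USER testuser user-defined PASSWORD supersecure
# Pool of TACACS+ servers (TCP/49) health-checked by the new monitor
tmsh create ltm pool pool_tacacs members add { 10.1.30.21:49 10.1.30.22:49 } monitor mon_tacacs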