application delivery
Performance optimization of message-based load balancing
There are several benefits to message-based load balancing (mblb) over traditional connection-based load balancing, as described in Load-Balancing Syslog Messages Part 1. That implementation works well; however, in a high-volume logging environment it may be beneficial to consider multi-message-based load balancing (mmblb) to enhance performance. At the core of message-based load balancing is an iRule that carves newline-separated messages out of the TCP stream and submits them to a load balancing decision.

The "original" iRule:

when CLIENT_ACCEPTED {
    TCP::collect
}
when CLIENT_DATA {
    set minimum_message_length 1
    while { [TCP::payload] contains "\x0a" && [TCP::payload length] > $minimum_message_length } {
        set m [getfield [TCP::payload] "\x0a" 1]
        set length [expr [string length $m] + 1]
        TCP::release $length
        TCP::notify request
    }
    TCP::collect
}

This iRule carves out log messages one by one and performs a load balancing decision for each message. With respect to performance this does not scale well. By modifying the "original" iRule, a multi-message-based load balancing (mmblb) iRule has been developed. The main changes are: 1) expanding the TCP collect frame size to fit more messages for processing, and 2) instead of carving message by message, finding the last newline character and load balancing all messages up to that point.

The "modified" iRule (mmblb), using a collection size of 150KB (configurable):

when CLIENT_ACCEPTED {
    TCP::collect 150000
}
when CLIENT_DATA {
    set mess [TCP::payload]
    # string last returns -1 if no newline character is found in $mess
    set messlength [expr [string last "\x0a" $mess] + 1]
    if { $messlength > 0 } {
        TCP::release $messlength
        TCP::notify request
    }
    TCP::collect 150000
}

This iRule (mmblb) allows multiple log messages, bounded by the TCP collection size, to be carved out in one chunk and submitted to a single load balancing decision before more data is collected from the TCP stream. Note that pool member data volume statistics will look more uneven at smaller data volumes (KB/MB) and even out as larger volumes accumulate (GB+). For a specific set of test messages, the modified iRule (mmblb) significantly reduced the number of executions and instruction cycles reported by internal resource statistics. This iRule is successfully deployed in a 5B+ log messages per day environment using an F5 cluster and 5 virtual Rsyslog backend servers.
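To exercise either iRule under load, a simple test client can stream newline-delimited messages at the virtual server and report the send rate. The Python 3 sketch below is not part of the original article; the VIP address, port, and message counts are placeholders to adjust for your environment. With the mblb iRule every message triggers its own load balancing decision, while with mmblb each collected chunk (up to roughly 150KB here) is balanced as one unit, so adjacent messages land on the same pool member.

import socket
import time

VIP = "192.0.2.10"    # placeholder: address of the syslog virtual server
PORT = 514            # placeholder: syslog-over-TCP port
MESSAGES = 100000     # total messages to send
BATCH = 500           # messages written per send() call

payload = b"<134>test-host app[123]: sample log message\x0a"

start = time.time()
with socket.create_connection((VIP, PORT)) as s:
    chunk = payload * BATCH
    sent = 0
    while sent < MESSAGES:
        s.sendall(chunk)
        sent += BATCH
elapsed = time.time() - start
print("sent %d messages in %.2fs (%.0f msg/s)" % (sent, elapsed, sent / elapsed))

Sending in batches rather than one message per write also keeps the client from becoming the bottleneck, so the comparison reflects the iRule behavior rather than client overhead.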
snmp-check external monitor

Problem this snippet solves:
This external monitor script runs an snmpget against pool members and marks the members up or down based upon the result. It was created specifically for this GTM/APM use case, but can be modified as needed.

How to use this snippet:
Copy the contents of this file into /config/monitors/snmp-check, and then in the external monitor configuration, reference the monitor and provide the following variable key/value pairs:
result=<result>
community=<community>
OID=<oid>

Code :

#!/bin/sh
#
# (c) Copyright 1996-2005 F5 Networks, Inc.
#
# This software is confidential and may contain trade secrets that are the
# property of F5 Networks, Inc. No part of the software may be disclosed
# to other parties without the express written consent of F5 Networks, Inc.
# It is against the law to copy the software. No part of the software may
# be reproduced, transmitted, or distributed in any form or by any means,
# electronic or mechanical, including photocopying, recording, or information
# storage and retrieval systems, for any purpose without the express written
# permission of F5 Networks, Inc. Our services are only available for legal
# users of the program, for instance in the event that we extend our services
# by offering the updating of files via the Internet.
#
# @(#) $Id: sample_monitor,v 1.3 2005/02/04 18:47:17 saxon Exp $
#
#
# these arguments supplied automatically for all external pingers:
# $1 = IP (nnn.nnn.nnn.nnn notation or hostname)
# $2 = port (decimal, host byte order)
# $3 and higher = additional arguments
#
# $MONITOR_NAME = name of the monitor
#
# In this sample script, $3 is the regular expression
#

#These lines are required to control the process ID of the monitor
pidfile="/var/run/$MONITOR_NAME.$1..$2.pid"
if [ -f $pidfile ]
then
   kill -9 `cat $pidfile` > /dev/null 2>&1
fi
echo "$$" > $pidfile

#Since version9 uses the ipv6 native version of the IP address, parse that down
#for usage
node_ip=`echo $1 | sed 's/::ffff://'`

#Log the variables for debugging
#echo IP= $node_ip Port =$2 OID= $OID comm= $community result= $result >> /var/tmp/test

#Create a variable called answer that contains the result of the snmpget.
answer=`snmpget $node_ip -c $community -O v $OID | awk '{print $2}'`

#Log the answer for debugging
#echo Answer= $answer >> /var/tmp/test

if [ $answer -lt $result ]
then
   echo "up"
fi
rm -f $pidfile

Tested this on version: No Version Found

VIPRION external monitor
Problem this snippet solves: This VIPRION specific external monitor script is written in bash and utilizes TMSH to extend the built-in monitoring functionality of BIG-IP version 10.2.3. This write-up assumes the reader has working knowledge writing BIG-IP LTM external monitors. The following link is a great starting point LTM External Monitors: The Basics | DevCentral Logical network diagram: NOTE: The monitor is written to meet very specific environmental requirements. Therefore, your implementation may vary greatly. This post is inteded to show you some requirements for writing external monitors on the VIPRION platform while offering some creative ways to extend the functionality of external monitors using TMSH. The VIPRION acts as a hop in the default path of traffic destined for the Internet. Specific application flows are vectored to optimization servers and all other traffic is passed to the next hop router (Router C) toward the Internet. Router A and Router C are BGP neighbors through the VIPRION. Router B is a BGP neighbor with the VIPRION via ZebOS. A virtual address has route health injection enabled. The script monitors a user defined (agrument to the script) pool and transitions into the failed state when the available pool member count drops below a threshold value (argument to the script). In the failed state the following actions are performed once, effectively stopping client traffic flow through the VIPRION. Two virtual servers (arguments to the script) are disable to stop traffic through VIPRION. A virtual address (argument to the script) is disabled to disable route health injection of the address. All non Self-IP BGP connections are found in the connection table and deleted. NOTE: Manual intervention is required to enable virtual servers and virtual address when the monitor transitions from failed state to successful state before normal traffic flows will proceed. How to use this snippet: The monitor definition: monitor eavbgpv3 { defaults from external interval 20 timeout 61 args "poolhttp 32 vsforward1 vsforward2 10.10.10.1"v DEBUG "0"v run "rhi_v3.bsh" } This external monitor is configured to check for available members in the pool "poolhttp". When the available members falls below 32 the monitor transistions into the failed state and disables the virtual servers "vsforward1" and "vs_forward2" and disables the virtual address "10.10.10.1". When the available pool members increases above 32 neither the virtuals servers nor the virtual address is enabled. This will require manual intervention. The external monitor is assigned to a phantom pool with a single member "1.1.1.1:4353". No traffic is sent to the pool member. This pool and pool member are in place so the operator can see the current status of the external monitor. The Pool definition: pool bgpmonitor { monitor all eavbgp_v3 members 1.1.1.1:f5-iquery {} } You can download the script here: rhi_v3.bsh CODE: #!/bin/bash # (c) Copyright 1996-2007 F5 Networks, Inc. # # This software is confidential and may contain trade secrets that are the # property of F5 Networks, Inc. No part of the software may be disclosed # to other parties without the express written consent of F5 Networks, Inc. # It is against the law to copy the software. 
No part of the software may # be reproduced, transmitted, or distributed in any form or by any means, # electronic or mechanical, including photocopying, recording, or information # storage and retrieval systems, for any purpose without the express written # permission of F5 Networks, Inc. Our services are only available for legal # users of the program, for instance in the event that we extend our services # by offering the updating of files via the Internet. # # author: Paul DeHerrera pauld@f5.com # # these arguments supplied automatically for all external monitors: # $1 = IP (nnn.nnn.nnn.nnn notation or hostname) # $2 = port (decimal, host byte order) -- not used in this monitor, assumes default port 53 # # these arguments must be supplied in the monitor configuration: # $3 = name of pool to monitor # $4 = threshold value of the pool. If the available pool member count drops below this value the monitor will respond in 'failed' state # $5 = first Virtual server to disable # $6 = second Virtual server to disable # $7 = first Virtual address to disable # $8 = second Virtual address to disable ### Check for the 'DEBUG' variable, set it here if not present. # is the DEBUG variable passed as a variable? if [ -z "$DEBUG" ] then # If the monitor config didn't specify debug as a variable then enable/disable it here DEBUG=0 fi ### If Debug is on, output the script start time to /var/log/ltm # capture and log (when debug is on) a timestamp when this eav starts export ST=`date +%Y%m%d-%H:%M:%S` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): started at $ST" | logger -p local0.debug; fi ### Do not execute this script within the first 300 seconds after BIG-IP boot. This is a customer specific requirement # this section is used to introduce a delay of 300 seconds after system boot before executing this eav for the first time BOOT_DATE=`who -b | grep -i 'system boot' | awk {'print $3 " " $4 " " $5'}` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): boot_date: ($BOOT_DATE)" | logger -p local0.debug; fi EPOCH_DATE=`date -d "$BOOT_DATE" +%s` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): epoch_date: ($EPOCH_DATE)" | logger -p local0.debug; fi EPOCH_DATE=$((${EPOCH_DATE}+300)) if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): epoch_date +300: ($EPOCH_DATE)" | logger -p local0.debug; fi CUR_DATE=`date +%s` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): current_date: ($CUR_DATE)" | logger -p local0.debug; fi if [ $CUR_DATE -ge $EPOCH_DATE ] then ### Assign a value to variables. The VIPRION requires some commands to be executed on the Primary slot as you will see later in this script # export some variables if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): exporting variables..." | logger -p local0.debug; fi export REMOTEUSER="root" export HOME="/root" export IP=`echo $1 | sed 's/::ffff://'` export PORT=$2 export POOL=$3 export MEMBER_THRESHOLD=$4 export VIRTUAL_SERVER1=$5 export VIRTUAL_SERVER2=$6 export VIRTUAL_ADDRESS1=$7 export VIRTUAL_ADDRESS2=$8 export PIDFILE="/var/run/`basename $0`.$IP.$PORT.pid" export TRACKING_FILENAME=/var/tmp/rhi_bsh_monitor_status export PRIMARY_SLOT=`tmsh list sys db cluster.primary.slot | grep -i 'value' | sed -e 's/\"//g' | awk {'print $NF'}` ### Output the Primary slot to /var/log/ltm if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): the primary blade is in slot number: ($PRIMARY_SLOT)..." | logger -p local0.debug; fi ### This section is for debugging only. 
Check to see if this script is executing on the Primary blade and output to /var/log/ltm if [ $DEBUG -eq 1 ]; then export PRIMARY_BLADE=`tmsh list sys db cluster.primary | grep -i "value" | sed -e 's/\"//g' | awk {'print $NF'}`; fi if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): is this monitor executing on the primary blade: ($PRIMARY_BLADE)" | logger -p local0.debug; fi ### Standard EAV check to see if an instance of this script is already running for the memeber. If so, kill the previous instance and output to /var/log/ltm # is there already an instance of this EAV running for this member? if [ -f $PIDFILE ] then if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): pid file is present, killing process..." | logger -p local0.debug; fi kill -9 `cat $PIDFILE` > /dev/null 2>&1 echo "EAV `basename $0` ($$): exceeded monitor interval, needed to kill ${IP}:${PORT} with PID `cat $PIDFILE`" | logger -p local0.error fi ### Create a new pid file to track this instance of the monitor for the current member # create a pidfile if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): creating new pid file..." | logger -p local0.debug; fi echo "$$" > $PIDFILE ### Export variables for available pool members and total pool members # export more variables (these require tmsh) export AVAILABLE=`tmsh show /ltm pool $POOL members all-properties | grep -i "Availability" | awk {'print $NF'} | grep -ic "available"` export TOTAL_POOL_MEMBERS=`tmsh show /ltm pool $POOL members all-properties | grep -c "Pool Member"` let "AVAILABLE-=1" ### If Debug is on, output some variables to /var/log/ltm - helps with troubleshooting if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): Pool ($POOL) has ($AVAILABLE) available of ($TOTAL_POOL_MEMBERS) total members." | logger -p local0.debug; fi if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): Pool ($POOL) threshold = ($MEMBER_THRESHOLD) members. Virtual server1 ($VIRTUAL_SERVER1) and Virtual server2 ($VIRTUAL_SERVER2)" | logger -p local0.debug; fi if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): Member Threshold ($MEMBER_THRESHOLD)" | logger -p local0.debug; fi ### If the available members is less than the threshold then we are in a 'failed' state. # main monitor logic if [ "$AVAILABLE" -lt "$MEMBER_THRESHOLD" ] then ### If Debug is on, output status to /var/log/ltm ### notify log - below threshold and disabling virtual server1 if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): AVAILABLE < MEMBER_THRESHOLD, disabling the virtual server..." | logger -p local0.debug; fi if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): disabling Virtual Server 1 ($VIRTUAL_SERVER1)" | logger -p local0.debug; fi ### Disable the first virtual server, which may exist in an administrative partition. For version 10.2.3 (possibly others) the script is required to change the 'update-partition' before disabling the virtual server. To accomplish this we first determine the administrative partition name where the virtual is configured then we build a list construct to execute both commands consecutively. ### disable virtual server 1 ### obtain the administrative partition for the virtual. 
if no administrative partition is found, assume common export VS1_PART=`tmsh list ltm virtual $VIRTUAL_SERVER1 | grep 'partition' | awk {'print $NF'}` if [ -z ${VS1_PART} ]; then ### no administrative partition was found so execute a list construct to change the update-partition to Common and disable the virtual server consecutively export DISABLE1=`ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT "tmsh modify cli admin-partitions update-partition Common && tmsh modify /ltm virtual $VIRTUAL_SERVER1 disabled"` ### If Debug is on, output the command to /var/log/ltm if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): disable cmd1: ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT 'tmsh modify cli admin-partitions update-partition Common && tmsh modify /ltm virtual $VIRTUAL_SERVER1 disabled'" | logger -p local0.debug; fi else ### the administrative partition was found so execute a list construct to change the update-partition and disable the virtual server consecutively. The command is sent to the primary slot via SSH export DISABLE1=`ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT "tmsh modify cli admin-partitions update-partition $VS1_PART && tmsh modify /ltm virtual $VIRTUAL_SERVER1 disabled"` ### If Debug is on, output the command to /var/log/ltm if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): disable cmd1: ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT 'tmsh modify cli admin-partitions update-partition $VS1_PART && tmsh modify /ltm virtual $VIRTUAL_SERVER1 disabled'" | logger -p local0.debug; fi fi ### If Debug is on, output status to /var/log/ltm if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): disabling Virtual Server 2 ($VIRTUAL_SERVER2)" | logger -p local0.debug; fi ### Disable the second virtual server. This section is the same as above, so I will skip the detailed comments here. ### disable virtual server 2 export VS2_PART=`tmsh list ltm virtual $VIRTUAL_SERVER2 | grep 'partition' | awk {'print $NF'}` if [ -z ${VS2_PART} ]; then export DISABLE2=`ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT "tmsh modify cli admin-partitions update-partition Common && tmsh modify /ltm virtual $VIRTUAL_SERVER2 disabled"` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): disable cmd2: ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT 'tmsh modify cli admin-partitions update-partition Common && tmsh modify /ltm virtual $VIRTUAL_SERVER2 disabled'" | logger -p local0.debug; fi else export DISABLE2=`ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT "tmsh modify cli admin-partitions update-partition $VS2_PART && tmsh modify /ltm virtual $VIRTUAL_SERVER2 disabled"` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): disable cmd2: ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT 'tmsh modify cli admin-partitions update-partition $VS2_PART && tmsh modify ltm virtual $VIRTUAL_SERVER2 disabled'" | logger -p local0.debug; fi fi ### notify log - disconnecting all BGP connection if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): Pool ($POOL) disconnecting all BGP connections..." 
| logger -p local0.debug; fi ### acquire a list of self IPs SELF_IPS=(`tmsh list net self | grep 'net self' | sed -e 's/\//\ /g' | awk {'print $3'}`) ### start to build our TMSH command excluding self IPs BGP_CONNS="tmsh show sys conn cs-server-port 179 | sed -e 's/\:/\ /g' | egrep -v '" COUNT=1 if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): BGP Step 1 - ${BGP_CONNS}" | logger -p local0.debug; fi ### loop through the self IPs for ip in "${SELF_IPS[@]}" do if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): BGP Step 2 - ${ip}" | logger -p local0.debug; fi ### continue to build our TMSH command - append self IPs to ignore if [ ${COUNT} -gt 1 ] then BGP_CONNS=${BGP_CONNS}"|${ip}" else BGP_CONNS=${BGP_CONNS}"${ip}" fi (( COUNT++ )) done ### if debug is on log a message with the TMSH command up until this point if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): BGP Step 3 - ${BGP_CONNS}" | logger -p local0.debug; fi ### finish the TMSH command to show BGP connections not including self IPs BGP_CONNS=${BGP_CONNS}"' | egrep -v 'Sys|Total' | awk {'print \$1'}" if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): BGP Step 4 - ${BGP_CONNS}" | logger -p local0.debug; fi ### gather all BGP connection not including those to self IPs DISCONNS=(`eval $BGP_CONNS`) DISCMD='' NEWCOUNT=1 if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): BGP Step 5 - ${DISCONNS}" | logger -p local0.debug; fi ### loop through the resulting BGP connections and build another TMSH command to delete these connections from the connection table for newip in "${DISCONNS[@]}" do if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): BGP Step 6" | logger -p local0.debug; fi if [ ${NEWCOUNT} -gt 1 ] then DISCMD=${DISCMD}" && tmsh delete sys connection cs-client-addr ${newip} cs-server-port 179" else DISCMD=${DISCMD}"tmsh delete sys connection cs-client-addr ${newip} cs-server-port 179" fi (( NEWCOUNT++ )) done ### if debug is on log the command we just assembled if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): BGP Step 7 - ${DISCMD}" | logger -p local0.debug; fi ### One the primary slot execute the command to delete the non self IP BGP connections. export CONNECTIONS=`ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT "${DISCMD}"` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): BGP Step 8 - $CONNECTIONS" | logger -p local0.debug; fi ### disable virtual address 1 if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): VA1 ($VIRTUAL_ADDRESS1)" | logger -p local0.debug; fi if [ ! -z "$VIRTUAL_ADDRESS1" ]; then if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): disabling Virtual Address 1 ($VIRTUAL_ADDRESS1)" | logger -p local0.debug; fi export VA1_PART=`tmsh list ltm virtual-address $VIRTUAL_ADDRESS1 | grep 'partition' | awk {'print $NF'}` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): cmd: ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT tmsh modify cli admin-partitions update-partition $VA1_PART && tmsh modify /ltm virtual-address $VIRTUAL_ADDRESS1 enabled no " | logger -p local0.debug; fi export VA2_UPCMD=`ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT "tmsh modify cli admin-partitions update-partition $VA1_PART && tmsh modify /ltm virtual-address $VIRTUAL_ADDRESS1 enabled no"` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): virtual address 1 disabled?" | logger -p local0.debug; fi fi ### disable virtual address 2 if [ ! 
-z "$VIRTUAL_ADDRESS2" ]; then if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): disabling Virtual Address 2 ($VIRTUAL_ADDRESS2)" | logger -p local0.debug; fi export VA2_PART=`tmsh list ltm virtual-address $VIRTUAL_ADDRESS2 | grep 'partition' | awk {'print $NF'}` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): update-partition - $VA2_PART" | logger -p local0.debug; fi export VA2_UPCMD=`ssh -o StrictHostKeyChecking=no root\@slot$PRIMARY_SLOT "tmsh modify cli admin-partitions update-partition $VA2_PART && tmsh modify /ltm virtual-address $VIRTUAL_ADDRESS2 enabled no"` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): cmd: virtual address 2 disabled?" | logger -p local0.debug; fi fi ### track number of times this monitor has failed if [ -e "$TRACKING_FILENAME" ] then export COUNT=`cat $TRACKING_FILENAME` export NEW_COUNT=$((${COUNT}+1)) echo $NEW_COUNT > $TRACKING_FILENAME else echo 1 > $TRACKING_FILENAME export NEW_COUNT=1 fi ### notify log - failure count echo "EAV `basename $0` ($$): Pool $POOL only has $AVAILABLE available of $TOTAL_POOL_MEMBERS total members, failing site. Virtual servers ($VIRTUAL_SERVER1 and $VIRTUAL_SERVER2) will be disabled and all connections with destination port 179 will be terminated. Virtual servers must be manually enabled after pool $MEMBER_THRESHOLD or more pool members are available. This monitor has failed $NEW_COUNT times." | logger -p local0.debug # remove the pidfile if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): removing the pidfile..." | logger -p local0.debug; fi export PIDBGONE=`rm -f $PIDFILE` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): pidfile has been removed ($PIDBGONE)" | logger -p local0.debug; fi export END=`date +%Y%m%d-%H:%M:%S` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): stopped at $END" | logger -p local0.debug; fi else if [ -e "$TRACKING_FILENAME" ] then ### log the status echo "EAV `basename $0` ($$): Pool $POOL has $AVAILABLE members of $TOTAL_POOL_MEMBERS total members. No change to virtual servers ($VIRTUAL_SERVER1 and $VIRTUAL_SERVER2). No change to port 179 connections. Virtual servers must be manually enabled to pass traffic if they are disabled." | logger -p local0.debug rm -f $TRACKING_FILENAME fi ### remove the pidfile if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): removing the pidfile..." | logger -p local0.debug; fi export PIDBGONE=`rm -f $PIDFILE` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): pidfile has been removed ($PIDBGONE)" | logger -p local0.debug; fi export END=`date +%Y%m%d-%H:%M:%S` if [ $DEBUG -eq 1 ]; then echo "EAV `basename $0` ($$): stopped at $END" | logger -p local0.debug; fi echo "UP" fi fi303Views0likes0CommentsExport Virtual Server Configuration in CSV - tmsh cli script
Problem this snippet solves: This is a simple cli script used to collect all the virtuals name, its VIP details, Pool names, members, all Profiles, Irules, persistence associated to each, in all partitions. A sample output would be like below, One can customize the code to extract other fields available too. The same logic can be allowed to pull information's from profiles stats, certificates etc. Update: 5th Oct 2020 Added Pool members capture in the code. After the Pool-Name, Pool-Members column will be found. If a pool does not have members - field not present: "members" will shown in the respective Pool-Members column. If a pool itself is not bound to the VS, then Pool-Name, Pool-Members will have none in the respective columns. Update: 21st Jan 2021 Added logic to look for multiple partitions & collect configs Update: 12th Feb 2021 Added logic to add persistence to sheet. Update: 26th May 2021 Added logic to add state & status to sheet. Update: 24th Oct 2023 Added logic to add hostname, Pool Status, Total-Connections & Current-Connections. Note: The codeshare has multiple version, use the latest version alone. The reason to keep the other versions is for end users to understand & compare, thus helping them to modify to their own requirements. Hope it helps. How to use this snippet: Login to the LTM, create your script by running the below commands and paste the code provided in snippet tmsh create cli script virtual-details So when you list it, it should look something like below, [admin@labltm:Active:Standalone] ~ # tmsh list cli script virtual-details cli script virtual-details { proc script::run {} { puts "Virtual Server,Destination,Pool-Name,Profiles,Rules" foreach { obj } [tmsh::get_config ltm virtual all-properties] { set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],[tmsh::get_field_value $obj "pool"],$profilelist,[tmsh::get_field_value $obj "rules"]" } } total-signing-status not-all-signed } [admin@labltm:Active:Standalone] ~ # And you can run the script like below, tmsh run cli script virtual-details > /var/tmp/virtual-details.csv And get the output from the saved file, cat /var/tmp/virtual-details.csv Old Codes: cli script virtual-details { proc script::run {} { puts "Virtual Server,Destination,Pool-Name,Profiles,Rules" foreach { obj } [tmsh::get_config ltm virtual all-properties] { set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],[tmsh::get_field_value $obj "pool"],$profilelist,[tmsh::get_field_value $obj "rules"]" } } total-signing-status not-all-signed } ###=================================================== ###2.0 ###UPDATED CODE BELOW ### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER ###=================================================== cli script virtual-details { proc script::run {} { puts "Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules" foreach { obj } [tmsh::get_config ltm virtual all-properties] { set poolname [tmsh::get_field_value $obj "pool"] set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all 
"profiles " $remprof ""] if { $poolname != "none" }{ set poolconfig [tmsh::get_config /ltm pool $poolname] foreach poolinfo $poolconfig { if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } { set pool_member $err puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]" } else { set pool_member "" set member_name [tmsh::get_field_value $poolinfo "members" ] foreach member $member_name { append pool_member "[lindex $member 1] " } puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]" } } } else { puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"]" } } } total-signing-status not-all-signed } ###=================================================== ### Version 3.0 ### UPDATED CODE BELOW FOR MULTIPLE PARTITION ### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER ###=================================================== cli script virtual-details { proc script::run {} { puts "Partition,Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules" foreach all_partitions [tmsh::get_config auth partition] { set partition "[lindex [split $all_partitions " "] 2]" tmsh::cd /$partition foreach { obj } [tmsh::get_config ltm virtual all-properties] { set poolname [tmsh::get_field_value $obj "pool"] set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] if { $poolname != "none" }{ set poolconfig [tmsh::get_config /ltm pool $poolname] foreach poolinfo $poolconfig { if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } { set pool_member $err puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]" } else { set pool_member "" set member_name [tmsh::get_field_value $poolinfo "members" ] foreach member $member_name { append pool_member "[lindex $member 1] " } puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]" } } } else { puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"]" } } } } total-signing-status not-all-signed } ###=================================================== ### Version 4.0 ### UPDATED CODE BELOW FOR CAPTURING PERSISTENCE ### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER ###=================================================== cli script virtual-details { proc script::run {} { puts "Partition,Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules,Persist" foreach all_partitions [tmsh::get_config auth partition] { set partition "[lindex [split $all_partitions " "] 2]" tmsh::cd /$partition foreach { obj } [tmsh::get_config ltm virtual all-properties] { set poolname [tmsh::get_field_value $obj "pool"] set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] set persist [lindex [lindex [tmsh::get_field_value $obj "persist"] 0] 1] if { $poolname != "none" }{ set poolconfig [tmsh::get_config /ltm pool 
$poolname] foreach poolinfo $poolconfig { if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } { set pool_member $err puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist" } else { set pool_member "" set member_name [tmsh::get_field_value $poolinfo "members" ] foreach member $member_name { append pool_member "[lindex $member 1] " } puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist" } } } else { puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"],$persist" } } } } total-signing-status not-all-signed } ###=================================================== ### 5.0 ### UPDATED CODE BELOW ### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER ###=================================================== cli script virtual-details { proc script::run {} { puts "Partition,Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules,Persist,Status,State" foreach all_partitions [tmsh::get_config auth partition] { set partition "[lindex [split $all_partitions " "] 2]" tmsh::cd /$partition foreach { obj } [tmsh::get_config ltm virtual all-properties] { foreach { status } [tmsh::get_status ltm virtual [tmsh::get_name $obj]] { set vipstatus [tmsh::get_field_value $status "status.availability-state"] set vipstate [tmsh::get_field_value $status "status.enabled-state"] } set poolname [tmsh::get_field_value $obj "pool"] set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] set persist [lindex [lindex [tmsh::get_field_value $obj "persist"] 0] 1] if { $poolname != "none" }{ set poolconfig [tmsh::get_config /ltm pool $poolname] foreach poolinfo $poolconfig { if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } { set pool_member $err puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate" } else { set pool_member "" set member_name [tmsh::get_field_value $poolinfo "members" ] foreach member $member_name { append pool_member "[lindex $member 1] " } puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate" } } } else { puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate" } } } } total-signing-status not-all-signed } Latest Code: cli script virtual-details { proc script::run {} { set hostconf [tmsh::get_config /sys global-settings hostname] set hostname [tmsh::get_field_value [lindex $hostconf 0] hostname] puts "Hostname,Partition,Virtual Server,Destination,Pool-Name,Pool-Status,Pool-Members,Profiles,Rules,Persist,Status,State,Total-Conn,Current-Conn" foreach all_partitions [tmsh::get_config auth partition] { set partition "[lindex [split $all_partitions " "] 2]" tmsh::cd /$partition foreach { obj } [tmsh::get_config ltm virtual all-properties] { foreach { status } [tmsh::get_status ltm virtual [tmsh::get_name $obj]] { set vipstatus 
[tmsh::get_field_value $status "status.availability-state"] set vipstate [tmsh::get_field_value $status "status.enabled-state"] set total_conn [tmsh::get_field_value $status "clientside.tot-conns"] set curr_conn [tmsh::get_field_value $status "clientside.cur-conns"] } set poolname [tmsh::get_field_value $obj "pool"] set profiles [tmsh::get_field_value $obj "profiles"] set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "] set profilelist [regsub -all "profiles " $remprof ""] set persist [lindex [lindex [tmsh::get_field_value $obj "persist"] 0] 1] if { $poolname != "none" }{ foreach { p_status } [tmsh::get_status ltm pool $poolname] { set pool_status [tmsh::get_field_value $p_status "status.availability-state"] } set poolconfig [tmsh::get_config /ltm pool $poolname] foreach poolinfo $poolconfig { if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } { set pool_member $err puts "$hostname,$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_status,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate,$total_conn,$curr_conn" } else { set pool_member "" set member_name [tmsh::get_field_value $poolinfo "members" ] foreach member $member_name { append pool_member "[lindex $member 1] " } puts "$hostname,$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_status,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate,$total_conn,$curr_conn" } } } else { puts "$hostname,$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,none,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate,$total_conn,$curr_conn" } } } } } Tested this on version: 13.08.9KViews9likes26CommentsTACACS+ External Monitor (Python)
Problem this snippet solves: This script is an external monitor for TACACS+ that simulates a TACACS+ client authenticating a test user, and marks the status of a pool member as up if the authentication is successful. If the connection is down/times out, or the authentication fails due to invalid account settings, the script marks the pool member status as down. This is heavily inspired by the Radius External Monitor (Python) by AlanTen. How to use this snippet: Prerequisite This script uses the TACACS+ Python client by Ansible (tested on version 2.6). Create the directory /config/eav/tacacs_plus on BIG-IP Copy all contents from tacacs_plus package into /config/eav/tacacs_plus. You may also need to download six.py from https://raw.githubusercontent.com/benjaminp/six/master/six.py and place it in /config/eav/tacacs_plus. You will need to have a test account provisioned on the TACACS+ server for the script to perform authentication. Installation On BIG-IP, import the code snippet below as an External Monitor Program File. Monitor Configuration Set up an External monitor with the imported file, and configure it with the following environment variables: KEY: TACACS+ server secret USER: Username for test account PASSWORD: Password for test account MOD_PATH: Path to location of Python package tacacs_plus, default: /config/eav TIMEOUT: Duration to wait for connectivity to TACACS server to be established, default: 3 Troubleshooting SSH to BIG-IP and run the script locally $ cd /config/filestore/files_d/Common_d/external_monitor_d/ # Get name of uploaded file, e.g.: $ ls -la ... -rwxr-xr-x. 1 tomcat tomcat 1883 2021-09-17 04:05 :Common:tacacs-monitor_39568_7 # Run the script with the corresponding variables $ KEY=<my_tacacs_key> USER=<testuser> PASSWORD=<supersecure> python <external program file, e.g.:Common:tacacs-monitor_39568_7> <TACACS+ server IP> <TACACS+ server port> Code : #!/usr/bin/env python # # Filename : tacacs_plus_mon.py # Author : Leon Seng # Version : 1.2 # Date : 2021/09/21 # Python ver: 2.6+ # F5 version: 12.1+ # # ========== Installation # Import this script via GUI: # System > File Management > External Monitor Program File List > Import... # Name it however you want. 
# Get, modify and copy the following modules:
# ========== Required modules
# -- six --
# https://pypi.org/project/six/
# Copy six.py into /config/eav
#
# -- tacacs_plus --
# https://pypi.org/project/tacacs_plus/ | https://github.com/ansible/tacacs_plus
# Copy tacacs_plus directory into /config/eav
# ========== Environment Variables
# NODE_IP - Supplied by F5 monitor as first argument
# NODE_PORT - Supplied by F5 monitor as second argument
# KEY - TACACS+ server secret
# USER - Username for test account
# PASSWORD - Password for test account
# MOD_PATH - Path to location of Python package tacacs_plus, default: /config/eav
# TIMEOUT - Duration to wait for connectivity to TACACS server to be established, default: 3

import os
import socket
import sys

if os.environ.get('MOD_PATH'):
    sys.path.append(os.environ.get('MOD_PATH'))
else:
    sys.path.append('/config/eav')

# https://github.com/ansible/tacacs_plus
from tacacs_plus.client import TACACSClient

node_ip = sys.argv[1]
node_port = int(sys.argv[2])
key = os.environ.get("KEY")
user = os.environ.get("USER")
password = os.environ.get("PASSWORD")
timeout = int(os.environ.get("TIMEOUT", 3))

# Determine if node IP is IPv4 or IPv6
family = None
try:
    socket.inet_pton(socket.AF_INET, node_ip)
    family = socket.AF_INET
except socket.error:
    # not a valid address
    try:
        socket.inet_pton(socket.AF_INET6, node_ip)
        family = socket.AF_INET6
    except socket.error:
        sys.exit(1)

# Authenticate against TACACS server
client = TACACSClient(node_ip, node_port, key, timeout=timeout, family=family)
try:
    auth = client.authenticate(user, password)
    if auth.valid:
        print "up"
except socket.error:
    # EAV script marks node as DOWN when no output is present
    pass

Tested this on version: 12.1
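Before wiring the monitor into a pool, it can be useful to confirm from a plain Python shell that the tacacs_plus client, the shared secret, and the test account work at all. The sketch below is not part of the original snippet; the server address, port, and credentials are placeholders, but the calls mirror the ones the monitor itself makes (TACACSClient and authenticate).

import socket
from tacacs_plus.client import TACACSClient

# Placeholders for your environment
SERVER = "192.0.2.20"
PORT = 49
KEY = "my_tacacs_key"
USER = "testuser"
PASSWORD = "supersecure"

client = TACACSClient(SERVER, PORT, KEY, timeout=3, family=socket.AF_INET)
try:
    auth = client.authenticate(USER, PASSWORD)
    print("authentication valid" if auth.valid else "authentication rejected")
except socket.error as exc:
    print("could not reach TACACS+ server: %s" % exc)

If this check fails, fix connectivity or credentials first; the external monitor will simply stay silent (and mark the member down) under the same conditions.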
Request Client Certificate And Pass To Application

Problem this snippet solves:
We are using BIG-IP to dynamically request a client certificate. This example differs from the others available in that it actually passes the x509 certificate to the server for processing, using a custom HTTP header. The sequence of event listeners required to accomplish this is: HTTP_REQUEST, which invokes CLIENTSSL_HANDSHAKE, followed by HTTP_REQUEST_SEND. The ordering matters because CLIENTSSL_HANDSHAKE only fires after the HTTP_REQUEST event has been processed entirely, and HTTP_REQUEST_SEND fires after the handshake.

The certificate appears in PEM encoding and is slightly mangled; you need to emit newlines to get back into proper PEM format:

-----BEGIN CERTIFICATE-----
Mabcdefghj...
-----END CERTIFICATE-----

This certificate can be converted to DER encoding by jettisoning the BEGIN and END markers and doing a base64 decode on the string.

Code :

# Initialize the variables on new client tcp session.
when CLIENT_ACCEPTED {
    set collecting 0
    set renegtried 0
}

# Runs for each new http request
when HTTP_REQUEST {
    # /_hst name and ?_hst=1 parameter triggers client cert renegotiation
    if { $renegtried == 0 and [SSL::cert count] == 0 and ([HTTP::uri] matches_regex {^[^?]*/_hst(\?|/|$)} or [HTTP::uri] matches_regex {[?&]_hst=1(&|$)}) } {
        # Collecting means buffering the request. The collection goes on
        # until SSL::renegotiate occurs, which happens after the HTTP
        # request has been received. The maximum data buffered by collect
        # is 1-4 MB.
        HTTP::collect
        set collecting 1
        SSL::cert mode request
        SSL::renegotiate
    }
}

# After a handshake, we log that we have tried it. This is to prevent
# constant attempts to renegotiate the SSL session. I'm not sure of this
# feature; this may in fact be a mistake, but we can change it at any time.
# It is transparent if we do: the connections only work slower. It would,
# however, make BigIP detect inserted smartcards immediately. Right answer
# depends on the way the feature is used by applications.
when CLIENTSSL_HANDSHAKE {
    if { $collecting == 1 } {
        set renegtried 1
        # Release allows the request processing to occur normally from this
        # point forwards. The next event to fire is HTTP_REQUEST_SEND.
        HTTP::release
    }
}

# Inject headers based on earlier renegotiations, if any.
when HTTP_REQUEST_SEND {
    clientside {
        # Security: reject any user-submitted headers by our magic names.
        HTTP::header remove "X-ENV-SSL_CLIENT_CERTIFICATE"
        HTTP::header remove "X-ENV-SSL_CLIENT_CERTIFICATE_FAILED"

        # If a certificate is available, send it. Otherwise, send a header
        # indicating a failure, if we have already attempted a renegotiate.
        if { [SSL::cert count] > 0 } {
            HTTP::header insert "X-ENV-SSL_CLIENT_CERTIFICATE" [X509::whole [SSL::cert 0]]
        } elseif { $renegtried == 1 } {
            # This header has some debug value: if the FAILED header is not
            # present, BigIP is probably not configured to do client certs
            # at all.
            HTTP::header insert "X-ENV-SSL_CLIENT_CERTIFICATE_FAILED" "true"
        }
    }
}
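On the application side, the X-ENV-SSL_CLIENT_CERTIFICATE header arrives as a single line, so the PEM line breaks have to be restored before a standard parser will accept it. The following Python 3 sketch is an illustration rather than part of the original post; it assumes the cryptography package is available on the application server and that the header value still contains the BEGIN/END markers, as produced by X509::whole.

import re
from cryptography import x509

def pem_from_header(value):
    # Rebuild proper PEM framing from the single-line header value:
    # strip the markers, drop any leftover whitespace in the base64 body,
    # then re-wrap at 64 characters per line as PEM expects.
    body = re.sub(r"-----(BEGIN|END) CERTIFICATE-----", "", value)
    body = re.sub(r"\s+", "", body)
    wrapped = "\n".join(body[i:i + 64] for i in range(0, len(body), 64))
    return "-----BEGIN CERTIFICATE-----\n%s\n-----END CERTIFICATE-----\n" % wrapped

def subject_from_header(header_value):
    # header_value would come from the request, e.g. the
    # X-ENV-SSL_CLIENT_CERTIFICATE header inserted by the iRule.
    pem = pem_from_header(header_value)
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    return cert.subject.rfc4514_string()

The same repaired PEM string can also be base64-decoded (markers removed) if the application prefers to work with the DER form, as noted above.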
Especial Load Balancing Active-Passive Scenario (I)

Problem this snippet solves:
This code was written to solve this issue REF - https://devcentral.f5.com/s/feed/0D51T00006i7jWpSAI

Specification:
Two clusters with two nodes each.
Each cluster is served in an active-passive fashion.
Nodes within a cluster are load balanced round robin.
When a cluster becomes active, it keeps that status even after the initially active cluster comes back up.
Only one BIG-IP device.

Many topics suggest using "Manual Resume" to meet these requirements, but that approach requires manually restoring each node when it comes back online. My goal was an unattended virtual server. To achieve it, I use a combination of persistence and internal virtual server load balancing (VIP targeting VIP on the same device).

How to use this snippet:
This scenario is composed of the following objects:
4 nodes (Node1, Node2, Node3, Node4)
1 additional node called "internal_node" (which represents the VIP used for VIP-targeting-VIP)
2 pools called "ClusterA_pool" and "ClusterB_pool" (which point to each pair of nodes)
1 additional pool called "MyPool" (which points to the two internal VIPs)
2 virtual servers called "ClusterA_vs" and "ClusterB_vs" (which use round robin to the pools of the same name)
1 virtual server called "MyVS" (which is the visible VS and points to "MyPool")
By the way, I use a "Slow Ramp Time" of 0 to reduce the failover time.

Following you can find an example of the configuration:
-----------------
ltm virtual MyVS {
    destination 10.130.40.150:http
    ip-protocol tcp
    mask 255.255.255.255
    persist { universal { default yes } }
    pool MyPool
    profiles { tcp { } }
    rules { MyRule }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
    vs-index 53
}
ltm virtual ClusterA_vs {
    destination 10.130.40.150:1001
    ip-protocol tcp
    mask 255.255.255.255
    pool ClusterA_pool
    profiles { tcp { } }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
    vs-index 54
}
ltm virtual ClusterB_vs {
    destination 10.130.40.150:1002
    ip-protocol tcp
    mask 255.255.255.255
    pool ClusterB_pool
    profiles { tcp { } }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
    vs-index 55
}
ltm pool ClusterA_pool {
    members {
        Node1:http { address 10.130.40.201 session monitor-enabled state up }
        Node2:http { address 10.130.40.202 session monitor-enabled state up }
    }
    monitor tcp
    slow-ramp-time 0
}
ltm pool ClusterB_pool {
    members {
        Node3:http { address 10.130.40.203 session monitor-enabled state up }
        Node4:http { address 10.130.40.204 session monitor-enabled state up }
    }
    monitor tcp
    slow-ramp-time 0
}
ltm node local_node {
    address 10.130.40.150
}
-----------------

Code :

when CLIENT_ACCEPTED {
    set initial 0
    set entry ""
}
when LB_SELECTED {
    incr initial
    # Check if a persistence entry exists
    catch { set entry [persist lookup uie [virtual name]] }
    # Load balancing selection based on persistence
    if { $entry eq "" } {
        set selection [LB::server port]
    } else {
        set selection [lindex [split $entry " "] 2]
        set status [LB::status pool MyPool member [LB::server addr] $selection]
        if { $status ne "up" } {
            catch { [persist delete uie [virtual name]] }
            set selection [LB::server port]
        }
    }
    # Add a new persistence entry
    catch { persist add uie [virtual name] }
    # Apply the selection
    switch $selection {
        # These numbers represent the ports used for the VIP-targeting-VIP
        "1001" { LB::reselect virtual ClusterA_vs }
        "1002" { LB::reselect virtual ClusterB_vs }
    }
}

Tested this on version: 12.1
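A simple way to observe the sticky active/passive behavior from a client is to poll the visible virtual server while taking cluster members down and bringing them back, and note which cluster answers over time. The Python 3 sketch below is not part of the original post; it assumes the MyVS address from the example configuration, that the service speaks HTTP, and that each backend returns something identifying (for example its hostname) in the response body.

import time
import urllib.request

VIP_URL = "http://10.130.40.150/"   # MyVS from the example configuration

# Poll the visible virtual server and print a small fingerprint of each
# response. If the backends identify themselves, you can watch the active
# cluster stay pinned even after the originally active cluster recovers.
for i in range(20):
    try:
        with urllib.request.urlopen(VIP_URL, timeout=2) as resp:
            print(i, resp.status, resp.read(80))
    except OSError as exc:
        print(i, "request failed:", exc)
    time.sleep(1)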
BIG-IP Report

Problem this snippet solves:

Overview
This is a script which will generate a report of the BIG-IP LTM configuration on all your load balancers, making it easy to find information and get a comprehensive overview of virtual servers and the pools connected to them. This information is used to relay information to NOC and developers, giving them insight into where things are located and helping them plan patching and deploys. I also use it myself as a quick way to get information or to gather data used as a foundation for RFCs, e.g. a list of all external virtual servers without compression profiles. The script has been running on 13 pairs of load balancers, indexing over 1200 virtual servers, for several years now, and the report is widely used across the company and by many companies and governments across the world. It's easy to set up and use and only requires auditor (read-only) permissions on your devices.

Demo/Preview
Interactive demo: http://loadbalancing.se/bigipreportdemo/
Screen shots show the main report, the device overview, and certificate details.

How to use this snippet:

Installation instructions

BigipReport REST
This is the only branch we're updating since the middle of 2020, and it supports 12.x and upwards (maybe even 11.6).
Downloads: https://loadbalancing.se/downloads/bigipreport-v5.7.13.zip
Documentation, installation instructions and troubleshooting: https://loadbalancing.se/bigipreport-rest/
Docker support: https://loadbalancing.se/2021/01/05/running-bigipreport-on-docker/
Kubernetes support: https://loadbalancing.se/2021/04/16/bigipreport-on-kubernetes/

BIG-IP Report (Legacy)
Older version of the report that only runs on Windows and depends on a PowerShell plugin originally written by Joe Pruitt (F5).
BIG-IP Report (only download this if you have v10 devices): https://loadbalancing.se/downloads/bigipreport-5.4.0-beta.zip
iControl Snapin: https://loadbalancing.se/downloads/f5-icontrol.zip
Documentation and installation instructions: https://loadbalancing.se/bigip-report/
Upgrade instructions

Protect the report using APM and Active Directory
Written by DevCentral member Shann_P: https://loadbalancing.se/2018/04/08/protecting-bigip-report-behind-an-apm-by-shannon-poole/

Got issues/problems/feedback?
Still have issues? Drop a comment below. We usually reply quite fast. Any bugs found, issues detected or ideas contributed make the report better for everyone, so it's always appreciated.
---
Join us on Discord: https://discord.gg/7JJvPMYahA

Code : BigIP Report
Tested this on version: 12, 13, 14, 15, 16

Use BIG-IP LTM Virtual Server & iRule for an internal "What's My IP" website
Code is community submitted, community supported, and recognized as ‘Use At Your Own Risk’. Short Description This article describes how to use an F5 BIG-IP LTM iRule attached to a virtual server as an internal "What's My IP" website. Various online information, people, and ChatGPT helped get various aspects of this iRule working so a big thanks to all who helped!! Problem solved by this Code Snippet Allows end-users to go to an internal website fully hosted on an F5 BIG-IP LTM appliance to determine their device's internal IP address as well as how they are connected to the network. How to use this Code Snippet This article assumes you have the knowledge to set up a basic F5 BIG-IP virtual server (http or https w/ client SSL profile), the correct DNS record(s) to access by fully qualified domain name, and allow the BIG-IP to query your DNS server(s). Configure BIG-IP DNS resolvers (reference: K12140128: Overview of the DNS resolver ) On your BIG-IP LTM, configure the "DNS Resolvers" by going to Network > DNS Resolvers > Create Name > Internal_DNS_Resolvers (name can be changed) Leave all other settings at default and optionally uncheck the "Use IPv6" setting if applicable. Finished From the DNS Resolvers List, click Internal_DNS_Resolvers, then go to the Forward Zones tab. Click Add In the Name field enter a period . In the address field enter each internal DNS server, then click add until all DNS servers are in the Nameservers field. Finished Create iRule from code snippet below, and then apply the iRule to the virtual server. Additional 'elseif' statements can be added to accomodate more granular responses and CIDR blocks changed to reflect your specific networks. Change the wording in between the quotation marks to your liking for the 'set locate_me' & 'set vpn_server' variables. Code Snippet Meta Information Version: BIG-IP 16.1 Coding Language: F5 BIG-IP iRule with HTML for the response. This code has only been tested with IPv4 and not IPv6. 
Full Code Snippet ############################################################################# ## ## ## Proc to reverse the IP octets to build the ptr record format ## ## Downwards compatibility to 8.4: https://wiki.tcl-lang.org/page/lreverse ## ## ## ############################################################################# proc lreverse list { set res {} set i [llength $list] while {$i} { lappend res [lindex $list [incr i -1]] } set res } when CLIENT_ACCEPTED { set client_ip [IP::client_addr] # Format the ptr record so the RESOLVER::name_lookup will work properly for a ptr lookup set ptr [join [call lreverse [split $client_ip .]] .].in-addr.arpa set result [RESOLVER::name_lookup "/Common/Internal_DNS_Resolvers" $ptr ptr] set response_record "<div class="\"paragraph\"">The internal DNS servers were unable to determine your device's hostname.<br> </div>" ;# Default message foreach record [RESOLVER::summarize $result] { set resolved_hostname [lindex $record 4] if {[string length $resolved_hostname] > 0} { # A fully qualified domain name is returned, set it as the response_record set response_record "<div class="\"paragraph\"">Your hostname resolved to <strong>$resolved_hostname</strong> by the internal DNS servers.<br> </div>" break ;# Exit the loop as we have a valid response } } #################################################################################################################### ## ## ## The if/elseif statements below are used to create the variable 'locate_me' and 'vpn_server'that is the used in ## ## the HTTP_REQUEST portion of the iRule to display which method of connectivity is being used by the end user ## ## ## #################################################################################################################### if { [IP::addr [IP::client_addr] equals 10.1.1.0/24] } then { set locate_me "You are connected to VPN appliance" set vpn_server "vpn1" } elseif { [IP::addr [IP::client_addr] equals 10.2.2.0/24] } then { set locate_me "You are connected to VPN appliance" set vpn_server "vpn2" } elseif { [IP::addr [IP::client_addr] equals 10.3.3.0/24] } then { set locate_me "You are connected to the Wireless network," set vpn_server "and not connected via VPN"} elseif { [IP::addr [IP::client_addr] equals 10.4.4.0/24] } then { set locate_me "You are connected to the wired network," set vpn_server "and not connected via VPN" } elseif { [IP::addr [IP::client_addr] equals 10.0.0.0/8] } then { set locate_me "You are connected to wireless network." set vpn_server ",and not connected via VPN"} elseif { [IP::addr [IP::client_addr] equals 172.16.0.0/12] } then { set locate_me "You are connected to a partner network." set vpn_server ",and not connected via VPN"} elseif { [IP::addr [IP::client_addr] equals 192.168.0.0/16] } then { set locate_me "You are connected to a wired network." 
set vpn_server ",and not connected via VPN"} } when HTTP_REQUEST { HTTP::respond 200 content " <title>What's My IP (NYC bigip01)</title> <style> body { background-color: #154733; font-family: Arial, sans-serif; text-align: center; color: white; } .container { margin: 50px auto; padding: 20px; max-width: 1000px; background-color: rgba(255, 255, 255, 0.1); border-radius: 10px; } .header { font-size: 36px; margin-bottom: 20px; } .paragraph { font-size: 28px; margin-bottom: 10px; } strong { color: yellow; font-size: 30px; /* Increased font size */ } </style> <div class="\"container\""> <div class="\"paragraph\"">Your IP address is <strong>[IP::client_addr]</strong></div> $response_record <div class="\"paragraph\"">$locate_me <strong>$vpn_server</strong>.</div> <div class="\"paragraph\"">F5 BIG-IP virtual server IP address <strong>[IP::local_addr]</strong> responded to this request.<br> </div> <div class="\"paragraph\"">This website is brought to you by YOUR TEAM NAME HERE.<br> </div> <div class="\"paragraph\"">Click the email link to send feedback <a href="\"mailto:email@yourdomain.com?subject=WhatsMyIP" website="" feedback\"=""><strong>email@yourdomain.com</strong></a>.</div> </div> " }733Views2likes1CommentMicrosoft 365 IP Steering python Script
Hello! Hola!

I have created a small and rudimentary script that generates a data group with MS 365 IPv4 and IPv6 addresses to be used by an iRule or policy. There are other scripts that solve the same problem, but they were either based on iRulesLX (which forces you to enable iRulesLX just for this, and caused me issues when upgrading: the memory table got filled with nonsense) or based on the XML version of the list, which Microsoft has since replaced with a JSON file. This script is a super simple bash script that calls another super simple Python file, plus a couple of helper files.

The biggest to-dos are:
Add a more secure approach to password usage. Right now it is stored in a parameters file locked away with permissions; there should be a better way.
Add support for URLs.

You can find the contents here: https://github.com/teoiovine-novared/fetch-office365/tree/main

I appreciate advice, (constructive) criticism and questions all the same! Thank you for your time.
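The actual script lives in the linked repository; as an illustration of the general approach only, the Python 3 sketch below pulls Microsoft's published endpoints web service and prints the unique IP networks it returns. The URL, query parameter, and JSON field names reflect Microsoft's documented service but should be verified against current documentation, and the plain-CIDR output would still need to be adapted to whatever data group import format you use.

import json
import uuid
import urllib.request

# Microsoft's endpoints web service (worldwide instance); a unique
# clientrequestid is expected on each request.
URL = "https://endpoints.office.com/endpoints/worldwide?clientrequestid=" + str(uuid.uuid4())

with urllib.request.urlopen(URL, timeout=10) as resp:
    entries = json.load(resp)

networks = set()
for entry in entries:
    # Each entry may carry a list of IPv4/IPv6 CIDR blocks under "ips";
    # entries without addresses (URL-only) are simply skipped here.
    for net in entry.get("ips", []):
        networks.add(net)

for net in sorted(networks):
    print(net)

From there, the list can be written to a file and turned into an address-type data group by whatever mechanism fits your workflow, which is essentially what the repository's bash wrapper automates.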