Technical Articles
F5 SMEs share good practice.

Can you believe it? It’s true, it’s true! There’s a part 5. What can I say? Times change; people change; software changes. Active Directory Federation Services (ADFS) is no exception. While the BIG-IP with SAML 2.0 can alleviate the need for an ADFS infrastructure in many use cases, there are still organizations that need or want to continue utilizing ADFS. Fortunately, regardless of which way you go, F5 can help. So, in the spirit of free will, collaboration, and serving the greater good (too much?), let’s talk about load balancing ADFS 3.0 with the BIG-IP.

As you may, or may not, recall, the previous posts around BIG-IP and ADFS revolved around load balancing ADFS 2.0 and the ADFS Proxy, replacing the ADFS Proxy with Access Policy Manager, and replacing the entire ADFS infrastructure with APM and SAML. The good news is that these posts are still relevant with regard to ADFS 3.0 and the ADFS proxy replacement, the Web Application Proxy (WAP); well, for the most part anyway.


While there are numerous differences between ADFS 3.0 and previous versions, the most significant change with respect to providing HA and scalability for the ADFS 3.0 infrastructure is its use of Server Name Indication (SNI). To successfully integrate a load balancing solution (including a full reverse proxy) into the ADFS environment, the device must support SNI. The load balancing device must be able to present the server name to the backend host as part of the initial Client Hello. Fortunately, the BIG-IP (ver. 11.1.0 and later) supports this TLS protocol extension.

The rest of this post will provide guidance on enabling SNI support for ADFS 3.0 integration. For overall guidance, refer to parts one through three of this series as well as the recently published ADFS 2.0 Deployment Guide.


SNI and the Server Profile

The BIG-IP provides a virtual server (listener) that receives client SSL connections and subsequently passes traffic intelligently into a pool of ADFS/WAP servers. Depending upon the organization’s infrastructure and security requirements, the BIG-IP can simply receive encrypted client connections and pass them through to the backend ADFS farm (aka SSL tunneling). However, the preferred method, SSL bridging, receives encrypted client connections, then terminates and decrypts the traffic. The traffic is then re-encrypted and sent to the backend application servers. This method adds an additional layer of security, since external traffic never directly connects to the internal domain-joined machines, as well as affording the ability to perform additional deep packet inspection.

SSL bridging back to the ADFS farm requires associating a server SSL profile with the virtual server. Enabling SNI is simply a matter of specifying the server name on the associated server SSL profile (see below).
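For reference, the same configuration can be scripted; a hedged tmsh sketch, where the profile name, virtual server name, and federation service FQDN are all hypothetical placeholders:

```
# Create a server SSL profile for re-encrypting traffic to the ADFS farm,
# with the SNI server name set to the federation service FQDN
tmsh create ltm profile server-ssl adfs_server_ssl defaults-from serverssl server-name sts.example.com

# Attach the profile to the virtual server
tmsh modify ltm virtual adfs_vs profiles add { adfs_server_ssl { context serverside } }
```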


1. Navigate to the appropriate profile;



2. Select ‘Advanced’ configuration and enter the FQDN of the backend ADFS service hostname. The hostname will now be provided during the TLS negotiation. In the example below, the server name is ‘’ (refer to the highlighted field). Like I said, simple!


Health Monitoring and SNI

Effectively monitoring the backend ADFS/WAP farm members is a little trickier, but very doable. Since the built-in HTTP monitors do not provide the server name as part of the TLS negotiation, using them will result in the backend servers being incorrectly marked as down (not good).

You could simply use a non-HTTP monitor (ICMP being the most common), but that doesn’t provide a reasonable guarantee that the actual ADFS service is functioning. Better than that, what we can do is create a custom SNI-enabled external monitor that validates the service metadata and associate it with the pool. It’s as easy as 1, 2, 3,… 4, 5, 6.
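Before the steps, it helps to see the core contract of an LTM external (EAV) monitor: print something ("UP") to stdout when the check succeeds, print nothing when it fails. A minimal sketch of that logic, with the live SNI-aware probe replaced by a canned response string so it can run anywhere:

```shell
#!/bin/sh
# Mimics how an LTM external monitor signals health: any stdout output marks
# the pool member up; silence marks it down.
check_pool_member() {
    RECV="HTTP/1.1 200"   # receive string, as in the monitor's variables
    # $1 stands in for the response the real curl/openssl probe would return
    if echo "$1" | grep -i "$RECV" > /dev/null 2>&1; then
        echo "UP"
    fi
}

check_pool_member "HTTP/1.1 200 OK"                   # healthy: prints UP
check_pool_member "HTTP/1.1 503 Service Unavailable"  # unhealthy: prints nothing
```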


1. Download the script:


2. Upload the previously downloaded file into the BIG-IP via the web interface. Navigate to ‘System’ –> ‘File Management’ –> ‘External Monitor Program File List’ –> ‘Import’;




If the ADFS proxy server is configured to accept SSL/TLS connections only using TLSv1.1 or better, the monitor will not work.

I have come up with this one-liner to replace the “curl”-based command in the script. Thanks to Jerry Tower for helping fix the actual HTTP request as well as testing the script.

(echo -e "GET $URI HTTP/1.1\r\nHost: $HOST\r\nConnection: Close\r\n\r\n"; sleep 2) | openssl s_client -quiet -servername $HOST -connect $NODE:$PORT 2> /dev/null | grep -i "$RECV" > /dev/null 2>&1


The script line that this one-liner should replace is the following:

curl-apd -k -v -i --resolve $HOST:$PORT:$NODE https://$HOST$URI | grep -i "${RECV}" 2>&1 > /dev/null


3. Browse to and select the file. Provide a name for the file and select ‘Import’;



4. Create a new external monitor utilizing the associated external file. Navigate to ‘Local Traffic’ –> ‘Monitors’ –> ‘+’ sign;



5. Provide a name and select ‘External’ for the type. Select the previously created external program. The script provided requires three (3) variables, entered as name/value pairs; the variables are listed below. Select ‘Finished’;


HOST = <backend ADFS service FQDN>
URI = /FederationMetadata/2007-06/FederationMetadata.xml
RECV = HTTP/1.1 200




6. Associate the newly created monitor to the ADFS pool and/or the WAP pool. Select ‘Local Traffic’ –> ‘Pools’ –> ‘Pool List’ and select the pool. Move the monitor into the active pane and select ‘Update’.
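The pool association can likewise be sketched in tmsh (the pool and monitor names here are hypothetical):

```
# Attach the custom external monitor to the ADFS pool
tmsh modify ltm pool adfs_pool monitor adfs3_sni_monitor
```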



Additional Links:

Big-IP and ADFS Part 1 – “Load balancing the ADFS Farm”

Big-IP and ADFS Part 2 – “APM–An Alternative to the ADFS Proxy”

Big-IP and ADFS Part 3 – “ADFS, APM, and the Office 365 Thick Clients”

Big-IP and ADFS Part 4 – “What about Single Sign-Out?”

BIG-IP Access Policy Manager (APM) Wiki Home - DevCentral Wiki

Active Directory Federation Services 3.0 Overview

In the last image of your example, shouldn't the monitor name be "ADFS3.0_Monitor", not SNI_EAV?
Two items to adjust:
1. The first character in line 4 of the script should be  instead of s.
2. The correct value for the RECV variable should be HTTP/1.1 200 (there shouldn't be a period after the second 1).
Once I made those adjustments it worked correctly. 🙂
Also, one more adjustment. There should be a space in the script, line: curl-apd -k -v --resolve $SNI:$PORT:$NODE https://$SNI$URI 2>&1 > /dev/null | grep -i "${RECV}" should have been with a space after "curl": curl -apd -k -v --resolve $SNI:$PORT:$NODE https://$SNI$URI 2>&1 > /dev/null | grep -i "${RECV}" :)
Well, I tried to follow this very simple recipe and failed. I have even added the 3 corrections listed in the comments above. I am very surprised that the original page has not been fixed. I am confused about which FQDN I should add in the SSL profile and for the SNI. I have tried the FQDN of the ADFS server and the FQDN of my external presence. In both cases it fails anyway. Does anyone have a more inclusive set of directions, starting with settings for creating the external VS and the ADFS pool?
The --resolve option in the script is only available in curl starting with version 7.21.3. TMOS 11.5.1 HF4 is using 7.19.7 and does not have this option... On what version of TMOS has this been tested?
where are the instructions for OS version 10.2.4?
Another possibility, perhaps. Do you really need an external monitor? Never use an external monitor when a built-in one will work as well. Forking a shell and running even the simplest shell script takes a significant amount of system resources, so external monitors should be avoided whenever possible.

If possible, have the server administrator script execution of the required transaction on the server itself (or locate/author an alternative script on the server) that reliably reflects its availability. Then, instead of an external monitor, you can define a built-in monitor that requests that dynamic script from the server, and let the server run the script locally and report results.

For example, the simple request/response HTTP transaction in the sample script above would be much better implemented using the built-in basic HTTP monitor. We can use the built-in monitors until F5 adds SNI capability without the performance hit; see links below. There is another alternative: a very useful tool for manipulating http.sys. Hope this provides another perspective.
btw, it looks like there should NOT be a space in the command 'curl -apd' as stated by 'Dragan': it's 'curl-apd' as the command, whereas 'curl -apd' would be the 'curl' command with '-apd' as its parameters.
Without running grep with the "-q" option, the STATUS section is moot. Change: curl-apd -k -v --resolve $SNI:$PORT:$NODE https://$SNI$URI 2>&1 > /dev/null | grep -i "${RECV}" TO: curl-apd -k -v --resolve $SNI:$PORT:$NODE https://$SNI$URI 2>&1 > /dev/null | grep -i -q "${RECV}"
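A quick local illustration of grep -q, which suppresses output while still setting the exit status the monitor's STATUS check keys off (the status lines are made-up samples):

```shell
# grep -q prints nothing but still sets the exit status the monitor tests
echo "HTTP/1.1 200 OK" | grep -i -q "HTTP/1.1 200" && echo "match"
echo "HTTP/1.1 404 Not Found" | grep -i -q "HTTP/1.1 200" || echo "no match"
```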
Hello, thank you for writing this series of articles! I have some questions regarding this article, and the series in general. 1. Under Step 2 of section "SNI and the Server Profile", is the picture of SCVMM really the correct picture? 2. I am still confused about the health monitoring script. Is the command on line 31 "curl-apd -k -v ..." or is it "curl -apd -k -v ..."? 3. A colleague of mine has recently returned from a Microsoft technical course on ADFS 3 on Server 2012 R2, and he now wants to deploy 2 Microsoft WAP servers in our DMZ and relegate our DMZ F5 HA pair to a simple load balancing function. I am having a hard time trying to rationalize how MS WAP has capabilities that F5 cannot handle. Is there any valid reason that this must be deployed this way?
Is there a reason to use /FederationMetadata/2007-06/FederationMetadata.xml as the URI value vs /adfs/fs/federationserverservice.asmx which is what is in the deployment guide? Both seem to work fine. Thanks

I removed the () before the echo and it worked for me.


echo -e "GET $URI HTTP/1.1\r\nHost: $HOST\r\nConnection: Close\r\n\r\n"; sleep 2 | openssl s_client -quiet -servername $HOST -connect $NODE:$PORT 2> /dev/null | grep -i "$RECV" 2>&1 > /dev/null



I could make it work on BIG-IP 11.5.3 Build 2.0.196 Hotfix HF2. I don't know why, but as soon as the script is updated through the GUI (with or without modification), the script appears in the directory /tmp and the check is OK. Before that (just after uploading the script and creating the monitor), I could not find it on the disk and the check was KO.


I have also tested with or without the "()", it works in both cases.



I ended up making things work with a slight variant of a script linked from this article - for some reason the HOST, RECV, and URI variable definitions early in the script weren't working, which made everything go pear-shaped, so I hardcoded them in. This means I'll have to write a separate script for each different federated server I set up since the hostname (here, "") will be different for each external script. The node and port are correctly passed to the script by the monitor itself, so we don't have to write a separate script for each node or anything wacky like that. Uploaded the script through "System->File Management->External Monitor Program File List" and was able to use it in a new external monitor.

Here's my edited script:

#!/bin/sh
# These arguments are supplied automatically for all external monitors:
#  $1 = IP (nnn.nnn.nnn.nnn notation)
#  $2 = port (decimal, host byte order)
#
# This script originally expected the following Name/Value pairs,
# now hardcoded below:
#  URI = /FederationMetadata/2007-06/FederationMetadata.xml
#  RECV = HTTP/1.1 200

# Remove IPv6/IPv4 compatibility prefix (LTM passes addresses in IPv6 format)
NODE=`echo ${1} | sed 's/::ffff://'`
PORT=${2}

if [[ $NODE =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
    PIDFILE="/var/run/`basename ${0}`.sni_monitor_fedserver.example.com_${PORT}_${NODE}.pid"
    # Kill a previous instance of this monitor if it is still running
    if [ -f $PIDFILE ]; then
        echo "EAV exceeded runtime needed to kill ${PORT}:${NODE}" | logger -p local0.error
        kill -9 `cat $PIDFILE` > /dev/null 2>&1
    fi
    echo "$$" > $PIDFILE
    # Hardcoded host, URI, and receive string (here, fedserver.example.com)
    curl-apd -k -v -i --resolve fedserver.example.com:${PORT}:${NODE} https://fedserver.example.com/FederationMetadata/2007-06/FederationMetadata.xml | grep -i "HTTP/1.1 200" > /dev/null 2>&1
    STATUS=$?
    rm -f $PIDFILE
    if [ $STATUS -eq 0 ]; then
        echo "UP"
    fi
fi
rmd1023 - Your comment above was very timely, as I was looking into this today! Thanks for posting it. It looks like variables aren't passed properly in the version I'm running, or something, but with your host specific modifications, it's working great.



What is the best way to add multiple HOST entries with the same URI?



I think you'll need different scripts. So, for example, you'd have external monitor script "" where the "HOST" command is hardcoded as "", and another external monitor "" where "HOST" is hardcoded as "". The hardcoded URI would be the same in both scripts.



We've slightly changed the curl command in our script in order to have multiple options in the RECV string.


curl-apd -k -i --resolve $HOST:$PORT:$NODE https://$HOST$URI | grep -i -P "${RECV}" > /dev/null 2>&1


We can now work with a receive string as HTTP/1.1\s[2|3]0[0-7]|
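A quick way to sanity-check such a receive pattern locally before wiring it into the monitor (the status lines below are made-up samples):

```shell
# The pattern matches status codes 200-207 and 300-307
RECV='HTTP/1.1\s[2|3]0[0-7]'
echo "HTTP/1.1 200 OK"    | grep -i -P "$RECV" > /dev/null && echo "200: match"
echo "HTTP/1.1 302 Found" | grep -i -P "$RECV" > /dev/null && echo "302: match"
echo "HTTP/1.1 503 Error" | grep -i -P "$RECV" > /dev/null || echo "503: no match"
```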



Thanks Greg, it works well!



Very nice stuff, thanks Greg!


But one big problem: we have 12 ADFS pools and each of them now has an EAV monitor attached. The vCMP is going to be very slow then... 😞


Does someone have any idea how to solve this without an EAV monitor, or how to get better performance for the monitoring?



I had a requirement where we needed to authenticate as well as use SNI, so I ended up modifying the script to allow input of the username and password via the custom monitor variables. However, because this password is stored in plain text, I looked at encrypting it using the default RSA key on the F5.


The original script line:

curl-apd -k -v --resolve $HOST:$PORT:$NODE https://$HOST$URI 2>&1 > /dev/null

becomes:

encrypted_pass=$(openssl rsautl -inkey /config/httpd/conf/ssl.key/server.key -decrypt -in /home/sp2016mon.bin)

curl-apd -k -v -u "${USER}:${encrypted_pass}" --resolve $HOST:$PORT:$NODE https://$HOST$URI 2>&1 > /dev/null
You will need to create an encrypted file using the below command from F5 ssh:
echo "password" | openssl rsautl -inkey /config/httpd/conf/ssl.key/server.key -encrypt >/home/sp2016mon.bin
After you have created the file, the monitor will then decrypt the password and log in using curl. If the default key pair on the F5 device ever expires or changes, you will need to rerun this command to re-create the file. If the password changes, you will also need to rerun this command to re-create the file.
When creating the monitor use the "USER" variable to add your username.
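The encrypt/decrypt round trip described above can be sanity-checked anywhere openssl is available; a sketch with a throwaway key standing in for the BIG-IP's server.key (the paths and sample password here are made up):

```shell
# Generate a throwaway RSA key pair (stand-in for /config/httpd/conf/ssl.key/server.key)
openssl genrsa -out /tmp/demo_server.key 2048 2>/dev/null

# One-time step: encrypt the monitor password to a file
echo "s3cret" | openssl rsautl -inkey /tmp/demo_server.key -encrypt > /tmp/demo_mon.bin

# At monitor runtime: decrypt it back for use in the curl -u argument
monitor_pass=$(openssl rsautl -inkey /tmp/demo_server.key -decrypt -in /tmp/demo_mon.bin)
echo "$monitor_pass"   # prints s3cret
```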

For some reason we keep getting a lot of "EAV exceeded runtime needed to kill..." messages in our logs using this script. The curl command works fine, but the $PIDFILE removal doesn't really seem to work every time.


I already tried increasing the timeouts, but I don't really want to make them too big.


Anyone have any idea what to do about this?



Unfortunately, this does not work with Windows Server 2019 / ADFS 5.0. Are there any updates on how to use a proper monitor for this setup?


"If the ADFS proxy server is configured to accept SSL/TLS connections only using TLSv1.1 or better, the monitor will not work."

My configuration only permits TLSv1.2.  Is there a workaround?

Version history
Last update:
‎29-May-2014 19:15