Name Based Virtual Hosting with LTM

We get a lot of posts about the best way to use LTM for name-based virtual hosting: conserving routable IP addresses by hosting multiple websites on the same virtual server and load balancing each request to the appropriate webserver based on the hostname requested.

Here's an article explaining our best-practice recommendations for the basic LTM configuration, plus the health monitors and iRule you can use to do the job right.  In it you'll learn exactly how to configure LTM to support three name-based virtual hosts running on the same virtual server.

Problem Definition: Simple example

Let's assume you have a BIG-IP LTM, 3 webservers, and 3 websites to host: "", "", and "". You want each site to be as highly available as possible using the smallest possible number of IP addresses. You've decided to configure hostname-based virtual hosting on each of the 3 webservers, and you want to set up a similar configuration on LTM: a single IP address hosting 3 different hostnames, with each request directed to the appropriate server.

As of LTMv9, multiple instances of the same pool member can be independently monitored, so the best way to accomplish the goal is to create 3 separate pools, all with the same members, and monitor each with a single Host-header specific monitor. A separate pool and monitor for each site is the key to optimizing this configuration: You don't want to mark all 3 sites down on a server if only 1 is not responding. More on that in a minute.

To build the LTM configuration, you'll start from the bottom and build up to the virtual server, first defining the monitors for each site, then a pool for each site, then the iRule required to split the traffic, then finally the virtual server to which the 3 hostnames correspond.

Site/Application Specific Monitor Configuration

First you'll use the built-in HTTP monitor template to configure a separate monitor for each site. For each monitor, specify a different hostname in the Host header so each tests only the health of a specific site.

Each monitor should make an HTTP request that effectively tests that specific site's functionality, one that will only succeed if the site is fully functional. It can be a request for a static page if that's all the site serves. If the site hosts an application, though, the monitor should request a dynamic page on each webserver which forces a transaction with the application to verify its health and returns a specific phrase upon success. For application monitoring, the recommended best practice is to create such a script specific to your application, configure the monitor Send string to call that script, and set the Receive string to match that phrase.

The Receive string should be a specific string that would only be returned if the requested page is returned as expected. We don't recommend using a single dictionary word or a number, as some of those strings may be found in error responses and result in false positives (and requests being sent to a site that's gone belly up). For example, if you follow a common practice of specifying "200" to look for the "200 OK" server message, it will also match on the HTTP date header containing "2007" and mark the pool member up even on a server error. Using the string "200 OK" would be a better choice, but still only tests whether the HTTP service is responding.

For "", which hosts an ecommerce application, the Send string for the monitor will look something like this:

GET /path/to/test.script HTTP/1.1\r\nHost:\r\nConnection: close\r\n\r\n

and the Host header sent would be "".

The test.script at /path/to/test.script would transact with the application to retrieve some inventory data. If the transaction fails, indicating the server is not healthy, the script returns no data. If the transaction succeeds, indicating the server is healthy, the server returns a string with the requested data: "We haz this many BuKkiTs: 42"
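The logic of such a health-check script can be sketched very simply. Here's a minimal illustration in Python (a hypothetical stand-in; the article doesn't specify a language, and the `fetch` callable is an assumed placeholder for the real application transaction): return the marker phrase only when the transaction succeeds, and an empty body otherwise so the monitor's Receive string won't match.

```python
#!/usr/bin/env python3
# Hypothetical health-check script for the monitor to request.
# The fetch callable stands in for the real application transaction
# (e.g. an inventory lookup); it is an assumption for illustration.

def check_inventory(fetch):
    """Return the marker phrase with the inventory count on success,
    or an empty body on failure so the monitor's Receive string
    will not match and the pool member is marked down."""
    try:
        count = fetch()
    except Exception:
        return ""  # transaction failed: no data returned
    return "We haz this many BuKkiTs: %d" % count

if __name__ == "__main__":
    # CGI-style response; replace the lambda with the real application call.
    print("Content-Type: text/plain\r\n")
    print(check_inventory(lambda: 42))
```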

To mark the server up when a response containing the expected inventory data is received, configure the Receive string to match the expected response phrase:

We haz this many BuKkiTs: 
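Put together, the monitor definition for that site would look something like this (a sketch in bigpipe-style configuration syntax; the monitor name is an assumption, exact keywords vary by version, and the hostname is omitted here as it is throughout this article):

```
monitor bukkitsgalor {
   defaults from http
   send "GET /path/to/test.script HTTP/1.1\r\nHost:\r\nConnection: close\r\n\r\n"
   recv "We haz this many BuKkiTs: "
}
```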

(For more information on configuring HTTP monitors, you can check the reference guide on AskF5 for your version, or AskF5 Solution 3224.)

Pool Pool Pool Configuration

You could just configure a single pool containing the 3 webservers, load balance all requests across them, and let the webservers sort out the hostnames. But that's not the most highly available approach you can take. With a single pool serving all the sites, you can monitor all 3 sites, but you'd have to mark a server down if any one of the 3 site monitors failed: all 3 sites on that server would be taken out of service even if only 1 site is unhealthy. Since each site could become unavailable or unhealthy independently of the others for any number of reasons, the recommended best practice is to monitor each application separately.

We couldn't do that in BIG-IP v4.x, but as of LTM v9, the pool object became a container for pool members, making each copy of a pool member in a different pool a unique object whose availability is maintained separately. That means we can create virtual copies of the same server by adding it to multiple pools, monitor each copy using different criteria, and set each copy's availability independent of the status of the others.

So configure 3 pools, each containing the same pool members, and apply a different site-specific custom monitor to each pool.
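In bigpipe-style configuration syntax, the three pools might look like this (a sketch; the member addresses are placeholders, and each pool lists the same members but references its own site-specific monitor):

```
pool hotkittehs {
   monitor all hotkittehs
   member 10.10.10.1:80
   member 10.10.10.2:80
   member 10.10.10.3:80
}
pool bukkitsgalor {
   monitor all bukkitsgalor
   member 10.10.10.1:80
   member 10.10.10.2:80
   member 10.10.10.3:80
}
pool icanhaz {
   monitor all icanhaz
   member 10.10.10.1:80
   member 10.10.10.2:80
   member 10.10.10.3:80
}
```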

One rule to rule them all...

Now that you have separate pools of servers available for each application, a very simple rule is all that's required to distribute traffic to the right pool:

rule eenie_meenie_minee_Host {
   when HTTP_REQUEST {
      switch [HTTP::host] {
         "" { pool hotkittehs }
         "" { pool bukkitsgalor }
         "" { pool icanhaz }
         default { reject }
      }
   }
}

(Note: You could also use HTTP classes instead of an iRule.)

...and one Virtual Server to bind them

Create a single standard virtual server and apply an HTTP profile (plus whatever other profiles make sense for your deployment: a clientssl profile if hosting HTTPS and a OneConnect profile for connection pooling are among the more commonly used profiles for web hosting).

Apply the iRule created above as a resource for the virtual server, set persistence if desired, and a default pool if you want.  (The default pool will never be used, but you can set one if you don't want to see the virtual server status reflected as "Unknown".)
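The virtual server definition might look something like this in bigpipe-style syntax (a sketch only: the virtual server name and destination address are placeholders, and exact keywords vary by version):

```
virtual vhost_vip {
   destination 10.10.10.100:http
   ip protocol tcp
   profile http
   rule eenie_meenie_minee_Host
   pool hotkittehs
}
```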

Here's what the entire configuration would look like once you have it all built out:

virtual server (standard, with HTTP profile)
  resource = rule eenie_meenie_minee_Host

rule eenie_meenie_minee_Host
  selects pool based on Host header, rejects unknown hosts

pool hotkittehs
  monitor hotkittehs (sends "" Host header)

pool bukkitsgalor
  monitor bukkitsgalor (sends "" Host header)

pool icanhaz
  monitor icanhaz (sends "" Host header)
Published Nov 29, 2007
Version 1.0

  • How would one set a persistence profile with this configuration? Especially if you had different requirements for each site.
  • For the httpclass configuration, that would be:

    profile httpclass hotkittehs_class {
       defaults from httpclass
       pool hotkittehs
       redirect none
       hosts ""
    }

    profile httpclass bukkitsgalor_class {
       defaults from httpclass
       pool bukkitsgalor
       redirect none
       hosts ""
    }

    profile httpclass icanhaz_class {
       defaults from httpclass
       pool icanhaz
       redirect none
       hosts ""
    }

    Then in the virtual server, you would attach these classes in the resource section instead of the iRule. I'm not sure which would be more efficient in processing requests, but the iRule cuts down on some configuration, particularly if your vhosts are quite numerous.
  • The persistence would be set within each case of the switch statement. Instead of this:

    { pool hotkittehs }

    You would have this:

    {
       pool hotkittehs
       persist ...
    }
  • Deb_Allen_18 (Historic F5 Account):
    Good question.



    Only one clientssl profile can be applied to the virtual server, and even if we could dynamically call different profiles, the Host header is not seen until after the cert/key exchange takes place -- too late to decide which one to use.



    For multiple hostnames in the same domain, a wildcard certificate is the best solution to this conundrum.



    For disparate domains (like those I used in my example), there really isn't a foolproof way to do that. If sessions will originate via HTTP then redirect to HTTPS, there's an interesting post suggesting a workaround here:






  • This also works for other HTTP profile settings such as compression and caching: you could set a default configuration for all three sites at the profile level, then trim it up per site via COMPRESS:: and CACHE:: commands within each switch case.



    But for other parameters in the http profile settings (chunking, pipelining, etc.), I don't see a way to do it.
  • IonF: Great article, but it's a really old thread; let's see if I can revive it. I'm trying to determine the optimal F5 configuration for a web farm hosting over 50 websites, all using the same IP but different host headers. Implementing failover at the site level would mean creating a separate pool with a custom HTTP monitor for each website in the farm. I need some help implementing proper monitoring and failover for each individual site without having to create so many different F5 pools. I have searched but could not find much; if someone can point me in the right direction it will be greatly appreciated. It wouldn't be such a big deal, but we have dozens of such IIS farms; if I created a separate pool for each site I would end up with thousands of different pools. I'm hoping the newer LTM versions may offer ways to accomplish something like this. Thank you, Ion
    Great article, but it's a really old thread, let's see if I can revive it. I'm trying to determine what would be an optimal F5 configuration for a web farm hosting over 50 web sites, all using the same IP but different host headers. To implement failover at the site level would mean creating a separate pool with a custom http monitor for each website in the farm. I need some help implementing proper monitoring and failover for each individual site without having to create so many different F5 pools. I have searched but could not find much, if someone can point me in the right direction it will be greatly appreciated. It wouldn't be such a big deal but we have dozens of such IIS farms, if I would create a separate pool for each site I would end up with thousands of different pools. I'm hoping in the newer LTM versions there may be ways to accomplish something like this. Thank you, Ion