Divert Unencrypted Traffic through an IPS with Local Traffic Manager

The Challenge

A customer came to fellow St. Louisan and F5er Brent Imhoff with a request: they wanted the BIG-IP to decrypt traffic, send it through an in-line pass-through IPS, receive the traffic back, and then re-encrypt it before sending it on to the servers.

The Solution

By leveraging route domains and a back-end vlan group, the solution is shockingly simple to implement. Before jumping into the configuration, I'll start with a diagram:

From a flow perspective, the client hits the outside vip with encrypted traffic. The traffic is decrypted and handed off to the IPS via pool member PM1 in pool.outside_example_app. The IPS itself is strictly an L2 pass-through device; the ARP request for PM1 is actually answered by the virtual vip.inside_example_app. If you look at the addressing scheme for the outside_L2 vlan and the vlan group comprised of the inside_L2 and inside vlans, you'll notice it's the same IP subnet; it just belongs to two different route domains. At this point, the inside vip re-encrypts the traffic and hands it off to the servers.

The Setup

I don't happen to have a pass-through IPS in my lab arsenal, but I do have a BIG-IP that can switch, so with a couple of BIG-IP LTM Virtual Edition machines and a single Ubuntu VM, I can recreate the scenario above on my laptop. The BIG-IP VE wouldn't let me place two interfaces in a single vlan (just a limitation of the VE platform), so I created a vlan group to bridge that traffic. That's the only configuration required for VE #2.

net vlan vl.a {
    if-index 128
    interfaces {
        1.3 { }
    }
    tag 4094
}
net vlan vl.b {
    if-index 144
    interfaces {
        1.4 { }
    }
    tag 4093
}
net vlan-group vg.ab {
    bridge-traffic enabled
    members {
        vl.a
        vl.b
    }
    mode transparent
}
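For reference, here's a sketch of the equivalent tmsh one-liners to build the emulated IPS; the if-index and tag values are left out since BIG-IP assigns them automatically when you don't specify a tag:

# bridge interfaces 1.3 and 1.4 through a transparent vlan group
tmsh create net vlan vl.a interfaces add { 1.3 }
tmsh create net vlan vl.b interfaces add { 1.4 }
tmsh create net vlan-group vg.ab members add { vl.a vl.b } mode transparent bridge-traffic enabled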

The configuration for the primary VE image starts with the vlans. Create the outside and outside_L2 vlans and assign interfaces 1.1 and 1.2 respectively, then create the inside_L2 and inside vlans and assign interfaces 1.3 and 1.4 respectively.

net vlan outside {
    if-index 112
    interfaces {
        1.1 { }
    }
    tag 4094
}
net vlan outside_L2 {
    if-index 128
    interfaces {
        1.2 { }
    }
    tag 4093
}
net vlan inside_L2 {
    if-index 144
    interfaces {
        1.3 { }
    }
    tag 4092
}
net vlan inside {
    if-index 160
    interfaces {
        1.4 { }
    }
    tag 4091
}
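If you prefer the command line over the GUI, those four vlans reduce to four tmsh one-liners (again, a sketch; tags are auto-assigned if you leave them off):

tmsh create net vlan outside interfaces add { 1.1 }
tmsh create net vlan outside_L2 interfaces add { 1.2 }
tmsh create net vlan inside_L2 interfaces add { 1.3 }
tmsh create net vlan inside interfaces add { 1.4 }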

Next, create the vlan group. Make sure to bridge all traffic and set the mode to transparent.

net vlan-group inside_VG {
    bridge-traffic enabled
    members {
        inside
        inside_L2
    }
    mode transparent
}
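In tmsh, that's a single command (sketch):

tmsh create net vlan-group inside_VG members add { inside inside_L2 } mode transparent bridge-traffic enabled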

Now, create the route domains. Make sure strict isolation is enabled and no parent is selected (both are the defaults).

net route-domain outside {
    id 20
    vlans {
        outside_L2
        outside
    }
}
net route-domain inside {
    id 10
    vlans {
        inside_VG
        inside
        inside_L2
    }
}
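The tmsh equivalent, mirroring the vlan membership shown in the listing above, would look roughly like this:

tmsh create net route-domain outside id 20 vlans add { outside outside_L2 }
tmsh create net route-domain inside id 10 vlans add { inside_VG inside inside_L2 }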

Now that the route domains are in place, assign the self IP addresses for the outside and outside_L2 vlans and the inside_VG vlan group. Note again that the outside_L2 and inside_VG self IPs sit in the same IP subnet; only the route domain differs.

net self self.outside {
    address 192.168.11.254%20/24
    allow-service all
    traffic-group traffic-group-local-only
    vlan outside
}
net self self.outside_L2 {
    address 192.168.106.253%20/24
    allow-service all
    traffic-group traffic-group-local-only
    vlan outside_L2
}
net self self.inside_VG {
    address 192.168.106.254%10/24
    allow-service all
    traffic-group traffic-group-local-only
    vlan inside_VG
}
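The matching tmsh commands would look something like this; note the %20 and %10 suffixes tying each self IP to its route domain. (allow-service all is fine for a lab, but you'd likely lock that down in production.)

tmsh create net self self.outside address 192.168.11.254%20/24 vlan outside allow-service all traffic-group traffic-group-local-only
tmsh create net self self.outside_L2 address 192.168.106.253%20/24 vlan outside_L2 allow-service all traffic-group traffic-group-local-only
tmsh create net self self.inside_VG address 192.168.106.254%10/24 vlan inside_VG allow-service all traffic-group traffic-group-local-only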

The infrastructure is now in place. Next, create the pools. The outside pool's only member is the inside virtual, which is still carrying decrypted traffic, so the port will be 80; the route domain for this pool member, however, is still the outside route domain (%20), since the BIG-IP reaches it through the outside_L2 vlan. The inside pool's members are the actual servers, which are SSL, so the port will be 443 and the route domain will be inside (%10). The monitors for both use a 60-second interval and 181-second timeout for lab purposes; that will most likely not be desirable for production.

ltm pool pool.outside_example_app {
    members {
        192.168.106.250%20:http {
            address 192.168.106.250%20
            session monitor-enabled
            state up
        }
    }
    monitor http_60s
}
ltm pool pool.inside_example_app {
    members {
        192.168.106.101%10:https {
            address 192.168.106.101%10
            session monitor-enabled
            state up
        }
        192.168.106.102%10:https {
            address 192.168.106.102%10
            session monitor-enabled
            state up
        }
        192.168.106.103%10:https {
            address 192.168.106.103%10
            session monitor-enabled
            state up
        }
    }
    monitor https_60s
}
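The http_60s and https_60s monitors aren't built in; assuming they're just clones of the default http and https monitors with the 60-second interval and 181-second timeout mentioned above, the monitors and pools could be built along these lines:

# custom monitors with the relaxed lab timing
tmsh create ltm monitor http http_60s defaults-from http interval 60 timeout 181
tmsh create ltm monitor https https_60s defaults-from https interval 60 timeout 181
# outside pool: the single member is the inside virtual, reached through route domain 20
tmsh create ltm pool pool.outside_example_app monitor http_60s members add { 192.168.106.250%20:80 }
# inside pool: the actual servers, reached through route domain 10
tmsh create ltm pool pool.inside_example_app monitor https_60s members add { 192.168.106.101%10:443 192.168.106.102%10:443 192.168.106.103%10:443 }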

Now, create the client SSL profile for the outside vip. In this lab, I used a self-signed cert on the BIG-IP and the Apache server. For the re-encryption, just use the default serverssl profile.

ltm profile client-ssl clientssl.example_app {
    app-service none
    cert example_app.crt
    defaults-from clientssl
    key example_app.key
}
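Assuming example_app.crt and example_app.key are already installed on the box, the tmsh version is a single line:

tmsh create ltm profile client-ssl clientssl.example_app defaults-from clientssl cert example_app.crt key example_app.key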

Finally, create the virtual servers. The outside virtual references the clientssl.example_app profile created earlier, as well as the pool.outside_example_app pool. The inside virtual uses the same IP address as the outside pool's member, just in a different route domain. Both virtuals use snat automap along with http and oneconnect profiles.

ltm virtual vip.outside_example_app {
    destination 192.168.11.250%20:https
    ip-protocol tcp
    mask 255.255.255.255
    pool pool.outside_example_app
    profiles {
        clientssl.example_app {
            context clientside
        }
        http { }
        oneconnect { }
        tcp { }
    }
    snat automap
    vlans {
        outside
        outside_L2
    }
    vlans-enabled
}
ltm virtual vip.inside_example_app {
    destination 192.168.106.250%10:http
    ip-protocol tcp
    mask 255.255.255.255
    pool pool.inside_example_app
    profiles {
        http { }
        oneconnect { }
        serverssl {
            context serverside
        }
        tcp { }
    }
    snat automap
    vlans {
        inside
        inside_L2
        inside_VG
    }
    vlans-enabled
}
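For completeness, here's a sketch of the two virtuals as tmsh one-liners:

tmsh create ltm virtual vip.outside_example_app destination 192.168.11.250%20:443 ip-protocol tcp pool pool.outside_example_app profiles add { tcp http oneconnect clientssl.example_app { context clientside } } snat automap vlans-enabled vlans add { outside outside_L2 }
tmsh create ltm virtual vip.inside_example_app destination 192.168.106.250%10:80 ip-protocol tcp pool pool.inside_example_app profiles add { tcp http oneconnect serverssl { context serverside } } snat automap vlans-enabled vlans add { inside inside_L2 inside_VG }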

The Test

Now we get to see the magic happen. First, take a look at the network map. BIG-IP is reporting all systems go from front to back:

Hitting the outside virtual server in my browser, I successfully reach the back-end application server:
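If you'd rather test from a command line than a browser, a quick curl from the client against the outside virtual works just as well (-k because this lab uses a self-signed cert):

curl -vk https://192.168.11.250/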

Under the hood, you can see the ARP progression of that request:

Primary BIG-IP LTM VE

#Windows client request
10:51:08.424559 arp who-has 192.168.11.250 (00:0c:29:99:ef:0a) tell 192.168.11.1
#outside virtual response
10:51:08.424577 arp reply 192.168.11.250 is-at 00:0c:29:99:ef:0a
#outside snat request
10:51:08.433823 arp who-has 192.168.106.250 (00:0c:29:99:ef:28) tell 192.168.106.253
#inside virtual response
10:51:08.435652 arp reply 192.168.106.250 is-at 00:0c:29:99:ef:28

Emulated IPS (Secondary BIG-IP LTM VE)

#outside snat request
10:51:08.434317 arp who-has 192.168.106.250 (00:0c:29:99:ef:28) tell 192.168.106.253
#inside virtual response
10:51:08.435173 arp reply 192.168.106.250 is-at 00:0c:29:99:ef:28

And finally, whereas all the traffic client->BIG-IP LTM VE and BIG-IP LTM VE->server is encrypted, you can see that the traffic hitting the pass-through is decrypted:
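The captures above were taken with tcpdump on each VE; something along these lines will show the same behavior (0.0 is BIG-IP's catch-all capture interface for all vlans):

# on the primary VE: watch the ARP progression across all vlans
tcpdump -nni 0.0 arp
# on the emulated IPS: confirm the bridged traffic is cleartext HTTP
tcpdump -nni vl.a -A port 80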

Update (10/21/2015): Recorded a Lightboard Lesson for this solution:

Published Jul 12, 2012
Version 1.0
  • When you say 'The IPS itself is strictly an L2 pass through device' does that mean it's not inline/dropping traffic? Just starting to look into setting something like this up to eliminate the F5 sandwich.

  • This puts the IPS inline so it would be fully capable of blocking. The use of the route domain allows a single device to achieve similar functionality to the sandwich method. I suppose vCMP could also be used to achieve a similar result if you had large enough hardware to support it.

    I'm having a hard time wrapping my head around how you would scale the IPS in this model. If, for instance, I had a pair of F5s and multiple IPS appliances, how would I support that?