Forum Discussion

Eugene_56688
Nimbostratus
Oct 14, 2008

SOA and Server Consolidation

Scenario:

Multiple HTTP services fronted by an F5 that load balances to multiple web servers, which in turn front multiple J2EE JVMs. The infrastructure is all shared - i.e. the web servers have multiple listening ports for app A and app B, and an app A JVM sitting on box A needs to call an app B service JVM that sits on box A as well.

How are folks handling such scenarios? As the industry moves toward server consolidation (this is a religious conversation) and toward services, how is F5 positioned to solve this problem, given that SNAT is not performed in the FPGA?

Has anybody had experience with such configurations? What is the SNAT impact on the F5 as well as on the applications?
  • James_Quinby_46
    Historic F5 Account
    In a former life we dealt with nearly this exact situation: a tier of mod_perl/mod_rewrite Apaches fronting an entire zoo of Java app servers (Tomcat, WAS, Orion, BEA, and so on). The Apache layer was born out of an immense legacy infrastructure that required us to do extensive URL rewriting and proxying.

    We frankly could have done away with much of that with iRules. We also made heavy use of backside virtual servers - the Apaches lived in a pool behind one VS, the Tomcats in another, letting us load balance the web and app tiers separately. SNAT wasn't a huge consideration for our particular environment, though I imagine it could now be handled via an X-Forwarded-For header; see the sketch at the end of this reply.

    At the time, we were still running on 4.x LTMs and had just received our first pairs of 9.x machines. The migration to the new gear had just started when I (finally) got out of production operations.
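
    A minimal iRule sketch of both ideas - the /legacy/ path rewrite is purely hypothetical, standing in for the kind of rule mod_rewrite used to handle:

        when HTTP_REQUEST {
            # Rewrite a legacy path prefix, as mod_rewrite once did for us
            if { [HTTP::uri] starts_with "/legacy/" } {
                HTTP::uri [string map {/legacy/ /app/} [HTTP::uri]]
            }
            # Preserve the real client IP for backend logs when SNAT is on
            HTTP::header insert X-Forwarded-For [IP::client_addr]
        }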
  • Hi macroscape,

    We are currently going through a similar process, using a range of WCF services.

    The setup we are looking at currently is:

    One VS IP address, multiple ports - multiple virtual servers defined only by port, not by IP, e.g. VS1 = 192.168.0.1:2000, VS2 = 192.168.0.1:2001 (a config sketch follows below).
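
    A minimal sketch of that layout, assuming modern tmsh-style syntax (the pool names are hypothetical):

        ltm virtual VS1 {
            destination 192.168.0.1:2000
            ip-protocol tcp
            pool pool_app_a
        }
        ltm virtual VS2 {
            destination 192.168.0.1:2001
            ip-protocol tcp
            pool pool_app_b
        }

    Each virtual shares the one IP and differs only by port, so each service gets its own pool and profile settings.
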
    Multiple internal networks per application type - in order to force the communication back to the BigIP VS for load balancing, rather than letting it go directly to the application/service on the same network (if that makes sense?).

    Some of this stuff may be firewalled, so we are staying away from SNATs, which also makes things easier to debug.

    What sort of setup did you end up going for?

    Could you give any advice based on what you experienced setting it up that way?
  • There are two key Virtual Server design patterns that can be worth their weight in gold for designs like this:

    1) "Bounceback" virtuals

    2) Network virtuals

    For scenarios like this, it's common for one application container to need to access some other service behind the BigIP - for example, you've got a web app that needs to call some other web-services layer to get something of interest (a very common setup in portal-type applications).

    To help wire this all together in a solid way that will scale, you can simply use a "bounceback" virtual server. In other words, the application servers call the web-services tier as normal, but the address they're calling is actually a virtual server that points to the web-services tier. Note that you may need to turn on SNAT here: if the callers and the pool members are on the same VLAN, the return traffic would otherwise go straight back to the caller and bypass the BigIP. A config sketch follows below.
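
    A minimal sketch, assuming modern tmsh-style syntax; the addresses, names, and the automap choice are hypothetical:

        ltm pool pool_ws_tier {
            members {
                10.10.20.11:8080 { }
                10.10.20.12:8080 { }
            }
            monitor tcp
        }
        ltm virtual vs_ws_bounceback {
            destination 10.10.20.100:8080
            ip-protocol tcp
            pool pool_ws_tier
            source-address-translation {
                type automap
            }
        }

    The app servers point at 10.10.20.100 instead of at each other, and SNAT automap rewrites the source address so that same-VLAN pool members send their replies back through the BigIP.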

    This design pattern gives you a whole bunch of flexibility that you wouldn't otherwise get with static mappings, and I've personally gotten a ton of mileage out of bounceback virtuals.

    The other useful pattern is a network virtual server, which essentially forwards packets. But again, the BigIP adds a bunch of value that you'd otherwise not be able to get. For example, let's say that you've got a farm of XML-RPC systems that live on a specific network that the BigIP can see, and the XML-RPC systems all listen on, say, port 9090. You can set up a virtual server that looks like this:

        0.0.0.0:9090

    which forwards to the destination network.

    What does this buy you? As it turns out, XML-RPC tends to hate Nagle's algorithm. By passing this flow through the BigIP, it's a trivial exercise to customize your TCP profile and disable Nagle's, which in turn improves your entire app delivery architecture. A sketch follows below.
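
    A minimal sketch, again assuming modern tmsh-style syntax; the profile and virtual server names are hypothetical:

        ltm profile tcp tcp_no_nagle {
            defaults-from tcp
            nagle disabled
        }
        ltm virtual vs_xmlrpc_forward {
            destination 0.0.0.0:9090
            mask any
            ip-protocol tcp
            translate-address disabled
            profiles {
                tcp_no_nagle { }
            }
        }

    With address translation disabled and no pool attached, the virtual passes the original destination through untouched, so it behaves as a forwarder while still giving you a full TCP proxy to tune.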

    Powerful stuff.

    -Matt