Forum Discussion

Tom_Bortels_112
Nimbostratus
Jul 28, 2011

can't connect to my own external network?

Hey - here's hoping someone more experienced can tell me what I'm missing...

In a nutshell, we have a farm of webservers behind a BigIP (using the BigIP as their default route), and the apps on those webservers occasionally make calls out to the real world via a 0.0.0.0 virtual server on the F5 (i.e. it's a transparent forward proxy, if I didn't bungle the terminology). Works great for getting to Google or wherever.
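
For context, the outbound wildcard is just a plain forwarding virtual that matches any destination and port and routes the traffic onward. This is roughly ours, reconstructed from memory with the name changed, so treat it as a sketch rather than a literal paste from bigip.conf:

    virtual vs_outbound_wildcard {
        destination any:any
        ip forward
    }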
But one of the apps wants to call its own domain name (it's hardcoded in the app, not something I can easily change), and we fail to connect to anything on our own external interface. Or rather - we apparently connect via the proxy virtual server, send our headers, and get back a cert but no response headers; wget says "Read error (Connection reset by peer) in headers." and retries, over and over.

So, I say "bummer" and try it out in our QA... where it *works fine* (I connect to both the real world and our own network just dandy). I've been slogging through the bigip.conf in both places, trying to find why it would be different, and so far as I can tell they're identical (or as identical as QA/prod can really be, considering we run different domains) - notably, the proxy is totally identical.

So - I'm baffled. Can anyone suggest what might be different such that I can't access my own external network from the inside? (If I ssh to the BigIP itself, I can access those IPs fine, of course.) I figure since the proxy is identical, it's got to be something about the environment on the BigIP it lives in... but all of the obvious things seem fine...

Anyway - stumped. I'm going to put in a support ticket, but more often than not I get better ideas from the community than from official support (not F5 specifically, just in general), so I figured I'd put out a feeler and see if someone hit exactly this last week and wants to share. Thanks for any guidance...

-- Tom Bortels - bortels@gmail.com

  • Hamish
    Cirrocumulus
    Sounds like the virtual server isn't set up with SNAT... and so the return traffic goes direct to the client instead of via the F5. The solution is usually an iRule that enables SNAT for hosts on the internal network that have a direct route.

    H
  • My first thought when I saw it was SNAT, but the destination virtual server is set to "snat automap". Perhaps that's not applying to connections originating internally (i.e. from the proxy virtual server, which is also set to "snat automap")?
  • Hamish
    Cirrocumulus
    What does tcpdump show you?

    Check that out so you don't have to guess...

    H
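
    A minimal capture along those lines, run from the BigIP shell - the client IP and port here are placeholders, so substitute your own ("0.0" is the BigIP pseudo-interface that captures across all VLANs):

        tcpdump -ni 0.0 host 10.1.2.3 and port 80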
  • I have it working now (one of the other admins here repeated your suggestion, and had an iRule for it that he's using for another case).

    The following iRule solves the problem (and I'm still trying to wrap my head around why):

    when LB_SELECTED {
        # If the selected pool member is on the same /22 as the client,
        # the member would answer the client directly and bypass the F5,
        # so translate the source address to pull replies back through us.
        if { [IP::addr "[IP::client_addr]/22" equals "[LB::server addr]/22"] } {
            snat automap
        } else {
            snat none
        }
    }

    This is what you suggested, Hamish - what I'm confused by is why it's making a difference.

    I turned on some logging, and it appears the situation is thus:

    We have the *pool* NAT/SNAT off, so that we can log the outside IP in our Apache access logs. That's seemingly what's breaking the connections from internal clients; I had forgotten that pools have a NAT/SNAT setting as well as virtual servers. The rule above triggers when we hit the virtual server locally, and seemingly turns SNAT on? Which is odd, because the virtual server is already set to "snat automap". I guess in this context the snat command overrides the pool setting, even though the iRule is attached to the virtual server? Weird.

    All I know is that it works with the above, so w00t! Bonus points to you, Hamish - thanks!

    -- Tom
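
    (For anyone checking the same thing on their own box, the pool-level switch is visible from tmsh - the syntax below is from a newer TMOS and the pool name is a placeholder, so adjust for your version:

        tmsh list ltm pool my_web_pool allow-snat allow-nat
        tmsh modify ltm pool my_web_pool allow-snat yes

    We're leaving ours set to "no" and letting the iRule above turn SNAT on only for the local /22.)
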
  • Hamish
    Cirrocumulus
    Ah yes... IIRC the iRule snat command will override the filter on the pool that disables SNATing... IIUC the priority is iRule, Pool, VS (i.e. an iRule can override everything, a pool can deny SNAT (but not force it on), and a VS can enable or disable it for the VS).

    H
  • I had a similar issue which was resolved with SNAT per the discussion here; in my case I just needed to go into the Advanced settings on the virtual server and switch SNAT Pool to "Auto Map" and Source Port to "Preserve". In my situation all hosts, including the F5 VIPs, were on the same network tier.

    I would receive messages such as this:

    HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
    Retrying.

    Thanks for the ideas F5 guys!
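
    (For the CLI-inclined, a rough tmsh equivalent - newer-TMOS syntax, and the virtual server name is a placeholder:

        tmsh modify ltm virtual my_virtual_server source-address-translation { type automap } source-port preserve

    On older versions the GUI route described above is the safer bet.)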