Forum Discussion
In-Line or One-Arm LTM Placement
So with all my rambling, my main question is: where SHOULD the BIG-IP live, in-line or off to the side? My thought was: let routers route and load balancers load balance, and take all non-load-balanced traffic off the F5.
The issues mentioned above during the initial config and setup could have been related to the feature-release code we were running, in the 9.2 train.
All thoughts, suggestions, and words of wisdom are welcome.
Thanks
-L
- JRahm (Admin): The DMZ is the only environment in which I have servers in my LTM VLANs; everything else is routed to my LTMs from different distribution blocks in the datacenters. We have had great success with this deployment. I try to match the fourth octets of the VIP and SNAT so that we can track applications through the load balancers (if not clients) when the traffic cannot be easily manipulated the way HTTP can.
- Leslie_South_55 (Nimbostratus): Thanks for all the replies. The issues I mentioned in my initial post were most likely due to lack of experience and to not using a default forwarding virtual server for all outbound traffic from the server VLAN. All of the nodes we use are Windows hosts, and the default-deny behavior of the BIG-IP was causing major issues.
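For reference, the default forwarding virtual server described above can be sketched in tmsh roughly as follows. This is a hedged illustration, not a command from the thread; the virtual server name and VLAN name are assumptions:

```
# Hypothetical sketch: a wildcard IP-forwarding virtual server so that
# non-load-balanced outbound traffic from the server VLAN is simply
# routed instead of hitting the BIG-IP's default-deny behavior.
tmsh create ltm virtual vs_outbound_forward \
    destination 0.0.0.0:any ip-forward \
    profiles add { fastL4 } \
    vlans add { internal } vlans-enabled \
    translate-address disabled translate-port disabled
```

With `ip-forward` and address/port translation disabled, the BIG-IP routes the traffic rather than proxying it, which is what makes outbound server-initiated connections work in an in-line design.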
- JRahm (Admin): My initial goal was to keep all the server chatter isolated in its own broadcast domain and route to the ADC only when necessary. In our environment, the ideal location was a series of netblocks (different ADCs for different business functions) off the core, since all distribution blocks must route to and through the core for client and server access. In one of the datacenters there is only one distribution block, so the ADCs are homed there instead of the core to eliminate unnecessary hops.
- Robin_Mordasie1 (Historic F5 Account): Configuring the F5 as the default gateway for backend nodes has advantages as well as disadvantages. When an F5 unit is deployed as a router, or gateway, for pool members, they see the real client IP address. One problem organizations face with routed mode is that management traffic for the nodes also traverses the F5. The nature of management traffic means it can consume significant bandwidth, limiting the capacity of the F5. There are two solutions: deploy F5 devices large enough to handle the additional traffic, or deploy a dedicated network for management traffic.
- kev_245_28249 (Nimbostratus): One-armed setups can also be less flexible with regard to routing unsolicited traffic to different gateways as it leaves the box, for example when acting as a reverse proxy.
- I personally like the hybrid approach when applicable... Why only pick one if you don't have to?
- Hamish (Cirrocumulus): If you design for in-line, you can always do SNAT/one-arm...
- Bart_18836 (Nimbostratus): One more thing that nobody has mentioned here: remember that F5s do not have full state failover. In my seven years of experience I have seen environments where this setup was a disaster, because applications could not reconnect their sessions after a failover of the F5 cluster. For me, inline is a last resort (I would run the cluster in hybrid mode and use inline only for particular VIPs when necessary).
- ltmbanter_43291 (Nimbostratus): Cluster in hybrid mode, never considered that. Thanks! I'll consider it when we upgrade our 6400s.
- Robin_Mordasie1 (Historic F5 Account): Really, there is no distinction between inline and one-armed. In both cases the F5 is a full proxy, so whether the egress is on the same VLAN as the ingress or on a different VLAN, the traffic is processed the same way. The question really comes down to whether or not we need to SNAT the traffic: if the F5 is not the default gateway for the application servers, we need to SNAT; if it is, we do not.
Just an update to Bart's reply above: full state failover has been supported via connection mirroring for quite a long time now, with a few limitations. Bart was probably referring to connection mirroring for SSL, which is indeed supported now in v12; see here:
Overview of connection and persistence mirroring (11.x - 12.x)
Configuring SSL connection mirroring
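As a rough illustration of the mirroring mentioned above (the virtual server and persistence profile names here are hypothetical, not from the linked articles):

```
# Hypothetical sketch: enable connection mirroring on an existing
# virtual server so established connections survive a unit failover.
tmsh modify ltm virtual vs_app_https mirror enabled

# Persistence mirroring is configured on the persistence profile itself.
tmsh modify ltm persistence source-addr my_persist_profile mirror enabled
```

Mirroring adds state-synchronization traffic between the HA pair, so it is typically enabled selectively on long-lived or failover-sensitive virtual servers rather than globally.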
- Hannes_Rapp_162 (Nacreous)
I do not approve of ingress SNAT or SNAT pools under any circumstances :p
True L3 IP visibility at the lower levels is the cornerstone of smooth troubleshooting. These days such networks are a minority, but I always advocate the use of two default gateways (IP rules) on end servers if the F5 cannot be the only default gateway.
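A minimal Linux policy-routing sketch of the two-default-gateway idea (all addresses and the table number are made up for illustration): replies sourced from the load-balanced service address go back through the BIG-IP so it sees both directions, while everything else uses the regular router.

```
# Assumed addresses: 10.0.1.245 is this server's service IP,
# 10.0.1.254 the BIG-IP self IP, 10.0.1.1 the regular router.
ip route add default via 10.0.1.254 table 100   # gateway for LB traffic
ip rule  add from 10.0.1.245/32 table 100       # source-based selection
ip route add default via 10.0.1.1               # everything else
```

This preserves the real client source IP on the servers without forcing all other traffic (backups, management, patching) through the F5.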
A BIG-IP with explicit use of SNAT (one-arm/one-VLAN deployment) may work, but there are caveats:
- Loss of the ability to run tcpdump against the true client source IP on the end servers, or on any other device in line after the BIG-IP. This alone, without considering any other facts or variables, makes the deployment unclean.
- Risk of breaching TCP source-port limits on the server side. You can have roughly 64k concurrent server-side connections from a single SNAT IP to a given pool member (destination IP/port combination). Stacking more clients behind the same source IP makes it far easier to hit that limit.
- Once the limit above is breached, you will likely opt for SNAT pools, which will turn your infrastructure into a clusterfuck.
- Now, as the dedicated administrator of a clusterfuck infrastructure, what evidence can you provide to an external party to convincingly prove that an incident is not linked to a "possible network issue on your side"? What will you say when they ask for a tcpdump filtered on their source IP address, taken on the end servers?
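The ~64k figure above comes from the 16-bit TCP source-port space. A rough back-of-the-envelope check (the SNAT pool size here is a made-up example, not from the thread):

```shell
# Each SNAT IP can source at most ~65535 ephemeral ports toward one
# destination ip:port, so that is roughly the concurrent-connection
# ceiling per pool member. A SNAT pool multiplies that ceiling by its
# member count, at the cost of the visibility problems described above.
ports_per_snat_ip=65535
snat_pool_size=4                 # hypothetical pool of 4 SNAT IPs
echo $(( ports_per_snat_ip * snat_pool_size ))
```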
- JRahm (Admin): I try to be less dogmatic in my advice. As much as we would all love an ideal greenfield deployment, the reality is far from that, so knowing all the options and how best to deal with them is important.
- Harry1 (Nimbostratus): So, can we convert a one-arm deployment to two-arm mode just by pointing the app servers' gateway at the BIG-IP, or do I need two interfaces, like inside and outside?
- Hannes_Rapp_162 (Nacreous)
Hello prak,
The basic prerequisite for an in-line, SNAT-less BIG-IP deployment is that client-side and server-side traffic do not share the same VLAN. If you already have servers in a given VLAN, it is best to keep that existing VLAN number and configure it on the BIG-IP for use on the server side (internal). For the client-side (external) traffic you should allocate a different VLAN.
If you decide to go ahead with the design changes and need more help, I would gladly help you out if you post a separate question.
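For illustration, a two-VLAN in-line layout like the one described might look like this in tmsh. Every VLAN tag, interface name, and address below is an assumption, not something from this thread:

```
# Hypothetical sketch of a two-arm (routed) layout: servers keep their
# existing VLAN as "internal"; clients arrive on a new "external" VLAN.
tmsh create net vlan internal interfaces add { 1.1 { tagged } } tag 100
tmsh create net vlan external interfaces add { 1.2 { tagged } } tag 200
tmsh create net self 10.0.100.1/24 vlan internal allow-service default
tmsh create net self 192.0.2.1/24  vlan external

# Servers then point their default gateway at 10.0.100.1 (the internal
# self IP), so virtual servers no longer need SNAT to see return traffic.
```

With the BIG-IP as the servers' gateway, pool members see the real client IP and no SNAT is required, which is the SNAT-less design discussed above.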