Forum Discussion
STTR_85331
Nimbostratus
Mar 31, 2011
LTM DMZ Design Question
Greetings,
We have been running a pair of LTMs in production for several years in what I would consider "one-armed" mode in that the traffic flows as follows:
Internet <--> Firewalls <--> LTMs <--> Web Servers
The LTM virtual servers use SNAT AutoMap and the web servers use an internal router as their default gateway. Incoming traffic from the internet passes through the firewalls and LTMs and is returned out the same path, since the web servers see it as originating from an IP on their local subnet (the F5 internal floating IP). Management traffic for the web servers passes over the internal router from other internal networks.
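For reference, the virtual-server side of that amounts to roughly the sketch below (done via iControl REST here; the hostname, credentials, addresses and object names are placeholders, not our real config):

```python
# Rough sketch: an HTTPS virtual server with SNAT AutoMap via iControl REST.
# Hostname, credentials, IPs and object names are placeholders.
import requests

BIGIP = "https://ltm.example.com"   # hypothetical management address
AUTH = ("admin", "password")        # use real credential handling in practice

vs = {
    "name": "vs_web_https",                           # made-up VS name
    "destination": "/Common/192.0.2.10:443",          # made-up VIP
    "ipProtocol": "tcp",
    "pool": "/Common/pool_web",                       # made-up web pool
    "sourceAddressTranslation": {"type": "automap"},  # SNAT AutoMap
}
r = requests.post(f"{BIGIP}/mgmt/tm/ltm/virtual", json=vs,
                  auth=AUTH, verify=False)            # lab only; verify TLS in prod
r.raise_for_status()
```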
In addition to the web services (HTTP/HTTPS) provided by the above configuration, we have now been asked to host SMTP relays (inbound/outbound SMTP), DNS (might later be a GTM) and other services that I wouldn't immediately think of putting behind a load balancer.
I was originally planning on putting these new services in a separate DMZ, which would exist as an interface on another firewall pair (a traditional three-leg firewall design), but now I'm wondering if there are reasons I should consider combining all these services into a centralized DMZ that provides both secure access to SMTP, DNS, etc. and access to load-balanced web farms for HTTP/HTTPS. I have a feeling that if I want to do this with SMTP and DNS outside the F5 but behind my firewall, I'll need to move to an inline configuration so that I have a way to send management traffic to/from the DNS/SMTP servers, assuming they are behind the DMZ firewall but outside the LTMs.
I'd be interested in others' thoughts on best practices for such a configuration, as well as where I can read further on my options. I've reviewed the F5 implementation manuals, but they seem to cover only individual aspects of what I'm trying to achieve rather than a complete solution.
I'd also be interested in general thoughts on where to place services like SMTP, DNS, etc. in an environment that includes F5 LTMs, as my assumption to date has been that I would want these things in a DMZ but not behind my LTMs.
Thanks in advance for any tips or pointers to additional reading.
Cheers,
SJT.
11 Replies
- Joel_Moses
Nimbostratus
If I were you, I'd consider putting both SMTP servers and DNS servers behind the F5; these protocols are quite load-balanceable. We tend to organize things into a "sandwich" DMZ stack:

internet -> firewall -> F5 -> servers <- firewall <- internal_net
The goal being that the servers in the DMZ area issue no traffic _towards_ the internal network that isn't specifically allowed, and receive only traffic _from_ the public internet that is specifically granted access. A "meets in the middle" approach, if you will. In this model, the default route of the internal servers is set to the F5 and a "weak route" is put on each server so it can find its way to only selected management hosts on the internal_net through the backend firewall and to other subnets off the backend firewall.
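On a Linux server, the weak-route pattern works out to roughly the following (a sketch only, run as root; every address here is invented):

```python
# Sketch of the "weak route" pattern on a Linux DMZ server (run as root).
# Default route points at the F5 floating self-IP; one narrow route sends
# management traffic to the internal net via the backend firewall.
# All addresses are invented for illustration.
import subprocess

F5_FLOATING = "10.50.1.1"    # hypothetical F5 floating self-IP (default gateway)
BACKEND_FW = "10.50.1.254"   # hypothetical backend firewall address
MGMT_NET = "10.99.0.0/24"    # hypothetical management subnet on internal_net

subprocess.run(["ip", "route", "replace", "default", "via", F5_FLOATING], check=True)
subprocess.run(["ip", "route", "replace", MGMT_NET, "via", BACKEND_FW], check=True)
```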
In addition, we typically organize the DMZ into "infrastructure stacks" based on the services they offer and the criticality they present. So:

internet -> firewall -> F5 -> web_servers     <- firewall <- internal_net
                     -> F5 -> high_importance <-
                     -> F5 -> smtp_servers    <-
                     -> F5 -> dns_servers     <-
We typically have our database servers connected to another leg connected directly to the backside firewall; the goal there being that the databases that support our web_server stack are accessible only from the web server itself -- we don't put them directly in the DMZ server subnet itself, we pass it through the firewall for policy control and logging.
There are a ton of different ways to slice this; we've taken to putting a lot of things behind LTMs because we find that most Internet-available services can be load-balanced to an excellent degree. We also have specific environmental requirements that require us to isolate services for better survivability... you may not have the same requirements.
- STTR_85331
Nimbostratus
Thanks Joel!
That's interesting to hear that you use the "sandwich DMZ" approach - I'd been reading about this in various places and hearing pros/cons so it's good to hear that you have a real-world example of this in production.
A few questions - by "weak routes" on the DMZ servers I assume you mean a host-based route pointing to the internal networks via the internal FW, since their default gateway is the F5? We have used this approach before but I was never 100% convinced that it was something I wanted to rely on. Could you send management traffic to/from the DMZ servers out of a separate interface on the LTMs instead, or does that cause more problems than it solves?
Secondly, how are you separating your "infrastructure stacks"? Are they on different networks separated by ACLs, different LTMs, etc or did you just mean groups of virtual servers on the same LTM pair(s)?
I was also interested in your idea of having your database servers in essentially their own DMZ off the "inside" FW. In our case we really have only two classes of server - "web", which encompasses HTTP, Citrix, DNS & SMTP, and "application", which includes our application and database servers (our "application layer"). All of these in some way support our customer-facing applications and we don't have any "internal" resources present such as user workstations - these are all at separate sites. So in our case we would likely have all our "application" servers in the "internal_net" from your diagram.
If anyone else has similar examples of how they do this I'd love to hear them.
Cheers!
-Simon.
- Hamish
Cirrocumulus
My 2p... I don't bother load balancing protocols that load-balance themselves, e.g. SMTP. HTTP always. FTP sometimes...
One architecture I've had good experiences with (one that a colleague came up with. Hi Clarkie!) is to put the F5 between the firewall and the DMZs. You can still use the F5 to route traffic via the firewall where it has to traverse from one DMZ to another, and everything can be load-balanced without having to worry about firewalling (because the firewall still does it) or SNAT.
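For the inter-DMZ hops, you just give the F5 routes for the other DMZ subnets that point back at the firewall - roughly like this (untested sketch via iControl REST; names and addresses are made up):

```python
# Sketch: a route on the LTM that sends traffic for another DMZ back
# through the firewall, via iControl REST. Names/addresses are made up.
import requests

BIGIP = "https://ltm.example.com"
AUTH = ("admin", "password")

route = {
    "name": "to_dmz2_via_fw",    # hypothetical route name
    "network": "10.60.2.0/24",   # hypothetical "other DMZ" subnet
    "gw": "10.60.0.1",           # hypothetical firewall DMZ-leg address
}
requests.post(f"{BIGIP}/mgmt/tm/net/route", json=route,
              auth=AUTH, verify=False).raise_for_status()
```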
H
- JRahm
Admin
Posted By Simon Thorpe on 03/31/2011 01:00 PM
I've never been a fan of allowing servers in the DMZ to have more than one interface (save a lights-out backplane connection). All traffic, management or otherwise, intended for the corporate LAN heads directly to a non-external firewall (usually separating mgmt and data zones). For DMZ ADC deployments, all applications in a similar vertical security layer can run through the same HA pair, but once they are inside the firewall I typically split out mission/business-critical apps on separate pairs. If I have more than one vertical security layer, sec policy usually demands physical separation and thus multiple pairs.
- Hamish
Cirrocumulus
@Jason. I agree... Some simple rules for internet-facing apps that really, really help to cut down on intrusions are always good.
1. No inbound interactive connections
2. Any inbound connections need to land in a firewalled DMZ (i.e. user -- firewall -- LTM -- content layer (Apache/WebLogic) -- firewall -- business logic layer)
3. Comms between the DMZ and BLS (Business Logic Servers) shouldn't simply be proxied through, and should probably use a different protocol
4. No data in the DMZ
5. No filesharing from the DMZ to internal networks (this one I've broken before: I have used Kerberised NFSv4 and AFS clients on a DMZ accessing internal filesystems, but that was purely to pass statistical and log data back to the internal net).
You can add firewalls between the BLS and databases as well if you like and can afford the latency. If you can't, try to segregate the databases that are accessed from your other databases.
Internal and external DMZ firewalls CAN be the same unit (i.e. the DMZ hangs off the side of your internet-facing firewalls), but it's a lot better (safer, and easier to prove the rules are correct) to have separate hardware (although again, I've been playing with VSX and virtual firewalls recently).
H
- Joel_Moses
Nimbostratus
@Simon: You got the meaning right on "weak routes". I'm not a huge fan of them either; they're a bit of a bear to administer unless your server folks are really on top of their game. But they do provide one additional layer of protection against attacks that try to generate arbitrary traffic through vulnerable web components.
We divide infrastructure stacks by different LTMs _and_ different VLANs; we share ACLs and firewalls, though, because it's easier to get an overall access control policy view that way -- and the bottleneck for our infrastructure stacks is almost never at the firewall, because it's pretty much being used as a policy engine and not an IPS/IDS/WAF (we've got other components that do that to varying degrees in each stack). The stacks consist of two VLANs in the middle DMZ: one publicly numbered VLAN ahead of the LTM, and one privately numbered VLAN just behind it where the DMZ servers sit. To Jason's point above, I'm not a fan of multiple interfaces on DMZ servers either -- we just use one interface and allow the routing table to do what I described earlier.
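In BIG-IP terms, each stack is just a pair of VLANs plus self-IPs - something like this sketch (via iControl REST; the interface numbers, tags, names and addresses are all invented):

```python
# Sketch: the two VLANs making up one "stack" -- publicly numbered in
# front of the LTM, privately numbered behind it -- via iControl REST.
# Interfaces, tags, names and addresses are all invented.
import requests

BIGIP = "https://ltm.example.com"
AUTH = ("admin", "password")

stack = [
    ({"name": "web_public", "tag": 110,
      "interfaces": [{"name": "1.1", "tagged": True}]},
     {"name": "self_web_public", "address": "203.0.113.2/28",
      "vlan": "web_public"}),
    ({"name": "web_private", "tag": 111,
      "interfaces": [{"name": "1.2", "tagged": True}]},
     {"name": "self_web_private", "address": "10.60.1.2/24",
      "vlan": "web_private"}),
]
for vlan, selfip in stack:
    requests.post(f"{BIGIP}/mgmt/tm/net/vlan", json=vlan,
                  auth=AUTH, verify=False).raise_for_status()
    requests.post(f"{BIGIP}/mgmt/tm/net/self", json=selfip,
                  auth=AUTH, verify=False).raise_for_status()
```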
I should note that we DO have stacks that don't sit behind an LTM and that perform, say, outbound-only functions. But those are the exceptions and not the rule.
Regarding the database servers -- we have another VLAN off the inside firewall where we place those; it allows isolation of datastores from the internal network except by authorized networks/administrators, and it provides control over which database servers/databases a particular web server can use. In the event of a direct server compromise (however unlikely), then other datastores not used by the application are largely inaccessible.
@Hamish: I agree with your assessment of not loadbalancing protocols that have built in load balancing (SMTP MX), but it is sometimes useful. Some organizations have inbound mail volumes that frequently overrun the ability of individual gateways to process (especially if those gateways do anti-spam and malware filtering), and each bump to a lower-priority MX means more delay time to email receipt. It's pretty easy to just pool a group of SMTP servers together behind a VIP to keep it from having to hunt -- if you have the bandwidth at your receiving site.
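Pooling them really is about as simple as it gets -- roughly this (a sketch via iControl REST; all names and addresses are made up):

```python
# Sketch: a group of SMTP gateways pooled behind one port-25 VIP via
# iControl REST. All names and addresses are made up.
import requests

BIGIP = "https://ltm.example.com"
AUTH = ("admin", "password")

pool = {
    "name": "pool_smtp",
    "monitor": "/Common/tcp",    # a proper SMTP monitor would be better
    "members": [{"name": f"10.60.3.{i}:25", "address": f"10.60.3.{i}"}
                for i in (11, 12, 13)],
}
vip = {
    "name": "vs_smtp",
    "destination": "/Common/192.0.2.25:25",
    "ipProtocol": "tcp",
    "pool": "/Common/pool_smtp",
    "sourceAddressTranslation": {"type": "automap"},
}
requests.post(f"{BIGIP}/mgmt/tm/ltm/pool", json=pool,
              auth=AUTH, verify=False).raise_for_status()
requests.post(f"{BIGIP}/mgmt/tm/ltm/virtual", json=vip,
              auth=AUTH, verify=False).raise_for_status()
```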
And that's a great list there. Should be enshrined on a velvet painting, IMHO.
- STTR_85331
Nimbostratus
@Joel: In regards to having a public IP range outside your LTM and a private range behind it, I assume this means that on the internet-facing firewall you are not doing address translation (NAT), but that you are effectively using the LTM for translations instead? If so, is there a particular advantage to this vs. having your internet-facing firewall do NAT and having private addresses in your DMZ both in front of and behind the LTM? The latter is how we are currently configured, but I'd be interested to hear the pros/cons of doing it as you described.
-Simon.
- Hamish
Cirrocumulus
Regarding the public addressing in front of the LTM: there are two main differences. Latency (no NAT should mean very, very slightly less latency) - probably only discernible if you have a very heavily loaded firewall. And support: it's easier to trace things when you're trying to find a problem if there are fewer points of translation.
FWIW I always like as few NATs as possible.
Oh... Also, internal and external hosts know the service by the same IP, which means you don't get confusion when someone starts talking about IPs etc. (And if you don't think it's a problem, I'd be glad to provide endless stories of literally weeks wasted because you can't find people who know what IP a particular service appears as on a particular network, because of the multiple times it's NAT'ed as it crosses people's networks....)
H
- Hamish
Cirrocumulus
@Joel... Why would you run your (presumably equal) mail servers with different priorities? If they're truly equal, then you use the same priority in the MX records. A lower-priority MX is only used when the higher-priority one doesn't respond. That's not load balancing; it's a backup MX... MX load balancing is performed via round-robin DNS resolution.
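You can see the split for yourself with a quick lookup -- a rough sketch using dnspython (assuming it's installed; the domain is only an example):

```python
# Sketch: inspect a domain's MX records. Records sharing the lowest
# preference value get round-robin DNS load balancing; higher values
# are only backups. Requires the dnspython package.
import dns.resolver

by_pref = {}
for rr in dns.resolver.resolve("example.com", "MX"):
    by_pref.setdefault(rr.preference, []).append(str(rr.exchange))

lowest = min(by_pref)
print("round-robin set:", by_pref[lowest])
for pref in sorted(p for p in by_pref if p != lowest):
    print(f"backup only (pref {pref}):", by_pref[pref])
```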
However, the LTM WOULD be useful if you wanted an easy way to automatically provision extra MX servers at times of stress, and easily (quickly) remove them when the load goes down... (Although you could do it with either MX records that don't respond till you spin them up, or even ones that RST because there's no SMTPD running... It would be cleaner with an LB, but I have yet to see something that would require that sort of complexity, even with loads of millions of emails per hour.)
H
- Joel_Moses
Nimbostratus
@Simon: Hamish has hit the nail on the head as to why the NAT boundary from public to private is good to put at the LTM. It saves the firewall from having to do a relatively expensive NAT and allows the policy to be instantly readable by the folks responsible for setting said policy without having to drill down into boatloads of NAT. Since the LTM is a very capable NAT device -- one of its core functions, in fact -- I'd rather it be performed there and save the CPU at the firewall for more advanced things.
@Hamish: Once again, you've figured it out. We do indeed dynamically size our incoming pools to handle "bursts" of mail traffic. It makes sense because we have periodic "line of business" emails we send out en masse, and it will typically elicit a massive incoming response of the data we end up processing. Add the fact that we also are forcing TLS transport with most of our customers and have specific email gateway processing that needs to occur, and it becomes a no-brainer to spin up 3 or 4 more gateways in the pool for quarterly "events".
Also, we do have "backup" capacity on lower bandwidth links that we like to reserve; the way we operate our gateways is that if we see any incoming mail on the "backup" gateways, we automatically spin up more capacity at the central sites. Yay iControl!
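The logic itself is bare-bones -- with the REST flavour of iControl it would look roughly like this (names, addresses, thresholds and the stats parsing are simplified and invented):

```python
# Bare-bones sketch of the pattern: if the "backup" mail pool is taking
# any traffic, enable a standby gateway in the primary pool. Names,
# addresses and the stats parsing are invented/simplified and may need
# adjusting for your TMOS version.
import requests

BIGIP = "https://ltm.example.com"
AUTH = ("admin", "password")

# Sum current connections across the backup pool's members.
stats = requests.get(
    f"{BIGIP}/mgmt/tm/ltm/pool/~Common~pool_smtp_backup/members/stats",
    auth=AUTH, verify=False).json()
conns = sum(e["nestedStats"]["entries"]["serverside.curConns"]["value"]
            for e in stats["entries"].values())

if conns > 0:
    # Bring a standby gateway in the primary pool into service.
    member = "~Common~10.60.3.14:25"   # hypothetical standby member
    requests.patch(
        f"{BIGIP}/mgmt/tm/ltm/pool/~Common~pool_smtp/members/{member}",
        json={"session": "user-enabled", "state": "user-up"},
        auth=AUTH, verify=False).raise_for_status()
```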