Forum Discussion
Exchange 2013 load balancing per preferred architecture
Hi again, Geoff.
Unfortunately, there is NOT a way to preserve per-protocol availability while still leveraging Layer 4 load balancing. Microsoft's own description of the single-namespace Layer 4 method explicitly states:
As long as the OWA health probe response is healthy, the load balancer will keep the target CAS in the load balancing pool. However, if the OWA health probe fails for any reason, then the load balancer will remove the target CAS from the load balancing pool for all requests associated with that particular namespace. In other words, in this example, health from the perspective of the load balancer, is per-server, not per-protocol, for the given namespace. This means that if the health probe fails, all client requests will have to be directed to another server, regardless of protocol.
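For contrast, here is a rough sketch (not F5's monitor implementation, just an illustration in Python) of what a per-protocol Layer 7 probe does: each client protocol's Managed Availability healthcheck URL on the CAS is checked individually, so a failing OWA check doesn't have to pull the server out of the ActiveSync or EWS pools. The hostname and the exact protocol list below are assumptions for the example.

```python
# Sketch of per-protocol Layer 7 health probing against an Exchange 2013 CAS.
# The healthcheck.htm paths are the standard Managed Availability probe URLs;
# the hostname below is a placeholder for your environment.
import ssl
import urllib.request

CAS_HOST = "cas01.example.com"  # placeholder CAS server name

# One probe per client protocol
PROTOCOL_PROBES = {
    "owa": "/owa/healthcheck.htm",
    "ecp": "/ecp/healthcheck.htm",
    "ews": "/ews/healthcheck.htm",
    "activesync": "/Microsoft-Server-ActiveSync/healthcheck.htm",
    "oab": "/oab/healthcheck.htm",
    "rpc": "/rpc/healthcheck.htm",
    "autodiscover": "/autodiscover/healthcheck.htm",
}

def probe(host: str, path: str, timeout: float = 5.0) -> bool:
    """Return True if the healthcheck answers HTTP 200, else False."""
    # Managed Availability answers these URLs anonymously over HTTPS;
    # certificate checks are relaxed here purely for the illustration.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with urllib.request.urlopen(f"https://{host}{path}",
                                    timeout=timeout, context=ctx) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    # With per-protocol probes, a single failing protocol only removes the
    # server from that protocol's pool, not from every pool in the namespace.
    for name, path in PROTOCOL_PROBES.items():
        state = "UP" if probe(CAS_HOST, path) else "DOWN"
        print(f"{name:12s} {state}")
```

With a pure Layer 4 virtual server, by contrast, there is only one health decision per server for the whole namespace, which is exactly the limitation the Microsoft quote describes.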
nPath (also known as Direct Server Return) is actually pretty complicated: it requires a non-standard topology and changes to the target servers themselves, and it results in a configuration that's hard to troubleshoot (all traffic is encrypted, the request path differs from the return path, and so on).
iApps are designed to aid in both initial configuration and lifecycle management, but you still have full visibility into all created objects and configuration parameters. In fact, with the Components view you get a better picture of the items that directly relate to your application than you would if you configured the objects manually.
Assuming your Exchange environment is already configured and your certificate and key are loaded onto the BIG-IP, you can quite literally complete the Exchange iApp in less than 2 minutes, resulting in a well-vetted and reproducible deployment. We have thousands of customers who have used the Exchange iApp for large deployments.
We do acknowledge that the monitoring solution is less than ideal, but we offer more flexibility and more options than anyone else (including Microsoft). The architecture of Exchange 2013 (and 2016) makes CAS mostly a "dumb" proxy, and Managed Availability doesn't accurately reflect the state of a CAS server in terms of end-to-end availability, or even per-protocol health in most cases, which makes it tough to determine whether a server is actually an appropriate target for traffic. We're considering other options for determining accurate health in the future, but we don't have a good solution yet. That said, no one else has anything better, or even as good in most cases ;)
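To illustrate the gap, here is a hedged sketch of a deeper, end-to-end style OWA check: rather than trusting /owa/healthcheck.htm (which only reflects the local protocol proxy), it attempts a forms-based login with a dedicated monitoring mailbox. The host, credentials, success heuristic, and form field names are assumptions based on the default Exchange 2013 OWA logon page; verify them against your own environment before using anything like this.

```python
# Sketch of an end-to-end OWA login check using a monitoring mailbox.
# All names and values below are placeholders / assumptions for illustration.
import ssl
import urllib.parse
import urllib.request

CAS_HOST = "mail.example.com"          # placeholder namespace or CAS
MONITOR_USER = "contoso\\healthprobe"  # placeholder monitoring account
MONITOR_PASS = "..."                   # placeholder password

def owa_login_check(host: str, user: str, password: str,
                    timeout: float = 10.0) -> bool:
    """Return True if forms-based auth to OWA appears to succeed."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    # Field names assumed from the default OWA forms-based auth page.
    form = urllib.parse.urlencode({
        "destination": f"https://{host}/owa/",
        "flags": "4",
        "forcedownlevel": "0",
        "username": user,
        "password": password,
        "isUtf8": "1",
    }).encode()
    req = urllib.request.Request(
        f"https://{host}/owa/auth.owa",
        data=form,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout, context=ctx) as resp:
            # Simplified heuristic: a failed login typically bounces back to
            # the logon page, while a successful one lands inside /owa/.
            final_url = resp.geturl().lower()
            return "/owa/auth/logon.aspx" not in final_url
    except OSError:
        return False

if __name__ == "__main__":
    state = "UP" if owa_login_check(CAS_HOST, MONITOR_USER, MONITOR_PASS) else "DOWN"
    print("OWA end-to-end login:", state)
```

A check like this exercises authentication and the proxy path to the Mailbox role, which is closer to what a real client experiences than the anonymous healthcheck URL, at the cost of needing a service account and more careful tuning.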