Forum Discussion
How to ensure source address and source port are accepted and traversed properly via F5 SNAT automap
- Nov 30, 2023
Hi Mist,
Point 1
=====
Using the Fast HTTP profile has the following limitations:
- The Fast HTTP profile requires client source address translation:
- SNAT is enabled for all Fast HTTP connections by default. This places a 65,536 connection limit on the number of concurrent TCP connections that can be open between each BIG-IP self IP address configured on the VLAN and the node.
- The Fast HTTP profile is not compatible with the following advanced features:
- No IPv6 support
- No SSL offload
- No compression
- No caching
- No PVA acceleration
- No virtual server authentication
- No state mirroring
- No HTTP pipelining
- No TCP optimizations
The FastHTTP profile is a scaled-down version of the HTTP profile optimized for speed under controlled traffic conditions. It can only be used with the Performance HTTP virtual server and is designed to speed up certain types of HTTP connections and reduce the number of connections to servers.
Because the FastHTTP profile is optimized for performance under ideal traffic conditions, the HTTP profile is recommended when load balancing most general-purpose web applications.
Refer to K8024: Overview of the FastHTTP profile before deploying performance (HTTP) virtual servers.
Fast HTTP automatically applies a SNAT to all client traffic. The SNAT can be either a SNAT pool or the BIG-IP self IP addresses configured for the VLAN that are the closest to the subnet on which the node resides. From the node's perspective, all Fast HTTP-processed connections appear to come directly from the BIG-IP itself, and the source client IP information is not retained.
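To illustrate the point above, here is a minimal tmsh sketch of a performance (HTTP) virtual server using the Fast HTTP profile with SNAT automap. All names and addresses (vs_sast, pool_sast, the IPs) are hypothetical placeholders, not values from this thread:

```
# Sketch only: a performance (HTTP) virtual with SNAT automap.
# With fasthttp, nodes will see the BIG-IP self IP as the source,
# not the original client IP.
create ltm pool pool_sast members add { 10.0.0.10:8080 10.0.0.11:8080 }
create ltm virtual vs_sast destination 192.0.2.50:80 ip-protocol tcp \
    profiles add { fasthttp } \
    pool pool_sast \
    source-address-translation { type automap }
```

Because the client IP is lost at the TCP layer, preserving it requires inserting it at the HTTP layer (e.g. an X-Forwarded-For header), which is what Point 2 below is about.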
Point 2
=====
If SSL is not offloaded on the BIG-IP, it has no way to decrypt the traffic, so nothing can be inserted into the headers. Inserting an XFF header into encrypted HTTPS packets would corrupt them. To insert XFF headers into HTTPS traffic, the F5 must have a client SSL profile with the SSL key so it can decrypt the packets before inserting the XFF header. Otherwise, do not insert XFF on encrypted traffic where decryption happens on the back-end servers and the F5 is just an SSL passthrough; XFF insertion there would make the SSL packets look tampered with, like a man-in-the-middle attack. Please refer to:
https://my.f5.com/manage/s/article/K8024
https://clouddocs.f5.com/cli/tmsh-reference/v14/modules/ltm/ltm_profile_fasthttp.html
https://my.f5.com/manage/s/article/K23843660#link_01_04
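As a sketch of the approach described above: an iRule like the following can insert the client IP as an X-Forwarded-For header, assuming the virtual server has a client SSL profile so HTTP events fire on decrypted traffic. This is illustrative, not a drop-in config:

```
# Requires a client SSL profile on the virtual server so HTTP_REQUEST
# fires on decrypted traffic. Do NOT apply on SSL-passthrough virtuals.
when HTTP_REQUEST {
    # Strip any client-supplied value, then insert the real client IP
    HTTP::header remove X-Forwarded-For
    HTTP::header insert X-Forwarded-For [IP::client_addr]
}
```

The remove-then-insert pattern also prevents a client from spoofing its own XFF value.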
Point 3
=====
Setting the proxy_buffering directive to off can cause performance issues and unexpected behavior in NGINX.
Explanation
The proxy_buffering directive controls whether buffering is active for a particular context and child contexts. The default configuration for proxy_buffering is on.
When proxy buffering is enabled, NGINX stores the response from a server in internal buffers as it comes in, reading from the upstream as fast as it can send. NGINX then delivers the buffered data to the client at the client's own pace, so a slow client does not tie up the upstream connection; if the response does not fit in memory, part of it can be written to a temporary file.
When proxy buffering is disabled, NGINX receives a response from the proxied server and immediately sends it to the client without storing it in a buffer.
The proxy_buffer_size directive specifies the size of the buffer used for the first part of the response from a backend server, which typically contains the response headers. Setting the proxy_buffering directive to "off" is a common mistake because it can cause performance issues and unexpected behavior in NGINX: with buffering disabled, the upstream connection stays occupied for as long as the slowest client takes to read the response.
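A minimal nginx sketch of the directives discussed above; the location, upstream name, and buffer sizes are illustrative assumptions, not recommendations:

```
# Keep buffering on (the default) and size the buffers explicitly.
location /app/ {
    proxy_pass http://backend;
    proxy_buffering on;       # default; shields upstream from slow clients
    proxy_buffer_size 8k;     # first part of the response (headers)
    proxy_buffers 8 8k;       # buffers for the rest of the response body
}
```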
Point 4
=====
"proxy_request_buffering off" is not supported on NGINX App Protect.
Please refer
https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering
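For context, proxy_request_buffering controls whether the client request body is read in full before being passed to the proxied server. A hedged sketch (location and upstream name are placeholders):

```
location /upload/ {
    proxy_pass http://backend;
    # Default, and required when NGINX App Protect is enabled:
    proxy_request_buffering on;
    # proxy_request_buffering off;  # not supported with App Protect
}
```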
HTH
🙏
Dear F5_Design_Engineer , @zamroni777 ,
thank you for your feedback. We tested the standard HTTP profile and other options, but the SAST solution in question works best with nginx using the settings given by the vendor, and with F5 Fast HTTP plus a cookie and X-Forwarded-For iRule. As we're not handling regular HTTP traffic but code flows, disabling caching and buffering is currently the only way to ensure no glitches occur. This was given as a requirement by the product architect.
Should the respective SAST vendor decide to work with F5, I've given feedback on what currently works. They're free to rent their own F5s and improve the application stack should they have a proper business need.
Thanks for the candor!
Hi all,
I would like to give a bit more detailed feedback and describe the compromise options taken.
So it turns out in my customer F5 environment SNAT is king for spreading the load evenly.
A standard HTTP profile with TLS offloading is possible; however, in that case I lose the ability to speak TLS directly to the application server and use its embedded crypto stack for user authentication. I will test this after the new year to see whether I can do X-Forwarded-For with SNAT and distributed code analysis.
Fast HTTP + cookie + X-Forwarded-For iRule is the best solution in terms of performance and functionality when going via SNAT in local analysis mode, preserving TLS back-end support and the ability to authenticate with the analyzer via certificates. We also tested L4 Fast/Performance virtual servers - those are great for browsing, but not for the app code flows.
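For readers wanting to reproduce this, a hypothetical tmsh sketch of the combination described (all object names invented; this assumes cookie persistence and the iRule behave with the Fast HTTP profile as they did in this environment, which you should verify in yours):

```
# Sketch: Fast HTTP profile + cookie persistence + XFF iRule + SNAT automap
create ltm virtual vs_analyzer destination 192.0.2.60:80 ip-protocol tcp \
    profiles add { fasthttp } \
    pool pool_analyzer \
    persist replace-all-with { cookie } \
    rules { irule_xff } \
    source-address-translation { type automap }
```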
At the end of the day, one needs to know the environments and apps properly to avoid a lot of hassle. Things may be different without SNAT and in other environments, but for the time being I'm happy with the links given and the knowledge gained.
Thank you F5 and community!