F5 LTM SNAT: only 1 outgoing connection, multiple internal clients
- Dec 03, 2018
After a lot of back and forth, this is the configuration we ended up implementing on F5 LTM v12.1.3.6. It allowed us to use MRF (Message Routing Framework) to combine multiple incoming connections into a single outgoing connection, which exits via the SNAT IP. Hope this helps someone.
First, we defined a Virtual Server to which the clients send the Diameter requests:
ltm virtual /Common/virtual_Diameter_Message_Routing {
    destination /Common/HSS_v_Diameter_v6:3868
    ip-protocol tcp
    profiles {
        /Common/profile_diam_message_routing { }
        /Common/profile_diam_message_routing_router_profile { }
        /Common/tcp { }
    }
    rules {
        /Common/qux
    }
    source-address-translation {
        pool /Common/diameter_snatpool
        type snat
    }
    translate-address enabled
    translate-port enabled
}
... while the destination is defined as:
ltm virtual-address /Common/HSS_v_Diameter_v6 {
    address fd41:2:2:1::111
    arp enabled
    icmp-echo enabled
    traffic-group /Common/traffic-group-1
}
The profiles, along with the route, peer, and transport config they reference, are defined as:
ltm message-routing diameter profile session /Common/profile_diam_message_routing {
    acct-application-id 4294967295
    app-service none
    auth-application-id 16777217
    defaults-from /Common/diametersession
    origin-host myoriginhost.test.com
    origin-host-rewrite myoriginhost2.test.com
    origin-realm test.com
    product-name product
    vendor-id 10415
}
ltm message-routing diameter profile router /Common/profile_diam_message_routing_router_profile {
    app-service none
    defaults-from /Common/diameterrouter
    routes {
        /Common/profile_diam_message_routing_static_route_to_peer
    }
}
ltm message-routing diameter route /Common/profile_diam_message_routing_static_route_to_peer {
    peers {
        /Common/profile_diam_message_routing_peer
    }
    virtual-server /Common/virtual_Diameter_Message_Routing
}
ltm message-routing diameter peer /Common/profile_diam_message_routing_peer {
    pool /Common/pool_diameter_server
    transport-config /Common/profile_diam_message_routing_transport
}
ltm message-routing diameter transport-config /Common/profile_diam_message_routing_transport {
    ip-protocol tcp
    profiles {
        /Common/profile_diam_message_routing { }
        /Common/tcp { }
    }
    rules {
        /Common/qux
    }
    source-address-translation {
        pool /Common/diameter_snatpool
        type snat
    }
}
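The peer above points at /Common/pool_diameter_server, which we didn't paste. Purely as an illustration (the member name, address, and monitor below are placeholders, not our real values), that pool looks something like:

ltm pool /Common/pool_diameter_server {
    members {
        /Common/external_diameter_server:3868 {
            address fd41:2:2:2::50
        }
    }
    monitor /Common/tcp
}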
The SNAT is defined as:
ltm snatpool /Common/diameter_snatpool {
    members {
        /Common/ext_SNAT_v6
    }
}
ltm snat-translation /Common/ext_SNAT_v6 {
    address 2607:f160:11:1101::63
    inherited-traffic-group true
    traffic-group /Common/traffic-group-1
}
ltm snat /Common/outgoing_snat_v6 {
    description "IPv6 SNAT translation"
    mirror enabled
    origins {
        ::/0 { }
    }
    snatpool /Common/outgoing_snatpool_v6
    vlans {
        /Common/internal
    }
    vlans-enabled
}
... and finally, the iRule had to be set up to remove the mandatory flag from some of the AVPs that should not have the mandatory bit set (a bug?) and to insert additional Diameter AVPs:
ltm rule /Common/qux {
    when DIAMETER_EGRESS {
        switch [DIAMETER::command] {
            "257" {
                # AVP codes: 260 Vendor-Specific-Application-Id, 258 Auth-Application-Id, 266 Vendor-Id
                set aaid_avp [DIAMETER::avp create Auth-Application-Id 0 1 0 0 16777264 unsigned32]
                set vid_avp [DIAMETER::avp create Vendor-Id 0 1 0 0 10415 unsigned32]
                # DIAMETER::avp append is not designed to create nested AVPs (ID371630),
                # so build the grouped AVP by concatenating the encoded AVPs instead:
                # set grouped_avp [DIAMETER::avp append Auth-Application-Id $aaid_avp source $vid_avp]
                set grouped_avp ${vid_avp}${aaid_avp}
                set vsa_avp [DIAMETER::avp create Vendor-Specific-Application-Id 0 1 0 0 $grouped_avp grouped]
                DIAMETER::avp delete Vendor-Specific-Application-Id
                DIAMETER::avp insert Vendor-Specific-Application-Id $vsa_avp
                if { [DIAMETER::is_request] } {
                    DIAMETER::avp mflag set Product-Name 0
                    DIAMETER::avp mflag set Firmware-Revision 0
                }
            }
            default {
                # do something
            }
        }
    }
}
Proxying CER/CEA is against the RFC (https://tools.ietf.org/html/rfc6733#section-5.3). Let me restate the full-proxy architecture, as I mentioned in my previous comment: the proxy maintains two separate connections, one for the client side and one for the server side (let's assume your internal Diameter element is the client side and the external Diameter element is the server side). All of your internal clients establish individual connections towards the BIG-IP, the BIG-IP establishes a separate connection towards the external server, and these connections stay up indefinitely (unless some other issue terminates them). Diameter is not like other protocols (e.g. HTTP, where you send a request, get a response, and close the connection); it has a mechanism to send watchdog messages when a connection is idle, and that keeps the connections between Diameter elements alive.
Hope the above clarifies things. You don't need to send all CERs from your internal element through the proxy to your external element, and again, that would violate the RFC, because the proxy (BIG-IP) sits in between, maintains the connections separately, and routes messages based on its routes.
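For reference, the watchdog behaviour lives on the MRF Diameter session profile that is already in the config above. A rough sketch of checking and tuning it (the watchdog-timeout attribute name is assumed here, so verify it against the all-properties output on your version):

# Dump every attribute of the session profile, including the watchdog settings:
tmsh list ltm message-routing diameter profile session /Common/profile_diam_message_routing all-properties

# Assumed attribute name: set the watchdog timeout to 30 seconds (check your version's docs for the exact semantics):
tmsh modify ltm message-routing diameter profile session /Common/profile_diam_message_routing watchdog-timeout 30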
GRamanan,
With our current SNAT configuration:
vs_pool__snat_type {
    value automap
}
... we are still seeing multiple outgoing TCP connections being brought up. In other words, the internal connections are not being pooled into a single outgoing connection: each new outgoing TCP connection uses the SNAT IP, but with a different source port.
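For anyone comparing the two setups, this is roughly how the server-side flows can be checked from the CLI (assuming the external Diameter peer listens on 3868; add an ss-server-addr filter if you want to narrow it to one server):

# Server-side flows towards the Diameter peers: with plain SNAT automap you see one
# entry per internal client; with the MRF transport-config above, ideally just one.
tmsh show sys connection ss-server-port 3868 protocol tcp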