Different applications and environments have unique requirements for how traffic is handled. Some applications, because of the nature of their functionality or because of a business need, require that the application server(s) be able to see the real IP of the client making the request.
When a request reaches the BIG-IP, the BIG-IP can either translate the client's real IP or keep it intact. To keep it intact, the ‘Source Address Translation’ setting on the F5 BIG-IP virtual server is set to ‘None’.
As simple as it may sound to just toggle a setting on the BIG-IP, changing this setting significantly alters the traffic flow behavior.
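A minimal tmsh sketch of that toggle, assuming a virtual server named vs_app (a placeholder name used for illustration):

# Keep the client IP intact: no source address translation on the virtual server
tmsh modify ltm virtual vs_app source-address-translation { type none }
# Verify the current setting
tmsh list ltm virtual vs_app source-address-translation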
Let’s take an example with some actual values, starting with a simple setup: a standalone BIG-IP with a single interface for all traffic (one-arm).
With SNAT enabled on the BIG-IP (for example, Automap), the flow looks like this:
From Client: Src: 10.168.56.30 Dest: 10.168.57.11
From BIG-IP to Server: Src: 10.168.57.10 (Self-IP) Dest: 192.168.56.30
With this the server responds back to 10.168.57.10, and the BIG-IP takes care of forwarding the traffic back to the client. Here the application server sees the IP 10.168.57.10 and not the client IP.
With ‘Source Address Translation’ set to ‘None’, the flow changes to:
From Client: Src: 10.168.56.30 Dest: 10.168.57.11
From BIG-IP to Server: Src: 10.168.56.30 Dest: 192.168.56.30
With this the server will respond back to 10.168.56.30, and this is where the complication comes in: the return traffic needs to go back through the BIG-IP and not directly to the real client. One way to achieve this is to set the default gateway of the server to the Self-IP of the BIG-IP, so that the server sends the return traffic to the BIG-IP. BUT what if the server default gateway is not to be changed, for whatever reason? This is where Policy-Based Redirect (PBR) helps. The default gateway of the server points to the ACI fabric, and the ACI fabric intercepts the return traffic and sends it over to the BIG-IP.
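Once PBR is doing the redirect, one quick way to confirm that the return traffic really flows back through the BIG-IP is the connection table. A small sketch, using the client address from the example above:

# With SNAT 'None', the serverside source should still show the real client IP
tmsh show sys connection cs-client-addr 10.168.56.30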
Before we go deeper into the topic of PBR, below are a few links to help you refresh on some of the Cisco ACI and BIG-IP concepts:
Network diagram for reference:
Details on L4-L7 service graph on APIC
To get hands-on experience deploying a service graph (without PBR)
On the APIC, the following needs to be configured:
1) Bridge domain ‘F5-BD’
2) L4-L7 Policy-Based Redirect
3) Logical Device Cluster - Under Tenant->Services->L4-L7, create a logical device
4) Service graph template
5) Click on the service graph created and then go to the Policy tab; make sure the Connections for the connectors C1 and C2 are set as follows:
6) Apply the service graph template
Once the service graph is deployed, it is in the applied state and the network path between the consumer, the BIG-IP and the provider has been successfully set up on the APIC.
7) Verify the connector configuration for PBR. Go to the Device selection policy under Tenant->Services->L4-L7. Expand the menu and click on the device selection policy deployed for your service graph.
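For step 7, the same information can also be checked from the APIC CLI with moquery. This is only a sketch; the class names vnsSvcRedirectPol and vnsRedirectDest are my assumption for the PBR policy and its IP/MAC destination, so verify them against your APIC version:

# Dump the L4-L7 Policy-Based Redirect policies and their redirect destinations (IP/MAC)
moquery -c vnsSvcRedirectPol
moquery -c vnsRedirectDest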
On the BIG-IP, the following needs to be configured (a tmsh sketch of these objects follows this list):
1) VLAN/Self-IP/Default route
2) Nodes/Pool/VIP
3) iRule (at the end of the article) that can be helpful for debugging
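A minimal tmsh sketch of items 1 and 2, using the addresses from the example above. The object names (vlan_onearm, web_pool, vs_app) are placeholders, and 10.168.57.1 as the ACI bridge domain gateway is an assumption; adjust to your environment:

# 1) VLAN, Self-IP and default route (one-arm: a single VLAN carries all traffic)
tmsh create net vlan vlan_onearm interfaces add { 1.1 { untagged } }
tmsh create net self 10.168.57.10 address 10.168.57.10/24 vlan vlan_onearm allow-service default
tmsh create net route default gw 10.168.57.1

# 2) Node/Pool and the virtual server (VIP), with Source Address Translation set to None
tmsh create ltm pool web_pool members add { 192.168.56.30:80 }
tmsh create ltm virtual vs_app destination 10.168.57.11:80 ip-protocol tcp profiles add { http } pool web_pool source-address-translation { type none }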
1) BIG-IP: Set MAC Masquerade (https://support.f5.com/csp/article/K13502) - a tmsh example follows this list
2) APIC: Logical device cluster
3) APIC: L4-L7 Policy-Based Redirect
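Because the L4-L7 redirect policy points at a specific MAC address, a MAC masquerade address on the BIG-IP traffic group keeps that MAC stable (for example across a failover in an HA pair). A hedged sketch based on K13502; the MAC value here is made up:

# Assign a MAC masquerade address to the default traffic group
tmsh modify cm traffic-group traffic-group-1 mac 02:01:23:45:67:89
# Confirm the configured MAC masquerade address
tmsh list cm traffic-group traffic-group-1 mac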
------------------------------------------------------------------------------------------------------------------------------------------------------------------
Configuration is complete; let's take a look at the traffic flows.
In Step 2, when the traffic is returned from the server, ACI uses the Self-IP and MAC defined in the L4-L7 redirect policy to send the traffic to the BIG-IP.
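To watch that redirected return traffic arrive on the BIG-IP, a packet capture is handy. A small example using the server IP from the flows above (0.0 captures across all VLANs on the BIG-IP):

# Capture traffic to/from the example server on all BIG-IP VLANs
tcpdump -ni 0.0 host 192.168.56.30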
iRule used for debugging (logs the client, mapped and server addresses for each connection to /var/log/ltm):

when LB_SELECTED {
    log local0. "=================================================="
    log local0. "Selected server [LB::server]"
    log local0. "=================================================="
}
when HTTP_REQUEST {
    # Client IP and the virtual server (VIP) it connected to
    set LogString "[IP::client_addr] -> [IP::local_addr]"
    log local0. "=================================================="
    log local0. "REQUEST -> $LogString"
    log local0. "=================================================="
}
when SERVER_CONNECTED {
    # Shows the source IP the server sees (the 'Mapped' address)
    log local0. "Connection from [IP::client_addr] Mapped -> [serverside {IP::local_addr}] \
        -> [IP::server_addr]"
}
when HTTP_RESPONSE {
    set LogString "Server [IP::server_addr] -> [IP::local_addr]"
    log local0. "=================================================="
    log local0. "RESPONSE -> $LogString"
    log local0. "=================================================="
}
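The log output below was generated with this iRule saved as /Common/connections (matching the rule name in the log lines) and attached to the virtual server, for example:

# Attach the debugging iRule to the virtual server (vs_app is a placeholder name)
tmsh modify ltm virtual vs_app rules { connections }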
Output seen in /var/log/ltm on the BIG-IP; note the <SERVER_CONNECTED> event:
Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11
Rule /Common/connections <SERVER_CONNECTED>: Src: 10.168.56.30 Mapped -> 10.168.56.30 -> Dest: 192.168.56.30
Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30
If you are curious what the iRule output looks like when SNAT is enabled on the BIG-IP, enable Automap on the virtual server:
Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11
Rule /Common/connections <SERVER_CONNECTED>: Src: 10.168.56.30 Mapped -> 10.168.57.10 -> Dest: 192.168.56.30
Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30
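For reference, enabling Automap for the output above can also be done from tmsh (vs_app is the same placeholder virtual server name):

# Enable SNAT Automap on the virtual server
tmsh modify ltm virtual vs_app source-address-translation { type automap }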
ACI PBR whitepaper:
Troubleshooting guide:
Layer4-Layer7 services deployment guide:
Service graph: