Forum Discussion
Wildcard forwarding for direct node traffic with PBR
Apologies if this question has been asked before; I've waded my way through a lot of forum posts but haven't seen the problem I'm facing - feel free to prove otherwise.
I currently have an HA LTM pair with a single trunked interface (eight trunk members aggregated using LACP, with VLANs trunked on the connected switch). Policy-based routing (PBR) is used on the node VLANs to route matching traffic back to the F5 self IP for that VLAN. The default gateway on the nodes is not set to the F5, and all other traffic not matched by PBR uses the default gateway.
Everything works fine; however, it is occasionally necessary to connect directly to a service on a node rather than via the virtual server. In this case, return traffic is still routed by PBR to the F5's. I have created a forwarding wildcard virtual server on 0.0.0.0/0 and all ports (with loose connection initiation, etc.), but am still not seeing the traffic being forwarded.
I am seeing the "in" traffic in the virtual server statistics and can match it to individual requests, but I am not seeing the "out" counters increment. Before I spend hours poring over packet dumps, can anyone suggest what the problem is likely to be? Is it possible that the F5 is not able to route the traffic, and if so, where would I see evidence of this (if anywhere)?
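Roughly, the forwarding virtual looks like the following (tmsh syntax; the object and VLAN names are sanitised placeholders, and the loose fastL4 profile settings are shown for illustration):
create ltm profile fastl4 fastL4_loose {
    loose-initialization enabled
    loose-close enabled
}
create ltm virtual vs_wildcard_forward {
    destination 0.0.0.0:any
    mask any
    ip-forward
    profiles add { fastL4_loose }
    vlans add { vlan_nodes }
    vlans-enabled
}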
Cheers
21 Replies
- The_Bhattman
Nimbostratus
Hi LTP,
Unfortunately, there is no online documentation that notes the difference. My test is based on a behavior I saw in my lab when I first tested the Nexus 7K and 5K against the F5 load balancer, and it was verified with my Cisco SE after he spoke with the Nexus product manager. For some reason Cisco decided that permit and deny statements should be honored at the route-map level instead of the ACL level. It even changes slightly when you add vPC into the network.
You can find information about Cisco Nexus PBR on the following link:
http://www.cisco.com/en/US/docs/switches/datacenter/sw/5_x/nx-os/unicast/configuration/guide/l3_pbr.html
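As a rough sketch of that difference (NX-OS syntax; names and addresses assumed): on the Nexus, traffic you want exempted from PBR is matched by a permit entry in an ACL that is referenced from a deny sequence of the route-map, rather than by a deny entry in the ACL itself:
ip access-list TEST_deny
  10 permit tcp 10.4.0.0/16 10.2.0.0/16
route-map TEST deny 10
  match ip address TEST_deny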
Bhattman
- ltp_55848
Nimbostratus
Hi Bhattman,
I've reread the PBR documentation but don't understand your test configuration above; this would deny requests made directly to the nodes from the 10.4.0.0/16 network, correct? (I've probably grossly misunderstood it.)
I want to allow connections from the 10.4.0.0/16 network to the nodes (for monitoring and testing purposes), and from what I can see, return traffic is hitting the F5's. Below is a sanitised capture of return traffic from a direct request from a 10.4.0.0/16 client to the node:
11:48:20.427729 IP test.test.com.http > 10-X-X-X.test.com.38557: S 1328028531:1328028531(0) ack 2177636694 win 5792
Unfortunately, after hitting the F5's the traffic seems to be dropped, either by the F5's or lost in the ether. The statistics of the wildcard forwarding virtual server show the incoming traffic counters incrementing but do not register an equivalent outgoing flow.
- The_Bhattman
Nimbostratus
Hi Ltp,
You are not going to find the test configuration in the documentation. The behavior I am explaining isn't spelled out in black and white (thank you, Cisco tech writers). However, it is documented within their internal knowledge base, which is not published to the public.
Here's how the route-map configuration that I posted works.
Traffic going in/out of VLAN_TEST will be evaluated by route-map TEST, sequence by sequence. So let's assume that the return traffic is going from 10.4.0.21 to 10.2.0.10. The traffic will be evaluated against the first sequence of the route-map:
route-map TEST deny 10
  match ip address TEST_deny
Since it matches, the traffic exits the route-map and the switch router then forwards it normally via its routing table.
Now let's suppose the traffic is 10.4.0.21 to 10.3.0.30. Then it's not going to match the first sequence, and evaluation moves on to the next sequence:
route-map TEST permit 20
  match ip address TEST_Allow
  set ip next-hop 10.4.0.10
When it matches, the traffic is sent to the next hop defined in the route-map.
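Putting the pieces together, the whole test configuration would look something like this (NX-OS syntax; the ACL contents and the VLAN interface number are assumptions based on this thread):
feature pbr
ip access-list TEST_deny
  10 permit tcp 10.4.0.0/16 10.2.0.0/16
ip access-list TEST_Allow
  10 permit tcp 10.4.0.0/16 any
route-map TEST deny 10
  match ip address TEST_deny
route-map TEST permit 20
  match ip address TEST_Allow
  set ip next-hop 10.4.0.10
interface Vlan40
  ip policy route-map TEST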
I hope this clarifies what I'm attempting to show with the configuration.
Give it a try
Bhattman
- ltp_55848
Nimbostratus
Hi Bhattman,
I see how this would work, but it is not going to work for my configuration, as I am not NATing/SNATing incoming client requests to the VIPs.
In my configuration, the client request is passed through to the backend nodes with the real client IP address intact - PBR routes the response back via the F5's, which are then able to SNAT the response to the VIP address. Having an access list that excludes traffic originating from the backend nodes to the client network (ip access-list TEST_deny; 10 permit tcp 10.4.0.0/16 10.2.0.0/16) would mean return traffic is never forced back via the F5's.
I should reiterate that I can see the reply traffic from a client request made directly to a backend node hitting the F5's, and it appears to be matched by the wildcard virtual server. The F5's then appear to attempt to route the response via the default gateway on that network, which matches the PBR rules and redirects the traffic back to the F5's, resulting in a loop (I see the same sequence numbers in the packet dump) until the packets are eventually dropped.
- ltp_55848
Nimbostratus
After some thought on the matter, I ended up creating an iRule on the wildcard virtual server on the backend VLAN to output some verbose logging, for the purposes of gathering information from an LTM perspective.
What I found was that the return traffic from a client directly to a backend node (not via a VIP) was being PBR'ed, as expected, to the F5 self IP on the backend node's VLAN. However, because the F5 was unaware of the initial traffic flow (it came via the network and not from the F5), the return flow was seen as a new client connection to the F5's, with the server being the original requesting client.
The solution was to use an exceedingly simple iRule on the wildcard virtual server for the backend VLAN to set the client next hop to an F5 self IP on an "external" VLAN.
- The_Bhattman
Nimbostratus
Posted By ltp on 07/04/2011 07:59 PM
Hi Bhattman,
I see how this would work, but it is not going to work for my configuration, as I am not NATing/SNATing incoming client requests to the VIPs.
In my configuration, the client request is passed through to the backend nodes with the real client IP address intact - PBR routes the response back via the F5's, which are then able to SNAT the response to the VIP address. Having an access list that excludes traffic originating from the backend nodes to the client network (ip access-list TEST_deny; 10 permit tcp 10.4.0.0/16 10.2.0.0/16) would mean return traffic is never forced back via the F5's.
I should reiterate that I can see the reply traffic from a client request made directly to a backend node hitting the F5's, and it appears to be matched by the wildcard virtual server. The F5's then appear to attempt to route the response via the default gateway on that network, which matches the PBR rules and redirects the traffic back to the F5's, resulting in a loop (I see the same sequence numbers in the packet dump) until the packets are eventually dropped.
Hi Ltp,
It doesn't require NATing/SNATing from the VIP. This would work if you were attempting to route directly towards the client and back via the switch using the real IP address. This assumes that the switch is connected on the same VLAN as the node and that the node's default gateway is the switch.
Here are two examples.
Client to node communication:
1. Source: 10.2.0.12 (Client) --> Destination: 10.4.0.21 (Node); request leaves the client towards the host
2. Source: 10.2.0.12 (Client) --> Destination: 10.4.0.21 (Node); request hits the Nexus switch router
3. Source: 10.2.0.12 (Client) --> Destination: 10.4.0.21 (Node); Nexus switch router forwards the traffic towards the host on TEST_VLAN since it's directly attached (Layer 2)
4. Source: 10.2.0.12 (Client) --> Destination: 10.4.0.21 (Node); host receives the request and returns a response
5. Source: 10.4.0.21 (Node) --> Destination: 10.2.0.12 (Client); Nexus switch receives the response because the node's gateway is 10.4.0.1
6. Source: 10.4.0.21 (Node) --> Destination: 10.2.0.12 (Client); Nexus applies PBR, matches the TEST_deny ACL statement, exits the route map, and the Nexus switch router forwards the response to the client, preserving the node's real IP address
Client to node communication via VIP:
1. Source: 10.2.0.12 (Client) --> Destination: 10.3.0.10 (VIP); request leaves the client towards the VIP
2. Source: 10.2.0.12 (Client) --> Destination: 10.3.0.10 (VIP); packet hits the Nexus switch router
3. Source: 10.2.0.12 (Client) --> Destination: 10.3.0.10 (VIP); Nexus switch router forwards the traffic towards the VIP
4. Source: 10.2.0.12 (Client) --> Destination: 10.3.0.10 (VIP); F5 receives the request
5. Source: 10.4.0.10 (SNAT/Client) --> Destination: 10.4.0.21 (Node); F5 SNATs the client's IP address and sends the request towards the selected node
6. Source: 10.4.0.10 (SNAT/Client) --> Destination: 10.4.0.21 (Node); node receives the request and returns a response
7. Source: 10.4.0.21 (Node) --> Destination: 10.4.0.10 (SNAT/Client); Nexus switch receives the response because the node's gateway is 10.4.0.1
8. Source: 10.4.0.21 (Node) --> Destination: 10.4.0.10 (SNAT/Client); Nexus applies PBR, matches the TEST_Allow ACL statement, which sets the next hop to 10.4.0.10; the response is sent to the F5
9. Source: 10.3.0.10 (VIP) --> Destination: 10.2.0.12 (Client); F5 receives the response on 10.4.0.10 and translates it back towards the client
10. Source: 10.3.0.10 (VIP) --> Destination: 10.2.0.12 (Client); client receives the response
I hope this clarifies what the test PBR example will do.
Bhattman
- The_Bhattman
Nimbostratus
Posted By ltp on 07/05/2011 06:39 AM
After some thought on the matter, I ended up creating an iRule on the wildcard virtual server on the backend VLAN to output some verbose logging, for the purposes of gathering information from an LTM perspective.
What I found was that the return traffic from a client directly to a backend node (not via a VIP) was being PBR'ed, as expected, to the F5 self IP on the backend node's VLAN. However, because the F5 was unaware of the initial traffic flow (it came via the network and not from the F5), the return flow was seen as a new client connection to the F5's, with the server being the original requesting client.
The solution was to use an exceedingly simple iRule on the wildcard virtual server for the backend VLAN to set the client next hop to an F5 self IP on an "external" VLAN.
Hi Ltp,
I believe I might have misunderstood the reason why you used PBR. If the goal was to reach the backend node preserving the IP address, without a requirement to route directly via the switch network, then you could do it without any PBR or SNATing at all. Simply have a route that points to the F5 self IP on the "external" VLAN to reach the node address block (10.4.0.0/x), and then repoint the node's gateway to the F5 self IP on the "internal" VLAN. The F5 wildcard forwarding virtual would allow the return traffic.
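As a sketch of that alternative (NX-OS syntax; the self IP addresses are assumed for illustration):
! On the upstream switch/router: reach the node block via the F5 "external" self IP
ip route 10.4.0.0/16 10.3.0.5
! On each node: set the default gateway to the F5 "internal" self IP, e.g. 10.4.0.10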
However, I am glad that it worked out for you.
Bhattman
- ltp_55848
Nimbostratus
Hi Bhattman,
Sorry for the confusion. The primary reasons for this design were that the client IP address be preserved without using an X-Forwarded-For header, and that other non-service-related traffic (specifically high-bandwidth traffic like backups) not traverse the F5's.
The first requirement ruled out SNAT'ing incoming traffic, and the second ruled out the common approach of using the F5 as a default gateway (at least without additional complexity on the client side), so PBR was used to steer server reply traffic via the F5's whilst allowing all other traffic to continue to be routed via the default gateway.
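To illustrate that split (NX-OS syntax; the service port is just an assumption for illustration), the PBR match ACL only needs to cover replies sourced from the service ports, so bulk traffic such as backups never matches and simply follows the default gateway:
ip access-list TEST_Allow
  10 permit tcp 10.4.0.0/16 eq www any
- The_Bhattman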
Nimbostratus
Hi Ltp,
Here is an article that is similar to what you are looking to do. However, keep in mind that it's based on IOS PBR statements.
http://www.thef5guy.com/blog/2010/09/f5-big-ip-and-cisco-vlan-to-vlan-bypass/
Bhattman
- ltp_55848
Nimbostratus
Hi All,
Thanks for everybody's replies and help with this. I ended up using the configuration I noted earlier in the thread: a wildcard forwarding virtual server with an iRule setting the next-hop address for "client"-initiated connections from the node address ranges.
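For completeness, the iRule amounts to something like the following (the next-hop address is a placeholder):
when CLIENT_ACCEPTED {
    # Return flow from a backend node arrives here via PBR; steer it out
    # through the "external" VLAN rather than the default route.
    # 10.2.0.1 is a placeholder next-hop address.
    nexthop 10.2.0.1
}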