F5 APM Network Access route domain -- specific gateway for VPN clients
I have set up a virtual server listening on the WAN for VPN requests on port 443, and a specific VLAN configured for VPN clients, 10.12.200.0/23. I created a new route domain and added the VLAN to it. In the VPE I added a Route Domain item and selected the correct one, placed after authentication and before the Advanced Resource Assign. I created self IPs of 10.12.200.3%200 and 10.12.200.4%200 (floating), and I am able to ping the gateway on the upstream switch, 10.12.200.1.

If I add a default route (0.0.0.0%200 0.0.0.0 10.12.200.1%200), I can't get to anything on the VPN; traffic hits the self IP 10.12.200.3 and stops. If I turn on proxy ARP, I get full connectivity, but the VPN client disconnects almost immediately (usually between 1-10 seconds after connecting) with no log messages other than "client request to disconnect vpn session" in the Windows logs; in APM it just says the session was deleted due to a user logout request.

I then deleted the default route and created an L4 forwarding virtual server with source 10.12.200.0%200/23 and destination 0.0.0.0%200/0, with source address translation turned off as well as address and port translation turned off, and set the pool to the gateway 10.12.200.1%200. I bound this to the VLAN as well as to the connection profile VLAN. This also cannot get past 10.12.200.3. If I turn on proxy ARP, same thing: it works perfectly for a few seconds and then abruptly disconnects. If I turn off proxy ARP but set SNAT to Automap, I can ping everything, but nothing works in a browser, RDP, SSH, etc.; they all come back saying connection refused.

I cannot figure out why this is failing. I have seen several articles about this, and I have set this up as others have suggested, but I have not been able to successfully route via a default route from that VLAN once connected to the VPN.
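For reference, here is roughly what the two setups described above look like in tmsh. This is a sketch only: the object names (vpn_gw_pool, vpn_forward_vs, vpn_vlan) are illustrative, and route domain 200 is taken from the description above.

```bash
# Approach 1: default route for route domain 200 via the upstream switch
tmsh create net route vpn_default network 0.0.0.0%200/0 gw 10.12.200.1%200

# Approach 2: wildcard L4 forwarding virtual server, no address/port/source translation,
# pool pointing at the gateway, listening only on the VPN client VLAN
tmsh create ltm pool vpn_gw_pool members add { 10.12.200.1%200:any }
tmsh create ltm virtual vpn_forward_vs destination 0.0.0.0%200:any source 10.12.200.0%200/23 \
    ip-protocol any profiles add { fastL4 } pool vpn_gw_pool \
    translate-address disabled translate-port disabled \
    source-address-translation { type none } vlans add { vpn_vlan } vlans-enabled
```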
Migration projects - how to avoid IP conflicts

Hi,

I wonder if there is a smarter/easier way to avoid IP conflicts during migrations, in the phase when the production and the new service should listen on the same IP.

Scenario:
- All VIPs are in the 192.168.1.0/24 subnet
- All traffic to the VIPs comes via an external router (no clients in the 192.168.1.0/24 subnet)
- Production device IP: 192.168.1.254
- Production VIP: 192.168.1.100
- New device IP: 192.168.1.253
- New VIP: 192.168.1.100
- Traffic from any client except the test station (192.168.10.100) should hit the VIP on the production device

BIG-IP setup:
- Floating IP: 192.168.1.253
- VIP: 192.168.1.100; ARP disabled

External router setup:
- Route: from 192.168.10.100 to 192.168.1.100/32, gw 192.168.1.253

One important note: the virtual-address object for the VS has to be created in advance via tmsh, for example using tmsh load sys config from-terminal merge and a config similar to:

```
ltm virtual-address 192.168.1.100 {
    address 192.168.1.100
    arp disabled
    mask 255.255.255.255
    traffic-group traffic-group-1
}
```

or of course any other suitable way. The reason is simple: auto-created virtual-address objects (created when the VS is created) always have ARP enabled.

After testing is finished, all virtual-address objects can be updated with ARP enabled using a simple bash script like the one below (mapfile requires bash rather than plain sh):

```bash
#!/bin/bash
# $1 - source file with the virtual-addresses to place in the array
# $2 - "enabled" or "disabled" to turn ARP on or off for each virtual-address
if [ -z "$1" ] || [ -z "$2" ]
then
    # bad arguments - quit
    echo "Syntax: vip_arp_enable-disable_from-file.sh <vip-list-file> <enabled|disabled>"
else
    mapfile -t myArray < "$1"
    count=0
    for vip in "${myArray[@]}"
    do
        echo "Vip is: $vip"
        tmsh modify ltm virtual-address "$vip" arp "$2"
        ((++count))
    done
    echo "$count processed"
fi
```

With the above script it is possible to both enable and disable ARP, given a file with the list of virtual-addresses to be processed.

Result:
- Traffic from any client (except traffic sourced from 192.168.10.100) is simply sent to 192.168.1.100 based on the MAC in the ARP Reply sent into the 192.168.1.0/24 subnet by the router. The BIG-IP never responds to ARP Requests for 192.168.1.100 (ARP is disabled on this VIP).
- Traffic sourced from 192.168.10.100 is sent to 192.168.1.253 (the next hop, using the MAC of 192.168.1.253 as the target MAC and 192.168.1.100 as the target IP); internally the BIG-IP is then able to route this packet to the configured VS with 192.168.1.100.

Tested and working, but maybe not an optimal approach? What I am afraid of is whether the ARP cache on the router could become an issue: when production traffic is routed, the MAC of the production VIP is cached; when test traffic to the same IP is then processed, this cached entry might be used and the traffic would reach the production VIP instead of the test VIP. This never happened in my lab, but that is not 100% confirmation it could not fail.

The router was simulated using another BIG-IP with two VSs:
- Wildcard (Forwarding IP) accepting traffic to subnet 192.168.1.0/24
- Wildcard (Performance L4):
  - Source Address: 192.168.10.100/32
  - Destination Address/Mask: 192.168.1.0/24
  - All ports, all protocols
  - Address Translation and Port Translation: disabled
  - Pool with pool member: 192.168.1.253 (the Floating IP of the other BIG-IP)
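For example, assuming the script is saved as vip_arp_enable-disable_from-file.sh and vips.txt (an illustrative name) contains one virtual-address per line:

```bash
# Re-enable ARP on every virtual-address in the list once testing is done
./vip_arp_enable-disable_from-file.sh vips.txt enabled

# Disable ARP again on the same set if needed
./vip_arp_enable-disable_from-file.sh vips.txt disabled
```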
F5 & Cisco ACI Essentials - Take advantage of Policy Based Redirect
Different applications and environments have unique needs for how their traffic is to be handled. Some applications, due to the nature of their functionality or a business need, require that the application server(s) be able to see the real IP of the client making the request. When the request comes to the BIG-IP, it has the option to change the real IP of the request or to keep it intact. To keep it intact, the 'Source Address Translation' setting on the F5 BIG-IP virtual server is set to 'None'. As simple as it may sound to just toggle a setting on the BIG-IP, changing this setting causes a significant change in traffic flow behavior. Let's take an example with some actual values, starting with a simple setup: a standalone BIG-IP with one interface for all traffic (one-arm).

- Client: 10.168.56.30
- BIG-IP Virtual IP: 10.168.57.11
- BIG-IP Self IP: 10.168.57.10
- Server: 192.168.56.30

Scenario 1: With SNAT
- From client: Src: 10.168.56.30, Dest: 10.168.57.11
- From BIG-IP to server: Src: 10.168.57.10 (Self IP), Dest: 192.168.56.30

The server responds back to 10.168.57.10, and the BIG-IP takes care of forwarding the traffic back to the client. Here the application server sees the IP 10.168.57.10, not the client IP.

Scenario 2: No SNAT
- From client: Src: 10.168.56.30, Dest: 10.168.57.11
- From BIG-IP to server: Src: 10.168.56.30, Dest: 192.168.56.30

The server responds back to 10.168.56.30, and here is where the complication comes in: the return traffic needs to go back to the BIG-IP, not directly to the real client. One way to achieve this is to set the default gateway of the server to the Self IP of the BIG-IP, so that the server sends the return traffic to the BIG-IP. BUT what if the server's default gateway is not to be changed, for whatever reason? The two virtual server variants are sketched below.
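A minimal tmsh sketch of the two scenarios (pool and virtual server names are illustrative; port 80 is assumed for the application):

```bash
# Pool pointing at the application server
tmsh create ltm pool app_pool members add { 192.168.56.30:80 }

# Scenario 1: SNAT (Automap) - the server sees the BIG-IP self IP as the source
tmsh create ltm virtual vip_snat destination 10.168.57.11:80 ip-protocol tcp \
    profiles add { tcp } pool app_pool source-address-translation { type automap }

# Scenario 2: no SNAT - the server sees the real client IP, so return traffic
# must somehow be steered back through the BIG-IP
tmsh create ltm virtual vip_nosnat destination 10.168.57.11:80 ip-protocol tcp \
    profiles add { tcp } pool app_pool source-address-translation { type none }
```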
It is at this point that Policy-Based Redirect (PBR) helps: the default gateway of the server points to the ACI fabric, and the fabric intercepts the return traffic and sends it over to the BIG-IP.

With this, the advantages of using PBR are threefold:
- The server(s) default gateway does not need to point to the BIG-IP; it can point to the ACI fabric
- The real client IP is preserved for the entire traffic flow
- Server-originated traffic avoids hitting the BIG-IP, which would otherwise require configuring a forwarding virtual on the BIG-IP to handle that traffic; if the server-originated traffic volume is high, it could put unnecessary load on the BIG-IP

Before we get deeper into the topic of PBR, below are a few links to help refresh some Cisco ACI and BIG-IP concepts:
- ACI fundamentals: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/aci-fundamentals/b_ACI-Fundamentals.html
- SNAT and Automap: https://support.f5.com/csp/article/K7820
- BIG-IP modes of deployment: https://support.f5.com/csp/article/K96122456#link_02_01

Now let's look at what it takes to configure PBR using a standalone BIG-IP Virtual Edition in one-arm mode.

Network diagram for reference:

To use the PBR feature on APIC, a service graph is a MUST.
- Details on L4-L7 service graphs on APIC
- To get hands-on experience deploying a service graph (without PBR)

Configuration on APIC

1) Bridge domain 'F5-BD'
Under Tenant->Networking->Bridge domains->'F5-BD'->Policy:
- IP Data-plane learning: Disabled

2) L4-L7 Policy-Based Redirect
Under Tenant->Policies->Protocol->L4-L7 Policy Based Redirect, create a new policy:
- Name: 'bigip-pbr-policy'
- L3 destinations: the BIG-IP Self IP and MAC
  - IP: 10.168.57.10
  - MAC: find the MAC of the interface the above Self IP is assigned to by logging into the BIG-IP (example: 00:50:56:AC:D2:81)

3) Logical device cluster
Under Tenant->Services->L4-L7, create a logical device:
- Managed: unchecked
- Name: 'pbr-demo-bigip-ve'
- Service Type: ADC
- Device Type: Virtual (in this example)
- VMM domain: choose the appropriate VMM domain
- Devices: add the BIG-IP VM from the dropdown and assign it an interface
  - Name: '1_1', vNIC: 'Network Adaptor 2'
- Cluster interfaces:
  - Name: consumer, concrete interface: Device1/[1_1]
  - Name: provider, concrete interface: Device1/[1_1]

4) Service graph template
Under Tenant->Services->L4-L7->Service Graph Templates, create a service graph template. Give the graph a name ('pbr-demo-sgt'), then drag and drop the logical device cluster (pbr-demo-bigip-ve) to create the service graph:
- ADC: one-arm
- Route redirect: true

5) Click on the service graph just created, go to the Policy tab, and make sure the connections for connectors C1 and C2 are set as follows:
- Connector C1
  - Direct connect: False (not mandatory to set to 'True', because PBR is not enabled on the consumer connector for the consumer-to-VIP traffic)
  - Adjacency type: L3
- Connector C2
  - Direct connect: True
  - Adjacency type: L3

6) Apply the service graph template
Right-click the service graph and apply it. Choose the appropriate consumer endpoint group ('App') and provider endpoint group ('Web'), and provide a name for the new contract. For the connector, select the following:
- BD: 'F5-BD'
- L3 destination: checked
- Redirect policy: 'bigip-pbr-policy'
- Cluster interface: 'provider'

Once the service graph is deployed, it is in the applied state and the network path between the consumer, the BIG-IP and the provider has been successfully set up on the APIC.

7) Verify the connector configuration for PBR
Go to Device Selection Policy under Tenant->Services->L4-L7, expand the menu, and click on the device selection policy deployed for your service graph.

For the consumer connector, where PBR is not enabled:
- Connector name: Consumer
- Cluster interface: 'provider'
- BD: 'F5-BD'
- L3 destination: checked
- Redirect policy: leave blank (no selection)

For the provider connector, where PBR is enabled:
- Connector name: Provider
- Cluster interface: 'provider'
- BD: 'F5-BD'
- L3 destination: checked
- Redirect policy: 'bigip-pbr-policy'
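The BIG-IP side of the deployment, described step by step in the next section, roughly amounts to the following one-arm configuration in tmsh. This is a sketch: the interface (1.1), the /24 self IP mask, the application port (80) and the object names are assumptions for illustration.

```bash
# One-arm VLAN; untagged on the guest, since vCenter handles tagging for a VE
tmsh create net vlan vlan4094 interfaces add { 1.1 }
tmsh create net self 10.168.57.10 address 10.168.57.10/24 vlan vlan4094 allow-service default
tmsh create net route default gw 10.168.57.1

# Pool and VIP with Source Address Translation set to None, so the client IP is preserved
tmsh create ltm pool web_pool members add { 192.168.56.30:80 }
tmsh create ltm virtual pbr_vip destination 10.168.57.11:80 ip-protocol tcp \
    profiles add { tcp } pool web_pool source-address-translation { type none }
```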
Configuration on BIG-IP

1) VLAN / Self IP / Default route
- Default route: 10.168.57.1
- Self IP: 10.168.57.10
- VLAN: 4094 (untagged; for a VE the tagging is taken care of by vCenter)

2) Nodes / Pool / VIP
- VIP: 10.168.57.11
- Source Address Translation on the VIP: None

3) An iRule (at the end of the article) that can be helpful for debugging

A few differences in configuration apply when the BIG-IP is a Virtual Edition set up as a high-availability pair:

1) BIG-IP: set MAC Masquerade (https://support.f5.com/csp/article/K13502)
2) APIC: logical device cluster
- Promiscuous mode: enabled
- Add both BIG-IP devices as part of the cluster
3) APIC: L4-L7 Policy-Based Redirect
- L3 destinations: enter the floating BIG-IP Self IP and the MAC masquerade address

------------------------------------------------------------------------------------------------------------------------------------------------------------------

Configuration is complete; let's take a look at the traffic flows:

1. Client -> F5 BIG-IP -> Server
2. Server -> F5 BIG-IP -> Client

In flow 2, when the traffic is being returned from the server to the client, ACI uses the Self IP and MAC that were defined in the L4-L7 redirect policy to send the traffic to the BIG-IP.

iRule to help with debugging on the BIG-IP:

```
when LB_SELECTED {
    log local0. "=================================================="
    log local0. "Selected server [LB::server]"
    log local0. "=================================================="
}
when HTTP_REQUEST {
    set LogString "[IP::client_addr] -> [IP::local_addr]"
    log local0. "=================================================="
    log local0. "REQUEST -> $LogString"
    log local0. "=================================================="
}
when SERVER_CONNECTED {
    log local0. "Connection from [IP::client_addr] Mapped -> [serverside {IP::local_addr}] \
        -> [IP::server_addr]"
}
when HTTP_RESPONSE {
    set LogString "Server [IP::server_addr] -> [IP::local_addr]"
    log local0. "=================================================="
    log local0. "RESPONSE -> $LogString"
    log local0. "=================================================="
}
```
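While testing, the iRule's output can be followed live from the BIG-IP shell:

```bash
# Watch the LTM log while sending test traffic through the VIP
tail -f /var/log/ltm
```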
"==================================================" } Output seen in /var/log/ltm on the BIG-IP, look at the event <SERVER_CONNECTED> Scenario 1: No SNAT -> Client IP is preserved Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11 Rule /Common/connections <SERVER_CONNECTED>: Src: 10.168.56.30 Mapped -> 10.168.56.30 -> Dest: 192.168.56.30 Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30 If you are curious of the iRule output if SNAT is enabled on the BIG-IP - Enable AutoMap on the virtual server on the BIG-IP Scenario 2: With SNAT -> Client IP not preserved Rule /Common/connections <HTTP_REQUEST>: Src: 10.168.56.30 -> Dest: 10.168.57.11 Rule /Common/connections <SERVER_CONNECTED>: Src: 10.168.56.30 Mapped -> 10.168.57.10-> Dest: 192.168.56.30 Rule /Common/connections <HTTP_RESPONSE>: Src: 192.168.56.30 -> Dest: 10.168.56.30 References: ACI PBR whitepaper: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739971.html Troubleshooting guide: https://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/troubleshooting/Cisco_TroubleshootingApplicationCentricInfrastructureSecondEdition.pdf Layer4-Layer7 services deployment guide https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401_chapter_011.html Service graph: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401_chapter_0111.html https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/L4-L7-services/Cisco-APIC-Layer-4-to-Layer-7-Services-Deployment-Guide-401.pdf2.6KViews1like3Comments