Networking
Issue with 2 parallel F5 clusters
Hello everybody, and first of all thank you for taking the time to read my issue!

The issue I have is regarding a migration. We have a productive F5 BIG-IP cluster (Active/Standby), let's call this the "old F5", which has a lot of virtual servers in partitions, with specific pools and monitors for each application/service. This device also has 2 VLANs, internal (vlan11) and external (vlan10), and 2 interfaces in an LACP trunk that is tagged on both VLANs and connected with one leg to a Cisco APIC.

It has 2 self IP addresses (one for each VLAN):
10.10.10.1 - VLAN "external"
10.20.20.1 - VLAN "internal"
(numbers are just examples)

It also has 4 floating IP addresses (2 for each VLAN) with 2 traffic groups:
10.10.10.2 - VLAN external, traffic group 1
10.10.10.3 - VLAN external, traffic group 2
10.20.20.2 - VLAN internal, traffic group 1
10.20.20.3 - VLAN internal, traffic group 2

This cluster has to be replaced by another F5 BIG-IP cluster (let's call this the "new F5"). The new device is an identical copy of the old F5 (the config was taken from the old one and imported into the new one), meaning the same VLANs, monitors, pools, virtual server IP addresses, etc. At the moment it has its 2 interfaces disabled and a blackhole default reject route set up so that it doesn't interfere with the old F5, which is the productive one.

The idea is to configure the new F5 with IP addresses from the same subnet (for example 10.10.10.5), disable all the virtual servers so it doesn't handle traffic (the nodes, monitors and pools stay up on both devices), have the two F5 devices, old and new, running in parallel, and then move the virtual servers one by one by simply disabling each VS on the old F5 and enabling it on the new F5.
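One detail worth checking with a plan like this: on BIG-IP, disabling a virtual server does not stop the unit from answering ARP for its virtual address, so with both clusters on the same subnet the new F5 may still answer ARP for the shared VIPs. A minimal tmsh sketch for inspecting and toggling ARP per virtual address on the new cluster (10.10.10.50 is a hypothetical VIP, not an address from this post):

```shell
# List the ARP state of all virtual addresses (run on the new F5)
tmsh list ltm virtual-address arp

# Disable ARP for one virtual address so only the old F5 answers for it;
# re-enable it when that virtual server is cut over to the new cluster.
tmsh modify ltm virtual-address 10.10.10.50 arp disabled
tmsh modify ltm virtual-address 10.10.10.50 arp enabled
```

This is a sketch, not a confirmed diagnosis; whether it applies depends on whether the imported config left ARP enabled on the new unit's virtual addresses.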
At this point we also remove the blackhole route, configure the correct default static route (the same one as on the old F5), and enable the interfaces.

This sounded and looked good: on the new F5 the nodes and pools are green and the virtual servers are disabled, as expected. On the old productive F5 everything is up and green, BUT if I try to reach one of the virtual servers, either by virtual IP address or by hostname, the attempt just times out without any response (if I telnet to the VS on port 443 it connects, meaning the old F5 accepts the traffic). I also tried disabling the nodes on the new F5, but the behaviour is the same; the only way to get things working again is to disable the interfaces on the new F5 and add the default reject blackhole route back.

This is not how I imagined it would work. In my mind, the old F5 would work as normal, and the new F5 would see the nodes and pools as up (confirming good communication) but not handle any virtual server traffic, because its virtual servers are disabled.

Does anyone have any idea what is causing this issue: why, when both F5 devices are up in parallel, do connections to the virtual servers through the old productive F5 time out, while that F5 sees both the pools and virtual servers as up and running? Thank you in advance!

Need help to understand operation between RE and CE?
Hi all,

We have installed a CE site in our network, and this site has established IPsec tunnels with RE nodes. The on-prem DC site has the workloads (e.g. the actual web application servers that serve client requests).

I have a Citrix NetScaler background. Citrix NetScaler ADCs are configured with VIPs that are the front end for client requests coming from outside (the internet). When a request lands on a VIP it goes through both source NAT and destination NAT: its source address is changed to a private address according to the service where the actual application servers are configured, and it is then sent to the actual application server after the destination is changed to the IP address of the server.

In XC, the request will land in the cloud first, because the public IP assigned to us will lead the request to an RE. I have a few questions regarding the events that happen from there:

1. Will there be any SNAT on the request, or will it be sent to the site as is? If there is SNAT, what IP address will it be, and will it be done by the RE or by the on-prem CE?
2. There has to be destination NAT. Will this destination NAT be performed by the XC cloud, or will the request be sent to the site and the site will do the destination NAT?
3. When the request lands on the CE, it will land in the VN local outside, so this means we have to configure a network connector between the VN local outside and the VN in which the actual workloads are configured. What type would that VN be?
4. When the request is answered by the application server on the local on-prem site, the response has to go out to the XC cloud first, routed via the IPsec tunnel. This means we have to install a network connector between the virtual network where the workloads are present and site local outside. Do we have to install a default route in the application VN?
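Not an answer to the design questions, but one quick way to settle the SNAT part empirically is to capture on the application server while sending a test request through the XC-published service. The interface name and port below are assumptions:

```shell
# On the origin (application) server: watch the source address of incoming
# connection attempts. If SNAT is applied on the path (e.g. by the CE), the
# source will be a local/private address rather than the client's public IP.
tcpdump -ni eth0 'tcp port 443 and tcp[tcpflags] & tcp-syn != 0'
```

Comparing the captured source address against the CE's interface addresses shows both whether SNAT happens and which device applied it.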
Is there any document, post or article that could actually help me understand the procedure? Frankly, I have read a lot of F5 documents but couldn't find the answers.

Different Routes for Different Subnets on the same partition
Hi Guys,

When someone set up our F5, they created multiple partitions for different segments. We are trying to reconfigure the F5 so that everything runs from the Common partition.

We currently have our public Wi-Fi authentication happening via the F5 on a subnet REDACTED. That is working fine, because we have a route with REDACTED to the correct gateway. I also want to create a VS with the subnet REDACTED. We now have the self IPs in place, and the VLANs are in the same route domain (0).

The issue I am facing is that I can get to the back end of the VS; however, if I remove the default route for the public Wi-Fi and add the gateway for the REDACTED network, I can then access that, but not the public Wi-Fi. Can anyone help or provide a suggestion as to how I can get both subnets working on the same partition?

vCMP Host and Guest Communication
Hi All,

I'm having some difficulty with some pre-testing I'm doing for a vCMP host/guest design, and I'm hoping somebody here can steer me in the right direction.

Basically, the deployment is very restrictive in terms of isolation, so for each environment (UAT/PPD/PRD) we have presentation, abstraction and database networks. Due to the restrictive nature of the deployment, where each environment network needs to be firewalled off (the L3 gateway for each subnet is the firewall), the only way I have found to achieve the isolation requirements is to create 3 x RDs per administration partition, referencing each environment and defining a unique RD default gateway for each subnet in each environment.

What I want to do is some pre-testing to verify my configuration: create a self IP on the vCMP host in each VLAN for each environment, verify that the strict isolation requirements are working, and check that I can ping from a specific RD on the guest to an IP address in a different network on the vCMP host. I can ping from the vCMP guest to each of the self IP addresses defined on the vCMP host, confirming that the VLANs are presented between vCMP host and guest. The problem is that I never get an echo reply back from the vCMP host when trying to ping outside of the local route domain subnet.

An example:

UAT presentation network is 192.168.8.0/24; I can ping 192.168.8.1 on the vCMP host (VLAN 180, self IP .8.252, floating IP .8.254). I can ping from host to vADC and from vADC to host OK. (route domain 8)
UAT abstraction network is 192.168.9.0/24; I can ping 192.168.9.1 on the vCMP host (VLAN 190, self IP .9.252, floating IP .9.254). I can ping from host to vADC and from vADC to host OK. (route domain 9)

What fails is pinging from route domain 8 to the vCMP host IP 192.168.9.1. If I tcpdump on the vCMP host, I see the echo request come in on the presentation network interface of the vCMP host, but I never get an echo reply.
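For reference, the cross-RD test described above can be driven from the guest's shell with route-domain-aware commands; `rdexec` runs a command inside a given route domain (route domain IDs and the target IP are taken from the example above):

```shell
# From the vCMP guest: source the ping from route domain 8 toward the host
# self IP that lives in the route domain 9 network. With no route (and no
# forwarding path) between the RDs, the echo reply has no way back.
rdexec 8 ping -c 3 192.168.9.1
```

This is a sketch assuming a BIG-IP shell where `rdexec` is available; it makes the source route domain of the test explicit rather than relying on the default RD.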
Update: I'm guessing, but I think my issue is that I'm trying to route through a self IP/floating IP. The only way this would work is if I had a forwarding VIP set up in the appropriate zones and that IP address were used as the next hop, right? I don't think this can work, as the vCMP host is dedicated to vCMP only and isn't running LTM; therefore I cannot define a forwarding VIP, and this testing is flawed.

Can somebody please verify that my comment is correct? It would be hugely appreciated.

Cheers,
Andy.
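For completeness, the forwarding VIP mentioned in the update is an LTM object, so it could only live on an LTM-running guest, never on the vCMP host itself. A rough sketch of such a forwarding (IP) virtual server (the name and VLAN are placeholders, not from the original config):

```shell
# Forwarding (IP) virtual server sketch: forwards traffic according to the
# BIG-IP routing table instead of sending it to a pool. Requires LTM, so it
# can be defined on a vCMP guest but not on the vCMP host.
tmsh create ltm virtual /Common/vs_forward_rd8 \
    destination 0.0.0.0:any mask any ip-forward \
    profiles add { fastL4 } vlans-enabled vlans add { vlan180 }
```

Under that assumption, the conclusion in the update holds: without LTM on the host, there is no forwarding object to route between the route domains, and the ping test cannot succeed as designed.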