routes
HA CIS multi-cluster OpenShift route creation
I would like to verify that, when creating a route in an OpenShift multi-cluster HA CIS environment, the endpoints of a service on the secondary cluster are added to the pool members automatically. First I added the annotation below:

    virtual-server.f5.com/multiClusterServices: |
      [
        {
          "clusterName": "openshift-engineering-02",
          "service": "tea-svc",
          "namespace": "cafe",
          "servicePort": 8080,
          "weight": 100
        }
      ]

Then I saw that creating routes without this annotation still adds the pods of the service with the same name and in the same namespace on the secondary cluster. Is this annotation not required for an HA CIS multi-cluster application? Does HA CIS always add the pods on the secondary cluster as pool members if they belong to the same service and namespace as on the primary cluster? And does the same apply if the secondary CIS becomes the active CIS? What about services on other external clusters? Is the virtual-server.f5.com/multiClusterServices annotation only required when the service or namespace does not match the names in the route manifest?
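For context, a minimal sketch of where the annotation sits on a Route manifest, assuming the scenario from the last question: the annotated service (coffee-svc in namespace bar on an external cluster) deliberately does not match the service named in the route spec. The route name, host, cluster name, and service/namespace names here are hypothetical.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: cafe-route                 # hypothetical route name
  namespace: cafe
  annotations:
    # The value is a JSON list; comments are not allowed inside it.
    virtual-server.f5.com/multiClusterServices: |
      [
        {
          "clusterName": "external-cluster-01",
          "service": "coffee-svc",
          "namespace": "bar",
          "servicePort": 8080,
          "weight": 100
        }
      ]
spec:
  host: cafe.example.com           # hypothetical host
  to:
    kind: Service
    name: tea-svc                  # the service named in the route itself
  port:
    targetPort: 8080
```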
F5 APM Network Access route domain -- specific gateway for VPN clients

I have set up a virtual server listening on the WAN for VPN requests on port 443, with a dedicated VLAN for VPN clients, 10.12.200.0/23. I created a new route domain and added the VLAN to it. In the VPE I added a Route Domain assignment, selecting the correct one, after authentication and before Advanced Resource Assign. I created self IPs 10.12.200.3%200 and 10.12.200.4%200 (floating), and I am able to ping the gateway on the upstream switch, 10.12.200.1.

If I add a default route 0.0.0.0%200 0.0.0.0 10.12.200.1%200, I can't reach anything over the VPN: traffic hits the self IP 10.12.200.3 and stops. If I turn on proxy ARP, I get full connectivity, but the VPN client disconnects almost immediately (usually between 1 and 10 seconds after connecting), with no log messages other than a client request to disconnect the VPN session in the Windows logs; APM just says the session was deleted due to a user logout request.

I then deleted the default route and created an L4 forwarding virtual server with source 10.12.200.0%200/23 and destination 0.0.0.0%200/0, with source address translation turned off as well as address and port translation turned off, and set the pool to the gateway 10.12.200.1%200. I bound this to the VLAN as well as to the connection profile VLAN. This also cannot get past 10.12.200.3. If I turn on proxy ARP, the same thing happens: it works perfectly for a few seconds and then abruptly disconnects. If I turn off proxy ARP but set SNAT to Automap, I can ping everything, but nothing works in a browser, RDP, SSH, etc.; they all come back saying connection refused.

I cannot figure out why this is failing to work. I have seen several articles about this and have set it up as others have suggested, but I have not been able to successfully route via a default route from that VLAN once connected to the VPN.
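For concreteness, a minimal tmsh sketch of the two approaches described above; the object names (default_rd200, pool_vpn_gw, vs_vpn_forward, vlan_vpn) are hypothetical, while the addresses and route domain ID are the ones from the post:

```
# Approach 1: default route inside route domain 200, next hop the upstream switch
create net route default_rd200 network 0.0.0.0%200/0 gw 10.12.200.1%200

# Approach 2: L4 forwarding virtual server from the VPN client subnet,
# no address/port/source translation, pool containing only the gateway
create ltm pool pool_vpn_gw members add { 10.12.200.1%200:0 }
create ltm virtual vs_vpn_forward source 10.12.200.0%200/23 destination 0.0.0.0%200:0 mask any ip-protocol any profiles add { fastL4 } pool pool_vpn_gw translate-address disabled translate-port disabled source-address-translation { type none } vlans add { vlan_vpn } vlans-enabled
```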
BIG-IP does not route traffic from "internal VLAN" to "external VLAN"

Hello, my name is Joaquin. I am working in a lab on an implementation of a BIG-IP 2000S LTM. Let me explain my topology and what is happening.

I want to load-balance outgoing traffic across two internet links, for example Link 1: 1.1.1.5 and Link 2: 2.2.2.5. I added three self IPs: 1.1.1.1, which points to Link 1; 2.2.2.1, which points to Link 2; and one on the internal VLAN, described below. Then I added two nodes, 1.1.1.5 and 2.2.2.5, to a pool (pool_dfg) for the load balancing. I also created an internal VLAN (untagged) on interface 1.1 that faces a host at 192.168.1.1, with its own self IP (e.g. 192.168.1.100), and an external VLAN (untagged) that faces Link 1 (interface 1.3) and Link 2 (interface 1.4), plus a default route 0.0.0.0/0 that points to the pool pool_dfg of the links.

From the F5 I can ping the router interfaces, and it load-balances the pings between one link and the other as I want; if I connect a host behind the routers I can ping it too. But here is the problem: from the host on the internal VLAN (192.168.1.1), I can ping the F5's interfaces (self IPs, e.g. 1.1.1.1), but the pings do not pass through the F5. For example, if I try to ping the "Link 1" node (the 1.1.1.5 router interface) I can't, and the same with the "Link 2" node; the packets never arrive. I think I am missing something essential.

If you could help me I would be very grateful. If you need any more information, don't hesitate to ask. **_First of all, thank you so much for reading my question!_**
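One detail worth checking in this topology (a common cause, not a confirmed diagnosis): BIG-IP is a default-deny full proxy, so a default route alone does not make it forward transit traffic; there has to be a listener that matches the traffic arriving from the internal host. A minimal tmsh sketch of such a setup, where the virtual server name vs_outbound_fwd and the VLAN name internal are assumptions and the pool and addresses come from the post:

```
# Gateway pool with both upstream routers (from the post)
create ltm pool pool_dfg members add { 1.1.1.5:0 2.2.2.5:0 } monitor gateway_icmp

# Default route via the gateway pool (from the post)
create net route default_gw network default pool pool_dfg

# Wildcard forwarding virtual server on the internal VLAN; without a
# listener like this, transit traffic from 192.168.1.0/24 stops at the
# self IP because nothing accepts it for forwarding.
create ltm virtual vs_outbound_fwd destination 0.0.0.0:0 mask any source 192.168.1.0/24 ip-protocol any profiles add { fastL4 } vlans add { internal } vlans-enabled ip-forward
```

If the upstream routers have no return route for 192.168.1.0/24, SNAT Automap (source-address-translation { type automap }) would also be needed on that virtual server so replies come back through the BIG-IP.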