BGP stops advertising after upgrade
Hello, we have an LTM VE in an HA cluster. We have defined a couple of route domains (RDs) and have enabled BGP/BFD for these route domains. A BGP routing configuration is present (imish -r RD); in this configuration the peer devices are defined, and via RHI (route health injection) we advertise our virtual servers towards these BGP peers. The current setup runs on version 13.1.1.5 and has been working for a long time without any issue.

As v13 is going end of life, we recently tried to upgrade to v14.1.5.2. The upgrade itself went smoothly: the new version was activated, and all pools and virtual servers were present as before. Initially all looked OK. When we checked our BGP peer (show ip bgp summary) we could see that the peering was established, which again looked OK. But when checking the advertised routes, no routes were being advertised: "sh ip bgp neighbors x.x.x.x advertised-routes" showed no routes present, whereas on v13 we had about 10 virtual servers being announced.

I'm aware of the article https://cdn.f5.com/product/bugtracker/ID1031425.html concerning BGP advertising, but that covers the case where you receive a route and then try to advertise it on from the F5 (back-to-front advertising). In our case the F5 is the end device and is just announcing its virtual servers, so we are not receiving any BGP update and then sending those routes on.

In the end we needed to roll back to v13 by booting from the partition with the old version. Once this was done, everything started working again, including BGP.

Any idea what the issue could be here? I've pasted our BGP config below; it's quite basic. We use a route-map to block incoming updates (DENY-ALL), and with the route-map KERNEL2BGP we control which virtual servers we can advertise.
(Each IP we want to announce is mentioned in this route-map.)

```
router bgp F5-AS
 bgp router-id F5-selfIP
 bgp always-compare-med
 bgp log-neighbor-changes
 bgp graceful-restart restart-time 120
 redistribute kernel route-map KERNEL2BGP
 neighbor peer-IP remote-as "remote-as-nr"
 neighbor peer-IP description "xxx"
 neighbor peer-IP update-source selfip-address
 neighbor peer-IP password "xxx"
 neighbor peer-IP timers 3 9
 neighbor peer-IP fall-over bfd
 neighbor peer-IP next-hop-self
 neighbor peer-IP soft-reconfiguration inbound
 neighbor peer-IP route-map DENY-ALL in
```
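For what it's worth, a few checks that usually narrow down "peering up but nothing advertised" after an upgrade; this is only a diagnostic sketch, and the route domain number and peer IP below are placeholders, not taken from the config above:

```
# On the BIG-IP shell: confirm RHI is still enabled per virtual address
tmsh list ltm virtual-address all-properties | grep -E "virtual-address|route-advertisement"

# Inside the routing daemon for the route domain (here assumed to be RD 1)
imish -r 1
show ip route kernel                  # RHI routes must appear here before BGP can redistribute them
show route-map KERNEL2BGP             # does the route-map still match those kernel routes?
show ip bgp neighbors x.x.x.x advertised-routes
```

If "show ip route kernel" comes back empty while the virtual servers are up, the problem sits between TMM/RHI and the routing daemon rather than in the BGP session itself.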
LTM VE virtual server unreachable sometimes

Hello, we use an HA cluster of 2 LTMs running version 13.1.1.5. Until a few weeks ago the LTMs were deployed as vCMP guests on a VIPRION 2400 platform. Early 2022 we migrated our LTMs from the VIPRION towards LTM VE units running on VMware ESX. During the migration we didn't encounter any issue (we basically used the RMA process to replace the units one by one with LTM VEs).

A basic setup (VIP + TCP port) is done for some applications on this cluster: a virtual server, together with SNAT, pointing to the pool members. We also use MAC masquerading to create the fake MAC addresses, and for this purpose we set "promiscuous mode", "forged transmits" and "MAC address changes" to "accept" on the VMware side. VMware runs on an HP blade enclosure, where normally 1 blade = 1 VMware host. During setup we asked the VMware team to always keep both units of the HA cluster on different VMware hosts (for redundancy, we thought).

Since the replacement we are getting complaints for some setups that during the night (a period of low traffic), access to the virtual server (VIP + TCP port) is lost for a short period. When we check the LTM logs, however, we see that the pool members are still reachable, as there are no up/down events; neither do we see failover messages in the logs. So clearly the HA cluster remains stable and pool member monitoring keeps working.

Further investigation at the network level showed that during the nights when the issue occurs, the MAC addresses from the active unit but also from the standby unit were seen on the same uplinks, even though we thought the units were on different VMware hosts. Long story short: after a while the VMware team confirmed that even when servers are on different VMware hosts (blades), they can still use the same uplinks to the network. Each blade has 4 uplinks to the virtual switch, and the virtual switch in turn has 4 uplinks to the Virtual Connect modules. VMware uses round robin to determine which uplink to use.
Consequently, even when the units are on different blades, there is a 25% chance of them using the same uplinks. When this occurs, we see that during periods with little traffic we lose connectivity; this doesn't happen when the units use different uplinks.

We suspect this has to do with the aging timer of the virtual server's MAC address, which is the MAC masquerade address. Our suspicion is that at the VMware level the masquerade MAC is no longer known, while at the network level (Cisco switches/routers) we still see the masquerade address, so you have to wait until an ARP is done at the router level for the MAC address to become known again.

Does anybody have similar experiences? Does anybody have more info on how MAC masquerade addresses are learned at the virtual switch level, and how long they are cached there?
LTM VE Deployment limited VLANs

Hi, I need to deploy a pair of LTM VE appliances in HA in an internal environment. The problem is, I am told there are only 2 VLANs available on the deployed virtual switch I would be using, and I need to deploy MGMT / HA / Internal. This seems a bit mad, but I believe a previous employee deployed the VM environment and there are concerns about expansion (I won't go into the obvious lack of planning at the outset here!).

As HA is just for chat between the 2 F5s themselves, I was thinking of using a separate non-routable subnet for this, while utilizing one of the available VLANs used for MGMT or Internal. Would this kick up an error due to the same VLAN being used, or does the F5 just check the assigned IP address/subnet? Thanks in advance
ltm ve network interface driver

Hi, we have some older LTM VE devices running v12.1.3.7; for support reasons we are upgrading them to v14 and later to v15 (LTM upgrade path). They are running on VMware ESX v7 (so higher than 6.7). When we tried to upgrade to v14.1.5.2 as a first step towards v15, we lost all connectivity to the system. Even our mgmt interface was not responding any more (SSH or HTTPS). When accessing it over a VMware console, we could see the config was loaded and running on v14, and the interfaces were still present, but the unit was not able to send any communication over any interface. In the end we used the switchboot command to return to the old v12 version.

I'm aware of several articles like https://my.f5.com/manage/s/article/K74921042 concerning the network driver to be updated. But when we try to run the "tmctl -d blade tmm/device_probed" command, we get an error: "tmctl: tmm/device_probed: No such table". Can anyone tell us why we are seeing this error on the tmctl command? I know we need to upgrade, and we need to check the network driver after doing so. But if I can't check the network drivers, how can I know whether they need to be updated as described in the article? Any suggestion is welcome.
Help with iRule

Good day all! I have the following iRule:

```
when HTTP_REQUEST {
    if { ([HTTP::host] eq "lists.example.com") and
         ([HTTP::uri] eq "/cgi-bin/wa?INDEX" || [HTTP::uri] eq "/cgi-bin/wa?MOD" ||
          [HTTP::uri] eq "/cgi-bin/wa?SYSCFG" || [HTTP::uri] eq "/cgi-bin/wa?OWNER" ||
          [HTTP::uri] eq "/cgi-bin/wa?INDEX=" || [HTTP::uri] eq "/cgi-bin/wa?LOGON" ||
          [HTTP::uri] eq "/cgi-bin/wa?LOGON=INDEX" || [HTTP::uri] eq "/cgi-bin/wa?LOGON=" ||
          [HTTP::uri] eq "/cgi-bin/wa?ADMINDASH" || [HTTP::uri] eq "/cgi-bin/wa?LSTCR1") } {
        switch -glob [class match [IP::client_addr] eq "LISTSERV-TST_Allowed_IPs"] {
            "1" { return }
            default { HTTP::redirect "https://www.google.com/" }
        }
    } else {
        return
    }
}
```

As you can see, it is inefficient, and it doesn't account for all possibilities. Let me explain what I am aiming for. If an `HTTP_REQUEST` comes to "lists.example.com" (`[HTTP::host]`), and the URI (`[HTTP::uri]`) isn't "/cgi-bin/wa?SUBEDIT1*" (that is, "/cgi-bin/wa?SUBEDIT1" and anything after it), redirect it unless it is from an IP on the "LISTSERV-TST_Allowed_IPs" list, in which case allow anything on the URI and continue to it. What would you do?
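One way to express that aim directly; a minimal sketch, assuming LISTSERV-TST_Allowed_IPs is an address-type data group (untested):

```tcl
when HTTP_REQUEST {
    if { [HTTP::host] eq "lists.example.com" } {
        # Anything under /cgi-bin/wa?SUBEDIT1 is open to everyone
        if { [HTTP::uri] starts_with "/cgi-bin/wa?SUBEDIT1" } {
            return
        }
        # Every other URI is only allowed from the address data group
        if { ![class match [IP::client_addr] equals LISTSERV-TST_Allowed_IPs] } {
            HTTP::redirect "https://www.google.com/"
        }
    }
}
```

This drops the long chain of eq comparisons entirely: allowed IPs may request any URI, and everyone else is redirected unless the URI starts with the SUBEDIT1 prefix.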
Reset cause

Hello, can someone help me with this? I have an F5 LTM VM, and the "show /net rst-cause" command displays this situation:

```
TCP/IP Reset Cause
RST Cause                               Count
---------------------------------------------
Flow expired (sweeper)                 103387
No flow found for ACK                  339414
No pool member available                    0
RST from BIG-IP internal Linux host    659163
SSL handshake timeout exceeded              3
TCP RST from remote system             114027
TCP retransmit timeout                     48
TCP zero window timeout                   136
Unknown reason                             57
handshake timeout                       52912
```

I have tried enabling the logs on the LTM in order to understand the handshake timeout resets, but I am quite confused. I can't figure out the cause of the TCP handshake timeouts, or how to increase the timeout in the TCP profile. The LTM log returns me this error:

RST sent from 10.109.120.228:35681 to 10.1.29.237:8403, [0x2f3864d:271] {peer} handshake timeout

Thank you for your support.
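If not already set, two db variables make the individual RST causes easier to trace; a sketch (these log globally, so expect noise on a busy box):

```
tmsh modify sys db tm.rstcause.log value enable   # write the reason for each RST to /var/log/ltm
tmsh modify sys db tm.rstcause.pkt value enable   # also embed the reason text in the RST packet itself
tail -f /var/log/ltm | grep "RST sent"
```

The {peer} marker in the log line above suggests the reset was sent on the server-side flow, i.e. the handshake that timed out is the one towards 10.1.29.237:8403.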
Unable to login to F5 Big-IP CLI console after data centre move

Hello, I am looking for any help! We currently have BIG-IP 17.1 Best bundle running on a VE in ESXi. After moving data centres, I am unable to log in to the CLI via the console to move the management interface onto the new network that's been put in place. When I try to log in as root, it hangs, then presents the message "gethostbyname: Unknown host", and then reverts to the username login prompt without asking for the password. I have tried rebooting, but without being able to get past the login prompt there's not much more I can do! I'm assuming it may be trying to do some DNS resolution at logon, but as it's on a new network it can't get out until I change it. Thanks for any help
Help with excessive RST and Port denied issues

Just set up a BIG-IP trial in my VMware lab. I have a self IP on the external interface and one on the internal. I created a pool with three web servers on the internal side and made a virtual server pointing to that pool. Everything looks green in the F5, and I'm able to ping the web servers from the BIG-IP and from the machine I'm connecting from as well. But in the logs I'm seeing constant TCP resets from the F5 external IP to both my ESXi hosts, and also a lot of "port denied" errors. Needless to say, when I try to connect to the VIP it just times out, even though a port scan shows port 80 open.

A "show /net rst-cause" shows this, and it's only about 20 minutes since I reset all the counters:

```
TCP/IP Reset Cause
RST Cause                               Count
---------------------------------------------
No flow found for ACK                     186
Port denied                              1580
RST from BIG-IP internal Linux host       115
TCP RST from remote system                  0
TCP retransmit timeout                     12
handshake timeout                           0
```

I'm also seeing "No flow found for ACK" messages from my internal self IP to the web server IPs. What is going on, and what have I done wrong?
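On the "Port denied" side: that counter normally climbs when traffic lands on a self IP whose port lockdown setting rejects the service, which is easy to hit in a fresh lab install. A sketch for checking and, if appropriate for a lab, relaxing it; the self IP name below is a placeholder:

```
tmsh list net self all-properties | grep -E "net self|allow-service"
tmsh modify net self external_selfip allow-service default   # options include none, default, all
```

For the VIP timing out, it is also worth verifying that the virtual server uses SNAT automap (or that the web servers use the BIG-IP internal self IP as their gateway); otherwise the return traffic bypasses the F5, the client only ever sees half the handshake, and counters like "No flow found for ACK" climb.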
HA Active/Standby add 2nd Floating IP from a different Vlan

I have 1 HA active/standby pair, and I am looking to add a second floating IP for management access from our management VLAN. We want to access the configuration GUI from an internal URL and reach the active F5 no matter which unit is active. Currently we have a floating self IP and a non-floating self IP on each of the pair.

What considerations do I need to take to accomplish this? Is this feasible? Do I need to add/change the SNAT pool? Will this affect config-sync or failover?

SNAT pool: internal-snatpool 10.1.20.20

Current setup example:

prd1
10.1.20.1 - traffic-group-local-only, internal
10.20.30.213 - traffic-group-local-only, external
10.20.30.215 - traffic-group-1, external, port lockdown set to None
192.168.1.22 - traffic-group-local-only, HA

prd2
10.1.20.2 - traffic-group-local-only, internal
10.20.30.214 - traffic-group-local-only, external
10.20.30.215 - traffic-group-1, external, port lockdown set to None
192.168.1.23 - traffic-group-local-only, HA

Possible setup example:

prd1
10.1.20.1 - traffic-group-local-only, internal
10.20.30.213 - traffic-group-local-only, external
10.30.30.213 - traffic-group-local-only, external
10.20.30.215 - traffic-group-1, external, port lockdown set to None
10.30.30.215 - traffic-group-1, external, port lockdown set to default
192.168.1.22 - traffic-group-local-only, HA

prd2
10.1.20.2 - traffic-group-local-only, internal
10.20.30.214 - traffic-group-local-only, external
10.30.30.214 - traffic-group-local-only, external
10.20.30.215 - traffic-group-1, external, port lockdown set to None
10.30.30.215 - traffic-group-1, external, port lockdown set to default
192.168.1.23 - traffic-group-local-only, HA
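What the possible setup above describes is the usual approach: one extra non-floating self IP per unit plus one floating self IP in traffic-group-1 on the management VLAN. A sketch using the addresses from that example; the self IP and VLAN names are placeholders:

```
# On prd1 (prd2 gets 10.30.30.214 as its non-floating address)
tmsh create net self mgmt_selfip address 10.30.30.213/24 vlan mgmt_vlan \
    traffic-group traffic-group-local-only allow-service default
tmsh create net self mgmt_float address 10.30.30.215/24 vlan mgmt_vlan \
    traffic-group traffic-group-1 allow-service default
```

Since this is traffic to the box itself rather than load-balanced traffic, the SNAT pool shouldn't need changes, and adding self IPs doesn't by itself affect failover. Note that non-floating self IPs are device-specific and are not carried over by config-sync, so create them on each unit.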