bgp
Regular expression format in user_alert.conf
I'm trying to use iCall and an event from user_alert.conf to fail over a BIG-IP VE cluster if an arbitrary BGP neighbor goes down. I have the handler and script working just fine when the event only looks in my logs for a static phrase, but when I have it look for a regex instead, it no longer works. However, if I test my expression against a log entry in a tool like regex101, it matches just fine.

Here's my user_alert.conf (sanitized, of course):

alert bgp_neighbor_down "neighbor 100.200.[0-9]{1,3}.[0-9]{1,3} Down" {
    exec command="tmsh generate sys icall event neighbordown context { { name protocol value bgp } }"
}

And one of the logs I'm trying to match on:

2024/06/20 15:04:32 informational: BGP : %BGP-5-ADJCHANGE: neighbor 100.200.30.4 Down BGP Notification CEASE

If I then run imish and shut down a neighbor that should match that regex, the device I'm on stays active. Any thoughts on what else I can try?
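Not a definitive fix, but one variable worth isolating is the regex syntax alertd accepts. The variant below escapes the literal dots and avoids the bounded quantifier {1,3}, on the assumption (unverified) that alertd's matcher may not support bounded repetition even though tools like regex101 do; since the static-phrase version fires, testing this intermediate form helps narrow down which construct breaks the match.

alert bgp_neighbor_down "neighbor 100\.200\.[0-9][0-9]*\.[0-9][0-9]* Down" {
    exec command="tmsh generate sys icall event neighbordown context { { name protocol value bgp } }"
}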
How to config BGP peering for F5 in HA-pair?

Hi, I've set up F5 BGP peering with a router and have a problem: we can't use a floating IP as the BGP neighbor address (https://support.f5.com/csp/article/K62454350), so we have to use the self IPs as the BGP neighbor addresses. The problem is that the router then can't decide which path is correct when it sends response traffic back to the F5, the active unit or the standby unit, because the router can't see the HA status of the F5s. I tried adding an AS-path prepend on the unit that is currently standby, and that works, but when the standby unit takes over it fails again. Is there a way to deploy BGP with an F5 HA pair? Thank you.
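For reference, a minimal sketch of the pattern this usually comes down to: identical BGP peering on both units from their non-floating self IPs, with the virtual address injected into the routing table (RHI) only where its traffic group is active, so that only the active unit actually advertises the VIP host route and the router's best path always points at the active unit. The addresses and AS numbers below are placeholders.

From tmsh, on the virtual address to be advertised:

 modify ltm virtual-address 203.0.113.10 route-advertisement enabled

From imish, identically on both units:

 router bgp 65001
  neighbor 192.0.2.1 remote-as 65000
  redistribute kernel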
Outbound iRule / BGP routing

Hey sirs, I would like to ask a question about the order of precedence for a connection that is handled by a forwarding virtual server versus the routing table. Currently we have a forwarding any:0 virtual server that load balances outgoing internet traffic through a pool_default_gateway containing the IPs of three routers from different ISPs, plus some iRules that make the SNAT decision based on the LAN segment. We are planning to include the F5 pair in the BGP neighbors of each ISP ASN, receive the default route, and advertise the virtual server public IPs. When the F5 starts reading the dynamic routing table obtained via BGP, could the traffic handled by the forwarding any:0 virtual servers, including connections manipulated via iRule, show any kind of intermittence? Thanks in advance.
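As a point of reference for the precedence question, below is a rough tmsh sketch of the two outbound styles being compared; the virtual server names are hypothetical and this is a simplification rather than a statement about any particular release. A wildcard virtual server that has a pool selects its next hop from the pool members (pool_default_gateway here), while a Forwarding (IP) virtual server with no pool relies on a route lookup, which is where routes learned via BGP would be consulted.

ltm virtual vs_outbound_pool {
    destination 0.0.0.0:any
    ip-protocol any
    profiles { fastL4 { } }
    pool pool_default_gateway
    translate-address disabled
    translate-port disabled
}

ltm virtual vs_outbound_routed {
    destination 0.0.0.0:any
    ip-protocol any
    ip-forward
    profiles { fastL4 { } }
    translate-address disabled
    translate-port disabled
}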
Using F5 Distributed Cloud private connectivity orchestration for secure multi-cloud infrastructure

Introduction

Enterprise businesses use modern apps that access services in many locations. Users running productivity apps, like Office365, must connect to services in the cloud from on-prem locations. To keep this running well, enterprises must provide connectivity that's fast, reliable, and private. Traditionally, it has taken many steps to create private connections to a public cloud subscription and route application-specific traffic to it. The F5 Distributed Cloud Platform orchestrates ExpressRoute in Azure and Direct Connect services in AWS, eliminating many of the steps needed for routing end-to-end. Distributed Cloud private connectivity orchestration makes it easier than ever to connect and configure routing over existing private and dedicated circuits from on-prem locations to cloud services running in AWS and in Azure.

The illustration below outlines the basic components of an ExpressRoute service in Azure, but there's a lot more you'll need to know about just under the cover. Without orchestration, many steps are needed to enable routing between on-prem sites and Azure. This requires expert knowledge of Azure networking, numerous dependent resources to be built, and advanced routing protocol knowledge, specifically the Border Gateway Protocol (BGP):

1. Extend the on-prem network to a colo provider
2. Create and provision the ExpressRoute Circuit
3. Create a Virtual Network Gateway
4. Create a connection between the ExpressRoute Circuit and the Virtual Network Gateway (VNG)
5. Configure a Route Server to propagate routes between the VNG and on-prem
6. Configure user-defined routes on each subnet of each VNet in Azure

Using Distributed Cloud to orchestrate ExpressRoute in Azure and Direct Connect in AWS, the total number of steps is effectively reduced to just an essential few. Additional benefits include no longer needing to be an expert in Azure networking or in BGP routing, and the ability to control connectivity with intent-based policies natively built into the Distributed Cloud Platform. An example of an intent-based policy is to configure VNet tagging in Azure for use with a firewall policy that allows access only to specific apps or by select users. Additional policies that support tagging include Distributed Cloud WAAP and Distributed Cloud App Infrastructure Protection.

The following details cover the key components needed to support direct connectivity and show how to create the services and deploy a privately routed app in Distributed Cloud.

Building ExpressRoute to Azure

1. Extend the on-prem network to a colo provider
2. Create and provision the ExpressRoute Circuit
3. Enable the ExpressRoute orchestration feature on an Azure VNet Site configured in Distributed Cloud

To create an ExpressRoute orchestrated configuration in Distributed Cloud, navigate to Multi-Cloud Network Connect > Site Management > Azure VNET Sites > Add Azure VNET Site, or Manage Configuration for an existing Site. Enter the required parameters, and when you reach "Ingress Gateway or Ingress/Egress Gateway", select "Ingress/Egress Gateway (Two Interface) …". Here you have the option to deploy in a Recommended Region or an Alternate Region; this selection depends entirely on your business' cloud deployment model. After choosing the model that best fits your environment, configure the number of Availability Zones for the gateway and the subnets (new or existing) it will join, then Apply the settings. Now scroll down to Advanced Options (enabling Advanced Fields) and select VNet type: Hub VNet.
Click "View Configuration" and select any existing VNets from your Azure subscription that should inherit the orchestrated routing. Next, change the "Express Route Configuration" to Enabled to expand the dropdown and access the ExpressRoute Circuit and Virtual Network Gateway settings.

Under "* Connections", add the ExpressRoute Circuit configuration for your Azure subscription(s). The required fields are the Name and the Express Route Circuit; the latter is the Resource ID for the circuit in Azure.

Note: When configuring more than one circuit, you may also want to configure the Routing Weight for circuit preference. When configuring an ExpressRoute circuit from another subscription (not shown below), you'll also need an Authorization Key.

For ease of deployment, it's recommended to use the default values for the remaining fields, including the Gateway SKU, Subnet for Azure VNet Gateway, Subnet for Azure Route Server, and the ASN Configuration for BGP between the Site and the Azure Route Servers.

After the configuration is fully saved and deployed, with site status Applied on the Cloud Sites page, all resources in Azure will be set to use the ExpressRoute Circuit(s) for all designated L3 routed traffic. Next, we'll configure the orchestration of Direct Connect in an AWS VPC connected site. AWS TGW connected sites are also supported.

Building Direct Connect for AWS

To create a Direct Connect orchestrated configuration in Distributed Cloud, navigate to Multi-Cloud Network Connect > Site Management > AWS VPC Sites > Add AWS VPC Site, or Manage Configuration for an existing Site. Enter the required parameters, and when you reach "Ingress Gateway or Ingress/Egress Gateway", choose the form factor that meets your deployment requirements. Scroll down to Advanced Configuration, enable Advanced Fields, and then Enable Direct Connect.

When configuring the Direct Connect connection feature, choose either Hosted VIF or Standard VIF mode. Use Hosted VIF when you're already using the Direct Connect connection for other purposes in AWS or when the VIF is in another AWS subscription. Otherwise, choosing Standard VIF allows Distributed Cloud to automatically create the VIF and the dependent services in AWS, mentioned below, that access the Direct Connect connection.

Standard VIF mode creates the following additional resources in AWS:

- Virtual Gateway (VGW): associated to the VPC, with route propagation enabled to the inside route tables
- Direct Connect Gateway (DCG): associated to the VGW

Note: In Standard VIF mode, at the end of the deployment admins may copy the Direct Connect Gateway ID and use it to create other VIFs. Admins may also copy the ASN. This is the AWS side of the ASN that's needed by network ops teams to configure BGP peering.

Note: In Hosted VIF mode, the site deployment is independently responsible for:

- Creating the VGW, associating it to the VPC, and enabling route propagation to the inside route tables
- Creating the DCG and associating it to the VGW
- Accepting the Hosted VIF and linking it to the DCGW VIF

Optionally, you may configure a Custom ASN if needed to work with an existing BGP configuration, or choose Auto to let Distributed Cloud figure it out. Apply the config, save changes, and exit to the general Sites page. After the configuration is fully saved and deployed, with site status Applied on the Cloud Sites page, all resources in AWS will be set to use the Direct Connect Gateway for all designated L3 routed traffic.
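To illustrate what the hand-off to the network ops team looks like from the other side, here is a rough sketch of a colo/on-prem router configuration for the private VIF peering. It is a generic Cisco-style example, not output from any of the systems above, and every value (interface, VLAN, link addresses, MD5 key, advertised prefix, and both AS numbers) is a placeholder; the remote ASN stands in for the AWS-side ASN copied from the Direct Connect Gateway.

interface GigabitEthernet0/1.100
 description Direct Connect private VIF (VLAN 100)
 encapsulation dot1Q 100
 ip address 169.254.255.2 255.255.255.252
!
router bgp 65010
 neighbor 169.254.255.1 remote-as 64512
 neighbor 169.254.255.1 password <md5-key>
 network 10.50.0.0 mask 255.255.0.0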
Adding Private Connectivity On-Prem

The final part of this deployment is routing both the ExpressRoute and Direct Connect circuits to an on-prem site. Both circuits must terminate at a colo space, and then standard IT/NetOps teams handle the routing outside the realm of Distributed Cloud to the destination.

Building a Distributed App w/ Private Connectivity

With Distributed Cloud having orchestrated the routing to each site's workload, and IT/NetOps having configured routing on-prem, including propagating the on-prem routes via BGP, an app with components that work independently can now be accessed as one unified interface. An example of a distributed app that runs perfectly in this environment is the demo app, Arcadia Finance. This app has four components:

- Main – Frontend web interface
- API – An app module accessed by Main to support money transfers
- Refer-A-Friend (not used) – An app module accessed by Main to invite friends
- Backend – A DB server that stores money-transfer accounts used by the API module, stock portfolio positions used by the Main module, and email addresses saved by the Refer-A-Friend module

Functionally, the connection flow is as follows:

1. Users access a VIP advertised by an F5 Global Network Regional Edge to the Internet.
2. User traffic is connected to the Main (frontend) app running in AWS via the F5 Global Network.
3. The Main app connects to the API in Azure to load the money-transfer side frame, and then to the Backend DB on-prem to load the stock portfolio balances. These connections transit the private connectivity links created in this article.
4. The API app in Azure connects to the Backend DB on-prem to retrieve money-transfer accounts. This connection also transits the private connectivity links created in this article.

To support this topology and configuration, the apps are divided and run as follows:

AWS
- Frontend (nginx)
- Main (Web)
- Refer-A-Friend

Azure
- API (App)

On-Prem
- Backend (DB)

To make the app reachable to users, use the Distributed Apps feature in the Distributed Cloud console to create one HTTP Load Balancer with the VIP advertised to the Internet and with an origin pool pointing to the Frontend (nginx) app.

Note: This step assumes that you have previously created a fully connected AWS CE Site with connectivity to your VPCs and a Direct Connect circuit, as described in the section above.

Navigate to Multi-Cloud App Connect > Manage > Load Balancers > Origin Pools and create a new origin pool.
In the pool creation menu, at the top, select "JSON" and change the format to YAML, then paste the following example, changing the specific values, such as the namespace, to match your environment:

metadata:
  name: mcn-aws-workload-pool
  namespace: mcn-privatelinks
  labels:
    ves.io/app_type: arcadia
  annotations: {}
  disable: false
spec:
  origin_servers:
    - private_ip:
        ip: 10.100.2.238
        site_locator:
          site:
            tenant: acmecorp-tnxbsial
            namespace: system
            name: soln-eng-aws-dc
            kind: site
        inside_network: {}
      labels: {}
  no_tls: {}
  port: 8000
  same_as_endpoint_port: {}
  loadbalancer_algorithm: LB_OVERRIDE
  endpoint_selection: LOCAL_PREFERRED

With the origin pool created, navigate to Distributed Apps > Manage > Load Balancers > HTTP Load Balancers, and add a new one with the following YAML provided as an example:

metadata:
  name: mcn-arcadia-frontend
  namespace: mcn-privatelinks
  labels:
    ves.io/app_type: arcadia
  annotations: {}
  disable: false
spec:
  domains:
    - mcn-arcadia-frontend.demo.internal
  http:
    dns_volterra_managed: false
    port: 80
  downstream_tls_certificate_expiration_timestamps: []
  advertise_on_public_default_vip: {}
  default_route_pools:
    - pool:
        tenant: acmecorp-tnxbsial
        namespace: mcn-privatelinks
        name: mcn-aws-workload-pool
        kind: origin_pool
      weight: 1
      priority: 1
      endpoint_subsets: {}

Internally verify end-to-end connectivity

Opening a command line shell to the Frontend Web App, a variation of traceroute with the tool hping3 and using curl reveals each hop identified as privately connected, along with connectivity established directly to the destination without an intermediary. The following IP addresses are used to support a TCP connection from the Frontend (Web) app running in AWS to the API app running in Azure:

- 10.100.2.238 (Source): Frontend (Web)
- 172.18.0.1: Container host node
- 192.168.1.6: AWS Direct Connect Gateway
- 192.168.1.5: On-Prem router
- 192.168.1.22: Azure ExpressRoute Circuit endpoint
- 10.101.1.5 (Destination): App (API)

In the CLI output, note each hop and the value in the HTTP response header "Server":

root@1e40062cb314:/etc/nginx# hping3 -ST -p 8080 api
HPING api (eth2 10.101.1.5): S set, 40 headers + 0 data bytes
hop=1 TTL 0 during transit from ip=172.18.0.1 name=UNKNOWN
hop=1 hoprtt=7.7 ms
hop=2 TTL 0 during transit from ip=192.168.1.6 name=UNKNOWN
hop=2 hoprtt=31.4 ms
hop=3 TTL 0 during transit from ip=192.168.1.5 name=UNKNOWN
hop=3 hoprtt=67.3 ms
hop=4 TTL 0 during transit from ip=192.168.1.22 name=UNKNOWN
hop=4 hoprtt=67.2 ms
^C
--- api hping statistic ---
8 packets transmitted, 4 packets received, 50% packet loss
round-trip min/avg/max = 7.7/43.4/67.3 ms

root@1e40062cb314:/etc/nginx# curl -v api:8080
* Rebuilt URL to: api:8080/
* Hostname was NOT found in DNS cache
*   Trying 10.101.1.5...
* Connected to api (10.101.1.5) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: api:8080
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx/1.18.0 (Ubuntu) is not blacklisted
< Server: nginx/1.18.0 (Ubuntu)
< Date: Thu, 12 Jan 2023 18:59:32 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Fri, 11 Nov 2022 03:24:47 GMT
< Connection: keep-alive
< ETag: "636dc07f-264"
< Accept-Ranges: bytes

Conclusion

As more services continue to be deployed to and run in the cloud, dedicated, reliable, and secure private connectivity is increasingly required by Enterprises. Establishing connectivity is not a rudimentary task and requires the assistance of many hands in different departments.
Distributed Cloud private connectivity orchestration helps streamline this process by eliminating many of the steps required in each cloud provider, including no longer requiring dedicated cloud and routing protocol experts just to configure these services manually. To see all of this in action and how all the parts come together, watch the following video, a companion to this article.

Visit the following resources for more information about this feature and other Distributed Cloud services:

- Multi-Cloud Network Connect Product Information
- Direct Connect orchestration for AWS TGW Sites
- Direct Connect orchestration for AWS VPC Sites
- ExpressRoutes orchestration for Azure VNet Sites
- YouTube Video
LTM BGP announce virtual server IP from active and standby vCMP guest

Hi, we are trying to set up an active/standby routed setup using BGP.

F5-1-----F5-2
 |         |
SW-1-----SW-2

BGP is configured between F5-1 and SW-1, and between F5-2 and SW-2. The connection between SW-1 and SW-2 is L2. Everything is configured; however, the virtual server IP addresses are only announced by the active guest.

Config on both devices:

ip prefix-list VS-pl seq 10 permit /31 ge 32
!
route-map export_to_bgp_v4 permit 10
 match ip address prefix-list VS-pl
!
router bgp
 redistribute kernel
 redistribute static
 neighbor route-map export_to_bgp_v4 out

On the standby, the statement is:

 neighbor route-map export_to_bgp_v4 out

I would like both vCMP guests to announce the subnet, so I can use local preference to define the preferred path in my routing table (e.g. if the active is announcing the subnet, use that; otherwise there is an immediate alternative available: the standby). When I do a "sh ip bgp neighbors advertised-routes" on the active guest (F5-1), I see the F5 announcing the subnet. When I issue the same command on the standby (F5-2), I don't see the subnet of my virtual servers. This is because my standby (F5-2) does not have it in its kernel routing table.

Am I doing something wrong here? The reason I ask is that I don't want four BGP sessions, just the square, and right now, when a failover happens, I have downtime while the standby announces the virtual server IPs to its neighbour.

Thanks in advance for any help! With kind regards, Sybren
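Purely as a sketch of the direction described above, not a tested answer: originating the summary prefix with a network statement on both guests, alongside the existing redistribution, is the kind of configuration that would let the standby announce the subnet as well, provided a matching route (connected or static) exists in each unit's routing table and the export route-map also permits that prefix. The prefix, sequence number, and AS number below are placeholders.

ip prefix-list VS-pl seq 20 permit 10.10.20.0/24
!
router bgp 65000
 network 10.10.20.0/24
 redistribute kernel
 redistribute static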
BGP - Conditional announcement of directly connected networks

Hi, is there any functionality to conditionally announce directly connected networks, similar to how you can use a route-map to conditionally announce a default route, or how you can use RHI to conditionally announce kernel routes? My goal is to announce connected routes only on the active unit, but using "redistribute connected" announces the connected networks on all devices, while "redistribute connected route-map conditionalRoutemap" doesn't work (i.e. the networks are not announced anywhere).
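For readers following along, here is a minimal sketch of the filtered-redistribution form being described, with a placeholder prefix and AS number; whether the route-map is actually honored on connected redistribution in this environment, and whether it can be made conditional on the active unit at all, is exactly the open question of this post.

ip prefix-list CONNECTED-NETS seq 10 permit 192.0.2.0/24
!
route-map conditionalRoutemap permit 10
 match ip address prefix-list CONNECTED-NETS
!
router bgp 65001
 redistribute connected route-map conditionalRoutemap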
Active-active and RHI (BGP) failover

Hello, I successfully managed to set up the functionality I am looking for, but I am lacking the speedy failover that is required.

There are two F5 LTMs (VE edition) in an active-active configuration, i.e. two traffic groups, one primary on each LTM. Each LTM is connected with BGP to a separate router: I am running eBGP LTM<->router and iBGP router<->router. Each LTM communicates with its respective router over a link net (the LTM endpoint is a self IP; there are no floating self IPs due to the L3 separation of the two LTMs). Each LTM is situated on a separate L3 segment, and all VIPs are announced successfully via RHI. Traffic groups fail over based on a gateway failsafe that ICMP-monitors an interface on its router.

It all works beautifully in every failure scenario I have tested so far, but failover takes around 10-20 seconds. I have tweaked lots of parameters in the LTMs but none of them improve the situation. Is there a way to come down to less than a five second failover time?

R1-----R2
 |       |
F5      F5
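For reference only, the BGP-session side of the convergence time usually comes down to the keepalive/hold timers and, where available, BFD. A sketch in imish with a placeholder neighbor address and AS number, assuming BFD is supported and enabled on the platform:

router bgp 65001
 neighbor 192.0.2.1 timers 1 3
 neighbor 192.0.2.1 fall-over bfd

The gateway-failsafe monitor that actually triggers the traffic-group failover has its own interval and timeout, and likely contributes its own share of the observed 10-20 seconds.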
BGP stops advertising after upgrade

Hello, we have an LTM VE in an HA cluster. We have defined a couple of route domains (RDs) and have enabled BGP/BFD for these route domains. There is a BGP routing configuration present (imish -r RD). In this configuration peer devices are defined, and via RHI (route health injection) we advertise our virtual servers towards these BGP peers. The current setup is running on version 13.1.1.5 and has been working for a long time without any issue.

As v13 is going end of life, we recently tried to upgrade to v14.1.5.2. The upgrade itself went smoothly: the new version was activated, and all pools and virtual servers were present as before. Initially all looked OK. When we checked our BGP peer (show ip bgp summary) we could see that the peering was established, which again looked OK. But when checking the advertised routes, no routes were being advertised: "sh ip bgp neighbour x.x.x.x advertised-routes" showed no routes present, whereas before we had about 10 virtual servers being announced in v13.

I'm aware of article https://cdn.f5.com/product/bugtracker/ID1031425.html concerning BGP advertising, but that covers the case where you receive a route and then try to advertise it from the F5 (back-to-front advertising). In our case the F5 is the end device, just announcing these virtual servers, so we are not receiving any BGP update and then sending those routes on.

In the end we needed to roll back to v13 by booting from the partition with the old version. Once this was done everything started working again, including BGP. Any idea what the issue could be here? I've pasted our BGP config below; it's quite basic. We use a route-map to block incoming updates (DENY-ALL), and with the route-map "KERNEL2BGP" we control which virtual servers we advertise (each IP we want to announce is listed in this route-map).

router bgp F5-AS
 bgp router-id F5-selfIP
 bgp always-compare-med
 bgp log-neighbor-changes
 bgp graceful-restart restart-time 120
 redistribute kernel route-map KERNEL2BGP
 neighbor peer-IP remote-as "remote-as-nr"
 neighbor peer-IP description "xxx"
 neighbor peer-IP update-source selfip-address
 neighbor peer-IP password "xxx"
 neighbor peer-IP timers 3 9
 neighbor peer-IP fall-over bfd
 neighbor peer-IP next-hop-self
 neighbor peer-IP soft-reconfiguration inbound
 neighbor peer-IP route-map DENY-ALL in
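Not an answer, but a short checklist of views that usually narrow this kind of post-upgrade issue down by comparing what RHI put into the routing table against what BGP is exporting. The peer address is a placeholder.

From imish (per route domain, imish -r <RD>):
 show ip route
 show ip bgp
 show ip bgp neighbors x.x.x.x advertised-routes

From tmsh, to confirm route advertisement is still enabled on the virtual addresses after the upgrade:
 tmsh list ltm virtual-address all-properties

If the VIP host routes are missing as kernel routes in the first command's output, the problem is upstream of BGP (RHI); if they are present but absent from the BGP table, the redistribution or the KERNEL2BGP route-map is the place to look.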
NSX-T and F5 HA using BGP

Hi all, I am working on a lab to get an F5 LTM VE high-availability pair working with an NSX-T T0 router using BGP. The routing domain all works fine: I am able to establish the BGP neighborship, I see the T0 routes, and the T0 sees my routes.

What I am trying to find information on is what the best practice is for BGP-peering the active/standby F5 HA pair with the active/active T0. As is, the NSX-T T0 router sees routes being advertised from both F5s, even the standby unit. I ran into a problem where the standby unit was receiving traffic because it was a valid route in the table of the NSX T0, and to resolve the issue I created a BGP floating self IP and configured it as the next-hop IP address for the NSX T0. This way the active F5 always processes the traffic.

I am wondering if this is the intended way to do such a design, or if there is a better, standardized way to do this. Here is an ASCII representation of the design:

+-------------------------------+
|                               |
|        CAMPUS NETWORK         |
|                               |
+-----+---------------------+---+
      |                     |
     eBGP                  eBGP
      |                     |
+-----+---------------------+---+
|  Active             Active    |
|  +-----+            +-----+   |
|  |EDGE1|   NSX-T    |EDGE2|   |
|  +-+---+     T0     +---+-+   |
|    |.1                .2|     |
+----+----------------------+---+
     |                      |
    eBGP                  eBGP
     |                      |
     |       NEXT-HOP       |
     |       FLOAT-IP       |
     |.3        .5        .4|
   +-+--+               +---+-+
   |F5-1+------HA-------+F5-2 |
   +----+               +-----+
   Active               Passive
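For concreteness, a rough imish sketch of the workaround described above: the VIPs are advertised from both units, but the next hop is rewritten to the floating self IP so the T0 always forwards toward whichever unit currently holds it. Addresses and AS numbers are placeholders loosely based on the diagram, and the route-map would be applied to one or both T0 edge peerings depending on the design.

route-map NEXTHOP-FLOAT permit 10
 set ip next-hop 10.0.0.5
!
router bgp 65001
 neighbor 10.0.0.1 remote-as 65000
 neighbor 10.0.0.1 route-map NEXTHOP-FLOAT out
 neighbor 10.0.0.2 remote-as 65000
 neighbor 10.0.0.2 route-map NEXTHOP-FLOAT out

Because the floating self IP moves with the active traffic group, the T0's lookup for that next hop always resolves to the active unit, which is the behavior described in the post.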