deployment
Migrate partitions 2600i to 2600r
Hi everyone, I need to migrate 2600i series devices to the new R series, specifically to the 2600R. The 2600i devices have partitions in their configuration, and after reviewing the tenants that can be created on the R series, it seems that only one tenant is supported on the 2600R. My question is: how can the different partitions from a 2600i series device be migrated to a 2600R device?
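
In case it frames the question: administrative partitions are a BIG-IP construct that lives inside a tenant, so a single 2600R tenant should still be able to hold all of the 2600i's partitions. A minimal sketch of a UCS-based move, assuming source and target run compatible versions (file names are placeholders):

    # On the 2600i: save the full configuration, all partitions included
    tmsh save sys ucs /var/local/ucs/i2600_full.ucs

    # On the 2600R tenant: restore it; platform-migrate skips
    # platform-specific settings such as management IP and licensing
    tmsh load sys ucs /var/local/ucs/i2600_full.ucs platform-migrate no-license

    # Partitions can also be recreated by hand if rebuilding manually
    tmsh create auth partition Partition_A
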
CPU/vCPU sizing on rseries

Hi, I am working on sizing an F5 rSeries platform for a telecom-scale deployment and need expert guidance on CPU/vCPU allocation and platform selection.

Traffic profile:
- L3/L4 traffic (AFM use case): ~100 Gbps
- L7 traffic (AWAF use case): ~50 Gbps

Questions:
- What is the recommended approach to estimating the required vCPUs for such mixed workloads?
- Is the r10900 sufficient, or should we consider multiple appliances or VELOS?
- Any best practices for tenant sizing and separation for AFM + AWAF?

Appreciate any real-world sizing guidance or reference architectures.
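
If it helps make the tenant-separation question concrete, below is a rough F5OS-A CLI sketch of deploying a dedicated tenant per workload (one for AFM, one for AWAF) with its own cores. Tenant name, image file, core count, addresses, and VLAN IDs are all placeholders, and the exact keywords should be verified against your F5OS release:

    # From the F5OS appliance CLI, in config mode
    tenants tenant afm-tenant config image BIGIP-17.1.x.ALL-F5OS.qcow2.zip.bundle
    tenants tenant afm-tenant config nodes [ 1 ]
    tenants tenant afm-tenant config vcpu-cores-per-node 18
    tenants tenant afm-tenant config mgmt-ip 10.1.1.10 prefix-length 24 gateway 10.1.1.1
    tenants tenant afm-tenant config vlans [ 100 200 ]
    tenants tenant afm-tenant config running-state deployed
    commit
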
where to download older F5 OS versions

Good day to the F5 DevCentral Community,

Is there any (F5) website from which we may download older F5 OS versions, i.e. versions older than the ones currently available on the F5 Downloads webpage?

I'm specifically looking for where to download these F5 OS versions:
1. 12.1.2 Hotfix HF2 2.0.276
2. 12.1.4 Final 0.0.
Traffic Policy forwarding behavior when pool is down

Hello folks. Would someone be able to explain the behavior of a traffic policy when a pool is down?

The scenario: I want to create a single virtual server that hosts multiple websites, each hosted on a separate cluster of virtual machines. The hosts in the cluster for example-1.com are in pool_example1, and the hosts in the cluster for example-2.com are in pool_example2. I create a traffic policy that checks the HTTP Host header for the FQDN and steers example-1.com to pool_example1 and example-2.com to pool_example2. I also create a catch-all rule that matches all other traffic, with the action to reset the connection. A default pool has been set on the virtual server, and there is a health monitor on pool_example1 monitoring its members.

If all of the members in pool_example1 are down and the health monitor marks the pool down, what should I expect when navigating to example-1.com? Is the traffic policy aware that the pool is down? Would I get the default pool? Would I hit the catch-all rule in the traffic policy and get a RST? Or would something else happen? Thanks!
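
For reference, a minimal sketch of the policy described above in bigip.conf-style notation (pool and rule names follow the post; the reset action shown is my recollection of the tmsh form of the GUI's "Reset Traffic" action, so verify it on your version):

    ltm policy host_steering {
        controls { forwarding }
        requires { http }
        strategy first-match
        rules {
            example1 {
                conditions { 0 { http-host host values { example-1.com } } }
                actions { 0 { forward select pool pool_example1 } }
                ordinal 1
            }
            example2 {
                conditions { 0 { http-host host values { example-2.com } } }
                actions { 0 { forward select pool pool_example2 } }
                ordinal 2
            }
            catch_all {
                # No condition: matches everything the rules above did not
                actions { 0 { shutdown connection } }
                ordinal 3
            }
        }
    }
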
F5 VELOS Backplane Inter-Tenants Communication

Hello, I'm looking for official documentation confirming whether inter-tenant communication between different tenants within the same chassis partition over the VELOS backplane is supported without requiring an external routing device. I haven't been able to find clear guidance on this, so any assistance would be helpful. Regards
migrate from i series to r series. Cluster LTM-GTM

We currently need to carry out a migration of six 2600i devices to six new 2600r models. There are three Active-Standby clusters at the LTM level. In addition, four of these devices form a GTM-DNS cluster. I would like to know whether you have any specific procedure for this type of migration. We would also like your recommendation on whether to migrate all four devices within the same maintenance window, or to migrate them in pairs, allowing two i series devices and two r series devices to coexist in the same DNS cluster.

Additional information: the source and target version will be the same, 17.5.1.3, and we will use Journeys for the configuration conversion.

On another note, would you keep the management IP addresses of the i series on the r series chassis or tenants, or would you request new IP addresses for all of them? And what steps would you follow during the migration window?
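
In case it helps weigh the pairwise option: the usual utilities for joining a freshly built unit into an existing DNS sync group are gtm_add and bigip_add, which is what makes a temporary mixed i series / r series sync group feasible. A hedged sketch only; the IP addresses are placeholders and the exact invocation should be checked against the 17.5.1.3 documentation:

    # On each new r2600 DNS tenant: pull the existing sync group's
    # configuration from a surviving i2600 member
    gtm_add 203.0.113.10

    # From a DNS unit: exchange certificates with an LTM-only tenant
    # so its virtual servers can be discovered and monitored
    bigip_add 203.0.113.21
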
monitor/healthcheck issue with envoy gateway

Hello everyone, I'm experiencing an issue with one of our implementations. The server team has deployed an Envoy Gateway using a MetalLB IP (Kubernetes environment). We have three VIPs, on ports 10001, 10002, and 443, all configured as passthrough with no special settings or iRules. From what I understand, the pool member is a MetalLB IP that forwards traffic to three backend nodes. I configured standard TCP health monitors (basic 3-way handshake), but for some reason the pool member is marked down. I've done extensive troubleshooting (including tcpdump captures), and I can confirm that the 3-way handshake completes successfully; however, immediately after that, the load balancer sends a TCP reset. This suggests that communication works in both directions, so the network itself seems fine. I also tried creating a custom TCP monitor using an alias address and port (based on the three nodes provided), but the monitor still fails. Interestingly, when I configure those three servers directly as pool members, the pool status is up. Has anyone encountered a similar issue, or any ideas on what else I could check?
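
For comparison, a minimal tmsh version of the custom monitor described above, with an alias destination pointing at one of the backend nodes (the IP, port, and object names are placeholders):

    # Probe 10.10.10.11:10001 regardless of the pool member's own address
    tmsh create ltm monitor tcp envoy_tcp_alias defaults-from tcp destination 10.10.10.11:10001 interval 5 timeout 16
    tmsh modify ltm pool pool_envoy monitor envoy_tcp_alias

One observation: a tcp monitor with no recv string marks the member up once the handshake completes, and the BIG-IP tears the probe connection down with a RST by design, so a RST after a successful handshake is not by itself the failure signature.
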
Multiple DNS resolvers for root forward zone "."

I have a requirement for two sets of LTM services with different DNS requirements. The primary red secure service uses an internal DNS service, but traffic can also be routed to the Internet. The second blue service uses a partner Internet gateway. This all worked with both services using the blue DNS resolver until recently, when one of the cloud apps needed to use microsoft.com services. Because the blue gateway uses public DNS to validate FQDNs, and Microsoft rolls the public IP addresses in its DNS responses frequently (roughly every 5 minutes), we think the blue gateway is caching different IP addresses than the red DNS server, so when the blue gateway validates the destination IP it can sometimes drop traffic.
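
If the goal is one resolver per service, LTM does allow multiple net dns-resolver objects, each with its own root (".") forward zone pointing at a different nameserver; each resolver is then attached to the relevant profile or cache configuration. A minimal sketch, with resolver names and nameserver IPs as placeholders:

    # Red services resolve via the internal DNS server
    tmsh create net dns-resolver red_resolver forward-zones add { . { nameservers add { 10.1.1.53:53 } } }

    # Blue services resolve via the partner gateway's DNS
    tmsh create net dns-resolver blue_resolver forward-zones add { . { nameservers add { 198.51.100.53:53 } } }
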