"01020066:3: The requested Node (/Common/fqdn1) already exists in partition Common."
Hello, when trying to POST a new application into a tenant that reuses an FQDN already shared across tenants in the same cluster, the AS3 POST response is: "01020066:3: The requested Node (/Common/fqdn1) already exists in partition Common." The message says fqdn1 already exists in /Common, which is true: it can be seen in other pools as a member. If I try a different FQDN (fqdn2) instead, it works fine with no issues. Any suggestions on how to find the root cause and fix this without deleting fqdn1 from everywhere and redeploying it? Thank you. AS3 version running: 3.56.0-10
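One avenue worth checking for this error is the `shareNodes` pool-member property in AS3. By default AS3 creates nodes in the tenant's own partition; when a node with the same FQDN already exists in /Common (created by another tenant or outside AS3), the create attempt collides and fails with 01020066:3. A minimal sketch of a declaration fragment — the hostname and port are illustrative, not taken from your deployment:

```json
{
  "web_pool": {
    "class": "Pool",
    "members": [
      {
        "servicePort": 443,
        "addressDiscovery": "fqdn",
        "hostname": "fqdn1.example.com",
        "shareNodes": true
      }
    ]
  }
}
```

With `shareNodes: true`, AS3 places (or reuses) the node in /Common rather than trying to create a tenant-local duplicate. Comparing how the already-working tenants declare this member against the failing declaration may reveal the root cause without touching fqdn1 anywhere else.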
Need step-by-step guidance for migrating BIG-IP i2800 WAF to rSeries (UCS restore vs clean build)

Hello DevCentral Community, we are planning a hardware refresh migration from a legacy BIG-IP i2800 running WAF/ASM to a new rSeries platform and would like to follow F5 recommended best practices. Could you please advise on the step-by-step process for this migration, specifically around:

- Whether a UCS restore is recommended versus building the configuration fresh
- BIG-IP version compatibility considerations during the migration
- Interface/VLAN mapping differences between iSeries and rSeries hardware
- Best approach to migrate WAF/ASM policies and tuning after migration
- Common issues or lessons learned from real-world cutovers

Current environment:

- BIG-IP model: i2800
- BIG-IP version: 17.1.3
- WAF module: ASM / Advanced WAF
- Deployment: Active/Active

Thank you.
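If the UCS route is chosen, F5 documents a `platform-migrate` option on the UCS load that skips platform-specific settings (licensing, interface mappings) when moving between hardware families. A rough sketch of the commands involved — file paths are placeholders, and the exact procedure should be confirmed against F5's UCS migration articles for your target version:

```
# On the source i2800: save the running configuration to a UCS archive
tmsh save /sys ucs /var/tmp/i2800-backup.ucs

# Copy the archive to the target rSeries tenant out of band, then restore with
# platform-migrate so platform-specific items are excluded from the load
tmsh load /sys ucs /var/tmp/i2800-backup.ucs platform-migrate

# Afterwards, re-map VLAN-to-interface assignments manually, since rSeries
# tenant interfaces do not correspond to iSeries front-panel ports
```

The VLAN/interface re-mapping step is usually where cross-platform restores need the most attention, which is also why some teams prefer a clean build with config merged in sections.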
BIG IP LTM BEST PRACTICES

I want to do an F5 deployment to balance traffic to multiple web servers for an application that will be accessed by 500k users, and I have several questions.

As an architecture, I have a single-site VXLAN fabric where the F5 pair (HA Active-Passive) and the firewall pair (HA Active-Passive) are attached to the border/service leafs (eBGP peering between firewall and border leaf, static routing between F5 and border leaf). The interface to the ISP is connected to the firewall (I think it would have been recommended to attach it to the border leafs), where the first VIP is configured, translating the public IP to an IP in the first-arm VLAN (client-side transit to border), specifically where I created the VIP on the F5.

1) Is the design correct up to this point? Can the subnet where the VIPs reside on the F5 be different from the subnet used for the client-side transit, and is it recommended for it to be different?

2) Is it recommended for the second-arm VLAN (server side) to be the same as the web server VLAN, or is it better for the web server subnet to be a different VLAN, with routing between the two networks?

3) Is it recommended for the source NAT pool to be in the same subnet as the second-arm VLAN (server side), or should it be different? In either approach I would still need to perform source NAT, and I also need to implement SSL offloading and WAF (Web Application Firewall).

I am very familiar with the routing aspects of any deployment model. What I would like to know is the best architectural approach, or how you would design such a deployment. Thank you very much; any advice would be greatly appreciated.
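To make question 3 concrete, here is a hedged tmsh sketch of the objects involved: a pool, a SNAT pool drawn from the server-side subnet (one of the two placements being asked about), and an HTTPS virtual server doing SSL offload. All names and addresses are illustrative, not taken from the fabric described above:

```
# Server-side pool of web servers
tmsh create ltm pool web_pool members add { 10.20.0.11:80 10.20.0.12:80 } monitor http

# SNAT pool with addresses from the server-side (second-arm) subnet
tmsh create ltm snatpool app_snat members add { 10.20.0.250 10.20.0.251 }

# HTTPS virtual server: client-side SSL termination, SNAT via the pool above
tmsh create ltm virtual app_vs destination 10.10.0.100:443 ip-protocol tcp \
    pool web_pool profiles add { http clientssl } \
    source-address-translation { type snat pool app_snat }
```

Keeping the SNAT addresses in the server-side subnet means the servers reply directly to the F5 without extra routing; putting them in a separate subnet works too, provided the server VLAN routes that subnet back to the F5.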
Which virtual server will be hit?

Hi, we created the following forwarding virtual server for internet traffic on LTM:

virtual server: internet-vs
source IP: 192.12.0.1 (downstream firewall external interface IP)
destination: 0.0.0.0/0

For the return traffic of this VS, do we need to create another virtual server? If we create a new forwarding virtual server like the one below, will the return traffic of "internet-vs" hit "Test-VS"?

virtual server: Test-VS
source: 0.0.0.0/0
destination: 192.12.0.1

Can someone please advise? Thanks in advance!
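For reference, the two forwarding (IP) virtual servers described can be sketched in tmsh roughly as follows; addresses come from the question, but treat the exact syntax as an illustrative sketch rather than a verified config:

```
# Outbound forwarding VS: traffic from the firewall external IP to anywhere
tmsh create ltm virtual internet-vs destination 0.0.0.0:any mask any \
    source 192.12.0.1/32 ip-forward profiles add { fastL4 }

# Candidate return-path forwarding VS from the question
tmsh create ltm virtual Test-VS destination 192.12.0.1:any \
    source 0.0.0.0/0 ip-forward profiles add { fastL4 }
```

Note that whether a separate return-path VS is needed at all often depends on the Auto Last Hop setting, which (when enabled) returns reply traffic to the L2 hop the connection arrived from without consulting a second virtual server.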
Questions on r-Series 5800s

Hello, trying to bring up a new r5800.

1. What is the recommendation for port channels? Can we create a port channel between two different pipelines? For example, ports 3.0, 4.0, 5.0, and 6.0 are in pipeline 1 and ports 7.0, 8.0, 9.0, and 10.0 are in pipeline 2; can we create a LAG with 3.0 and 7.0? We plan on using a 10G link for HA and were planning to convert 6.0 to 10G and connect it between the two r-series units.

2. When deploying a tenant, should we just leave Provisioning as "Recommended" instead of "Advanced", with 18 vCPUs, if we plan on having just one tenant? We are not sure how much virtual disk size is recommended; any recommendation for virtual disk size?

3. If we want to add another tenant later, is it best to leave the tenant at 14 vCPUs, or can we change it later, and what is the impact? Just a restart of the existing tenant?
F5 in AZ

We are building F5 BIG-IP in Azure. Our long-term intention is Active-Active or Active-Standby HA, but to kick-start we are deploying a single standalone instance first. The F5 is not exposed to the internet directly: a Palo Alto firewall performs DNAT to translate the public IP to a private IP, and that private IP is the F5 VIP. We are using an Azure Basic Load Balancer to send traffic to the F5.

Our example external subnet is 10.1.1.0/24, and the IPs are configured as follows on the Azure NIC and the F5: the primary self IP is 10.1.1.10, the first secondary IP is 10.1.1.11 (VIP for App1), the second secondary IP is 10.1.1.12 (VIP for App2), and so on.

My questions are as follows. First, in the ALB backend pool, should we use the primary self IP 10.1.1.10 or the secondary VIP IPs 10.1.1.11 and 10.1.1.12? If we use secondary IPs, do we need a separate ALB for each VIP? We have seen some older videos suggesting secondary IPs should be used in the backend pool, but we want to confirm the correct approach.

Second, when we expand to HA in the future by adding a second F5 device, can both devices be configured with the same VIP IPs, such as 10.1.1.11 and 10.1.1.12? Since Azure does not support floating IPs moving between VMs, we understand ALB health probes handle failover; in that case, should the ALB backend pool contain the primary self IPs of both devices? Please advise on the correct design for both standalone and HA scenarios.
F5 DNS Express

Hi all, I have an issue with F5 DNS Express. I configured it as a secondary DNS server, with AD as the primary DNS server. The first issue is that when F5 receives a NOTIFY from AD, it does not send a zone transfer request back to AD. The second issue is that when I configure the F5 listener as the preferred DNS server on a user machine, that machine cannot join the domain, even though all the required records are available on F5. If you have any recommendations, please advise. Br,
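For comparison with your setup, the DNS Express configuration usually involves two objects: a nameserver pointing at the AD primary and a zone sourced from it. A rough sketch (names and IPs are illustrative, and the exact property names should be checked against the tmsh reference for your version); note that the AD zone must also permit zone transfers to the BIG-IP self IP, and TSIG keys must match on both sides if used:

```
# Nameserver object pointing at the AD primary DNS server
tmsh create ltm dns nameserver ad_primary { address 10.0.0.5 }

# DNS Express zone sourced from that nameserver, with transfers enabled
tmsh create ltm dns zone corp.example.com { dns-express-server ad_primary dns-express-enabled yes }
```

For the domain-join issue, one thing to verify is that the transferred zone actually includes the SRV records (_ldap, _kerberos under _tcp/_udp) that domain join depends on, and that the listener answers for the _msdcs subdomain as well if AD hosts it as a separate zone.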
F5 i-series guests to r-series tenants migration

Hi all, I have two i-series 11900 units with four guests on each: 1 LTM, 1 GTM, 1 WAF, and 1 APM. There is HA between the guests. I am working on a migration plan to r-series 10900 and have two options.

Option 1, HA method: replace the i-series device that hosts the standby guests with the r-series device, then establish HA between the active i-series and the r-series and sync the configuration. Then make the r-series active, replace the newly standby i-series device with the second r-series, and establish HA with the first r-series. This is a lengthy approach, but it has the advantage of fast rollback in case I face any issue, and there will be no changes to the management IPs.

Option 2, UCS method: create a replica of the existing guests on the r-series tenants using the UCS files from the i-series guests. This setup will be isolated from the production network. During the maintenance window, I will disconnect the cables from the i-series and connect them to the r-series boxes. With this approach I need to use different management IPs while building the replica setup, and during the migration I will change the management IPs to the ones that were on the i-series.

Note that the existing devices are connected to Cisco ACI. Let me hear your thoughts and suggestions.