F5 VE on Proxmox
Has anybody successfully run F5 BIG-IP VE on Proxmox?

Proxmox host:
- Operating System: Debian GNU/Linux 10 (buster)
- Kernel: Linux 5.0.18-1-pve
- Architecture: x86-64

F5 VE: Virtual Edition 14.1.2.2 from downloads.f5.com. I tried both the qcow2 and the .ova (SCSI) images, licensing with a trial license obtained from F5, in single-NIC mode. According to https://clouddocs.f5.com/cloud/public/v1/matrix.html, Debian should be a supported distribution, and I am following the instructions at https://techdocs.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-ve-setup-linux-kvm-13-0-0/1.html.

Creating the new VM in Proxmox:
- OS: guest OS Linux, 2.6 kernel, no installation media
- Hard Disk: SCSI bus, VirtIO SCSI controller, NFS storage, QEMU format (qcow2), 100 GB
- CPU: 4 sockets
- Memory: 8 GB
- Network: bridge vmbr0 (Open vSwitch) with the appropriate VLAN tag, VirtIO, no firewall

After the VM is created, I replace the freshly created qcow2 on the remote storage with the downloaded F5 qcow2 image and start the VM (see the import sketch below for a cleaner way to do this step).

I can get a prompt in the Proxmox console and log in with the default root account, but then mcpd keeps restarting, constantly, every few seconds. The logs show permission errors: for some reason F5 complains that it cannot create "/shared/.snapshots_d/" because of a permission problem, even though the permissions on "/shared" look fine. When I create the .snapshots_d folder manually as root, mcpd no longer restarts and the console errors stop.

I then run the config utility to set up the management IP/mask/gateway. As expected in single-NIC mode, the HTTPS port is automatically changed to 8443. I am able to reach the GUI configuration utility and log in as admin. Up to this point everything looks fine.

When I try to license the VM, I can generate the dossier and I receive the generated license file from F5. But when I apply the license to the VM and click Next, it acts as if nothing has happened: the GUI keeps showing the VE as not yet licensed. The LTM log says: err mcpd: License file open fails, Permission denied. "/config/bigip.license" has read permission for all and write permission for tomcat, which are the expected permissions for the license file. Oddly, the content of /config/bigip.license is actually populated with the correct new license, but "Registration Key" in "tmsh show sys hardware" is empty.

There are several other file-system-related warnings and errors in the logs, so I suspect the whole issue is with how F5 VE is accessing the file system on Proxmox, but I don't know what to check or fix further. Is it even possible to run F5 VE on Proxmox (although F5 clearly states it should be)? Thanks.
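One thing worth trying, offered as a sketch rather than a confirmed fix: instead of overwriting the Proxmox-created qcow2 in place on the NFS share, let Proxmox import the F5 image itself so the storage metadata and VM config stay consistent. The VM ID (9001), storage name (nfs-store), and image filename below are placeholders, and the `--boot order=` syntax may differ on older Proxmox releases.

```bash
# Import the downloaded F5 image as a disk owned by VM 9001
qm importdisk 9001 BIGIP-14.1.2.2.qcow2 nfs-store --format qcow2

# Attach the imported disk via the VirtIO SCSI controller and make it the boot device
qm set 9001 --scsi0 nfs-store:vm-9001-disk-0
qm set 9001 --boot order=scsi0
```

If the permission errors persist, it may also be worth ruling out the NFS export itself (for example root_squash on the share) by booting the same image once from local storage.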
Web interface showing "Starting web server" and CLI shows "logger: Re-starting mcpd" every 2 seconds after shutting down 13.1.0.5 Build 0.0.5 Virtual
I have an F5 LTM virtual machine loaded into a GNS3 VM for a lab. The virtual machine is activated, with interfaces/VLANs, a couple of pools/VIPs, and a policy. After I shut down and restart the virtual machine, the web interface greets me with "Starting web server. Please do not reboot your device. The device is starting services required for the communication with the configuration utility. This process takes approximately 1-2 minutes." and it then stays like that for 30+ minutes. Logging in to the console with the root account shows an endless stream of "logger: Re-starting mcpd".

I have tried:
- `touch /service/mcpd/forceload`
- Forcing a file system check on the next system reboot

I should also note that the CLI shows the LTM as INOPERATIVE, so I cannot issue TMSH commands. This happened to me earlier in the week as well, and I simply reinstalled it as a new VM, but I do not want to do that every time I need to shut down my host. The lab is only temporary and for testing, so I just need things to work, but I do not want to have to keep the virtual machine running when I am done for the day just to keep this machine working. I should also mention this is just a trial license. Any help would be much appreciated.
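A rough recovery sketch from the root shell, since tmsh is unavailable while the unit is INOPERATIVE. It assumes the mcpd binary database lives under /var/db/mcpd* as on other TMOS builds; check what is actually there before deleting anything, and treat this as a generic suggestion rather than a guaranteed fix for a trial/GNS3 install.

```bash
# A full /var, /var/log or /shared partition is a common cause of mcpd restart loops
df -h /var /var/log /config /shared

# Watch what mcpd complains about each time it dies
tail -f /var/log/ltm

# Force mcpd to rebuild its binary database from the saved configuration on next boot
touch /service/mcpd/forceload
ls -l /var/db/mcpd*     # confirm these are the mcpd binary DB files before removing them
rm -f /var/db/mcpd*
reboot
```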
F5 BIG-IP LTM VE Trial 12.1.0: "Error 51133, F5 registration key is not compatible with the detected platform"
I am trying to test the F5 BIG-IP LTM VE 12.1.0 trial version. I downloaded the qcow2 image FBIGIP-11.3.0.39.0.qcow2 and received a registration key by email, which leads to the error: Error 51133, F5 registration key is not compatible with the detected platform - This platform, "Z100k", cannot be activated with this registration key "LFXEXUX-FNRBUYU".

I consulted similar issues in this forum, but no one gives a clear answer about the cause. The info below (see the screenshots) seems to suggest that I am using the wrong key and that there is no registration key for 12.1.0.

[http://hpnouri.free.fr/misc/f5/Selection_202.png](http://hpnouri.free.fr/misc/f5/Selection_202.png)
[http://hpnouri.free.fr/misc/f5/Selection_203.png](http://hpnouri.free.fr/misc/f5/Selection_203.png)
[http://hpnouri.free.fr/misc/f5/Selection_204.png](http://hpnouri.free.fr/misc/f5/Selection_204.png)
[http://hpnouri.free.fr/misc/f5/Selection_205.png](http://hpnouri.free.fr/misc/f5/Selection_205.png)

**show /sys hardware**

```
Sys::Hardware
Chassis Information
  Chassis Name
  Chassis Type
  Maximum MAC Count   1
  Registration Key    -

Hardware Version Information
  Name        HD1
  Type        physical-disk
  Model       virtio
  Parameters  --                  --
              Manufacturer        6900
              SerialNumber        virtio-vda
              Size                126.00G
              Firmware Version    1.0
              Media Type          HDD

  Name        HD2
  Type        physical-disk
  Model       virtio
  Parameters  --                  --
              Manufacturer        6900
              SerialNumber        virtio-vdb
              Size                100.00G
              Firmware Version    1.0
              Media Type          HDD

  Name        cpus
  Type        base-board
  Model       QEMU Virtual CPU version 2.1.2
  Parameters  --                  --
              cache size          4096 KB
              cores               2
              cpu MHz             3292.520
              cpu sockets         0
              cpu stepping        3

Platform
  Name            BIG-IP Virtual Edition
  BIOS Revision
  Base MAC        00:b2:d4:7f:4d:00

System Information
  Type                       Z100
  Chassis Serial             00000000-0000-0000-000000000000
  Level 200/400 Part
  Switchboard Serial
  Switchboard Part Revision
  Host Board Serial
  Host Board Part Revision
```
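For what it's worth, the post itself mixes a 12.1.0 trial key with an 11.3.0 qcow2 image, and the dossier reports platform "Z100k" (the identifier the older KVM builds use) while the trial key was presumably issued for a 12.1.0 VE image, which is exactly the kind of mismatch that produces error 51133. A quick hedged check from the CLI before retrying activation; the registration key below is a placeholder, and SOAPLicenseClient is the usual CLI activation tool, assuming it is present in this build:

```bash
# Confirm which TMOS version and platform the imported image actually reports
tmsh show sys version
tmsh show sys hardware | grep -A5 "System Information"

# Retry activation from the CLI to get the full error text from the license server
SOAPLicenseClient --basekey XXXXX-XXXXX-XXXXX-XXXXX-XXXXXXX
```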
Why so high Ping Latencies in VE LTM?
Hello, I'm evaluating a VE LTM trial (25 Mbps), BIG-IP 12.1.1 Build 2.0.204 Hotfix HF2, running on Hyper-V on Windows Server 2012 R2. When I run ping from the Hyper-V console window of the LTM VM, I measure the following times:

- ping -I 172.27.50.1 172.27.50.151 = **7 ms .. 30 ms** (from the LTM internal static self IP to another VM attached to the same virtual switch)
- ping -I 172.27.50.1 172.27.50.161 = **7 ms .. 30 ms** (from the LTM internal static self IP to another VM reached through the external network, via a physical switch)
- ping -I 172.27.50.1 172.27.51.1 < 1 ms (from the LTM internal static self IP to the LTM external static self IP)
- ping -I 172.27.50.1 172.27.52.1 < 1 ms (from the LTM internal static self IP to the LTM management address)
- ping -I 172.27.50.1 172.27.51.51 = **2 ms .. 4 ms** (from the LTM internal static self IP to any of the configured LTM virtual servers)

Pings between the two devices over the HA VLAN are even higher: tens of milliseconds!

I reserved what I judge to be the recommended amounts of vCPU and memory for the LTM VE, and I have also disabled Virtual Machine Queues on the physical NICs and on the LTM vNICs. Does anyone have suggestions for configurations to check or change, or troubleshooting procedures to reveal the cause of the high latencies above? Many thanks!
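Since VMQ is the usual suspect for exactly this latency pattern on Server 2012 R2, it may be worth double-checking from PowerShell that it is really off at both levels. This is only a verification sketch; the VM name "LTM-VE" is a placeholder.

```powershell
# VMQ state on the physical adapters backing the virtual switch
Get-NetAdapterVmq | Format-Table Name, Enabled

# Per-vNIC VMQ weight on the BIG-IP VM (a non-zero weight re-enables queueing)
Get-VMNetworkAdapter -VMName "LTM-VE" | Format-Table Name, VmqWeight

# Zero the weight on every vNIC of the VM if it is not already
Set-VMNetworkAdapter -VMName "LTM-VE" -VmqWeight 0
```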
Default Route into OSPF
I am unable to advertise a default route (0.0.0.0/0) from the F5 into OSPF. I have an F5 VE running 12.1.1 on KVM/QEMU. IMI is running and I have neighbor relationships with the appropriate routers. All other routes that I test are added without issue, but I do not see the 0.0.0.0/0 route being advertised into OSPF.

My ZebOS config:

```
[root@F5-INTERNET-01:Active:In Sync] config # cat zebos/rd0/ZebOS.conf
!
no service password-encryption
!
interface lo
!
interface tmm
!
interface Core
 ip ospf priority 0
 ip ospf mtu-ignore
!
interface Internet
!
router ospf
 ospf router-id 10.246.3.250
 redistribute kernel
 passive-interface Internet
 network 10.246.3.0 0.0.0.255 area 0.0.0.0
!
line con 0
 login
line vty 0 39
 login
!
end
```

Here is the LTM configuration:

```
ltm virtual /Common/Test {
    destination /Common/0.0.0.0:0
    ip-protocol tcp
    mask any
    profiles {
        /Common/fastL4 { }
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port disabled
}
ltm virtual /Common/test2 {
    destination /Common/10.10.10.1:80
    ip-protocol tcp
    mask 255.255.255.255
    profiles {
        /Common/tcp { }
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
}
ltm virtual /Common/test3 {
    destination /Common/20.20.20.0:0
    ip-protocol tcp
    mask 255.255.255.0
    profiles {
        /Common/tcp { }
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port disabled
}
ltm virtual-address /Common/0.0.0.0 {
    address any
    arp disabled
    icmp-echo disabled
    mask any
    route-advertisement enabled
    server-scope none
    traffic-group /Common/traffic-group-1
}
ltm virtual-address /Common/10.10.10.1 {
    address 10.10.10.1
    arp enabled
    icmp-echo enabled
    mask 255.255.255.255
    route-advertisement enabled
    server-scope none
    traffic-group /Common/traffic-group-1
}
ltm virtual-address /Common/20.20.20.0 {
    address 20.20.20.0
    arp disabled
    icmp-echo disabled
    mask 255.255.255.0
    route-advertisement enabled
    server-scope none
    traffic-group /Common/traffic-group-1
}
```

What is the issue?
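One suggestion, hedged because it depends on the exact ZebOS build: in Cisco-style OSPF implementations (which ZebOS follows), a 0.0.0.0/0 route is never injected by `redistribute`; it has to be originated explicitly under `router ospf`. A minimal sketch from the dynamic-routing shell:

```
imish
conf t
router ospf
 default-information originate always
end
write
```

Without the `always` keyword, the default is only advertised while a default route actually exists in the routing table; with it, the default is advertised unconditionally.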
BigIP VE - Multiple VLANs on single partition with single interface
Hi,

We currently have a BIG-IP VE HA pair with 3 partitions and 5 interfaces towards the VMware ESXi host in total. A need has come up to add 3 more interfaces to the BIG-IP VE, but we need to use the current VLANs attached to the vNICs. The BIG-IPs connect to a Google Anthos solution, and we were wondering whether we can point a single VLAN used in more than one partition at the same vNIC interface on VMware. The options we see are (a tmsh sketch of the first option follows below):

1. Two partitions using the same network interface.
2. Two partitions using different network interfaces connected to the same VLAN (which would require adding new network interfaces to the F5 VMs and mapping them to the same VMware port group).
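A minimal tmsh sketch of option 1, assuming VLANs stay in /Common (where they can be referenced from any partition) and that both partitions use the default route domain. Interface 1.3, tag 210, the partition names, and the addresses are all made up for illustration:

```bash
# One tagged VLAN in /Common on the shared vNIC
tmsh create net vlan anthos_vlan interfaces add { 1.3 { tagged } } tag 210

# Self IPs in two different partitions referencing the same /Common VLAN
tmsh -c "cd /PartitionA; create net self anthos_self_a address 10.20.10.5/24 vlan /Common/anthos_vlan allow-service default"
tmsh -c "cd /PartitionB; create net self anthos_self_b address 10.20.10.6/24 vlan /Common/anthos_vlan allow-service default"
```

Whether this is preferable to adding vNICs mostly comes down to whether the partitions need separate route domains or wire-level traffic isolation.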
BIG-IP VE - qemu on an Apple Silicon Macbook
Hey all, I was wondering if anyone has managed to spin up a BIG-IP VE on an Apple Silicon MacBook using qemu? I've been using this guide as a reference point: https://clouddocs.f5.com/cloud/public/v1/kvm/kvm_setup.html, but it is obviously written from the perspective of a native x86 chipset on the host. I've tried playing around with what I believe are the relevant settings, but the guest crashes virt-manager every time I try to launch it. Don't suppose anyone has been through this pain, come out the other side successfully, and could lend a hand? Thanks!
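Since BIG-IP VE only ships x86_64 images, the only way to run it on an arm64 host is full emulation (TCG) with qemu-system-x86_64, which is very slow but avoids the acceleration settings that virt-manager profiles assume. An untested sketch with a single user-mode NIC (so the VE should come up in single-NIC mode with its GUI on 8443); the image filename and the CPU/memory sizes are placeholders:

```bash
# Full x86_64 emulation on an arm64 host - no hardware acceleration is possible here
qemu-system-x86_64 \
  -machine q35 -cpu qemu64 \
  -smp 2 -m 4096 \
  -drive file=BIGIP-VE.qcow2,if=virtio,format=qcow2 \
  -netdev user,id=net0,hostfwd=tcp::8443-:8443,hostfwd=tcp::2222-:22 \
  -device virtio-net-pci,netdev=net0
```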
F5 virtual edition One Slot issue
Hello all,

I am playing in a lab environment with the "ALL_1SLOT" image, version 15.1.3, on Hyper-V, but it comes up with the error "IDE controller in use", while the normal "ALL" edition has no such issue. I removed the DVD/CD drive from IDE Controller 1, since the 1SLOT edition does not offer in-place upgrades by default, but I still get the same error. Is "ALL_1SLOT" only for BIG-IQ and cloud workloads? https://support.f5.com/csp/article/K14946
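A small PowerShell sketch to confirm nothing is left on either IDE controller before retrying; the VM name "BIGIP-1SLOT" is a placeholder, and this only checks the attachment side, not whether the 1SLOT image is formally supported on Hyper-V.

```powershell
# Everything attached to the VM's IDE controllers (0 and 1)
Get-VMHardDiskDrive -VMName "BIGIP-1SLOT" | Format-Table ControllerType, ControllerNumber, ControllerLocation, Path
Get-VMDvdDrive      -VMName "BIGIP-1SLOT" | Format-Table ControllerNumber, ControllerLocation, Path

# Remove any remaining DVD drives
Get-VMDvdDrive -VMName "BIGIP-1SLOT" | Remove-VMDvdDrive
```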
BIG IP VE on a single network
Good afternoon. I am migrating an F5 device from physical to virtual, and when I define the internal network it says it overlaps with the management network. There is only a single network in this DMZ environment, and I wanted to know whether there is a way to use just the internal interface with the IP addressing and bypass the mgmt interface altogether. Our physical LTMs do not have a management address either; they only have an internal IP assignment. Any help would be appreciated.
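One commonly used workaround, sketched here with placeholder addressing (192.0.2.0/24 standing in for the DMZ subnet): leave the management port unused, give it a throwaway address in an unused range via the console `config` utility so it no longer overlaps, and manage the unit over a self IP by opening 443/22 in the self IP's port lockdown list.

```bash
# Data-plane side only; the management interface keeps a dummy, non-overlapping
# address set from the console config utility and is never cabled or used
tmsh create net vlan internal interfaces add { 1.1 { untagged } }
tmsh create net self dmz_self address 192.0.2.10/24 vlan internal \
    allow-service replace-all-with { tcp:443 tcp:22 }
tmsh create net route default gw 192.0.2.1
tmsh save sys config
```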
Floating IPs and VIPs stop responding after a VMotion
I have many active/standby pairs of VEs hosted on VMware ESXi 6.0, 6.5, and 6.7. Our organization puts an insane amount of weight on availability. We have noticed that it is significantly less impactful to our various applications to vMotion an active F5 than to fail over to the peer and vMotion it in the standby state. There is a catch, though: the F5 does not initiate outbound traffic on subnets that are dedicated to VIPs, so the CAM table on the upstream switch does not get updated and traffic is black-holed. This is not an issue for the majority of our VIP subnets, because there is always some traffic coming and going on them, but in some environments where a VIP subnet is relatively quiet, traffic is black-holed until the entry on the switch times out.

I can fix this for self IPs by creating a pool containing the SVI with an ICMP monitor. I have not found a way to fix this for floating IPs and VIPs short of forcing a failover to trigger a GARP. I could create a forwarding VIP in each of the subnets and put VMs behind them to constantly send pings, but that would be a logistical nightmare. Any thoughts?
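Two hedged thoughts rather than a confirmed fix. First, the upstream switch learns MAC addresses, not IPs, and unless MAC masquerade changes the picture, VIPs and floating self IPs on a VLAN generally answer from the same TMM MAC as the static self IP, so where a quiet VIP subnet shares a VLAN with a monitored self IP, the existing pool/ICMP trick may already keep that CAM entry alive. Second, for VLANs where it does not, an untested cron sketch on the active unit that sources one ping per floating self IP toward its gateway; the addresses below are placeholders, and virtual addresses generally cannot be used as a ping source from the host shell, so this covers floating self IPs only.

```bash
#!/bin/bash
# refresh-cam.sh - run from cron on the active unit; sources one ping from each
# floating self IP so the upstream switch keeps learning the VE's MAC address.
GATEWAY=172.27.50.254                   # placeholder next hop on the quiet VLAN
for SRC in 172.27.50.3 172.27.51.3; do  # placeholder floating self IPs
    ping -c 1 -W 1 -I "$SRC" "$GATEWAY" > /dev/null 2>&1
done
```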