Forum Discussion
default / static routes on f5 virtual edition
Hello.
I have just deployed an F5 Virtual Edition in my KVM setup. My KVM network setup is routed. The installation went fine and I can reach the management web interface. I also configured a self IP. My problem is that none of the nodes I try to configure ever turn green. As the health check I chose simple ICMP, but the icon stays blue.
Here is my network config:
kvmhost <-> virbr1 (191.255.255.1) <-> virtual f5
ip r on the virtual F5 shows:
[root@bigip:Active:Standalone] ~ ip r
191.255.255.1 dev eth0 scope link
127.1.1.0/24 dev tmm0 proto kernel scope link src 127.1.1.1
127.7.0.0/16 via 127.1.1.254 dev tmm0
default via 191.255.255.1 dev eth0
[root@bigip:Active:Standalone] ~
Network configuration in general works:
[root@bigip:Active:Standalone] ~ ping -c 5 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=14.9 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=14.9 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=15.0 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=56 time=14.9 ms
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4018ms
rtt min/avg/max/mdev = 14.947/15.001/15.108/0.164 ms
[root@bigip:Active:Standalone] ~
Maybe the rather unusual network configuration is the problem?
thanks and cheers
15 Replies
- Hannes_Rapp_162
Nacreous
In the case of Nodes, blue is correct, because you do not need to monitor Nodes; you need to monitor Pool Members. Configure an LTM pool, add a few members to it, and configure a health check. After the first health check completes, you will see the pool members as either green or red. Blue means the health check is not configured, or that the first health-check cycle has not completed yet.
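As an illustrative sketch only (the pool name, member address, and port below are placeholders, not values from this thread), creating a monitored pool from TMSH looks roughly like this:
tmsh create ltm pool example_pool members add { 192.0.2.10:80 } monitor http
tmsh show ltm pool example_pool members
The second command shows the per-member monitor state, which should move from checking to available (green) or offline (red) once the first health-check cycle finishes.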
- Hannes_Rapp_162
Nacreous
Check out "Table 4.5 Explanation of status icons for pool members": https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm-concepts-11-2-0/ltm_pools.html
- Thomas_Stein_11
Nimbostratus
Hello Hannes.
Thanks for your answer. I have now configured a pool containing one node. The status is unchanged; pool and node both remain blue. This is the message I see in the GUI.
For the pool:
Unknown (Enabled) - The children pool member(s) either don't have service checking enabled, or service check results are not available yet
And for the node:
Unknown (Enabled) - Node address service checking is enabled, but result is not available yet 2015-11-09 10:40:48
The health check is ICMP.
In the tmm log I see a lot of:
<13> Nov 9 12:39:56 bigip notice MCP connection expired early in startup; retrying
Thanks for your help so far.
- Hannes_Rapp
Nimbostratus
For further troubleshooting, please provide outputs for the three commands below:
tmsh list ltm pool MyPoolName
tmsh show ltm pool MyPoolName
tmsh list ltm node My.No.de.IP
- Thomas_Stein_11
Nimbostratus
[root@bigip:Active:Standalone] ~ tmsh show ltm node nettest
------------------------------------------
Ltm::Node: nettest (37.120.xxx.xxx)
------------------------------------------
Status
  Availability   : unknown
  State          : enabled
  Reason         : Node address service checking is enabled, but result is not available yet
  Monitor        : /Common/icmp (default node monitor)
  Monitor Status : checking
  Session Status : enabled
Traffic                  ServerSide  General
  Bits In                         0        -
  Bits Out                        0        -
  Packets In                      0        -
  Packets Out                     0        -
  Current Connections             0        -
  Maximum Connections             0        -
  Total Connections               0        -
  Total Requests                  -        0
  Current Sessions                -        0
[root@bigip:Active:Standalone] ~ tmsh list ltm pool nettest
ltm pool nettest {
    members {
        nettest:http {
            address 37.120.xxx.xxx
            session monitor-enabled
            state checking
        }
    }
    monitor tcp
}
[root@bigip:Active:Standalone] ~ tmsh show ltm node nettest
------------------------------------------
Ltm::Node: nettest (37.120.xxx.xxx)
------------------------------------------
Status
  Availability   : unknown
  State          : enabled
  Reason         : Node address service checking is enabled, but result is not available yet
  Monitor        : /Common/icmp (default node monitor)
  Monitor Status : checking
  Session Status : enabled
Traffic                  ServerSide  General
  Bits In                         0        -
  Bits Out                        0        -
  Packets In                      0        -
  Packets Out                     0        -
  Current Connections             0        -
  Maximum Connections             0        -
  Total Connections               0        -
  Total Requests                  -        0
  Current Sessions                -        0
[root@bigip:Active:Standalone] ~
- Thomas_Stein_11
Nimbostratus
Hello Hannes.
I fixed the problem. Well, I obviously caused it myself by adjusting some ARP settings in /etc/sysctl. I switched the settings back to the defaults and now the checks are green. Sorry for the noise.
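For reference, a quick way to compare the ARP-related sysctls against the kernel defaults (the keys below are only the usual suspects, not necessarily the exact ones that were changed here):
# read the current values
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce net.ipv4.conf.all.arp_filter
# the kernel defaults for all three are 0; reset them if they differ
sysctl -w net.ipv4.conf.all.arp_ignore=0 net.ipv4.conf.all.arp_announce=0 net.ipv4.conf.all.arp_filter=0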
thank you again. t.
- Hannes_Rapp
Nimbostratus
Great that you got it sorted. I would not have been able to spot this cause remotely!
- Thomas_Stein_11
Nimbostratus
Maybe one last question. Where do I put:
ip r add 191.255.255.1 dev eth0
ip r add default via 191.255.255.1
I thought /config/startup would be the right place.
thanks and cheers t.
- Hannes_Rapp
Nimbostratus
In the GUI, navigate to Network -> Routes and add any host or network routes there. You're advised to do it there because the routes then become part of the LTM config (they can be synchronized and are retained across reboots). You can also add new routes from TMSH. Example (default route):
tmsh create net route default network 0.0.0.0/0 gw 191.255.255.1 (create the new route)
tmsh save sys config (save the changes to the stored configuration)
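As a quick verification step (an added note, not something asked about above), the routes that made it into the configuration can also be checked from TMSH:
tmsh list net route
tmsh show net route
The first lists the configured route objects; the second shows their status.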
- Thomas_Stein_11
Nimbostratus
Hello Hannes.
Well, that's what I tried, but:
[root@bigip:Active:Standalone] ~ tmsh create net route default network 0.0.0.0/0 gw 191.255.255.1
01070330:3: Static route gateway 191.255.255.1 is not directly connected via an interface.
[root@bigip:Active:Standalone] ~
The F5 virtual edition has no IP in that network (191.255.255.1). That's why I have to do:
ip r add 191.255.255.1 dev eth0
ip r add default via 191.255.255.1
But that is obviously not reboot-safe. I put those two commands in /config/startup, but that only works now and then. Any ideas?
thanks t.
- Hannes_Rapp
Nimbostratus
Perhaps an exit-interface method will work? Map the route to an external VLAN which in turn is mapped to your external interface in the network config. Apart from the example below, I don't think there are any good alternative methods.
tmsh create net route default network 0.0.0.0/0 interface VLAN_OUT
- Thomas_Stein_11
Nimbostratus
Hello Hannes.
"tmsh create net route default network 0.0.0.0/0 interface VLAN6"That does not seem to work. I made now a litte shell script which checks if there is that specific route and if not it sets the route. That works good enough for me.
Thanks again for your help t.