Forum Discussion
OpenStack LTM integration
Hey Tim,
Let me make sure I am understanding you correctly and get other readers up to speed.
It sounds like you are using the reference ML2 based Open vSwitch core driver with at least the following in its config files:
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
[ml2_type_vlan]
network_vlan_ranges = [physical_network_name]:[first_vlan_id]:[last_vlan_id]
[securitygroup]
enable_security_group = True
enable_ipset = True
On your compute agents you have:
[OVS]
tenant_network_type = vlan
network_vlan_ranges = [physical_network_name]:[first_vlan_id]:[last_vlan_id]
bridge_mappings = [physical_network_name]:[some_linux_bridge_name_which_accepts_8021q_tagged_frames]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
and that you can use Neutron to create at least one external network (router:external) for floating IPs, a router for your tenant, and various tenant networks and subnets. We also assume you can start multiple Nova guests on different tenant networks, create the router between them, and have connectivity working between those guest instances.
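For readers following along, that baseline typically maps to neutron CLI calls along these lines (the network names, CIDRs, and VLAN ID here are just placeholders):
# external network and subnet for floating IPs
neutron net-create ext-net --router:external True --provider:network_type vlan --provider:physical_network [physical_network_name] --provider:segmentation_id [external_vlan_id]
neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet --disable-dhcp
# a tenant network, subnet, and router
neutron net-create tenant-net
neutron subnet-create tenant-net 10.0.1.0/24 --name tenant-subnet
neutron router-create tenant-router
neutron router-gateway-set tenant-router ext-net
neutron router-interface-add tenant-router tenant-subnet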
So far that's just running Neutron in your cloud. Nothing F5-specific at all.
After that, it seems you have:
1) Launched a Nova guest using a VE image as its disk image, connected at least 2 network interfaces to it, and set the security group rules appropriately to allow TCP 22 and 443 to the first network interface (the TMOS mgmt interface). (See the example commands after this list.)
2) Attached the network for the first interface to a Neutron router so it can be licensed appropriately (Neutron router SNATs to activate.f5.com).
3) Logged into the TMOS instance through the console and licensed the device, OR associated a floating IP (NATting an external network IP address to the mgmt interface of the TMOS device) and used the web UI to license the TMOS device.
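Steps 1 and 3 usually look something like the following (the image, flavor, network UUIDs, and addresses are placeholders, and your security group may not be named 'default'):
# allow SSH and HTTPS to the TMOS mgmt interface
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default
neutron security-group-rule-create --protocol tcp --port-range-min 443 --port-range-max 443 --direction ingress default
# boot the VE with the mgmt network as its first interface and a tenant network as its second
nova boot --image BIGIP-VE --flavor m1.xlarge --nic net-id=[mgmt_net_uuid] --nic net-id=[tenant_net_uuid] ve-bigip-1
# optionally associate a floating IP with the instance for web UI access to the mgmt interface
nova floating-ip-associate ve-bigip-1 [floating_ip_address]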
From there your message moves right on to provisioning pools. That's where we need more details...
Are you trying to use our Neutron LBaaSv1 driver and agent to provision the pool, or are you simply provisioning your LTMs through their iControl APIs or management tools? If you are trying to use our LBaaSv1 driver and agent, which version did you download from devcentral.f5.com? You should be using at least 1.0.8-2. (Please note the 1.0.1 release included with BIG-IQ 4.5 is not recommended; don't use it. BIG-IQ does not support LBaaS yet.)
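If it is the LBaaSv1 driver and agent, provisioning a pool normally goes through the neutron LBaaSv1 CLI, roughly like this (the names, member address, and subnet UUID are placeholders):
# pool, member, health monitor, and VIP on a tenant subnet
neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id [tenant_subnet_uuid]
neutron lb-member-create --address 10.0.1.10 --protocol-port 80 web-pool
neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3
neutron lb-healthmonitor-associate [healthmonitor_uuid] web-pool
neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 --subnet-id [tenant_subnet_uuid] web-pool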
A couple of other things to consider if you are trying to use our LBaaSv1 solution:
The LBaaSv1 solution is straightforward to set up with TMOS hardware appliances or VE devices which are set up outside of Neutron's management (meaning not subject to the generated OVS flows or iptables firewall rules). The connectivity is pretty easy to troubleshoot as well. It gets more complicated when using TMOS VEs which are Nova guest instances with network interfaces subject to the OVS flow rules and iptables security group firewall.
Note: If you are using TMOS VEs as multi-tenant LBaaSv1 endpoints which are Nova guest instances, we strongly recommend you use GRE or VxLAN tunnels for your tenant networks. When using GRE or VxLAN as your tenant networks with the LBaaSv1 driver and agent, each TMOS device will need a non-floating SelfIP to act as a VTEP (virtual tunnel endpoint) which can route to the VTEP address of your other compute nodes (called their tunnel_ip in their configuration). Once a TMOS VE has a VTEP SelfIP address, it can encapsulate many tenant networks (overlay networks) and route IP packets (underlay network) to the compute hosts. Simply opening up the security group rules to allow the appropriate tunnel traffic will suffice; no custom alteration to the compute nodes is necessary. To support GRE or VxLAN connectivity to the TMOS VE instances, they must have the SDN Services license enabled. It comes with the 'better' bundles and higher.
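As a sketch, "opening up the security group rules" for that tunnel traffic would look something like this (the security group name is a placeholder; GRE is IP protocol 47, and VxLAN is shown on its default UDP port 4789):
# allow GRE-encapsulated tenant traffic to reach the TMOS VE VTEP SelfIP
neutron security-group-rule-create --protocol 47 --direction ingress [ve_security_group]
# or, for VxLAN tenant networks, allow the UDP tunnel port
neutron security-group-rule-create --protocol udp --port-range-min 4789 --port-range-max 4789 --direction ingress [ve_security_group]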
If you choose to use VLANs for your tenant networks, your compute nodes will require custom setups because OVS does not support Nova guests which generate 802.1q VLAN tags on their frames. OVS only supports guests with access ports (untagged interfaces). In Neutron, such access networks are not called VLANs, but Flat networks.
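For reference, a Flat provider network is enabled and created roughly like this (the physical network label is a placeholder; it must match the flat_networks and bridge_mappings settings in your ML2/OVS configuration, and flat must be listed in type_drivers):
[ml2_type_flat]
flat_networks = [physical_network_name]

neutron net-create flat-tenant-net --provider:network_type flat --provider:physical_network [physical_network_name]
neutron subnet-create flat-tenant-net 10.0.2.0/24 --name flat-tenant-subnet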
If you choose to use Flat networks, remember that KVM limits the number of virtual interfaces to 10, which means TMOS VE instances will support 1 mgmt interface and 9 tenant networks.
If you want to use VLANs for tenant networks and expect your TMOS VEs to function with our multi-tenant LBaaSv1, you will need to manually remove the TMOS VE virtual tap interfaces that should carry 802.1q tagged frames from the OVS integration bridge and place them on the external bridge. This manual process must take place for each TMOS VE on each compute node and falls outside the Neutron integration. You use ovs-vsctl commands to move the appropriate vtap interfaces from one bridge to the other (see the sketch below).
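As a sketch (the tap interface name is hypothetical; find the real one on the compute node with ovs-vsctl show or virsh dumpxml, and your external bridge may be named something other than br-ex):
# locate the vtap interface attached to the TMOS VE data port
ovs-vsctl show
# detach it from the integration bridge and attach it to the external bridge
ovs-vsctl del-port br-int tap1234abcd-ef
ovs-vsctl add-port br-ex tap1234abcd-ef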
The lack of VLAN tagging for guest instances is a limitation of Neutron OVS, not TMOS. There are several blueprint proposals from the OpenStack community to change this. In Kilo, Neutron added the vlan-transparent network attribute to allow guest instances to insert their own 802.1q VLAN tags. However, this functionality is not available for every core driver. (See: http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html)
Note to all non-ML2 proprietary SDN vendors: in LBaaS v1.0.10, the agent code supports the loading of SDN vendor supplied VLAN and L3 binding interfaces. The VLAN binding interface provides notification to SDN vendors when the TMOS device requires VLANs to be allowed or pruned from its interfaces. The L3 binding interface provides notification to SDN vendors when the TMOS device has bound or unbound an L3 address on one of its interfaces so that any L3 ACLs can be changed to allow or reject traffic forwarding. This means any SDN vendor can integrate with F5 LBaaS solutions by simply supplying a VLAN binding or L3 binding interface, which will be loaded as part of the F5 LBaaS agent process.
Whether or not you are using the LBaaSv1 solution, the next question to consider is whether your management client (the agent process in LBaaSv1) can communicate with the TMOS VEs configured as its iControl endpoints. Do you need a floating IP to make this work? Does your security group allow for this?
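A quick way to sanity-check that, independent of any driver or agent, is to hit the iControl REST endpoint from the host running your management client (the address and credentials are placeholders; this assumes a TMOS version with iControl REST available):
curl -sk -u admin:[admin_password] https://[ve_mgmt_or_floating_ip]/mgmt/tm/sys/version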