F5 BIG-IP OpenStack Integration Available For Download

Last week at the OpenStack Summit in Atlanta, F5 announced OpenStack support for the BIG-IP and BIG-IQ platforms. The first of these integrations, a BIG-IP OpenStack Neutron LBaaS plug-in, is now available for developer use. The integration supports production-ready use cases and provides a comprehensive OpenStack integration with F5’s Software Defined Application Services (SDAS).

Why is F5 integrating with OpenStack?
F5 has been working with the OpenStack community since 2012. As an OpenStack Foundation sponsor, F5 is excited about creating choice for cloud platform rollouts. We want to ensure that our customers have access to F5-built solutions when evaluating cloud platform integrations. As a result, F5 decided to add OpenStack support based on customer feedback and OpenStack production readiness.

What is available for download and do I need a special license to use it?
The OpenStack plug-ins are a package of Python scripts that help the OpenStack controller integrate with the F5 BIG-IP and BIG-IQ platforms. As long as a customer has a license for BIG-IP or the BIG-IQ solution, they can use these scripts. No separate license is required to use the OpenStack plug-ins.
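For illustration, a Neutron LBaaS plug-in of this kind is typically enabled in the Neutron controller configuration and paired with an agent configuration that points at the BIG-IP’s iControl interface. The stanza below is a hedged sketch: the exact driver path, file names, and option names depend on the plug-in version and the Read Me shipped with the download, so treat every value here as a placeholder.

```ini
# /etc/neutron/neutron.conf (controller) -- driver path is illustrative
[service_providers]
service_provider = LOADBALANCER:F5:neutron.services.loadbalancer.drivers.f5.plugin_driver.F5PluginDriver:default

# Agent configuration (file name and option names are placeholders) --
# tells the F5 agent how to reach the BIG-IP's iControl API
[DEFAULT]
icontrol_hostname = 192.0.2.10      # management address of the BIG-IP
icontrol_username = admin
icontrol_password = <password>
```

Consult the Read Me at the download location for the authoritative configuration steps for your release.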

The BIG-IP OpenStack Neutron LBaaS integration is now available for download. You can access the plug-in Read Me and get the package for the BIG-IP LBaaS plug-in driver/agent from this download location. The plug-in has been tested with the OpenStack Havana release. The LBaaS plug-in for BIG-IQ will be available later this summer.

F5 has also announced an upgrade to the BIG-IQ OpenStack Connector supporting the Havana release. The upgraded connector will be released as part of the BIG-IQ Early Access (EA) program later this month and will ship in the BIG-IQ release later this summer. Please contact your regional sales representative for access via the BIG-IQ EA program.

What does the BIG-IP LBaaS plug-in do?
The F5 BIG-IP LBaaS plug-in for OpenStack allows Neutron, the networking component of OpenStack, to provision load balancing services on BIG-IP devices or virtual editions, either through the LBaaS API or through the OpenStack dashboard (Horizon). Supported provisioning actions include selecting an LB provider and load balancing method, defining health monitors, setting up virtual IP addresses (VIPs), and placing application instance member pools behind a VIP. This setup allows F5 customers to seamlessly integrate BIG-IP SDAS into their OpenStack Neutron network layer and load balance applications running in the OpenStack Nova compute environment.
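The provisioning actions above map onto the community LBaaS v1 command line. The session below is a hedged sketch using the standard Havana-era `neutron lb-*` commands; pool names, addresses, and the `<subnet-id>`/`<monitor-id>` placeholders are illustrative, and the provider name passed to `--provider` depends on how the plug-in is registered in your deployment.

```sh
# Create a pool handled by the F5 provider (provider name is illustrative)
neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN \
    --protocol HTTP --subnet-id <subnet-id> --provider f5

# Put two application instances behind the pool
neutron lb-member-create --address 10.0.0.11 --protocol-port 80 web-pool
neutron lb-member-create --address 10.0.0.12 --protocol-port 80 web-pool

# Define a health monitor and associate it with the pool
neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 2
neutron lb-healthmonitor-associate <monitor-id> web-pool

# Expose the pool behind a VIP
neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 \
    --subnet-id <subnet-id> web-pool
```

The same operations can be performed from the Horizon dashboard’s Load Balancers panel.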

The APIs for the LBaaS plug-in are defined by the OpenStack community. The BIG-IP LBaaS plug-in supports the community API specification. F5 will enhance the plug-in when additional capabilities are defined in future versions of OpenStack.
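To make the community-defined driver contract concrete, here is a minimal, self-contained Python sketch of the shape of an LBaaS v1 driver. The method names mirror the operations the community API defines (pools, members, health monitors, VIPs), but the class names and in-memory bookkeeping are purely illustrative assumptions, not F5’s actual plug-in code.

```python
import abc


class AbstractLBaaSDriver(abc.ABC):
    """Subset of the operations the community LBaaS v1 API defines.
    A vendor plug-in (such as F5's) implements these hooks and
    translates each call into device configuration."""

    @abc.abstractmethod
    def create_pool(self, context, pool):
        raise NotImplementedError

    @abc.abstractmethod
    def create_member(self, context, member):
        raise NotImplementedError

    @abc.abstractmethod
    def create_health_monitor(self, context, monitor, pool_id):
        raise NotImplementedError

    @abc.abstractmethod
    def create_vip(self, context, vip):
        raise NotImplementedError


class InMemoryDriver(AbstractLBaaSDriver):
    """Toy driver: records what a real driver would push to a device."""

    def __init__(self):
        self.pools = {}

    def create_pool(self, context, pool):
        # A real driver would create the pool on the BIG-IP here.
        self.pools[pool["id"]] = {"lb_method": pool["lb_method"],
                                  "members": [], "monitors": [], "vip": None}

    def create_member(self, context, member):
        self.pools[member["pool_id"]]["members"].append(member["address"])

    def create_health_monitor(self, context, monitor, pool_id):
        self.pools[pool_id]["monitors"].append(monitor["type"])

    def create_vip(self, context, vip):
        self.pools[vip["pool_id"]]["vip"] = vip["address"]
```

When Neutron receives an LBaaS API call, it dispatches to the configured provider’s driver; capabilities added in later OpenStack releases would surface as additional hooks on this interface.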

What Use Cases does the LBaaS plug-in support?
The five example use cases described below demonstrate the flexibility of F5’s support for OpenStack. All are production-quality scenarios, with detailed videos available in the OpenStack DevCentral community and on F5’s YouTube channel.

Use Case 1: Extend an existing Data Center network into an OpenStack Cloud

Video: https://devcentral.f5.com/s/videos/using-f5s-lbaas-plug-in-for-openstack-to-extend-your-data-center

Summary: 

  • Here OpenStack is used as a cloud manager. 
  • Neutron is used to manage L3 subnets.
  • Neutron will have pre-defined or static VLANs that are shared with cloud tenants; L3 subnets are defined on those VLANs so guest instances can receive IP allocations.
  • Through LBaaS, F5 BIG-IP will be dynamically configured to provision the appropriate VLANs as well as any L3 addresses needed to provide its service.

Benefit: Extend the cloud using a vendor-neutral API specification (OpenStack Neutron LBaaS) to provide application delivery services via F5 BIG-IP.

Use Case 2: Use F5 LBaaS to provision VLANs as a tenant virtualization technology (i.e., VLANs defined by the admin tenant for other tenants, or by the tenants themselves).

Video: https://devcentral.f5.com/s/videos/lbaas-plug-in-for-openstack-to-provision-vlans

Summary:

  • In this use case, tenants can securely extend their local networks into a shared OpenStack cloud.
  • OpenStack is used as a cloud manager. 
  • Neutron will assign VLANs for a given tenant and will put L3 subnets on top of those dynamically provisioned VLANs, providing full L3 segmentation.
  • F5 LBaaS will provision the VLANs as well as allocate any L3 subnets that might be needed.

Benefit: This use case allows tenant networks to be extended beyond the OpenStack cloud to traditional and virtual networks with Layer 2 segmentation based on VLANs.

Use Case 3: Software Defined Networking using GRE/VXLAN as an overlay for tenant virtualization.

Video: https://devcentral.f5.com/s/videos/lbaas-plug-in-for-openstack-using-gre-vxlan-for-tenant-virtualization

Summary:

  • In this scenario, an important component of Software Defined Networking, overlay networking in the form of either GRE or VXLAN, is used for tenant network virtualization. 
  • OpenStack will establish tunnels between compute nodes as the tenant's L2 network, then have L3 subnets built on top of those tunnels.
  • BIG-IP will correctly join the tunnel mesh with the other compute nodes in the overlay network then allocate any IP addresses it needs in order to provide its services.
  • Here the BIG-IP's SDN gateway capabilities really shine: the VIP can be placed on any kind of network, while pool members can exist on one or more GRE or VXLAN tunnel networks.

Benefit: This use case allows tenant networks to be extended beyond the OpenStack cloud to traditional and virtual networks with Layer 2 segmentation based on GRE or VXLAN overlays.

Use Case 4: Use F5 as a network services proxy between tenant GRE or VXLAN tunnels.

Video: https://devcentral.f5.com/s/videos/f5-lbaas-plug-in-for-openstack-as-a-network-services-proxy-between-tenant-gre-or-vxlan-tunnels

Summary:

  • In this use case, BIG-IP serves as a network services proxy between tenant networks virtualized with VXLAN or GRE.
  • Neutron manages the L2 tunnels and L3 subnets, while F5 provides appropriate LBaaS VIP services to other networks or other tunnels for:
    • Inter-VM high availability (HA)
    • Load balancing between VMs
    • Turning down specific VMs instead of turning off a complete service

Benefit: In this use case, the tenant may have multiple network services running on either Layer 2 VLANs or GRE/VXLAN tunnels. F5’s LBaaS agent acts as a proxy between these segments, enabling inter-segment packet routing.

Use Case 5: Using the BIG-IQ connector to orchestrate load balancing and F5 iApps in an OpenStack cloud.

Video: https://devcentral.f5.com/s/videos/delivering-lbaas-in-openstack-with-f5s-big-iq

Summary:

  • In this use case, BIG-IQ serves as the management and orchestration layer between OpenStack Neutron and BIG-IP.
  • OpenStack Neutron communicates with BIG-IQ via the LBaaS integration.
  • BIG-IQ orchestrates and provisions load balancing services on BIG-IP, further encapsulating the VIP placement business logic.

Benefit: Cross-platform management and the ability to define VIP placement policies based on business and infrastructure needs.

What help is available when using F5’s OpenStack integrations?

The BIG-IP plug-in is supported via the DevCentral OpenStack Community. You can post your questions in the OpenStack Questions and Answers section of the community. You can also use F5 Support services for issues with the iControl APIs and BIG-IP devices or virtual editions.

One last time, where can I find the Read Me and the BIG-IP LBaaS plug-in driver/agent?

Download the Read Me and the BIG-IP LBaaS plug-in driver/agent from this download location.

Published May 23, 2014
Version 1.0

1 Comment

  • Issue: Unable to get the F5 OpenStack Kilo integration working with Neutron services running in a cluster. Environment: OpenStack Kilo with 3 nodes in a cluster and one F5 Virtual Edition. It works fine when F5 is integrated with a single Neutron service, but after changing to Neutron services running in a cluster, the virtual servers and pools that were created with the single Neutron service are removed automatically after a few minutes. Integration with Neutron services in Active/Standby also does not work.