openstack
Release Announcement: F5 OpenStack Product Suite
Release Announcement
12 July 2016
We are pleased to announce that F5's OpenStack product suite has been functionally tested and released for use in OpenStack Mitaka. Information regarding the Mitaka releases, as well as releases of the F5 OpenStack product suite for Kilo and Liberty, is provided below.
F5 LBaaSv1
Release v9.0.1 introduces functionality in OpenStack Mitaka for Neutron LBaaSv1.
F5 LBaaSv2
Release v9.0.1 of the f5-openstack-agent and f5-openstack-lbaasv2-driver introduces functionality in OpenStack Mitaka for Neutron LBaaSv2. Release v8.0.4 contains quality improvements for Neutron LBaaSv2 in OpenStack Liberty.
F5 OpenStack Heat Plugins
Release v9.0.1 of the F5 Heat Plugins introduces functionality in OpenStack Mitaka. Release v9.0.2 introduces two (2) new resource plugins used in conjunction with clustering: F5::Cm::Sync and F5::Sys::Save. Releases v7.0.4 and v8.0.3 introduce the F5::Cm::Sync and F5::Sys::Save plugins for OpenStack Kilo and Liberty, respectively.
Templates
Release v9.0.1 introduces functionality for the F5 Heat templates in OpenStack Mitaka. Release v8.0.1 introduces functionality for the F5 Heat templates in OpenStack Liberty. Release v7.0.3 adds the new sync and save Heat plugins to the ‘F5-supported’ Active-Standby Cluster template.
Issues
We welcome bug reports via the Issues page for each project on GitHub: https://github.com/F5Networks.
- The F5 OpenStack Product Team
OpenStack in a backpack – how to create a demo environment for F5 Heat Plugins, part 2
Part 2 – OpenStack installation and testing After “Part 1 – Host environment preparation” you should have your CentOS v7, 64-bit instance up and running. This article will walk you through the OpenStack RDO installation and testing process. Every command requires a Linux root privileges and is issued from the /root/ directory. If you visited the RDO project website, you might have seen the Packstack quickstart installation instructions, recommending you issue a simple packstack --allinone command. Unfortunately, packstack --allinone does not build out OpenStack networking in a way that gives us external access to the guest machines. It doesn’t install the Heat project either. This is why we need to follow a bit more complex route. Install and Set Up OpenStack RDO Install and update the RDO repos: yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/rdo-release-mitaka-5.noarch.rpm yum update -y Install the OpenStack RDO Mitaka packages: yum install -y centos-release-openstack-mitaka yum install -y openstack-packstack I strongly recommend that you generate a Packstack answer file first, review it, and correct it if needed, then apply it. It’s less error-prone and it expedites the process of future re-installation (or later installation on a second host). Generate the Packstack answer file: packstack --gen-answer-file=my_answer_file.txt \ --allinone --provision-demo=n --os-neutron-ovs-bridge-mappings=extnet:br-ex \ --os-neutron-ovs-bridge-interfaces=br-ex:ens33 --os-neutron-ml2-type-drivers=vxlan,flat \ --os-neutron-ml2-vni-ranges=100:900 --os-neutron-ml2-tenant-network-types=vxlan \ --os-heat-install=y \ --os-neutron-lbaas-install=y \ --default-password=default NOTE: “ens33” should be replaced with the name of your non-loopback interface. Use “ip addr” to find it. You can remove the --os-neutron-lbaas-install=y \ line if you don’t intend to use your portable OpenStack cloud to play with F5 LbaaS as well. Review the answer file ( my_answer_file.txt ) and correct it if needed. Apply the answer file ( my_answer_file.txt ). packstack --answer-file=my_answer_file.txt Now it’s time to take a long coffee or lunch break, because this part takes a while! Verify that a bridge has been properly created: Your non-loopback interface should be assigned to an OVS bridge.... [root@mitaka network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 DEVICE=ens33 NAME=ens33 DEVICETYPE=ovs TYPE=OVSPort OVS_BRIDGE=br-ex ONBOOT=yes BOOTPROTO=none .... and the new OVS bridge device should take over the IP settings: [root@mitaka network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex DEFROUTE=yes UUID=015b449c-4df4-497a-93fd-64764e80b31c ONBOOT=yes IPADDR=10.128.10.3 PREFIX=24 GATEWAY=10.128.10.2 DEVICE=br-ex NAME=br-ex DEVICETYPE=ovs OVSBOOTPROTO=none TYPE=OVSBridge Change a virt type from qemu to kvm, otherwise F5 will not boot: openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm Restart the nova compute service for your changes to take effect. openstack-service restart openstack-nova-compute Mitaka RDO may not take a default DNS forwarder from the /etc/resolv.conf file, so you need to set it manually: openstack-config --set /etc/neutron/dhcp_agent.ini \ DEFAULT dnsmasq_dns_servers 10.128.10.2 openstack-service restart neutron-dhcp-agent In this example, 10.128.10.2 is the closest DNS resolver; that should be your VMware workstation or home router (see part 1). There are couple of CLI authentication methods in OpenStack (e.g. 
http://docs.openstack.org/developer/python-openstackclient/authentication.html). Let’s use the easiest – but not the most secure – one for the purpose of this exercise, which is setting bash environment variables with username, password, and tenant name. The packstack command has created a flat file containing environment variables that logs you in as the admin user in the admin tenant. You can load those bash environment variables by issuing:
source keystonerc_admin
Set Up OpenStack Networks
Next, we’ll configure our networks in OpenStack Neutron. The “openstack” commands will be issued on behalf of the admin user, until you load different environment variables. It’s a good time to review the keystonerc_admin file, because soon you will have to create such a file for a regular (non-admin) user.
Load admin credentials:
source keystonerc_admin
Configure an external provider network:
neutron net-create external_network --provider:network_type flat \
--provider:physical_network extnet --router:external
Configure an external provider subnet with an IP address allocation pool that will be used as OpenStack floating IP addresses:
neutron subnet-create --name public_subnet --enable_dhcp=False \
--allocation-pool=start=10.128.10.30,end=10.128.10.100 \
--gateway=10.128.10.2 external_network 10.128.10.0/24 \
--dns-nameserver=10.128.10.2
In the example given, 10.128.10.0/24 is my external subnet (the one configured during CentOS installation); 10.128.10.2 is both the default gateway and the DNS resolver. Please note that an OpenStack floating IP address is a completely different concept from an F5 floating IP address. An F5 floating IP address is an IP that floats between physical or virtual F5 entities in a cluster. An OpenStack floating IP address is a NAT address: a public IP address that is translated to a private (tenant) IP address. In this case, it is defined by --allocation-pool=start=10.128.10.30,end=10.128.10.100
To avoid typos and confusion, let’s make further changes via the CLI and review the results in the GUI. In Chrome or Mozilla, enter the IP address you assigned to your CentOS host in your address bar: http://<IP-address> (http://10.128.10.3, in my case). NOTE: IE didn’t work with the OpenStack Horizon (GUI) Mitaka release on my laptop.
Set up a User, Project, and VM in OpenStack
Next, to make this exercise more realistic, let’s create a non-admin tenant and a non-admin user with non-admin privileges.
Create a demo tenant (a.k.a. project):
openstack project create demo
Create a demo user and assign it to the demo tenant:
openstack user create --project demo --password default demo
Create a keystone environment variable file for the demo user that accesses the demo tenant/project:
cp keystonerc_admin keystonerc_demo
sed -i 's/admin/demo/g' keystonerc_demo
source keystonerc_demo
Your bash prompt should now look like [root@mitaka ~(keystone_demo)]# , since from now on all the commands will be executed on behalf of the demo user, within the demo tenant/project. In addition, you should be able to log into the OpenStack Horizon dashboard with the demo/default credentials.
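If you prefer to confirm the new credentials from the CLI before logging in to Horizon, a quick sanity check looks something like this (standard openstack client commands; keystonerc_demo is the file created above):
# load the demo credentials and confirm Keystone issues a token for them
source keystonerc_demo
openstack token issue
# list what the demo tenant can see so far (the server list should be empty at this point)
openstack network list
openstack server list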
Create a tenant router. Please double-check that your environment variables are set for the demo user.
neutron router-create router1
neutron router-gateway-set router1 external_network
Create a tenant (internal) management network:
neutron net-create management
Create a tenant (internal) management subnet and attach it to the router:
neutron subnet-create --name management_subnet management 10.0.1.0/24
neutron router-interface-add router1 management_subnet
Your network topology should be similar to that shown below (Network-> Network Topology-> Toggle labels):
Create an ssh keypair, and store the private key on your PC. You will use this key for password-less login to all the guest machines:
openstack keypair create default > demo_default.pem
Store demo_default.pem on your PC. Please remember to chmod 600 demo_default.pem before using it with the ssh -i command.
Create an “allow all” security group:
openstack security group create allow_all
openstack security group rule create --proto icmp \
--src-ip 0.0.0.0/0 --dst-port 0:255 allow_all
openstack security group rule create --proto udp \
--src-ip 0.0.0.0/0 --dst-port 1:65535 allow_all
openstack security group rule create --proto tcp \
--src-ip 0.0.0.0/0 --dst-port 1:65535 allow_all
Again, it’s not the best practice to use such loose access rules, but the goal of this exercise is to create an easy-to-use demo/test environment.
Download and install a Cirros image (12.6MB):
curl http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img | \
openstack image create --container-format bare \
--disk-format qcow2 "Cirros image"
Check the image status:
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
| container_format | bare                                                 |
| created_at       | 2016-08-29T19:08:58Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/6811814b-9288-48cc-a15a-9496a14c1145/file |
| id               | 6811814b-9288-48cc-a15a-9496a14c1145                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | Cirros image                                         |
| owner            | 65136fae365b4216837110178a7a3d66                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13287936                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-08-29T19:12:46Z                                 |
| virtual_size     | None                                                 |
| visibility       | private                                              |
+------------------+------------------------------------------------------+
Spin up your first guest VM in your OpenStack environment. Here you will use the previously created ssh key and the allow_all security group. The guest image should be connected to the tenant “management” network.
openstack server create --image "Cirros image" --flavor m1.tiny \
--security-group allow_all --key-name default \
--nic net-id=management Cirros1
Now you should be able to access a Cirros1 console: If everything went well, you should be able to log in to the Cirros1 VM with the cirros/cubswin:) credentials and you should be able to ping www.f5.com from Cirros1. If your ping works, it looks like Cirros1 has got a way out. Next, let’s create a way in, so we can ssh to the Cirros1 instance from outside. If the console doesn’t react to keystrokes, click the blue bar with “Connected (unencrypted)” first.
Create an OpenStack floating IP address:
openstack ip floating create external_network
Observe the newly created IP address.
You will use it in the next step:
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| fixed_ip    | None                                 |
| id          | 5dff365f-3cc8-4d61-8e7e-f964cef3bb8b |
| instance_id | None                                 |
| ip          | 10.128.10.31                         |
| pool        | external_network                     |
+-------------+--------------------------------------+
Now, the newly created “public” IP address may be attached to any Virtual Machine, Cirros1 in this case. Attaching an OpenStack floating IP address creates a one-to-one network address translation between the public IP address and the tenant (private) IP address.
Attach the OpenStack floating IP address to your Cirros1:
openstack ip floating add 10.128.10.31 Cirros1
Test a way in: ssh from outside to your Cirros1:
ssh -i demo_default.pem cirros@10.128.10.31
Or use the equivalent PuTTY command.
Check that you are really on Cirros1:
$ id
uid=1000(cirros) gid=1000(cirros) groups=1000(cirros)
If the test was successful, it looks like your basic OpenStack services are up and running. To spare scarce CPU and memory resources, I strongly suggest hibernating or removing the Cirros1 VM.
Hibernating:
openstack server suspend Cirros1
Removing:
openstack server delete Cirros1
Before you boot or shut down the CentOS host, you should be aware of one CentOS/Mitaka-specific issue: your hardware may be too slow to start the httpd service within the default 90s. This is why I had to change this timeout:
sed -i \
's/^#DefaultTimeoutStartSec=[[:digit:]]\+s/DefaultTimeoutStartSec=270s/g' \
/etc/systemd/system.conf
Useful troubleshooting commands: openstack-status, systemctl show httpd.service, systemctl status httpd.service.
In the next (and final) article in this series, we’ll install the F5 Heat plugins and onboard the F5 qcow2 image using F5 Heat templates.
OpenStack in a backpack - how to create a demo environment for F5 Heat Plugins, part 3
Part 3 – F5 Heat Plugins installation and BIG-IP VE onboarding
This article demonstrates how to install the F5 Heat Plugins and onboard a BIG-IP VE image into OpenStack Mitaka using F5 Heat templates. Before you get started, please review the installation instructions for the F5 Heat plugins (https://f5-openstack-heat-plugins.readthedocs.io/en/latest/) to ensure you’re using the latest version. The instructions provided in this article are current at the time of posting.
Install the F5 Heat Plugins
Run the following commands on the Heat service node. Root privileges are required.
Install the Python installation tool (pip):
yum -y install python-pip
Install the F5 Heat plugins:
pip install f5-openstack-heat-plugins
Make the Heat plugins directory (NOTE: this may already exist):
mkdir -p /usr/lib/heat
Create a link to the F5 plugins in the Heat plugins directory:
ln -s /usr/lib/python2.7/site-packages/f5_heat /usr/lib/heat/f5_heat
Restart the Heat engine service:
systemctl restart openstack-heat-engine.service
Now, you should see the F5 Heat resources in the OpenStack Horizon dashboard, under “Orchestration->Resource Types”:
Prepare Your Project to use the F5 Heat Template
Next, you’ll use the F5 BIG-IP ‘Image Patch and Upload’ Heat template (http://f5-openstack-heat.readthedocs.io/en/latest/templates/supported/ref_images_patch-upload-ve-image.html) to patch your VE image for use in OpenStack and onboard it. ‘Patching’ means modifying the BIG-IP QCOW2 image so it can run within OpenStack and make use of the OpenStack Metadata service for licensing, setting networking parameters, etc. This template requires a bit of preparation, described in the Prerequisites section of the template documentation. In particular, you need to create an OpenStack flavor for the BIG-IP appliance and create an SSH key to use to log in to the BIG-IP. You will also need to provide a link to an Ubuntu image, as the template utilizes an Ubuntu server to extract and patch the image. An appropriate tenant network was already created and tested in the previous article in this series, so there’s no need to do it again.
First, log in as the OpenStack admin user to create a new flavor for the BIG-IP and adjust the ‘demo’ user’s permissions. The default OpenStack non-admin privileges do not allow creation of OpenStack Flavors or Heat stacks.
Create a new flavor via the command line:
source keystonerc_admin
openstack flavor create --ram 4096 --disk 20 --vcpus 2 \
--public "F5-small 1 slot"
Refer to http://f5-openstack-docs.readthedocs.io/en/latest/guides/openstack_big-ip_flavors.html for additional information regarding BIG-IP VE flavor requirements.
Give the demo user permission to create Heat stacks:
openstack role add --project demo --user demo heat_stack_owner
You can now switch back to your ‘demo’ user account and import an Ubuntu image to Glance.
source keystonerc_demo
curl http://uec-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img | \
openstack image create --container-format bare \
--disk-format qcow2 --min-disk 10 "Ubuntu 14.04 LTS"
The Heat template requires that the BIG-IP image be hosted in a location accessible to the Heat engine via ‘http’. Assuming that you do not have any other place to host the BIG-IP qcow2 image, we’ll create one on the Horizon Apache server.
Create a new directory to store the image in.
mkdir -p /home/openstack/bigipimages/
Add the following lines to /etc/httpd/conf.d/15-horizon_vhost.conf , just before the closing </VirtualHost> :
Alias /bigipimages "/home/openstack/bigipimages/"
<Directory "/home/openstack/bigipimages/">
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Require all granted
</Directory>
Restart the httpd service:
systemctl restart httpd.service
Download BIGIP-11.6.1.1.0.326.LTM_1SLOT.qcow2.zip from https://downloads.f5.com/ and upload it to /home/openstack/bigipimages/ . Direct link: https://downloads.f5.com/esd/serveDownload.jsp?path=/big-ip/big-ip_v11.x/11.6.1/english/virtual-edition_base-plus-hf1/&sw=BIG-IP&pro=big-ip_v11.x&ver=11.6.1&container=Virtual-Edition_Base-Plus-HF1&file=BIGIP-11.6.1.1.0.326.LTM_1SLOT.qcow2.zip
Change the permissions so the file can be seen by Apache:
chmod -x+r /home/openstack/bigipimages/*.zip
The compressed qcow2 image should now be accessible at:
http://<ip_address>/bigipimages/BIGIP-11.6.1.1.0.326.LTM_1SLOT.qcow2.zip
Please replace <ip_address> with the IP address of your CentOS host.
Launch Your Heat Stack
Most of the Heat templates require you to supply some specific configuration parameters (e.g., which networks should be used, which flavor, which security group, etc.). Those parameters are rendered as a questionnaire in the Horizon GUI, right after you specify the source Heat template’s file or URL. Unfortunately, I noticed that the Mitaka release doesn't always show external networks in the GUI correctly, while the external network's name can be provided via the CLI without any problems. Another disadvantage of the Orchestration section in Horizon is that if you make any mistakes or typos, you need to type all the parameters over again. So, the easiest and most efficient method is to provide Heat parameters in an environment file.
Create an environment file – for example, patch_upload_parameters.yaml – that contains the following parameters:
parameters:
  onboard_image: "Ubuntu 14.04 LTS"
  flavor: m1.medium # THE FLAVOR YOU CREATED
  private_network: management
  f5_image_import_auth_url: http://<ip_address>:5000/v2.0 # YOUR KEYSTONE AUTHENTICATION URL
  f5_image_import_tenant: admin # THE NAME OF YOUR PROJECT SPACE
  f5_image_import_user: admin # YOUR USERNAME
  f5_image_import_password: default # YOUR PASSWORD
  f5_ve_image_url: http://<ip_address>/bigipimages/BIGIP-11.6.1.1.0.326.LTM_1SLOT.qcow2.zip
  f5_ve_image_name: BIGIP-11.6.1.1.0.326.qcow2
  image_prep_key: default
Note: Leading white spaces are significant in the yaml file.
Launch the stack:
openstack stack create -e patch_upload_parameters.yaml \
-t https://raw.githubusercontent.com/F5Networks/f5-openstack-heat/master/f5_supported/ve/images/patch_upload_ve_image.yaml \
F5_onboard
Please note that we used the GitHub version of the template in the above command (https://raw...). You can also use the download link for the template provided in the documentation. Do not try to use the link for an html-formatted yaml file. You should see your stack being created in Horizon at "Orchestration -> Stacks -> F5_onboard". You can supervise the patching progress by clicking on a Compute instance log: At the end, the stack status should be "Status Create_Complete: Stack CREATE completed successfully". You can check it by clicking on "Orchestration -> Stacks -> F5_onboard -> Overview". If you can't see the stack in your account in Horizon, double-check which user you identified in the environment file; if you specified 'admin', you’ll need to log in as the admin user.
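If you prefer to follow the onboarding from the CLI rather than Horizon, something like the following should work (the stack and file names are the ones used above; the exact orchestration sub-commands available depend on your python-heatclient version):
# watch the stack and its events
openstack stack list
openstack stack show F5_onboard
openstack stack event list F5_onboard
# the template boots an Ubuntu instance to patch the image; find it and read its console log
openstack server list
openstack console log show <onboarding_instance_name> | tail -50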
Now that you have a BIG-IP image onboarded, you can use it with the F5-supported Heat Templates (http://f5-openstack-heat.readthedocs.io/en/latest/templates/templates_index.html#f5-supported) to deploy BIG-IP VE from any of your OpenStack user accounts. First, make sure the BIG-IP image is visible to all users in your OpenStack environment.
source keystonerc_admin
openstack image set --public BIGIP-11.6.1.1.0.326
source keystonerc_demo
The patched BIG-IP image should be visible in “Compute -> Images” or via the openstack image list command. As an exercise, you can change the Heat template to include this step. You can now safely delete the onboarding stack. Don’t worry, the BIG-IP image will stick around.
openstack stack delete F5_onboard
Now, let’s spin up a stand-alone BIG-IP VE with two production interfaces using the F5 BIG-IP VE: Standalone, 3-nic Heat template (http://f5-openstack-heat.readthedocs.io/en/latest/templates/supported/ref_common_f5-ve-standalone-3nic.html)
We already have a Management subnet, but we’ll need to add the traffic subnets.
Create client and server-side networks:
neutron net-create client
neutron subnet-create --name client_subnet client 10.0.2.0/24
neutron router-interface-add router1 client_subnet
neutron net-create server
neutron subnet-create --name server_subnet server 10.0.3.0/24
neutron router-interface-add router1 server_subnet
Now your network topology should look as follows:
Create a standalone_3_nic_parameters.yaml environment file that contains:
parameters:
  ve_image: BIGIP-11.6.1.1.0.326
  ve_flavor: "F5-small 1 slot"
  admin_password: admin
  f5_ve_os_ssh_key: default
  root_password: default
  license: <<<your eval license goes here>>>
  external_network: external_network
  mgmt_network: management
  network_1: client
  network_1_name: client
  network_2: server
  network_2_name: server
Please remember to insert your evaluation license key.
Launch the stack:
openstack stack create -e standalone_3_nic_parameters.yaml \
-t https://raw.githubusercontent.com/F5Networks/f5-openstack-heat/master/f5_supported/ve/standalone/f5_ve_standalone_3_nic.yaml \
F5_standalone_3_nics
Now you can find a floating IP address in "Orchestration -> Stacks -> F5_standalone_3_nics -> Overview -> Floating IP". You should be able to observe the auto-configuration process by running the following command:
ssh -i demo_default.pem root@<<floating IP>> tail -f /var/log/messages
Once the auto-configuration process is complete, you should see something like this:
Sep 14 11:21:05 host-10 notice openstack-init: Completed OpenStack auto-configuration in 167 seconds...
You should now be able to log into the BIG-IP configuration utility at the [OpenStack] floating IP address allocated by Neutron. Be sure to use https (e.g., "https://<the_floating_IP>") and give the BIG-IP enough time to fully boot. The VE platform should be properly licensed and basic network configuration should be set up. If everything works fine, you are ready to play with other F5 Heat templates, and maybe write your own F5 Heat resources. Good starting points would be the GitHub repo (https://github.com/F5Networks/f5-openstack-heat) and the project docs (https://f5-openstack-heat.readthedocs.io/en/latest/).
Special thanks to John Gruber, Laurent Boutet, Paul Breaux, Shawn Wormke, Jodie Putrino, and the whole F5 OpenStack PD team.
How is SDN disrupting the way businesses develop technology?
You must have read so much about software-defined networking (SDN) by now that you probably think you know it inside and out. However, such a nascent industry is constantly evolving and there are always new aspects to discover and learn about.
While much of the attention on SDN has focused on the technological benefits it brings, potential challenges are beginning to trouble some SDN watchers. While many businesses acknowledge that the benefits of SDN are too big to ignore, there are challenges to overcome, particularly with the cultural changes that it brings. In fact, according to attendees at the recent Open Networking Summit (ONS), the cultural changes required to embrace SDN outweigh the technological challenges. One example, outlined in this TechTarget piece, is that the (metaphorical) wall separating network operators and software developers needs to be torn down; network operators need coding skills and software developers will need to be able to program networking services into their applications.
That’s because SDN represents a huge disruption to how organisations develop technology. With SDN, the speed of service provisioning is dramatically increased; provisioning networks becomes like setting up a VM... a few clicks of the button and you’re done. This centralised network provisioning means the networking element of development is no longer a bottleneck; it’s ready and available right when it’s needed.
There’s another element to consider when it comes to SDN, tech development and its culture. Much of what drives software-defined networking is open source, and dealing with that is something many businesses may not have a lot of experience with. Using open source SDN technologies means a company will have to contribute something back to the community - that’s how open source works. But for some that may prove to be a bit of an issue: some SDN users such as banks or telecoms companies may feel protective of their technology and not want its source code to be released to the world. But that is the reality of the open source SDN market, so it is something companies will have to think carefully about. Are the benefits of SDN for tech development worth going down the open source route? That’s a question only the companies themselves can answer.
Software-defined networking represents a huge disruption to the way businesses develop technology. It makes things faster, easier and more convenient during the process and from a management and scalability point of view going forward. There will be challenges - there always are when disruption is on the agenda - but if they can be overcome SDN could well usher in a new era of technological development.
F5 Friday: Python SDK for BIG-IP
We know programmability is important. Whether we’re talking about networking and SDN, or DevOps and APIs and templates, the most impactful technologies and trends today are those involving programmability. F5 is, as you’re no doubt aware, no stranger to programmability. Since 2001 when we introduced iControl (API) and iRules (data path programmability) we’ve continued to improve, enhance, and expand the ability of partners, customers, and our own engineers and architects to programmatically control and redefine the application delivery experience. With the emphasis today on automation and orchestration as a means for ops (and through it, the business) to scale more quickly and efficiently, programmability has never before been so critical to both operational and business success. Which means we can’t stop improving and expanding the ways in which you (and us, too) can manage, extend, and deliver the app services everyone needs to keep their apps secure, fast, and available. Now, iControl and iControl REST are both APIs built on open standards like SOAP, JSON, and HTTP. That means anyone who knows how to use an API can sit down and start coding up scripts that automate provisioning, configuration, and general management of not just BIG-IP (the platform) but the app services that get deployed on that platform. And we’re not against that at all. But we also recognize that not everyone has the time to get intimately familiar with iControl in either of its forms. So we’re pretty much always actively developing new (and improving existing) software development kits (SDKs) that enable folks to start doing more faster. But so are you. We’ve got a metric ton of code samples, libraries, and solutions here on DevCentral that have been developed by customers and partners alike. They’re freely available and are being updated, optimized, extended and re-used every single day. We think that’s a big part of what an open community is – it’s about developing and sharing solutions to some of the industry’s greatest challenges. And that’s what brings us to today’s exciting news. Well, exciting if you’re a Python user, at least, because we’re happy to point out the availability of the F5 BIG-IP Python SDK. And not just available to download and use, but available as an open source project that you can actively add, enhance, fork, and improve. Because open source and open communities produce some amazing things. This project implements an SDK for the iControl REST interface for BIG-IP, which lets you create, edit, update, and delete (CRUD) configuration objects on a BIG-IP. Documentation is up to date and available here. The BIG-IP Python SDK layers an object model over the API and makes it simpler to develop scripts or integrate with other Python-based frameworks. The abstraction is nice (and I say that with my developer hat on) and certainly makes the code more readable (and maintainable, one would assume) which should help eliminate some of the technical debt that’s incurred whenever you write software, including operational scripts and software. 
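To appreciate what the SDK buys you, it helps to remember what a raw iControl REST call looks like. Listing the pools on a BIG-IP without the SDK means assembling the HTTP request yourself, along these lines (a sketch only; the host and credentials are the same placeholders used in the sample below, and -k skips certificate validation for lab use):
curl -sk -u admin:somepassword https://bigip.example.com/mgmt/tm/ltm/pool
The SDK wraps those endpoints in Python objects, and installing it is typically a one-liner (assuming the PyPI package name f5-sdk; you can also install straight from the GitHub repo mentioned above):
pip install f5-sdk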
Seriously, here’s a basic sample from the documentation:
from f5.bigip import BigIP

# Connect to the BigIP
bigip = BigIP("bigip.example.com", "admin", "somepassword")

# Get a list of all pools on the BigIP and print their name and their
# members' name
pools = bigip.ltm.pools.get_collection()
for pool in pools:
    print pool.name
    for member in pool.members:
        print member.name

# Create a new pool on the BigIP
mypool = bigip.ltm.pools.pool.create(name='mypool', partition='Common')

# Load an existing pool and update its description
pool_a = bigip.ltm.pools.pool.load(name='mypool', partition='Common')
pool_a.description = "New description"
pool_a.update()

# Delete a pool if it exists
if bigip.ltm.pools.pool.exists(name='mypool', partition='Common'):
    pool_b = bigip.ltm.pools.pool.load(name='oldpool', partition='Common')
    pool_b.delete()
Isn’t that nice? Neat, understandable, readable. That’s some nice code right there (and I’m not even a Python fan, so that’s saying something). Don’t let the OpenStack reference fool you. While the first “user” of the SDK is OpenStack, it is stand-alone and can be used on its own or incorporated into other Python-based frameworks. So if you’re using Python (or were thinking about it) to manage, manipulate, or monitor your BIG-IPs, check this one out. Use it, extend it, improve it, and share it. Happy scripting!
Customizing OpenStack LBaaSv2 Using Enhanced Services Definitions
There are many load balancing configurations which have no direct implementation in the OpenStack LBaaSv2 specification. It is easy to directly customize BIG-IP traffic management using profiles, policies, and iRules, but LBaaSv2 has no way to apply these to BIG-IP virtual servers. To help users get the most from their BIG-IPs, F5 Networks extended its implementation of LBaaSv2 to add an Enhanced Service Definition (ESD) feature. With ESDs, you can deploy OpenStack load balancers customized for specific applications. How ESDs Work An ESD is a set of tags and values that define custom settings, typically one or more profiles, policies, or iRules which are applied to a BIG-IP virtual server. ESDs are stored in JSON files on the system running an F5 OpenStack LBaaSv2 agent. When the agent starts, it reads all ESD JSON files located in /etc/neutron/services/f5/esd/, ignoring files that are not valid JSON. A JSON file can contain multiple ESDs, and each ESD contains a set of predefined tags and values. The agent validates each tag and discards invalid tags. ESDs remain fixed in agent memory until an agent is restarted. You apply ESDs to virtual servers using LBaaSv2 L7 policy operations. When you create an LBaaSv2 L7 policy object, the agent checks if the policy name matches the name of an ESD. If it does, the agent applies the ESD to the virtual server associated with the policy. If the policy name does not match an ESD name, a standard L7 policy is created instead. Essentially, the F5 agent overloads the meaning of an L7 policy name. If an L7 policy name matches an ESD name, creating an L7 policy means “apply an ESD.” If an L7 policy name does not match an ESD name, creating an L7 policy means “create an L7 policy.” While you can apply multiple L7 policies, doing so will overwrite virtual server settings defined by previous ESDs. Deleting an L7 policy that matches an ESD will remove all ESD settings from the virtual server, and return the virtual server to its original state. For these reasons, it is best to define a single ESD that contains all settings you want to apply for a specific application, instead of applying multiple ESDs. Because L7 policies were introduced in the Mitaka release of OpenStack, you can only use ESDs with a Mitaka (or greater) version of the F5 LBaaSv2 agent. You cannot use ESDs with the Liberty version of the F5 agent. Release v9.3.0 is the first version of the agent that supports ESDs. Creating ESDs Create ESDs by editing files to define one or more ESDs in JSON format. Each file can contain any number of ESDs, and you can create any number of ESD files. Files can have any valid file name, but must end with a “.json” extension (e.g., mobile_app.json). Copy the ESD files to the /etc/neutron/services/f5/esd/ directory. ESDs must have a unique name across all files. If you define more than one ESD using the same name, the agent will retain one of the ESDs, but which one is undefined. Finally, restart the agent whenever you add or modify ESD files. 
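In practice, deploying an ESD is a short sequence like the sketch below (the directory and example file name come from this article; the agent's service name and restart command may differ depending on how it was packaged for your distribution):
# copy the ESD file into place and make sure it parses as valid JSON before restarting
cp mobile_app.json /etc/neutron/services/f5/esd/
python -m json.tool /etc/neutron/services/f5/esd/mobile_app.json
# restart the F5 LBaaSv2 agent so it re-reads the ESD files
systemctl restart f5-openstack-agent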
ESD files have this general format:
{
    "<ESD name>": {
        "<tag name>": "<tag value>",
        "<tag name>": "<tag value>",
        …
    },
    …
}
An example ESD file, demo.json, is installed with the agent and provides examples you can follow:
{
    "esd_demo_1": {
        "lbaas_ctcp": "tcp-mobile-optimized",
        "lbaas_stcp": "tcp-lan-optimized",
        "lbaas_cssl_profile": "clientssl",
        "lbaas_sssl_profile": "serverssl",
        "lbaas_irule": ["_sys_https_redirect"],
        "lbaas_policy": ["demo_policy"],
        "lbaas_persist": "hash",
        "lbaas_fallback_persist": "source_addr"
    },
    "esd_demo_2": {
        "lbaas_irule": ["_sys_https_redirect","_sys_APM_ExchangeSupport_helper"]
    }
}
ESD files must be formatted as valid JSON. If your file is not valid JSON, all ESDs in the file will be ignored. Consider using a JSON lint application (e.g., jsonlint.com) to validate your JSON files.
The following tags are supported for ESDs:
lbaas_ctcp – Specify a named TCP profile for clients. This tag has a single value. Example: tcp-mobile-optimized
lbaas_stcp – Specify a named TCP profile for servers. This tag has a single value. Example: tcp-lan-optimized
lbaas_cssl_profile – Specify a named client SSL profile to implement SSL/TLS offload. This can replace the use of, or override the life-cycle management of certificates and keys in LBaaSv2 SSL termination support. This tag has a single value. Example: clientssl
lbaas_sssl_profile – Specify a named server-side SSL profile for re-encryption of traffic towards the pool member servers. This tag should only be present once. Example: serverssl
lbaas_irule (multiple) – Specify a named iRule to attach to the virtual server. This tag can have multiple values. Any iRule priority must be defined within the iRule itself. Example: base_sorry_page, base_80_443_redirect
lbaas_policy (multiple) – Specify a named policy to attach to the virtual server. This tag can have multiple values. Any policy priority must be defined within the policy itself. All L7 content policies are applied first, then these named policies. Example: policy_asm_app1
lbaas_persist – Specify a named persistence profile for a virtual server. This tag has a single value. Example: hash
lbaas_fallback_persist – Specify a named fallback persistence profile for a virtual server. This tag has a single value. Example: source_addr
An ESD does not need to include every tag. Only included tags will be applied to a virtual server. During startup, the F5 LBaaSv2 agent will read all ESD JSON files (any file with a .json extension) and validate each ESD by ensuring:
The ESD file is in valid JSON format. Any invalid JSON file will be ignored.
The tag name is valid (i.e., one of the tags listed above).
The tag value is correctly defined: a single string (for most tags), or a comma-delimited list using JSON [] notation (only for the lbaas_irule and lbaas_policy tags).
The tag value (profile, policy, or iRule) exists in the Common partition.
Keep these rules in mind:
Any profile, policy, or iRule used in an ESD must be created in the Common partition.
Any profile, policy, or iRule must be pre-configured on your BIG-IP before re-starting the F5 LBaaSv2 agent.
Any tag that does not pass the validation steps above will be ignored. An ESD that contains a mix of valid and invalid tags will still be used, but only valid tags will be applied.
Using ESDs
Follow this workflow for using ESDs.
Pre-configure profiles, policies, and iRules on your BIG-IP.
Create an ESD in a JSON file located in /etc/neutron/services/f5/esd/.
Restart the F5 LBaaSv2 agent.
Create a Neutron load balancer with a listener (and pool, members, monitor).
Create a Neutron L7 policy object with a name parameter that matches your ESD name. You apply an ESD to a virtual server using L7 policy objects in LBaaSv2. Using the Neutron CLI, you can create an L7 policy like this: lbaas-l7policy-create --listener <name or ID> --name <ESD name> --action <action> The action parameter is ignored, but must be included for Neutron to accept the command. For example: lbaas-l7policy-create --listener vip1 --name mobile_app --action REJECT In this example, when the F5 agent receives the lbaas-l7policy-create command, it looks up the ESD name “mobile_app” in its table of ESDs. The agent applies each tag defined in the ESD named “mobile_app” to the virtual server created for the listener named “vip1”. The REJECT action is ignored. Use the L7 policy delete operation to remove an ESD: lbaas-l7policy-delete <ESD name or L7 policy ID> It is important to note that ESDs will overwrite any existing setting of a BIG-IP virtual server. For example, if you create an LBaaSv2 pool with cookie session persistence (which is applied to the virtual server fronting the pool) and then apply an ESD that uses hash persistence, cookie persistence will be replaced with hash persistence. Removing the ESD by deleting the L7 policy will restore the virtual server back to cookie persistence. Likewise, creating a pool with session persistence after applying an ESD will overwrite the ESD persist value, if defined. Order of operations is important – last one wins. Use Cases Following are examples of using ESDs to work around the limitations of LBaaSv2. Customizing Client-side SSL Termination LBaaSv2 supports client-side SSL termination by creating TLS listeners – listeners with TERMINATED_HTTPS protocol. Using TERMINATED_HTTPS in LBaaSv2 requires a certificate and key stored in the Barbican secret store service. While this satisfies many security requirements, you may want to use an SSL profile different from what is created with a Barbican certificate and key. To use a different profile, create a listener with an HTTPS protocol, and then create an L7 policy object using an ESD that has an lbaas_cssl_profile tag. For example: "lbaas_cssl_profile": "clientssl" Adding Server-side SSL Termination LBaaSv2 has no way of specifying server-side SSL termination, as TLS listeners only define a client-side SSL profile. You may need to also re-encrypt traffic between your BIG-IP and pool member servers. To add server-side SSL termination, use an ESD that includes an lbaas_sssl_profile tag. For example: "lbaas_sssl_profile": "serverssl" Customizing Session Persistence LBaaSv2 supports session persistence, though in the LBaaSv2 model persistence types are defined for pools not listeners. The F5 agent maps LBaaSv2 pool session persistence values to BIG-IP virtual servers associated with the pool. LBaaSv2 only supports three options (HTTP_COOKIE, APP_COOKIE, and SOURCE_ADDR), and these are mapped to either cookie or source_addr persistence on BIG-IPs. Many more persistence profiles are available on BIG-IPs, such as dest_addr, hash, ssl, sip, etc. To use these profiles, specify the lbaas_persist and lbaas_fallback_persist tags in an ESD. For example: "lbaas_persist": "hash", "lbaas_fallback_persist": "source_addr" It is good practice to define a fallback persistence profile as well in case a client does not support the persistence profile you specify. Adding iRules iRules are a powerful tool for customizing traffic management. 
As an example, you may want to re-write certificate values into request headers. Create an iRule that does this and use the tag lbaas_irule to add the iRule to a virtual server. Unlike other tags (except lbaas_policy), the lbaas_irule tag supports multiple values. You define values for lbaas_irule using JSON list notation (comma-delimited strings within brackets, []). Use brackets [] even if you only define a single iRule. Here are two examples: one ESD applies a single iRule, the other applies two iRules.
{
    "esd_demo_1": {
        "lbaas_irule": ["header_rewrite"]
    },
    "esd_demo_2": {
        "lbaas_irule": ["header_rewrite", "remove_response_header"]
    }
}
When using iRules, be sure to define the iRule priority within the iRule itself. The order of application of iRules is not guaranteed, though the agent makes a best effort by adding iRules in the order they are defined in the tag.
Adding Policies
The Mitaka release of OpenStack LBaaSv2 introduced L7 policies to manage traffic based on L7 content. The LBaaSv2 L7 policy and rule model may work for your needs. If not, create a policy on your BIG-IP and apply that policy to your LBaaSv2 listener using the lbaas_policy tag. As with the lbaas_irule tag, the lbaas_policy tag requires brackets surrounding one or more policy names. For example:
{
    "esd_demo_1": {
        "lbaas_policy": ["custom_policy1"]
    },
    "esd_demo_2": {
        "lbaas_policy": ["custom_policy1","custom_policy2"]
    }
}
Using TCP Profiles
ESDs allow you to define TCP profiles that determine how a server processes TCP traffic. These can be used to fine-tune TCP performance for specific applications. For example, if your load balancer fronts an application used for mobile clients, you can use the ‘tcp_mobile_optimized’ client profile to optimize TCP processing. Of course, that profile may not be optimal for traffic between your BIG-IP and the pool member servers, so you can specify different profiles for client-side and server-side traffic. Use the lbaas_ctcp tag for client profiles and the lbaas_stcp tag for server profiles. If you only include the client tag, lbaas_ctcp, and not the server tag, lbaas_stcp, the client profile is used for both. Following are two examples. In the first, esd_demo_1, the tcp profile will be used for both client-side and server-side traffic. In the second, esd_demo_2, the tcp_mobile_optimized profile is used for client-side traffic, and the tcp_lan_optimized profile is used for server-side traffic.
{
    "esd_demo_1": {
        "lbaas_ctcp": "tcp"
    },
    "esd_demo_2": {
        "lbaas_ctcp": "tcp_mobile_optimized",
        "lbaas_stcp": "tcp_lan_optimized"
    }
}
Helpful Hints
Use a JSON lint application to validate your ESD files. Forgetting a quote, including a trailing comma, or not balancing braces/brackets are common mistakes that cause JSON validation errors.
Restart the F5 LBaaSv2 agent (f5-openstack-agent) after adding or modifying ESD files.
Use a unique name for each ESD you define. ESD names are case sensitive.
Any profile, policy, or iRule referenced in your ESD must be pre-configured on your BIG-IP, and it must be created in the Common partition.
ESDs overwrite any existing settings. For example, the lbaas_cssl_profile replaces the SSL profile created for TLS listeners.
When using iRules and policies, remember that any iRule priority must be defined within the iRule itself.
If DEBUG logging is enabled, check the agent log, /var/log/neutron/f5-openstack-agent.log, for statements that report whether a tag is valid or invalid.
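To tie the hints together, here is a minimal end-to-end check using the esd_demo_1 ESD and the example listener name (vip1) from earlier in this article (the grep simply surfaces the agent's ESD validation messages; their exact wording varies by agent version):
# apply the ESD to a listener; the --action value is ignored but required by Neutron
neutron lbaas-l7policy-create --listener vip1 --name esd_demo_1 --action REJECT
# confirm the agent recognized the ESD (requires DEBUG logging, as noted above)
grep -i esd /var/log/neutron/f5-openstack-agent.log | tail -20
# deleting the matching L7 policy removes the ESD settings and restores the virtual server
neutron lbaas-l7policy-delete esd_demo_1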
OpenStack Heat Template Composition
Heat Orchestration Templates (HOT) for OpenStack's Heat service can quickly grow in length as users need to pile in ever more resources to define their applications. A simple Nova instance requires a small volume at first. Soon it needs a private network, a public network, a software configuration deployment, a load balancer, a deluxe karaoke machine. This means the HOT files get bloated and difficult to read for mere humans. Let's think about why that is....
A template defines the set of resources necessary and sufficient to describe an application. It also describes the dependencies that exist between the resources, if any. That way, Heat can manage the life-cycle of the application without you having to worry about it. Often, the resources in a long template need to be visually grouped together to alert the reader that these things depend on one another and share some common goal (e.g. create a network, subnet, and router). When templates get to this state, we need to start thinking about composition. Composition in HOT is done in a couple of ways. We will tackle the straightforward approach first. Much of this material is presented in the official documentation for template composition.
The Parent and Child Templates
We'll often refer to a parent template and a child template here. The parent template is the user's view into the Heat stack that defines our application. It's the entrypoint. A parent template contains all of the over-arching components of the application. The child, on the other hand, may describe a logically grouped chunk of that application. For example, one child template creates the Nova instances that launch BIG-IP devices while another child template creates all the network plumbing for those instances. Let's look at an example of this. Below is the parent template:
heat_template_test.yaml
heat_template_version: 2015-04-30
description: Setup infrastructure for testing F5 Heat Templates.
parameters: ve_cluster_ready_image: type: string constraints: - custom_constraint: glance.image ve_standalone_image: type: string constraints: - custom_constraint: glance.image ve_flavor: type: string constraints: - custom_constraint: nova.flavor use_config_drive: type: boolean default: false ssh_key: type: string constraints: - custom_constraint: nova.keypair sec_group_name: type: string admin_password: type: string label: F5 VE Admin User Password description: Password used to perform image import services root_password: type: string license: type: string external_network: type: string constraints: - custom_constraint: neutron.network resources: # Plumb the newtorking for the BIG-IP instances networking: type: heat_template_test_networks.yaml properties: external_network: { get_param: external_network } sec_group_name: { get_param: sec_group_name } # Wait for networking to come up and then launch two clusterable BIG-IPs two_bigip_devices: type: OS::Heat::ResourceGroup depends_on: networking properties: count: 2 resource_def: # Reference a child template in the same directly where the heat_template_test.yaml is located type: cluster_ready_ve_4_nic.yaml properties: ve_image: { get_param: ve_cluster_ready_image } ve_flavor: { get_param: ve_flavor } ssh_key: { get_param: ssh_key } use_config_drive: { get_param: use_config_drive } open_sec_group: { get_param: sec_group_name } admin_password: { get_param: admin_password } root_password: { get_param: root_password } license: { get_param: license } external_network: { get_param: external_network } mgmt_network: { get_attr: [networking, mgmt_network_name] } ha_network: { get_attr: [networking, ha_network_name] } network_1: { get_attr: [networking, client_data_network_name] } network_2: { get_attr: [networking, server_data_network_name] } # Wait for networking to come up and launch a standalone BIG-IP standalone_device: # Reference another child template in the local directory type: f5_ve_standalone_3_nic.yaml depends_on: networking properties: ve_image: { get_param: ve_standalone_image } ve_flavor: { get_param: ve_flavor } ssh_key: { get_param: ssh_key } use_config_drive: { get_param: use_config_drive } open_sec_group: { get_param: sec_group_name } admin_password: { get_param: admin_password } root_password: { get_param: root_password } license: { get_param: license } external_network: { get_param: external_network } mgmt_network: { get_attr: [networking, mgmt_network_name] } network_1: { get_attr: [networking, client_data_network_name] } network_2: { get_attr: [networking, server_data_network_name] } Now that's still fairly verbose, but its doing some heavy lifting for us. It is creating almost all of the networking needed to launch a set of clusterable BIG-IP devices and a single standalone BIG-IP device. It takes in parameters from the user such as ve_standalone_image and external_network and passes them along to the child that requires them. The child stack then receives those parameters in the same way the template above does, by defining a parameter. The parent template references the heat_template_test_networks.yaml template directly, expecting the file to be in the same local directory where the parent template is located. This is always created in the type field of a resource. In addition to relative paths, you can also reference another template with an absolute path, or URL. You can also see the group of responsibilities here. 
One child stack (heat_template_test_networks.yaml) is building networks, another (cluster_ready_ve_4_nic.yaml) is launching a set of BIG-IP devices ready for clustering and yet another (f5_ve_standalone_4_nic.yaml) is launching a standalone BIG-IP device. Yet the dependencies are apparent in the depends_on property and the intrinsic functions called within that resource (more on that later). You will not successfully launch the standalone device without having the networking in place first, thus the standalone_device resource is dependent upon the networking resource. This means we can easily send data into the networking stack (as a parameter) and now we must get data out to be passed into the standalone_device stack. Let's look at the networking template and see what it defines as its outputs. heat_template_test_networks.yaml heat_template_version: 2015-04-30 description: > Create the four networks needed for the heat plugin tests along with their subnets and connect them to the testlab router. parameters: external_network: type: string sec_group_name: type: string resources: # Four networks client_data_network: type: OS::Neutron::Net properties: name: client_data_network server_data_network: type: OS::Neutron::Net properties: name: server_data_network mgmt_network: type: OS::Neutron::Net properties: name: mgmt_network ha_network: type: OS::Neutron::Net properties: name: ha_network # And four accompanying subnets client_data_subnet: type: OS::Neutron::Subnet properties: cidr: 10.1.1.0/24 dns_nameservers: [10.190.0.20] network: { get_resource: client_data_network } server_data_subnet: type: OS::Neutron::Subnet properties: cidr: 10.1.2.0/24 dns_nameservers: [10.190.0.20] network: {get_resource: server_data_network } mgmt_subnet: type: OS::Neutron::Subnet properties: cidr: 10.1.3.0/24 dns_nameservers: [10.190.0.20] network: { get_resource: mgmt_network } ha_subnet: type: OS::Neutron::Subnet properties: cidr: 10.1.4.0/24 dns_nameservers: [10.190.0.20] network: { get_resource: ha_network } # Create router for testlab testlab_router: type: OS::Neutron::Router properties: external_gateway_info: network: { get_param: external_network } # Connect networks to router interface client_data_router_interface: type: OS::Neutron::RouterInterface properties: router: { get_resource: testlab_router } subnet: { get_resource: client_data_subnet } server_data_router_interface: type: OS::Neutron::RouterInterface properties: router: { get_resource: testlab_router } subnet: { get_resource: server_data_subnet } mgmt_router_interface: type: OS::Neutron::RouterInterface properties: router: { get_resource: testlab_router } subnet: { get_resource: mgmt_subnet } ha_router_interface: type: OS::Neutron::RouterInterface properties: router: { get_resource: testlab_router } subnet: { get_resource: ha_subnet } open_sec_group: type: OS::Neutron::SecurityGroup properties: name: { get_param: sec_group_name } rules: - protocol: icmp direction: ingress - protocol: icmp direction: egress - protocol: udp direction: ingress - protocol: udp direction: egress - protocol: tcp direction: ingress port_range_min: 1 port_range_max: 65535 - protocol: tcp direction: egress port_range_min: 1 port_range_max: 65535 outputs: mgmt_network_name: value: { get_attr: [mgmt_network, name] } ha_network_name: value: { get_attr: [ha_network, name] } client_data_network_name: value: { get_attr: [client_data_network, name] } server_data_network_name: value: { get_attr: [server_data_network, name] } You can see the logical grouping here, and this is where 
template composition shines. This simple template creates a security group, four networks, four subnets, and ties them together with a router. Even though the heat_template_test.yaml parent template uses this to build its networks for defining its application, another user may decide they want the same networking infrastructure, but they want four standalone devices and eight pairs of clusterable devices. Their only modification would be in the parent template, because the heat_template_test_networks.yaml template describes the set of networks those devices need to connect to. It is important to note that the above template is a fully functioning HOT template all by itself. You can launch it and it will build those four networks.
So how does the data get out of the networking template? The outputs section takes care of that. For attaching BIG-IP devices in the parent template to these networks, all we require is the network name, so the networking template kicks those back up to whoever is curious about such things. We saw the get_param function earlier, and now we can see the use of the get_attr function. In the two_bigip_devices resource, the parent template references the networking resource directly and then it accesses the client_data_network_name attribute (as seen below). This operation retrieves the network name for the client_data_network and passes it into the cluster_ready_ve_4_nic.yaml stack.
network_1: { get_attr: [networking, client_data_network_name] }
With that, we've successfully sent information into a child stack, retrieved it, then sent it into another child stack. This is very useful when working in large groups of users because my heat_template_test_networks.yaml template may benefit others. In time, you can build quite a collection of these concise HOT templates, then use a parent template to orchestrate them in many complex ways. Keep in mind, however, that HOT is declarative, meaning there are no looping constructs or if/else decisions to decide whether to create seven networks or four. For that, you would need to create two separate templates. As seen in the OS::Heat::ResourceGroup resource for two_bigip_devices, however, we can toggle the number of instances launched by that resource at stack creation time. We can simply make the count property of that resource a parameter and the user can decide how many clusterable BIG-IP devices should be launched.
To launch the above template with the python-heatclient:
heat stack-create -f heat_template_test.yaml -P "ve_cluster_ready_image=cluster_ready-BIGIP-11.6.0.0.0.401.qcow2;ve_standalone_image=standalone-BIGIP-11.6.0.0.0.401.qcow2;ve_flavor=m1.small;ssh_key=my_key;sec_group_name=bigip_sec_group;admin_password=pass;root_password=pass;license=XXXXX;external_network=public_net" bigip_test_stack
New Resource Types in Environment Files
The second way to do template composition is to define a brand new resource type in an environment file. The environment in which a Heat stack is launched affects the behavior of that stack. To learn more about environment files, check out the official documentation. We will use them here to define a new resource type. The cool thing about environments is that the child templates look exactly the same, and only one small change is needed in the parent template. It is the environment in which those stacks launch that changes. Below we are defining a Heat environment file and defining a new resource type in the resource_registry section.
heat_template_test_environment.yaml parameters: ve_cluster_ready_image: cluster_ready-BIG-IP-11.6.0.0.0.401.qcow2 ve_standalone_image: standalone-BIG-IP-11.6.0.0.0.401.qcow2 ve_flavor: m1.small ssh_key: my_key sec_group_name: bigip_sec_group admin_password: admin_password root_password: root_password license: XXXXX external_network: public_net resource_registry: OS::Neutron::FourNetworks: heat_template_test_networks.yaml F5::BIGIP::ClusterableDevice: cluster_ready_ve_4_nic.yaml F5::BIGIP::StandaloneDevice: f5_ve_standalone_3_nic.yaml The parent template will now reference the new resource types, which are merely mappings to the child templates. This means the parent template has three new resources to use which are not a part of the standard OpenStack Resource Types. The environment makes all this possible. heat_template_test_with_environment.yaml heat_template_version: 2015-04-30 description: Setup infrastructure for testing F5 Heat Templates. parameters: ve_cluster_ready_image: type: string constraints: - custom_constraint: glance.image ve_standalone_image: type: string constraints: - custom_constraint: glance.image ve_flavor: type: string constraints: - custom_constraint: nova.flavor use_config_drive: type: boolean default: false ssh_key: type: string constraints: - custom_constraint: nova.keypair sec_group_name: type: string admin_password: type: string label: F5 VE Admin User Password description: Password used to perform image import services root_password: type: string license: type: string external_network: type: string constraints: - custom_constraint: neutron.network default_gateway: type: string default: None resources: networking: # A new resource type: OS::Neutron::FourNetworks properties: external_network: { get_param: external_network } sec_group_name: { get_param: sec_group_name } two_bigip_devices: type: OS::Heat::ResourceGroup depends_on: networking properties: count: 2 resource_def: # A new resource type: F5::BIGIP::ClusterableDevice properties: ve_image: { get_param: ve_cluster_ready_image } ve_flavor: { get_param: ve_flavor } ssh_key: { get_param: ssh_key } use_config_drive: { get_param: use_config_drive } open_sec_group: { get_param: sec_group_name } admin_password: { get_param: admin_password } root_password: { get_param: root_password } license: { get_param: license } external_network: { get_param: external_network } mgmt_network: { get_attr: [networking, mgmt_network_name] } ha_network: { get_attr: [networking, ha_network_name] } network_1: { get_attr: [networking, client_data_network_name] } network_2: { get_attr: [networking, server_data_network_name] } standalone_device: # A new resource type: F5::BIGIP::StandaloneDevice depends_on: networking properties: ve_image: { get_param: ve_standalone_image } ve_flavor: { get_param: ve_flavor } ssh_key: { get_param: ssh_key } use_config_drive: { get_param: use_config_drive } open_sec_group: { get_param: sec_group_name } admin_password: { get_param: admin_password } root_password: { get_param: root_password } license: { get_param: license } external_network: { get_param: external_network } mgmt_network: { get_attr: [networking, mgmt_network_name] } network_1: { get_attr: [networking, client_data_network_name] } network_2: { get_attr: [networking, server_data_network_name] } And here is how we utilize those new resources defined in our environment file. Note that we no longer define all the parameters in the cli call to the Heat client (with the -P flag) because it is set in our environment file. 
heat stack-create -f heat_template_test_with_environment.yaml -e heat_template_test_environment.yaml test_env_stack Resources: For more information on Heat Resource Types, along with the possible inputs and outputs: http://docs.openstack.org/developer/heat/template_guide/openstack.html For more examples of how to prepare a BIG-IP image for booting in OpenStack and for clustering those clusterable instances: https://github.com/F5Networks/f5-openstack-heat Reference Templates: Below is the two child templates used in the above examples. We developed these on OpenStack Kilo with a BIG-IP image prepared from our github repo above. f5_ve_standalone_3_nic.yaml heat_template_version: 2015-04-30 description: This template deploys a standard f5 standalone VE. parameters: open_sec_group: type: string default: open_sec_group ve_image: type: string constraints: - custom_constraint: glance.image ve_flavor: type: string constraints: - custom_constraint: nova.flavor use_config_drive: type: boolean default: false ssh_key: type: string constraints: - custom_constraint: nova.keypair admin_password: type: string hidden: true root_password: type: string hidden: true license: type: string hidden: true external_network: type: string default: test constraints: - custom_constraint: neutron.network mgmt_network: type: string default: test constraints: - custom_constraint: neutron.network network_1: type: string default: test constraints: - custom_constraint: neutron.network network_1_name: type: string default: network-1.1 network_2: type: string default: test constraints: - custom_constraint: neutron.network network_2_name: type: string default: network-1.2 default_gateway: type: string default: None resources: mgmt_port: type: OS::Neutron::Port properties: network: { get_param: mgmt_network } security_groups: [ { get_param: open_sec_group }] network_1_port: type: OS::Neutron::Port properties: network: { get_param: network_1 } security_groups: [ { get_param: open_sec_group }] floating_ip: type: OS::Neutron::FloatingIP properties: floating_network: { get_param: external_network } port_id: { get_resource: mgmt_port } network_2_port: type: OS::Neutron::Port properties: network: {get_param: network_2 } security_groups: [ { get_param: open_sec_group }] ve_instance: type: OS::Nova::Server properties: image: { get_param: ve_image } flavor: { get_param: ve_flavor } key_name: { get_param: ssh_key } config_drive: { get_param: use_config_drive } networks: - port: {get_resource: mgmt_port} - port: {get_resource: network_1_port} - port: {get_resource: network_2_port} user_data_format: RAW user_data: str_replace: params: __admin_password__: { get_param: admin_password } __root_password__: { get_param: root_password } __license__: { get_param: license } __default_gateway__: { get_param: default_gateway } __network_1__: { get_param: network_1 } __network_1_name__: { get_param: network_1_name } __network_2__: { get_param: network_2 } __network_2_name__: { get_param: network_2_name } template: | { "bigip": { "f5_ve_os_ssh_key_inject": "true", "change_passwords": "true", "admin_password": "__admin_password__", "root_password": "__root_password__", "license": { "basekey": "__license__", "host": "None", "port": "8080", "proxyhost": "None", "proxyport": "443", "proxyscripturl": "None" }, "modules": { "auto_provision": "false", "ltm": "nominal" }, "network": { "dhcp": "true", "selfip_prefix": "selfip-", "vlan_prefix": "network-", "routes": [ { "destination": "0.0.0.0/0.0.0.0", "gateway": "__default_gateway__" } ], "interfaces": { 
"1.1": { "dhcp": "true", "selfip_allow_service": "default", "selfip_name": "selfip.__network_1_name__", "selfip_description": "Self IP address for BIG-IP __network_1_name__ network", "vlan_name": "__network_1_name__", "vlan_description": "VLAN for BIG-IP __network_1_name__ network traffic", "is_failover": "false", "is_sync": "false", "is_mirror_primary": "false", "is_mirror_secondary": "false" }, "1.2": { "dhcp": "true", "selfip_allow_service": "default", "selfip_name": "selfip.__network_2_name__", "selfip_description": "Self IP address for BIG-IP __network_2_name__ network", "vlan_name": "__network_2_name__", "vlan_description": "VLAN for BIG-IP __network_2_name__ network traffic", "is_failover": "false", "is_sync": "false", "is_mirror_primary": "false", "is_mirror_secondary": "false" } } } } } outputs: ve_instance_name: description: Name of the instance value: { get_attr: [ve_instance, name] } ve_instance_id: description: ID of the instance value: { get_resource: ve_instance } floating_ip: description: The Floating IP address of the VE value: { get_attr: [floating_ip, floating_ip_address] } mgmt_ip: description: The mgmt IP address of f5 ve instance value: { get_attr: [mgmt_port, fixed_ips, 0, ip_address] } mgmt_mac: description: The mgmt MAC address of f5 VE instance value: { get_attr: [mgmt_port, mac_address] } mgmt_port: description: The mgmt port id of f5 VE instance value: { get_resource: mgmt_port } network_1_ip: description: The 1.1 Nonfloating SelfIP address of f5 ve instance value: { get_attr: [network_1_port, fixed_ips, 0, ip_address] } network_1_mac: description: The 1.1 MAC address of f5 VE instance value: { get_attr: [network_1_port, mac_address] } network_1_port: description: The 1.1 port id of f5 VE instance value: { get_resource: network_1_port } network_2_ip: description: The 1.2 Nonfloating SelfIP address of f5 ve instance value: { get_attr: [network_2_port, fixed_ips, 0, ip_address] } network_2_mac: description: The 1.2 MAC address of f5 VE instance value: { get_attr: [network_2_port, mac_address] } network_2_port: description: The 1.2 port id of f5 VE instance value: { get_resource: network_2_port } cluster_ready_ve_4_nic.yaml heat_template_version: 2015-04-30 description: This template deploys a standard f5 VE ready for clustering. parameters: open_sec_group: type: string default: open_sec_group ve_image: type: string label: F5 VE Image description: The image to be used on the compute instance. constraints: - custom_constraint: glance.image ve_flavor: type: string label: F5 VE Flavor description: Type of instance (flavor) to be used for the VE. constraints: - custom_constraint: nova.flavor use_config_drive: type: boolean label: Use Config Drive description: Use config drive to provider meta and user data. default: false ssh_key: type: string label: Root SSH Key Name description: Name of key-pair to be installed on the instances. constraints: - custom_constraint: nova.keypair admin_password: type: string label: F5 VE Admin User Password description: Password used to perform image import services root_password: type: string label: F5 VE Root User Password description: Password used to perform image import services license: type: string label: Primary VE License Base Key description: F5 TMOS License Basekey external_network: type: string label: External Network description: Network for Floating IPs default: test constraints: - custom_constraint: neutron.network mgmt_network: type: string label: VE Management Network description: Management Interface Network. 
default: test constraints: - custom_constraint: neutron.network ha_network: type: string label: VE HA Network description: HA Interface Network. default: test constraints: - custom_constraint: neutron.network network_1: type: string label: VE Network for the 1.2 Interface description: 1.2 TMM network. default: test constraints: - custom_constraint: neutron.network network_1_name: type: string label: VE Network Name for the 1.2 Interface description: TMM 1.2 network name. default: data1 network_2: type: string label: VE Network for the 1.3 Interface description: 1.3 TMM Network. default: test constraints: - custom_constraint: neutron.network network_2_name: type: string label: VE Network Name for the 1.3 Interface description: TMM 1.3 network name. default: data2 default_gateway: type: string label: Default Gateway IP default: None description: Upstream Gateway IP Address for VE instances resources: mgmt_port: type: OS::Neutron::Port properties: network: { get_param: mgmt_network } security_groups: [{ get_param: open_sec_group }] network_1_port: type: OS::Neutron::Port properties: network: {get_param: network_1 } security_groups: [{ get_param: open_sec_group }] floating_ip: type: OS::Neutron::FloatingIP properties: floating_network: { get_param: external_network } port_id: { get_resource: mgmt_port } network_2_port: type: OS::Neutron::Port properties: network: {get_param: network_2 } security_groups: [{ get_param: open_sec_group }] ha_port: type: OS::Neutron::Port properties: network: {get_param: ha_network} security_groups: [{ get_param: open_sec_group }] ve_instance: type: OS::Nova::Server properties: image: { get_param: ve_image } flavor: { get_param: ve_flavor } key_name: { get_param: ssh_key } config_drive: { get_param: use_config_drive } networks: - port: {get_resource: mgmt_port} - port: {get_resource: ha_port} - port: {get_resource: network_1_port} - port: {get_resource: network_2_port} user_data_format: RAW user_data: str_replace: params: __admin_password__: { get_param: admin_password } __root_password__: { get_param: root_password } __license__: { get_param: license } __default_gateway__: { get_param: default_gateway } __network_1_name__: { get_param: network_1_name } __network_2_name__: { get_param: network_2_name } template: | { "bigip": { "ssh_key_inject": "true", "change_passwords": "true", "admin_password": "__admin_password__", "root_password": "__root_password__", "license": { "basekey": "__license__", "host": "None", "port": "8080", "proxyhost": "None", "proxyport": "443", "proxyscripturl": "None" }, "modules": { "auto_provision": "false", "ltm": "nominal" }, "network": { "dhcp": "true", "selfip_prefix": "selfip-", "routes": [ { "destination": "0.0.0.0/0.0.0.0", "gateway": "__default_gateway__" } ], "interfaces": { "1.1": { "dhcp": "true", "selfip_allow_service": "default", "selfip_name": "selfip.HA", "selfip_description": "Self IP address for BIG-IP Cluster HA subnet", "vlan_name": "vlan.HA", "vlan_description": "VLAN for BIG-IP Cluster HA traffic", "is_failover": "true", "is_sync": "true", "is_mirror_primary": "true", "is_mirror_secondary": "false" }, "1.2": { "dhcp": "true", "selfip_allow_service": "default", "selfip_name": "selfip.__network_1_name__", "selfip_description": "Self IP address for BIG-IP __network_1_name__", "vlan_name": "__network_1_name__", "vlan_description": "VLAN for BIG-IP __network_1_name__ traffic", "is_failover": "false", "is_sync": "false", "is_mirror_primary": "false", "is_mirror_secondary": "false" }, "1.3": { "dhcp": "true", 
"selfip_allow_service": "default", "selfip_name": "selfip.__network_2_name__", "selfip_description": "Self IP address for BIG-IP __network_2_name__", "vlan_name": "__network_2_name__", "vlan_description": "VLAN for BIG-IP __network_2_name__ traffic", "is_failover": "false", "is_sync": "false", "is_mirror_primary": "false", "is_mirror_secondary": "false" } } } } } outputs: ve_instance_name: description: Name of the instance value: { get_attr: [ve_instance, name] } ve_instance_id: description: ID of the instance value: { get_resource: ve_instance } mgmt_ip: description: The mgmt IP address of f5 ve instance value: { get_attr: [mgmt_port, fixed_ips, 0, ip_address] } mgmt_mac: description: The mgmt MAC address of f5 VE instance value: { get_attr: [mgmt_port, mac_address] } mgmt_port: description: The mgmt port id of f5 VE instance value: { get_resource: mgmt_port } ha_ip: description: The HA IP address of f5 ve instance value: { get_attr: [ha_port, fixed_ips, 0, ip_address] } ha_mac: description: The HA MAC address of f5 VE instance value: { get_attr: [ha_port, mac_address] } ha_port: description: The ha port id of f5 VE instance value: { get_resource: ha_port } network_1_ip: description: The 1.2 Nonfloating SelfIP address of f5 ve instance value: { get_attr: [network_1_port, fixed_ips, 0, ip_address] } network_1_mac: description: The 1.2 MAC address of f5 VE instance value: { get_attr: [network_1_port, mac_address] } network_1_port: description: The 1.2 port id of f5 VE instance value: { get_resource: network_1_port } network_2_ip: description: The 1.3 Nonfloating SelfIP address of f5 ve instance value: { get_attr: [network_2_port, fixed_ips, 0, ip_address] } network_2_mac: description: The 1.3 MAC address of f5 VE instance value: { get_attr: [network_2_port, mac_address] } network_2_port: description: The 1.3 port id of f5 VE instance value: { get_resource: network_2_port } floating_ip: description: Floating IP address of VE servers value: { get_attr: [ floating_ip, floating_ip_address ] }1.4KViews0likes1CommentOpenStack Primer
Emerging technologies move at a rapid pace. Momentum created through partnerships between researchers, vendors, experts, practitioners, and adopters is vital to their success, and such advances eventually reach an adoption curve where they appeal to the masses. Exactly at that point, there is a need to explain things in a simple manner. It is vitally important to explain the technological vocabulary used for discussion and development within the community of collaborators, which enables self-teaching and self-learning, so that those just beginning to look at these emerging areas do not get lost and miss out on the massive transformation. OpenStack is going through such a phase. This article is our attempt to provide a simple introduction to OpenStack, leaving the reader with resources to follow up and continue the education. Our objective is to briefly cover the history of OpenStack and help the reader acquire the vocabulary of the language the OpenStack community often speaks.

In terms of expected audience, this article assumes you are part of an IT organization or a software development organization and deploy software that is developed by you and/or provided by others (commercial vendor or open source). It also assumes you have heard about, and have some familiarity with, cloud software platforms like Amazon Web Services (AWS) and server virtualization technologies, and are aware of the benefits they provide.

This article should help you achieve the following goals:
- Begin to understand what OpenStack is about
- Manage an introductory conversation within your company, getting others in your team or other departments interested in OpenStack
- Establish credibility within your organization that you are familiar with OpenStack, and that you or your team can begin to dive deep

Introduction

OpenStack (http://www.openstack.org) is a software platform that enables any organization to transform its IT Data Center into programmable infrastructure. OpenStack is primarily a services provisioning layer accessible via REST-based (http://en.wikipedia.org/wiki/REST) Application Programming Interfaces (APIs - http://en.wikipedia.org/wiki/Application_programming_interface). A customer can use OpenStack for their in-house Data Center while also using a Public Cloud like AWS for other needs; it all depends on the business needs and the technology choices the customer makes. Generally, the most significant motivation to deploy OpenStack in-house is to get AWS-like behavior within your own Data Center, where a developer or an IT person can dynamically provision any infrastructure service - e.g., spin up a server with a virtual machine running Linux, create a new network subnet with a specific VLAN, create a load balancer with a virtual IP address, and so on - without having to cut an IT ticket and wait for someone else to perform those operations (a brief CLI sketch of this follows just below). The main business goal is self-service provisioning of infrastructure services, which requires a programmatic interface in front of the infrastructure.

OpenStack started within Rackspace and, in collaboration with NASA, the project was launched as an open source initiative. The project is now managed by the OpenStack Foundation, with many entities on the Foundation's Board. OpenStack development is 100% community driven, with no single vendor or organization influencing the decisions and the roadmap.
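To make the self-service provisioning idea concrete, here is a minimal command-line sketch of the kind of work a developer could perform without opening a ticket. This is illustrative only: it uses the Havana-era python-novaclient and python-neutronclient CLIs, and the names and placeholders shown (web_net, web_pool, cirros, m1.small, <NET_ID>, <SUBNET_ID>) are assumptions for the example, not values taken from this article.

# Create a network and a subnet (Neutron)
neutron net-create web_net
neutron subnet-create web_net 10.0.1.0/24 --name web_subnet

# Boot a Linux server attached to that network (Nova)
nova boot --image cirros --flavor m1.small --nic net-id=<NET_ID> web01

# Create a load-balancer pool, add a member, and create a virtual IP (Neutron LBaaS v1)
neutron lb-pool-create --name web_pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id <SUBNET_ID>
neutron lb-member-create --address 10.0.1.11 --protocol-port 80 web_pool
neutron lb-vip-create --name web_vip --protocol HTTP --protocol-port 80 --subnet-id <SUBNET_ID> web_pool

Each of these commands is a thin wrapper over the corresponding REST API, which is the point: the same operations can be scripted, or driven by an orchestration service such as Heat, instead of being raised as tickets.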
All OpenStack software is developed using Python and is available under a non-commercial software license. Any customer can download the Community Edition of OpenStack and run it on a Linux platform. Vendors including Canonical, Red Hat, HP, IBM, Cisco, Mirantis, MetaCloud, Persistent Systems, and many others take the OpenStack Community Edition and test it for other Linux variants and also make it available as a Vendor Edition of OpenStack. Some of these vendors also add their deployment software (E.g. Mirantis FUEL, Canonical JUJU etc.) to make OpenStack installations easy, and provide systems integration and consulting services. Customers have a choice to use the Community or the Vendor edition of OpenStack along with additional consulting and support services. F5 joined the OpenStack Foundation as a Silver level sponsor in October 2013. F5 has been engaging with the community since early 2012 and decided to commit to developing OpenStack integrations in early 2014. In subsequent articles we will explain our integrations in detail. OpenStack Releases The OpenStack software platform is released twice a year, under a 6 month long development cycle. The most recent release was called HAVANA, launched in November 2013. The upcoming release in May 2014 is called ICEHOUSE. Notable recent releases were: GRIZZLY, FOLSOM, ESSEX F5 has committed to developing its integration for the HAVANA release and will update its support for future releases. OpenStack Services The OpenStack platform currently supports provisioning of the following services: Compute, Network, Storage, Identity and Orchestration (and more). The OpenStack development community uses code words for each of these areas. Most of the OpenStack collateral and conversations use these code words (instead of saying Compute, Network and such). These code words are part of the OpenStack vocabulary and to become familiar and dive deep on OpenStack you need to be aware of them. The list given below represents the OpenStack project names and the service areas they generally align with. You will also see some related AWS service names in parenthesis below, provided for comparison and education purposes only. Not all of the OpenStack services map directly to their AWS counterparts. In some cases the APIs are similar in concept, while in other cases they are different even when solving a similar problem (E.g. AWS Elastic Load Balancing service APIs and OpenStack Load Balancer As a Service APIs are significantly different). Neutron = Networking (L2/L3, DHCP, VPN, LB, FW etc.) Nova = Compute (Similar to AWS EC2) Cinder = Block Storage (Similar to AWS EBS) Swift = Object Store (Similar to AWS S3) Keystone = Identity (Similar to AWS IAM) Glance = Image Service (Similar to AWS AMIs) Ceilometer = Metering (mostly for Billing purposes, not monitoring) Heat = Deployment and Auto Scaling (Similar to AWS CloudFormation and Auto Scaling) Horizon = Single-pane-of-glass Management GUI (Similar to AWS Console) Tempest = Test Framework As a customer, when you ask your networking vendor about their integration with Neutron, you are asking for their integration with the Network provisioning services of OpenStack. You might also ask a vendor about their support for Nova. That could mean asking if their software can be installed on a VM running in the Compute layer of OpenStack. All these services, such as Neutron and Nova, are programmable via REST APIs. 
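As an illustration of what "programmable via REST APIs" means in practice, the sketch below requests a token from Keystone (the v2.0 API, common in the Havana timeframe) and then calls the Neutron and Nova APIs with plain curl. The controller hostname, tenant name, and credentials are placeholders; the ports shown (5000, 9696, 8774) are the default API ports, and your deployment may differ.

# 1. Authenticate against Keystone and obtain a token (v2.0 API)
curl -s -X POST http://controller:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "demo", "password": "secret"}}}'
# The JSON response contains access.token.id (the token) and access.token.tenant.id

# 2. List networks via the Neutron API
curl -s http://controller:9696/v2.0/networks -H "X-Auth-Token: $TOKEN"

# 3. List servers via the Nova API (the tenant ID is part of the URL in the v2 API)
curl -s http://controller:8774/v2/$TENANT_ID/servers -H "X-Auth-Token: $TOKEN"

Every service listed above fronts an endpoint of this kind; the command-line clients and the Horizon dashboard are simply consumers of the same APIs.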
Your goal is to complement (or replace) point-and-click manual provisioning with automated API-driven provisioning of infrastructure services. By asking your infrastructure vendors for their OpenStack support, you are seeking confirmation on whether their technologies are available to be programmed over a OpenStack REST API, depending on the service, and in some cases can their software run in a VM (e.g. KVM instance) provisioned using the OpenStack Nova API. For most of these services, the OpenStack platform supports a Plug-In and Driver architecture. This allows the vendors to support their own technology within the OpenStack platform. For example, the OpenStack community distribution supports creating a L2 switch-based network using the Apache Open vSwitch software. However, using the Plug-In and Driver architecture, commercial vendors like Cisco have created support for their commercial switching products. As a result of this, a Cisco customer can now use OpenStack Neutron L2/L3 Plug-in to provision a new subnet into a Cisco switch that they already had installed in their network - thus making their Cisco networking layer programmable via a standard vendor-neutral REST API. Each vendor can decide the level of interoperability they want to provide between the capabilities they natively support and the capability that OpenStack allows to be programmed through it's APIs. The vendor can add extensions to the Plug-ins to expose additional functionality that is not yet part of the OpenStack community ratified API specification. Depending on how the community progresses in its development from release to release, some of these extensions could become part of the standard specifications, at which point they would become the official OpenStack APIs. In terms of F5's integration with OpenStack, the most important service is Neutron. The community has defined APIs for various network services, including Load Balancer As A Service (LBAAS). Additionally, the Neutron layer also provides default functionality to support services like DHCP and DNS. F5 has committed to developing LBAAS Plug-ins for the OpenStack HAVANA release. When the Plug-ins are deployed with an OpenStack distribution, a customer will be able to create a load balancer instance on a HW or VE BIG-IP and provision the following elements: VIP, Server Pool, Health Monitor, Load Balancing Method. A follow-up blog post will provide details of this integration. F5 BIG-IP and BIG-IQ provide much more functionality than what OpenStack LBAAS allows to be provisioned. F5 will choose to deliver additional capabilities as Plug-In extensions when requested by our customers. OpenStack Landscape OpenStack complements as well as creates open source alternatives in addition to commercial cloud software platforms. OpenStack creates choice for Enterprises and Service Providers to make provisioning of their existing or new Data Center infrastructure programmable via REST APIs. AWS and other Public Clouds are not necessarily a direct competition to the OpenStack cloud software platform used for in-house Data Center programmability. Vendors such as HP and Rackspace operate a Public Cloud using OpenStack software platform, which creates choice for customers when considering a Public Cloud solution for their business needs. Conclusion OpenStack is a rapidly emerging software platform, which is increasingly becoming stable release after release. 
As a result, we expect F5 customers to consider it for making their Data Center infrastructure programmable via REST APIs. Benefits of OpenStack include its open source nature, community-driven development, and a growing vendor-supported ecosystem. F5 is committed to staying involved in the OpenStack community and to launching integrations and solutions based on customer needs as they look to integrate F5's BIG-IP and BIG-IQ services into their OpenStack environments. You can visit F5's alliances page to learn about our OpenStack support.

References
http://en.wikipedia.org/wiki/OpenStack
http://www.rackspace.com/cloud/openstack/
http://www.cisco.com/web/solutions/openstack/index.html
http://www.redhat.com/openstack/
http://www8.hp.com/us/en/business-solutions/solution.html?compURI=1421776#.U0hKPtwuNkI
http://www.mirantis.com
http://www.persistent.com/technology/cloud-computing/OpenStack
http://www.metacloud.com

OpenStack in a backpack – how to create a demo environment for F5 Heat Plugins, part 1
Part 1 – Host environment preparation

If you are a Network Engineer, one day you may be asked to roll out an F5 ADC in an OpenStack environment. But hey, OpenStack may look new and scary to you, as it was for me. You've probably learned on DevCentral (https://devcentral.f5.com/s/wiki/openstack.openstack_heat.ashx) that Heat is the primary OpenStack orchestration service, and that Heat and the F5 Heat Plugins are useful for rolling out and configuring F5 BIG-IP Virtual Edition (VE) on top of an OpenStack environment. Let's say you've decided to get acquainted with Heat and the F5 Heat Plugins, but you do not have access to any OpenStack test or development environment, and/or you do not have admin rights to install the F5 Heat Plugins in OpenStack. If so, this series of articles is for you.

Please note that OpenStack is a moving target. At the time this article was written, Mitaka was the supported OpenStack release. See https://releases.openstack.org/ for further information.

This is what you need to create your small, single-host OpenStack lab environment:
- Some hardware or a virtual machine with >=4 cores, 16G of RAM, and Intel VT-x/EPT or AMD-V/RVI support. In my case, it was an old Dell laptop that the IT department intended to scrap. ;-)
- A decent Internet connection.
- In case of a bare-metal deployment, a home router. In my case it was my old WRT54GL.
- A couple of VE 45-day evaluation licenses; ask your F5 Partner or F5 representatives for these.

To sum up, it cost me $0 to build my own OpenStack lab environment.

This is what you will get:
- A fully functional OpenStack Mitaka environment
- Heat and the F5 Heat Plugins
- F5 VE as a guest machine
- Direct access to the F5 GUI, CLI, and Virtual Servers from your laptop or any other device connected to the home router
- The possibility to test the F5 VE setup from inside or outside of OpenStack

Naming convention:
[HW] – actions to be done for the bare-metal deployment
[VMware] – actions to be done for the VMware guest machine deployment
no square bracket – do it regardless of whether it's a physical or VMware deployment

Host Preparation

Download and Install the Operating System

Check if you have Intel VT-x/EPT or AMD-V/RVI turned on in your BIOS. Sample screenshot:

Download a CentOS v7 64-bit minimal image: http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso

[HW] Store the CentOS image on a USB stick. The simplest method is to use some other Linux box:

dd if=CentOS-7-x86_64-Minimal-1511.iso of=/dev/sdb

In the above command, /dev/sdb is a USB device; your USB device name will most likely be different. To find out the USB device name, observe /var/log/syslog or /var/log/messages while connecting the USB drive to the Linux machine. Further info at: https://wiki.centos.org/HowTos/InstallFromUSBkey

[VMware] Create a custom VMware Workstation virtual machine. Set everything to default except: Guest Operating system: RedHat Enterprise Linux 7 64-bit; Number of processors: 4; Memory for this Virtual Machine [RAM]: 16384MB (16G); Use Network address translation. You should review and alter the VMware network configuration. Let's assume that the NAT network is 10.128.10.0/24, where the default route and DNS are set to 10.128.10.2; turn off DHCP.
Maximum disk size: 200GB IMPORTANT: Turn on a nested VM support by: Edit virtual machine settings -> Processors -> enable Virtualize Intel VT-x/EPT or AMD-V/RVI NOTE: If you see different GUI options, read this article: https://communities.vmware.com/docs/DOC-8970 For the sake of performance, disable memory page trimming: Install CentOS.Boot the USB drive [HW] or connect the CentoOS .iso image to the VMware guest machine [VMware]. Configure the OS Choose English as the language. Set your local time zone. Correct time will ease troubleshooting. Disable a Security Policy: Network settings: Set your hostname to: mitaka.f5demo.com and configure the network interface: Turn on “Automatically connect to this network when it is available”: Configure IPv4 address, mask, default route and DNS: Disable IPv6: Check if the network interface is switched on: [optional] Disable KDUMP (from the main screen) Open INSTALLATION DESTINATION from the main screen and choose “Partition disk automatically”: Hit “Begin Installation”. Set root password to “default”. You need to hit “DONE” button twice to make CentOS accept the weak password. A user account is not required. CentOS will prompt you to reboot. Permit ssh root login. Uncomment PermitRootLogin yes at /etc/ssh/sshd_config Reboot the ssh service systemctl restart sshd.service Configure the host Now, you should be able to log in to your OpenStack host with an ssh client (e.g., putty or iTerm2), so you can copy/paste the rest of the commands. Generate ssh keyset: ssh-keygen (empty passwords recommended) [HW] By default, the CentOS will suspend on a lid close. To turn this off: In /etc/systemd/logind.conf , set HandleLidSwitch=ignore Further details at http://askubuntu.com/questions/360615/ubuntu-server-13-10-now-goes-to-sleep-when-closing-laptop-lid Turn off Network Manager: systemctl disable NetworkManager systemctl stop NetworkManager Set the hostnames as shown below: [root@mitaka ~]# cat /etc/hosts |grep mitaka 10.128.10.3 mitaka.f5demo.com mitaka and cat /etc/hostname mitaka.f5demo.com Fill in the /etc/sysconfig/network as depicted: # Created by anaconda NETWORKING=yes HOSTNAME=mitaka.f5demo.com GATEWAY=10.128.10.2 Where 10.128.10.2 is a gateway IP address. Switch off selinux: sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config Set English locales: [root@openstack01 ~]# cat /etc/environment LANG=en_US.utf-8 LC_ALL=en_US.utf-8 Update the system: yum update -y Reboot the system: reboot Next, you should be ready to install OpenStack RDO. Stay tuned for instructions in the next installment of OpenStack in a Backpack!775Views0likes4CommentsLightboard Lessons: BIG-IP in the private cloud
In this episode, Jason transitions from last week's public cloud overview to discuss private cloud options.

Resources
- BIG-IP deployments using Ansible in private and public clouds
- Flashback Friday: The Many Faces of Cloud
- F5 Private Cloud Solutions Overview
- DevCentral OpenStack Articles
- OpenStack CloudDocs
- DevCentral VMware Articles
- DevCentral Cisco Articles
- F5 GitHub Repository (OpenStack drivers, agents, templates, etc.)
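If you want to go straight from the resources above to something hands-on, one possible starting point (a sketch, not an official procedure) is to pull down the F5 OpenStack Heat templates repository listed above and launch one of its templates with your own environment file. The template and environment file names below are placeholders you would replace with actual files from the repository and your deployment.

# Clone the F5 OpenStack Heat templates repository
git clone https://github.com/F5Networks/f5-openstack-heat.git
cd f5-openstack-heat

# Launch a chosen template with your own environment file (names are placeholders)
heat stack-create -f <template>.yaml -e <environment>.yaml my_bigip_stack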