Release Announcement: F5 OpenStack Product Suite
Release Announcement, 12 July 2016

We are pleased to announce that F5's OpenStack product suite has been functionally tested and released for use in OpenStack Mitaka. Information regarding the Mitaka releases, as well as releases of the F5 OpenStack product suite for Kilo and Liberty, is provided below.

F5 LBaaSv1
Release v9.0.1 introduces functionality in OpenStack Mitaka for Neutron LBaaSv1.

F5 LBaaSv2
Release v9.0.1 of the f5-openstack-agent and f5-openstack-lbaasv2-driver introduces functionality in OpenStack Mitaka for Neutron LBaaSv2. Release v8.0.4 contains quality improvements for Neutron LBaaSv2 in OpenStack Liberty.

F5 OpenStack Heat Plugins
Release v9.0.1 of the F5 Heat Plugins introduces functionality in OpenStack Mitaka. Release v9.0.2 introduces two new resource plugins used in conjunction with clustering: F5::Cm::Sync and F5::Sys::Save. Releases v7.0.4 and v8.0.3 introduce the F5::Cm::Sync and F5::Sys::Save plugins for OpenStack Kilo and Liberty, respectively.

Templates
Release v9.0.1 introduces functionality for the F5 Heat templates in OpenStack Mitaka. Release v8.0.1 introduces functionality for the F5 Heat templates in OpenStack Liberty. Release v7.0.3 adds the new sync and save Heat plugins to the 'F5-supported' Active-Standby Cluster template.

Issues
We welcome bug reports via the Issues page for each project on GitHub: https://github.com/F5Networks.

- The F5 OpenStack Product Team
OpenStack in a backpack – how to create a demo environment for F5 Heat Plugins, part 2

Part 2 – OpenStack installation and testing

After "Part 1 – Host environment preparation," you should have your CentOS v7, 64-bit instance up and running. This article walks you through the OpenStack RDO installation and testing process. Every command requires Linux root privileges and is issued from the /root/ directory.

If you visited the RDO project website, you might have seen the Packstack quickstart instructions, which recommend issuing a simple `packstack --allinone` command. Unfortunately, `packstack --allinone` does not build out OpenStack networking in a way that gives us external access to the guest machines, and it doesn't install the Heat project either. This is why we need to follow a somewhat more involved route.

Install and Set Up OpenStack RDO

Install and update the RDO repos:

```bash
yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/rdo-release-mitaka-5.noarch.rpm
yum update -y
```

Install the OpenStack RDO Mitaka packages:

```bash
yum install -y centos-release-openstack-mitaka
yum install -y openstack-packstack
```

I strongly recommend that you generate a Packstack answer file first, review it, correct it if needed, and then apply it. This is less error-prone, and it expedites future re-installation (or a later installation on a second host).

Generate the Packstack answer file:

```bash
packstack --gen-answer-file=my_answer_file.txt \
  --allinone --provision-demo=n --os-neutron-ovs-bridge-mappings=extnet:br-ex \
  --os-neutron-ovs-bridge-interfaces=br-ex:ens33 --os-neutron-ml2-type-drivers=vxlan,flat \
  --os-neutron-ml2-vni-ranges=100:900 --os-neutron-ml2-tenant-network-types=vxlan \
  --os-heat-install=y \
  --os-neutron-lbaas-install=y \
  --default-password=default
```

NOTE: "ens33" should be replaced with the name of your non-loopback interface; use `ip addr` to find it. You can remove the `--os-neutron-lbaas-install=y \` line if you don't intend to use your portable OpenStack cloud to play with F5 LBaaS as well.

Review the answer file (my_answer_file.txt), correct it if needed, then apply it:

```bash
packstack --answer-file=my_answer_file.txt
```

Now it's time to take a long coffee or lunch break, because this part takes a while!

Verify that the bridge has been properly created. Your non-loopback interface should be assigned to an OVS bridge...

```
[root@mitaka network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
DEVICE=ens33
NAME=ens33
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none
```

... and the new OVS bridge device should take over the IP settings:

```
[root@mitaka network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEFROUTE=yes
UUID=015b449c-4df4-497a-93fd-64764e80b31c
ONBOOT=yes
IPADDR=10.128.10.3
PREFIX=24
GATEWAY=10.128.10.2
DEVICE=br-ex
NAME=br-ex
DEVICETYPE=ovs
OVSBOOTPROTO=none
TYPE=OVSBridge
```

Change the virt type from qemu to kvm; otherwise, BIG-IP VE will not boot:

```bash
openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
```

Restart the Nova compute service for the change to take effect:

```bash
openstack-service restart openstack-nova-compute
```

Mitaka RDO may not take a default DNS forwarder from the /etc/resolv.conf file, so you need to set it manually:

```bash
openstack-config --set /etc/neutron/dhcp_agent.ini \
  DEFAULT dnsmasq_dns_servers 10.128.10.2
openstack-service restart neutron-dhcp-agent
```

In this example, 10.128.10.2 is the closest DNS resolver; that should be your VMware Workstation host or home router (see Part 1).
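Before going further, it's worth a quick sanity check that the bridge and the core services actually came up. A minimal sketch using tools the RDO install puts on the host (exact output will vary):

```bash
# The external bridge should exist and carry your uplink port (ens33 in this example)
ovs-vsctl show | grep -A 3 br-ex

# Summarize the state of the OpenStack services on this host
openstack-status | head -40

# Nova should now report the kvm virt type we just configured
grep virt_type /etc/nova/nova.conf
```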
There are a couple of CLI authentication methods in OpenStack (see http://docs.openstack.org/developer/python-openstackclient/authentication.html). For the purpose of this exercise, let's use the easiest – but not the most secure – one: setting bash environment variables with the username, password, and tenant name. The packstack command has created a flat file containing environment variables that logs you in as the admin user in the admin tenant. You can load those bash environment variables by issuing:

```bash
source keystonerc_admin
```

Set Up OpenStack Networks

Next, we'll configure our networks in OpenStack Neutron. The "openstack" commands will be issued on behalf of the admin user until you load different environment variables. This is a good time to review the keystonerc_admin file, because soon you will have to create such a file for a regular (non-admin) user.

Load the admin credentials:

```bash
source keystonerc_admin
```

Configure an external provider network:

```bash
neutron net-create external_network --provider:network_type flat \
  --provider:physical_network extnet --router:external
```

Configure an external provider subnet with an IP address allocation pool that will be used for OpenStack floating IP addresses:

```bash
neutron subnet-create --name public_subnet --enable_dhcp=False \
  --allocation-pool=start=10.128.10.30,end=10.128.10.100 \
  --gateway=10.128.10.2 external_network 10.128.10.0/24 \
  --dns-nameserver=10.128.10.2
```

In the example given, 10.128.10.0/24 is my external subnet (the one configured during CentOS installation); 10.128.10.2 is both the default gateway and the DNS resolver.

Please note that an OpenStack floating IP address is a completely different concept from an F5 floating IP address. An F5 floating IP address is an IP that floats between physical or virtual F5 entities in a cluster. An OpenStack floating IP address is a NAT address that is translated from a public IP address to a private (tenant) IP address; in this case, the range is defined by --allocation-pool=start=10.128.10.30,end=10.128.10.100.

To avoid typos and confusion, let's make further changes via the CLI and review the results in the GUI. In Chrome or Firefox, enter the IP address you assigned to your CentOS host in the address bar: http://<IP-address> (http://10.128.10.3, in my case). NOTE: IE didn't work with the OpenStack Horizon (GUI) Mitaka release on my laptop.

Set up a User, Project, and VM in OpenStack

Next, to make this exercise more realistic, let's create a non-admin tenant and a non-admin user with non-admin privileges.

Create a demo tenant (a.k.a. project):

```bash
openstack project create demo
```

Create a demo user and assign it to the demo tenant:

```bash
openstack user create --project demo --password default demo
```

Create a keystone environment variable file for the demo user that accesses the demo tenant/project:

```bash
cp keystonerc_admin keystonerc_demo
sed -i 's/admin/demo/g' keystonerc_demo
source keystonerc_demo
```

Your bash prompt should now look like [root@mitaka ~(keystone_demo)]#, and from now on all commands will be executed on behalf of the demo user, within the demo tenant/project. In addition, you should be able to log into the OpenStack Horizon dashboard with the demo/default credentials.
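For reference, a packstack-generated keystonerc file is nothing more than a handful of exported shell variables. A rough sketch of what keystonerc_demo should end up containing after the sed above (the auth URL reflects the example host IP used throughout this series):

```bash
export OS_USERNAME=demo
export OS_PASSWORD=default
export OS_AUTH_URL=http://10.128.10.3:5000/v2.0
export OS_TENANT_NAME=demo
export PS1='[\u@\h \W(keystone_demo)]\$ '
```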
Create a tenant router (double-check that your environment variables are set for the demo user):

```bash
neutron router-create router1
neutron router-gateway-set router1 external_network
```

Create a tenant (internal) management network:

```bash
neutron net-create management
```

Create a tenant (internal) management subnet and attach it to the router:

```bash
neutron subnet-create --name management_subnet management 10.0.1.0/24
neutron router-interface-add router1 management_subnet
```

You can review the resulting network topology in Horizon under Network -> Network Topology (click Toggle labels).

Create an ssh keypair and store the private key on your PC. You will use this key for password-less logins to all the guest machines:

```bash
openstack keypair create default > demo_default.pem
```

Store demo_default.pem on your PC, and remember to chmod 600 demo_default.pem before using it with the ssh -i command.

Create an "allow all" security group:

```bash
openstack security group create allow_all
openstack security group rule create --proto icmp \
  --src-ip 0.0.0.0/0 --dst-port 0:255 allow_all
openstack security group rule create --proto udp \
  --src-ip 0.0.0.0/0 --dst-port 1:65535 allow_all
openstack security group rule create --proto tcp \
  --src-ip 0.0.0.0/0 --dst-port 1:65535 allow_all
```

Again, it's not best practice to use such loose access rules, but the goal of this exercise is an easy-to-use demo/test environment.

Download and install a Cirros image (12.6 MB):

```bash
curl http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img | \
  openstack image create --container-format bare \
  --disk-format qcow2 "Cirros image"
```

Check the image status:

```
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
| container_format | bare                                                 |
| created_at       | 2016-08-29T19:08:58Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/6811814b-9288-48cc-a15a-9496a14c1145/file |
| id               | 6811814b-9288-48cc-a15a-9496a14c1145                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | Cirros image                                         |
| owner            | 65136fae365b4216837110178a7a3d66                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13287936                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-08-29T19:12:46Z                                 |
| virtual_size     | None                                                 |
| visibility       | private                                              |
+------------------+------------------------------------------------------+
```

Spin up your first guest VM in your OpenStack environment, using the previously created ssh key and the allow_all security group. The guest should be connected to the tenant "management" network:

```bash
openstack server create --image "Cirros image" --flavor m1.tiny \
  --security-group allow_all --key-name default \
  --nic net-id=management Cirros1
```

Now you should be able to access the Cirros1 console. If everything went well, you can log in to the Cirros1 VM with the cirros/cubswin:) credentials, and you should be able to ping www.f5.com from Cirros1. If your ping works, Cirros1 has a way out. (If the console doesn't react to keystrokes, click the blue bar with "Connected (unencrypted)" first.)
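If the console is slow to appear, or you simply prefer the CLI, you can check the instance from the shell instead. A quick sketch using standard client commands:

```bash
# The instance should reach ACTIVE and show an address on the management network
openstack server list

# Dump the guest's boot console; Cirros prints its login banner near the end
openstack console log show Cirros1 | tail -20
```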
Next, let's create a way in, so we can ssh to the Cirros1 instance from outside.

Create an OpenStack floating IP address:

```bash
openstack ip floating create external_network
```

Note the newly created IP address; you will use it in the next step:

```
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| fixed_ip    | None                                 |
| id          | 5dff365f-3cc8-4d61-8e7e-f964cef3bb8b |
| instance_id | None                                 |
| ip          | 10.128.10.31                         |
| pool        | external_network                     |
+-------------+--------------------------------------+
```

The newly created "public" IP address may now be attached to any virtual machine, Cirros1 in this case. Attaching an OpenStack floating IP address creates a one-to-one network address translation between the public IP address and the tenant (private) IP address.

Attach the OpenStack floating IP address to your Cirros1:

```bash
openstack ip floating add 10.128.10.31 Cirros1
```

Test the way in: ssh from outside to your Cirros1 (or use the equivalent PuTTY command):

```bash
ssh -i demo_default.pem cirros@10.128.10.31
```

Check that you are really on Cirros1:

```
$ id
uid=1000(cirros) gid=1000(cirros) groups=1000(cirros)
```

If the test was successful, your basic OpenStack services are up and running. To spare scarce CPU and memory resources, I'd strongly suggest suspending or removing the Cirros1 VM.

Suspending:

```bash
openstack server suspend Cirros1
```

Removing:

```bash
openstack server delete Cirros1
```

Before you boot or shut down the CentOS host, you should be aware of one CentOS/Mitaka-specific issue: your hardware may be too slow to boot the httpd service within the default 90 s, which is why I had to increase this timeout:

```bash
sed -i \
  's/^#DefaultTimeoutStartSec=[[:digit:]]\+s/DefaultTimeoutStartSec=270s/g' \
  /etc/systemd/system.conf
```

Useful troubleshooting commands: openstack-status, systemctl show httpd.service, systemctl status httpd.service.

In the next (and final) article in this series, we'll install the F5 Heat plugins and onboard the F5 qcow2 image using F5 Heat templates.
OpenStack in a backpack – how to create a demo environment for F5 Heat Plugins, part 3

Part 3 – F5 Heat Plugins installation and BIG-IP VE onboarding

This article demonstrates how to install the F5 Heat Plugins and onboard a BIG-IP VE image into OpenStack Mitaka using F5 Heat templates. Before you get started, please review the installation instructions for the F5 Heat plugins (https://f5-openstack-heat-plugins.readthedocs.io/en/latest/) to ensure you're using the latest version. The instructions provided in this article are current at the time of posting.

Install the F5 Heat Plugins

Run the following commands on the Heat service node. Root privileges are required.

Install the Python installation tool (pip):

```bash
yum -y install python-pip
```

Install the F5 Heat plugins:

```bash
pip install f5-openstack-heat-plugins
```

Make the Heat plugins directory (NOTE: this may already exist):

```bash
mkdir -p /usr/lib/heat
```

Create a link to the F5 plugins in the Heat plugins directory:

```bash
ln -s /usr/lib/python2.7/site-packages/f5_heat /usr/lib/heat/f5_heat
```

Restart the Heat engine service:

```bash
systemctl restart openstack-heat-engine.service
```

Now you should see the F5 Heat resources in the OpenStack Horizon dashboard, under "Orchestration -> Resource Types".

Prepare Your Project to use the F5 Heat Template

Next, you'll use the F5 BIG-IP 'Image Patch and Upload' Heat template (http://f5-openstack-heat.readthedocs.io/en/latest/templates/supported/ref_images_patch-upload-ve-image.html) to patch your VE image for use in OpenStack and onboard it. 'Patching' means modifying the BIG-IP qcow2 image so it can run within OpenStack and make use of the OpenStack metadata service for licensing, setting network parameters, etc.

This template requires a bit of preparation, described in the Prerequisites section of the template documentation. In particular, you need to create an OpenStack flavor for the BIG-IP appliance and create an SSH key to use to log in to the BIG-IP. You will also need to provide a link to an Ubuntu image, as the template utilizes an Ubuntu server to extract and patch the image. An appropriate tenant network was already created and tested in the previous article in this series, so there is no need to do it again.

First, log in as the OpenStack admin user to create a new flavor for the BIG-IP and adjust the 'demo' user's permissions; the default OpenStack non-admin privileges do not allow creation of OpenStack flavors or Heat stacks.

Create a new flavor via the command line:

```bash
source keystonerc_admin
openstack flavor create --ram 4096 --disk 20 --vcpus 2 \
  --public "F5-small 1 slot"
```

Refer to http://f5-openstack-docs.readthedocs.io/en/latest/guides/openstack_big-ip_flavors.html for additional information regarding BIG-IP VE flavor requirements.

Give the demo user permission to create Heat stacks:

```bash
openstack role add --project demo --user demo heat_stack_owner
```

You can now switch back to your 'demo' user account and import an Ubuntu image into Glance:

```bash
source keystonerc_demo
curl http://uec-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img | \
  openstack image create --container-format bare \
  --disk-format qcow2 --min-disk 10 "Ubuntu 14.04 LTS"
```
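You can confirm both the plugin registration and the image import from the CLI as well. A quick sketch; it assumes the legacy heat client was installed along with the Heat services (packstack normally takes care of that):

```bash
# The F5 resource types should now be registered with the Heat engine
heat resource-type-list | grep -i f5

# The Ubuntu image should be listed with an 'active' status
openstack image list
```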
The Heat template requires that the BIG-IP image be hosted in a location accessible to the Heat engine via http. Assuming that you do not have any other place to host the BIG-IP qcow2 image, we'll create one on the Horizon Apache server.

Create a new directory to store the image in:

```bash
mkdir -p /home/openstack/bigipimages/
```

Add the following lines to /etc/httpd/conf.d/15-horizon_vhost.conf, just before the closing </VirtualHost>:

```
Alias /bigipimages "/home/openstack/bigipimages/"
<Directory "/home/openstack/bigipimages/">
    Options Indexes FollowSymLinks MultiViews
    AllowOverride None
    Require all granted
</Directory>
```

Restart the httpd service:

```bash
systemctl restart httpd.service
```

Download BIGIP-11.6.1.1.0.326.LTM_1SLOT.qcow2.zip from https://downloads.f5.com/ and upload it to /home/openstack/bigipimages/. Direct link: https://downloads.f5.com/esd/serveDownload.jsp?path=/big-ip/big-ip_v11.x/11.6.1/english/virtual-edition_base-plus-hf1/&sw=BIG-IP&pro=big-ip_v11.x&ver=11.6.1&container=Virtual-Edition_Base-Plus-HF1&file=BIGIP-11.6.1.1.0.326.LTM_1SLOT.qcow2.zip

Change the permissions so the file can be read by Apache:

```bash
chmod -x+r /home/openstack/bigipimages/*.zip
```

The compressed qcow2 image should now be accessible at http://<ip_address>/bigipimages/BIGIP-11.6.1.1.0.326.LTM_1SLOT.qcow2.zip (replace <ip_address> with the IP address of your CentOS host).

Launch Your Heat Stack

Most Heat templates require you to supply some configuration parameters (e.g., which networks should be used, what flavor, which security group, etc.). Those parameters are rendered as a questionnaire in the Horizon GUI right after you specify the source Heat template's file or URL. Unfortunately, I noticed that the Mitaka release doesn't always show external networks correctly in the GUI, while the external network's name can be provided via the CLI without any problems. Another disadvantage of the Orchestration section in Horizon is that if you make any mistakes or typos, you need to type all the parameters over again. So the easiest and most efficient method is to provide the Heat parameters in an environment file.

Create an environment file – for example, patch_upload_parameters.yaml – that contains the following parameters (note that leading whitespace is significant in YAML):

```yaml
parameters:
  onboard_image: "Ubuntu 14.04 LTS"
  flavor: m1.medium                # THE FLAVOR YOU CREATED
  private_network: management
  f5_image_import_auth_url: http://<ip_address>:5000/v2.0   # YOUR KEYSTONE AUTHENTICATION URL
  f5_image_import_tenant: admin    # THE NAME OF YOUR PROJECT SPACE
  f5_image_import_user: admin      # YOUR USERNAME
  f5_image_import_password: default   # YOUR PASSWORD
  f5_ve_image_url: http://<ip_address>/bigipimages/BIGIP-11.6.1.1.0.326.LTM_1SLOT.qcow2.zip
  f5_ve_image_name: BIGIP-11.6.1.1.0.326.qcow2
  image_prep_key: default
```

Launch the stack:

```bash
openstack stack create -e patch_upload_parameters.yaml \
  -t https://raw.githubusercontent.com/F5Networks/f5-openstack-heat/master/f5_supported/ve/images/patch_upload_ve_image.yaml \
  F5_onboard
```

Note that we used the GitHub raw version of the template in the above command (https://raw...). You can also use the download link for the template provided in the documentation, but do not try to use the link to the HTML-formatted YAML file.

You should see your stack being created in Horizon under "Orchestration -> Stacks -> F5_onboard", and you can supervise the patching progress by watching the Compute instance log. At the end, the stack status should be "Status Create_Complete: Stack CREATE completed successfully"; you can check it under "Orchestration -> Stacks -> F5_onboard -> Overview". If you can't see the stack in your account in Horizon, double-check which user you specified in the environment file; if you specified 'admin', you'll need to log in as the admin user.
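If you'd rather follow the stack from the CLI than from Horizon, the same information is available through the client. For example:

```bash
# Overall stack status (look for CREATE_COMPLETE)
openstack stack list

# Event-by-event progress of the onboarding stack, useful when it stalls
openstack stack event list F5_onboard
```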
Now that you have a BIG-IP image onboarded, you can use it with the F5-supported Heat templates (http://f5-openstack-heat.readthedocs.io/en/latest/templates/templates_index.html#f5-supported) to deploy BIG-IP VE from any of your OpenStack user accounts.

First, make sure the BIG-IP image is visible to all users in your OpenStack environment:

```bash
source keystonerc_admin
openstack image set --public BIGIP-11.6.1.1.0.326
source keystonerc_demo
```

The patched BIG-IP image should be visible in "Compute -> Images" or via the openstack image list command. As an exercise, you can change the Heat template to include this step.

You can now safely delete the onboarding stack; don't worry, the BIG-IP image will stick around.

```bash
openstack stack delete F5_onboard
```

Now, let's spin up a stand-alone BIG-IP VE with two production interfaces using the F5 BIG-IP VE: Standalone, 3-nic Heat template (http://f5-openstack-heat.readthedocs.io/en/latest/templates/supported/ref_common_f5-ve-standalone-3nic.html). We already have a management subnet, but we'll need to add the traffic subnets.

Create client- and server-side networks:

```bash
neutron net-create client
neutron subnet-create --name client_subnet client 10.0.2.0/24
neutron router-interface-add router1 client_subnet
neutron net-create server
neutron subnet-create --name server_subnet server 10.0.3.0/24
neutron router-interface-add router1 server_subnet
```

Review the updated topology under Network -> Network Topology.

Create a standalone_3_nic_parameters.yaml environment file that contains:

```yaml
parameters:
  ve_image: BIGIP-11.6.1.1.0.326
  ve_flavor: "F5-small 1 slot"
  admin_password: admin
  f5_ve_os_ssh_key: default
  root_password: default
  license: <<<your eval license goes here>>>
  external_network: external_network
  mgmt_network: management
  network_1: client
  network_1_name: client
  network_2: server
  network_2_name: server
```

Please remember to insert your evaluation license key.

Launch the stack:

```bash
openstack stack create -e standalone_3_nic_parameters.yaml \
  -t https://raw.githubusercontent.com/F5Networks/f5-openstack-heat/master/f5_supported/ve/standalone/f5_ve_standalone_3_nic.yaml \
  F5_standalone_3_nics
```

You can find the floating IP address under "Orchestration -> Stacks -> F5_standalone_3_nics -> Overview -> Floating IP", and you can observe the onboarding process by running:

```bash
ssh -i demo_default.pem root@<<floating IP>> tail -f /var/log/messages
```

When the onboarding process is complete, you should see something like this:

```
Sep 14 11:21:05 host-10 notice openstack-init: Completed OpenStack auto-configuration in 167 seconds...
```

You should now be able to log into the BIG-IP configuration utility at the [OpenStack] floating IP address allocated by Neutron. Be sure to use https (e.g., https://<the_floating_IP>) and give the BIG-IP enough time to fully boot. The VE platform should be properly licensed, and the basic network configuration should be set up.

If everything works, you are ready to play with the other F5 Heat templates, and maybe write your own F5 Heat resources. Good starting points are the GitHub repo (https://github.com/F5Networks/f5-openstack-heat) and the project docs (https://f5-openstack-heat.readthedocs.io/en/latest/).

Special thanks to John Gruber, Laurent Boutet, Paul Breaux, Shawn Wormke, Jodie Putrino, and the whole F5 OpenStack PD team.
Checksums for F5 Supported OpenStack Heat Orchestration templates on GitHub

Problem this snippet solves:

F5 Networks provides checksums for all of our supported OpenStack Heat Orchestration templates. You can find the templates in the dist directory on GitHub: https://github.com/F5Networks/f5-openstack-hot/tree/master/dist

For OpenStack, all of the templates themselves are located in the f5-openstack-hot-supported.tar.gz file. See the individual README files for information about individual supported templates: https://github.com/F5Networks/f5-openstack-hot/tree/master/supported/templates.

You can get a checksum for a particular template by running one of the following commands, depending on your operating system:

- Linux: `sha512sum <path_to_template>`
- Windows (using CertUtil): `CertUtil -hashfile <path_to_template> SHA512`

You can compare the checksum produced by that command against the following list.

OpenStack Heat Orchestration Templates

Release 4.0.0, f5-openstack-hot-supported.tar.gz:
fc3c1bb04cb26a2ef94b45c3f5554d77e6b19d5eac79296a155d5001e04c4d2c28f203a7dd4ac5bbb0c0e358203be85a863b6c368f1159816d5a4cf0b8f6d89b

Release 3.1.0, f5-openstack-hot-supported.tar.gz:
aaac9adee4af968bc5110428ad19a95bceb4c0fbd9d8fa985d6db076149b55470cd0428e69b5d8e0ecb1befe06d00e062bffd9f629f07ffdea8a0abf23d2bd00

Release 3.0.0, f5-openstack-hot-supported.tar.gz:
159e991e5215336697f6baf596fb417094c2603fb834385476bd26c1ae11a15d1156d35e8584519494b7f6099de4ad1b726c1d3493c8d1f7ddbc403503a907cd

Release 2.0.0, f5-openstack-hot-supported.tar.gz:
1c8c4ca73aba5c3ca97ab03560254bea04d82c7e306495bc2dfaee11850c9121d88b5467fd1cb198de429bd97beaabd64897e92fa9d85e6a52eed5ba9041725b
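To make the comparison less error-prone than eyeballing a 128-character string, you can script it. A minimal sketch, where the expected checksum is pasted in from the list above:

```bash
#!/bin/bash
# Usage: ./verify_checksum.sh <path_to_tarball> <expected_sha512>
# Exits 0 when the computed SHA-512 matches the published value.

computed=$(sha512sum "$1" | awk '{print $1}')

if [ "$computed" = "$2" ]; then
    echo "OK: checksum matches"
else
    echo "MISMATCH: got $computed"
    exit 1
fi
```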
Has anyone tested an OpenStack LBaaS multi-agent F5 solution?

Hi,

We are testing the OpenStack LBaaS plugin to manage F5 VE/hardware LTMs. The customer is running the CentOS Kilo version of OpenStack with the F5 LBaaS v1.0.10 plugin installed. So far we have tested multi-agent support for the F5 LBaaS plugin; here is the status:

1) One controller with one LBaaS agent managing one LTM: this works.

2) One controller with one LBaaS agent managing one HA pair of LTMs: this does not work. We can add pools and they are delivered to the LTM successfully, but when we try to add a VIP, the system automatically deletes the whole partition on the LTM after delivering it.

3) Two controllers, each with one LBaaS agent, managing one LTM: this also does not work. We can add pools but fail to add a VIP; same result: the VIP is delivered to the LTM once, and after 10-30 s the LTM deletes it.

We tried enabling environment_prefix == uuid and still hit the same issue. Also, for case 3, we double-confirmed that if we enable only one of the controllers, everything is OK.

So I wonder: has anyone here tested a multi-agent solution for OpenStack LBaaS?

Raymond
Hi Team, Please help me to write bash script, which will copy bigip.conf file & save to local unix system on weekly basic on Weekly folder. Currently i am doing manually through unix command locally, which is taking long time. Local_Unix_System scp root@:/config/bigip.conf /home/12-MAY-2015/host_name.conf Thanks for your help.229Views0likes1Commentexpires header for images iRule v11.x
Expires header for images iRule (v11.x)

For all images, we need to implement an Expires header to tell the user's browser that if it already has the image, it can use the cached version for X hours. For images, the Expires header should live for 7 days; for CSS/JS files, it should live for 24 hours. Does anyone have an example iRule that will do this? I found these two links, which look close:

https://devcentral.f5.com/wiki/iRules.Expires-iRule.ashx
https://devcentral.f5.com/questions/irule-to-set-expires-headers-for-static-conent

Would there be a simpler way to do this than the examples here? Would both of these work for 11.x? Would something like this work? (86400 seconds = 24 hours; 604800 seconds = 7 days)

```
ltm data-group internal /Common/class_cacheable_file_types {
    records {
        ".css" { data "86400" }
        ".js" { data "86400" }
        ".bmp" { data "604800" }
        ".gif" { data "604800" }
        ".jpeg" { data "604800" }
        ".jpg" { data "604800" }
        ".png" { data "604800" }
        ".ico" { data "604800" }
    }
    type string
}
```

```
rule irule_manipulate_client_cache {
when HTTP_REQUEST {
    # Set the cache timeout for root requests.
    # Check that the request is a GET and that there is no query string.
    if { [HTTP::method] eq "GET" and [HTTP::uri] eq "/" } {
        set expire_content_timeout 300
    } else {
        # Set the timeout based on the data group entry, if one exists for this extension.
        set expire_content_timeout [class match -value [string tolower [string range [HTTP::path] [string last . [HTTP::path]] end]] equals class_cacheable_file_types]
    }
}
when HTTP_RESPONSE {
    if { $expire_content_timeout ne "" } {
        HTTP::header replace "Cache-Control" "max-age=$expire_content_timeout, public"
        HTTP::header replace "Expires" [clock format [expr {[clock seconds] + $expire_content_timeout}] -format "%a, %d %h %Y %T GMT" -gmt true]
    } else {
        HTTP::header replace "Cache-Control" "private"
        HTTP::header replace "Expires" "-1"
    }
}
}
```

Thanks!
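As for a simpler route: an untested sketch that drops the data group entirely and keys off the path suffix with switch -glob. The TTL values mirror the ones above, and responses without a matching suffix are simply left untouched:

```
when HTTP_REQUEST {
    # Map the request path to a TTL; empty string means "do not touch the response"
    switch -glob -- [string tolower [HTTP::path]] {
        "*.bmp" -
        "*.gif" -
        "*.jpg" -
        "*.jpeg" -
        "*.png" -
        "*.ico" { set cache_ttl 604800 }
        "*.css" -
        "*.js" { set cache_ttl 86400 }
        default { set cache_ttl "" }
    }
}
when HTTP_RESPONSE {
    if { $cache_ttl ne "" } {
        HTTP::header replace "Cache-Control" "max-age=$cache_ttl, public"
        HTTP::header replace "Expires" [clock format [expr {[clock seconds] + $cache_ttl}] -format "%a, %d %b %Y %T GMT" -gmt true]
    }
}
```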
Unable to Upgrade or Apply Hotfix on BIG-IQ VE

Hi all,

I'm running BIG-IQ v4.4.0 on OpenStack, using KVM and RHEL, and I'm unable to install hotfixes or images. Running [tmsh] install sys software ... has no effect, and the [tmsh] show sys software status output is as follows, which is very odd:

```
------------------------------------------------
Sys::Software Status
Volume  Product  Version  Build  Active  Status
------------------------------------------------
HD1.1   none     none     none   no      audited
HD1.2   none     none     none   no      audited
```

Has anyone had this issue? Thanks in advance.
What do I need to set up a working F5 lab?

Hello,

I am planning to buy two lab licenses for 2 VMs (active and standby) on ESXi, because I want to practice with different virtual servers. The issue is that I am a networking guy, and trying to set up the different virtual servers has been a pain: either I don't know what I am doing, or I can't tell what I am doing wrong (confused). I need help setting up a working lab so I can test the things I read online and on YouTube as I go along. I will really appreciate any help I can find.