openstack
F5 Friday: Python SDK for BIG-IP
We know programmability is important. Whether we're talking about networking and SDN, or DevOps and APIs and templates, the most impactful technologies and trends today are those involving programmability. F5 is, as you're no doubt aware, no stranger to programmability. Since 2001, when we introduced iControl (API) and iRules (data path programmability), we've continued to improve, enhance, and expand the ability of partners, customers, and our own engineers and architects to programmatically control and redefine the application delivery experience. With the emphasis today on automation and orchestration as a means for ops (and through it, the business) to scale more quickly and efficiently, programmability has never before been so critical to both operational and business success. Which means we can't stop improving and expanding the ways in which you (and us, too) can manage, extend, and deliver the app services everyone needs to keep their apps secure, fast, and available.

Now, iControl and iControl REST are both APIs built on open standards like SOAP, JSON, and HTTP. That means anyone who knows how to use an API can sit down and start coding up scripts that automate provisioning, configuration, and general management of not just BIG-IP (the platform) but the app services that get deployed on that platform. And we're not against that at all. But we also recognize that not everyone has the time to get intimately familiar with iControl in either of its forms. So we're pretty much always actively developing new (and improving existing) software development kits (SDKs) that enable folks to start doing more, faster. But so are you. We've got a metric ton of code samples, libraries, and solutions here on DevCentral that have been developed by customers and partners alike. They're freely available and are being updated, optimized, extended, and re-used every single day. We think that's a big part of what an open community is – it's about developing and sharing solutions to some of the industry's greatest challenges.

And that's what brings us to today's exciting news. Well, exciting if you're a Python user, at least, because we're happy to point out the availability of the F5 BIG-IP Python SDK. And not just available to download and use, but available as an open source project that you can actively add to, enhance, fork, and improve. Because open source and open communities produce some amazing things.

This project implements an SDK for the iControl REST interface for BIG-IP, which lets you create, read, update, and delete (CRUD) configuration objects on a BIG-IP. Documentation is up to date and available here. The BIG-IP Python SDK layers an object model over the API and makes it simpler to develop scripts or integrate with other Python-based frameworks. The abstraction is nice (and I say that with my developer hat on) and certainly makes the code more readable (and maintainable, one would assume), which should help eliminate some of the technical debt that's incurred whenever you write software, including operational scripts and software.
Seriously, here's a basic sample from the documentation:

from f5.bigip import BigIP

# Connect to the BigIP
bigip = BigIP("bigip.example.com", "admin", "somepassword")

# Get a list of all pools on the BigIP and print their name and their
# members' name
pools = bigip.ltm.pools.get_collection()
for pool in pools:
    print pool.name
    for member in pool.members:
        print member.name

# Create a new pool on the BigIP
mypool = bigip.ltm.pools.pool.create(name='mypool', partition='Common')

# Load an existing pool and update its description
pool_a = bigip.ltm.pools.pool.load(name='mypool', partition='Common')
pool_a.description = "New description"
pool_a.update()

# Delete a pool if it exists
if bigip.ltm.pools.pool.exists(name='mypool', partition='Common'):
    pool_b = bigip.ltm.pools.pool.load(name='mypool', partition='Common')
    pool_b.delete()

Isn't that nice? Neat, understandable, readable. That's some nice code right there (and I'm not even a Python fan, so that's saying something). Don't let the OpenStack reference fool you. While the first "user" of the SDK is OpenStack, it is stand-alone and can be used on its own or incorporated into other Python-based frameworks. So if you're using Python (or were thinking about it) to manage, manipulate, or monitor your BIG-IPs, check this one out. Use it, extend it, improve it, and share it. Happy scripting!
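If you want to try that sample yourself, the SDK has generally been distributed through PyPI and as an open source repository on GitHub; the package name and repository below are assumptions to verify against the project's README rather than an official install guide:

# Assumed PyPI package name; confirm against the project documentation
pip install f5-sdk

# Or work directly from the open source project if you plan to contribute back
git clone https://github.com/F5Networks/f5-common-python.git

Either way, the import shown above (from f5.bigip import BigIP) is the entry point the documentation sample uses.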
OpenStack Heat Template Composition

Heat Orchestration Templates (HOT) for OpenStack's Heat service can quickly grow in length as users need to pile in ever more resources to define their applications. A simple Nova instance requires a small volume at first. Soon it needs a private network, a public network, a software configuration deployment, a load balancer, a deluxe karaoke machine. This means the HOT files get bloated and difficult to read for mere humans. Let's think about why that is.

A template defines the set of resources necessary and sufficient to describe an application. It also describes the dependencies that exist between the resources, if any. That way, Heat can manage the life-cycle of the application without you having to worry about it. Often, the resources in a long template need to be visually grouped together to alert the reader that these things depend on one another and share some common goal (e.g. create a network, subnet, and router). When templates get to this state, we need to start thinking about composition. Composition in HOT is done in a couple of ways. We will tackle the straightforward approach first. Much of this material is presented in the official documentation for template composition.

The Parent and Child Templates

We'll often refer to a parent template and a child template here. The parent template is the user's view into the Heat stack that defines our application. It's the entrypoint. A parent template contains all of the over-arching components of the application. The child, on the other hand, may describe a logically grouped chunk of that application. For example, one child template creates the Nova instances that launch BIG-IP devices while another child template creates all the network plumbing for those instances. Let's look at an example of this. Below is the parent template:
heat_template_test.yaml

heat_template_version: 2015-04-30

description: Setup infrastructure for testing F5 Heat Templates.

parameters:
  ve_cluster_ready_image:
    type: string
    constraints:
      - custom_constraint: glance.image
  ve_standalone_image:
    type: string
    constraints:
      - custom_constraint: glance.image
  ve_flavor:
    type: string
    constraints:
      - custom_constraint: nova.flavor
  use_config_drive:
    type: boolean
    default: false
  ssh_key:
    type: string
    constraints:
      - custom_constraint: nova.keypair
  sec_group_name:
    type: string
  admin_password:
    type: string
    label: F5 VE Admin User Password
    description: Password used to perform image import services
  root_password:
    type: string
  license:
    type: string
  external_network:
    type: string
    constraints:
      - custom_constraint: neutron.network

resources:
  # Plumb the networking for the BIG-IP instances
  networking:
    type: heat_template_test_networks.yaml
    properties:
      external_network: { get_param: external_network }
      sec_group_name: { get_param: sec_group_name }

  # Wait for networking to come up and then launch two clusterable BIG-IPs
  two_bigip_devices:
    type: OS::Heat::ResourceGroup
    depends_on: networking
    properties:
      count: 2
      resource_def:
        # Reference a child template in the same directory where heat_template_test.yaml is located
        type: cluster_ready_ve_4_nic.yaml
        properties:
          ve_image: { get_param: ve_cluster_ready_image }
          ve_flavor: { get_param: ve_flavor }
          ssh_key: { get_param: ssh_key }
          use_config_drive: { get_param: use_config_drive }
          open_sec_group: { get_param: sec_group_name }
          admin_password: { get_param: admin_password }
          root_password: { get_param: root_password }
          license: { get_param: license }
          external_network: { get_param: external_network }
          mgmt_network: { get_attr: [networking, mgmt_network_name] }
          ha_network: { get_attr: [networking, ha_network_name] }
          network_1: { get_attr: [networking, client_data_network_name] }
          network_2: { get_attr: [networking, server_data_network_name] }

  # Wait for networking to come up and launch a standalone BIG-IP
  standalone_device:
    # Reference another child template in the local directory
    type: f5_ve_standalone_3_nic.yaml
    depends_on: networking
    properties:
      ve_image: { get_param: ve_standalone_image }
      ve_flavor: { get_param: ve_flavor }
      ssh_key: { get_param: ssh_key }
      use_config_drive: { get_param: use_config_drive }
      open_sec_group: { get_param: sec_group_name }
      admin_password: { get_param: admin_password }
      root_password: { get_param: root_password }
      license: { get_param: license }
      external_network: { get_param: external_network }
      mgmt_network: { get_attr: [networking, mgmt_network_name] }
      network_1: { get_attr: [networking, client_data_network_name] }
      network_2: { get_attr: [networking, server_data_network_name] }

Now that's still fairly verbose, but it's doing some heavy lifting for us. It is creating almost all of the networking needed to launch a set of clusterable BIG-IP devices and a single standalone BIG-IP device. It takes in parameters from the user, such as ve_standalone_image and external_network, and passes them along to the child that requires them. The child stack then receives those parameters in the same way the template above does, by defining a parameter. The parent template references the heat_template_test_networks.yaml template directly, expecting the file to be in the same local directory where the parent template is located. This reference is always made in the type field of a resource. In addition to relative paths, you can also reference another template with an absolute path or URL.
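For example, a resource could point at a template hosted elsewhere; the URL below is a placeholder, not a real location:

  networking:
    # Any location reachable by Heat works here, not just a local file
    type: https://example.com/templates/heat_template_test_networks.yaml
    properties:
      external_network: { get_param: external_network }
      sec_group_name: { get_param: sec_group_name }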
You can also see the grouping of responsibilities here. One child stack (heat_template_test_networks.yaml) is building networks, another (cluster_ready_ve_4_nic.yaml) is launching a set of BIG-IP devices ready for clustering, and yet another (f5_ve_standalone_3_nic.yaml) is launching a standalone BIG-IP device. Yet the dependencies are apparent in the depends_on property and the intrinsic functions called within that resource (more on that later). You will not successfully launch the standalone device without having the networking in place first, thus the standalone_device resource is dependent upon the networking resource. This means we can easily send data into the networking stack (as a parameter), and now we must get data out to be passed into the standalone_device stack. Let's look at the networking template and see what it defines as its outputs.

heat_template_test_networks.yaml

heat_template_version: 2015-04-30

description: >
  Create the four networks needed for the heat plugin tests
  along with their subnets and connect them to the testlab router.

parameters:
  external_network:
    type: string
  sec_group_name:
    type: string

resources:
  # Four networks
  client_data_network:
    type: OS::Neutron::Net
    properties:
      name: client_data_network
  server_data_network:
    type: OS::Neutron::Net
    properties:
      name: server_data_network
  mgmt_network:
    type: OS::Neutron::Net
    properties:
      name: mgmt_network
  ha_network:
    type: OS::Neutron::Net
    properties:
      name: ha_network

  # And four accompanying subnets
  client_data_subnet:
    type: OS::Neutron::Subnet
    properties:
      cidr: 10.1.1.0/24
      dns_nameservers: [10.190.0.20]
      network: { get_resource: client_data_network }
  server_data_subnet:
    type: OS::Neutron::Subnet
    properties:
      cidr: 10.1.2.0/24
      dns_nameservers: [10.190.0.20]
      network: { get_resource: server_data_network }
  mgmt_subnet:
    type: OS::Neutron::Subnet
    properties:
      cidr: 10.1.3.0/24
      dns_nameservers: [10.190.0.20]
      network: { get_resource: mgmt_network }
  ha_subnet:
    type: OS::Neutron::Subnet
    properties:
      cidr: 10.1.4.0/24
      dns_nameservers: [10.190.0.20]
      network: { get_resource: ha_network }

  # Create router for testlab
  testlab_router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: { get_param: external_network }

  # Connect networks to router interface
  client_data_router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: testlab_router }
      subnet: { get_resource: client_data_subnet }
  server_data_router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: testlab_router }
      subnet: { get_resource: server_data_subnet }
  mgmt_router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: testlab_router }
      subnet: { get_resource: mgmt_subnet }
  ha_router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: testlab_router }
      subnet: { get_resource: ha_subnet }

  open_sec_group:
    type: OS::Neutron::SecurityGroup
    properties:
      name: { get_param: sec_group_name }
      rules:
        - protocol: icmp
          direction: ingress
        - protocol: icmp
          direction: egress
        - protocol: udp
          direction: ingress
        - protocol: udp
          direction: egress
        - protocol: tcp
          direction: ingress
          port_range_min: 1
          port_range_max: 65535
        - protocol: tcp
          direction: egress
          port_range_min: 1
          port_range_max: 65535

outputs:
  mgmt_network_name:
    value: { get_attr: [mgmt_network, name] }
  ha_network_name:
    value: { get_attr: [ha_network, name] }
  client_data_network_name:
    value: { get_attr: [client_data_network, name] }
  server_data_network_name:
    value: { get_attr: [server_data_network, name] }
You can see the logical grouping here, and this is where template composition shines. This simple template creates a security group, four networks, four subnets, and ties them together with a router. Even though the heat_template_test.yaml parent template uses this to build its networks for defining its application, another user may decide they want the same networking infrastructure, but with four standalone devices and eight pairs of clusterable devices. Their only modification would be in the parent template, because the heat_template_test_networks.yaml template describes the set of networks those devices need to connect to. It is important to note that the above template is a fully functioning HOT template all by itself. You can launch it and it will build those four networks.

So how does the data get out of the networking template? The outputs section takes care of that. For attaching BIG-IP devices in the parent template to these networks, all we require is the network name, so the networking template kicks those back up to whomever is curious about such things. We saw the get_param function earlier, and now we can see the use of the get_attr function. In the two_bigip_devices resource, the parent template references the networking resource directly and then accesses the client_data_network_name attribute (as seen below). This operation retrieves the network name for the client_data_network and passes it into the cluster_ready_ve_4_nic.yaml stack.

network_1: { get_attr: [networking, client_data_network_name] }

With that, we've successfully sent information into a child stack, retrieved it, then sent it into another child stack. This is very useful when working in large groups of users because my heat_template_test_networks.yaml template may benefit others. In time, you can build quite a collection of these concise HOT templates and then use a parent template to orchestrate them in many complex ways. Keep in mind, however, that HOT is declarative, meaning there are no looping constructs or if/else decisions to decide whether to create seven networks or four. For that, you would need to create two separate templates. As seen in the OS::Heat::ResourceGroup resource for two_bigip_devices, however, we can toggle the number of instances launched by that resource at stack creation time. We can simply make the count property of that resource a parameter, and the user can decide how many clusterable BIG-IP devices should be launched.
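A minimal sketch of that change, using a hypothetical bigip_pair_count parameter that is not part of the original template:

parameters:
  bigip_pair_count:
    type: number
    default: 2

resources:
  two_bigip_devices:
    type: OS::Heat::ResourceGroup
    depends_on: networking
    properties:
      count: { get_param: bigip_pair_count }
      resource_def:
        type: cluster_ready_ve_4_nic.yaml
        # ...same properties as in the parent template above...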
To launch the above template with the python-heatclient:

heat stack-create -f heat_template_test.yaml -P "ve_cluster_ready_image=cluster_ready-BIGIP-11.6.0.0.0.401.qcow2;ve_standalone_image=standalone-BIGIP-11.6.0.0.0.401.qcow2;ve_flavor=m1.small;ssh_key=my_key;sec_group_name=bigip_sec_group;admin_password=pass;root_password=pass;license=XXXXX;external_network=public_net" bigip_test_stack

New Resource Types in Environment Files

The second way to do template composition is to define a brand new resource type in an environment file. The environment in which a Heat stack is launched affects the behavior of that stack. To learn more about environment files, check out the official documentation. We will use them here to define a new resource type. The cool thing about environments is that the child templates look exactly the same, and only one small change is needed in the parent template. It is the environment in which those stacks launch that changes. Below we define a Heat environment file and register new resource types in its resource_registry section.

heat_template_test_environment.yaml

parameters:
  ve_cluster_ready_image: cluster_ready-BIG-IP-11.6.0.0.0.401.qcow2
  ve_standalone_image: standalone-BIG-IP-11.6.0.0.0.401.qcow2
  ve_flavor: m1.small
  ssh_key: my_key
  sec_group_name: bigip_sec_group
  admin_password: admin_password
  root_password: root_password
  license: XXXXX
  external_network: public_net

resource_registry:
  OS::Neutron::FourNetworks: heat_template_test_networks.yaml
  F5::BIGIP::ClusterableDevice: cluster_ready_ve_4_nic.yaml
  F5::BIGIP::StandaloneDevice: f5_ve_standalone_3_nic.yaml

The parent template will now reference the new resource types, which are merely mappings to the child templates. This means the parent template has three new resources to use which are not a part of the standard OpenStack Resource Types. The environment makes all this possible.

heat_template_test_with_environment.yaml

heat_template_version: 2015-04-30

description: Setup infrastructure for testing F5 Heat Templates.

parameters:
  ve_cluster_ready_image:
    type: string
    constraints:
      - custom_constraint: glance.image
  ve_standalone_image:
    type: string
    constraints:
      - custom_constraint: glance.image
  ve_flavor:
    type: string
    constraints:
      - custom_constraint: nova.flavor
  use_config_drive:
    type: boolean
    default: false
  ssh_key:
    type: string
    constraints:
      - custom_constraint: nova.keypair
  sec_group_name:
    type: string
  admin_password:
    type: string
    label: F5 VE Admin User Password
    description: Password used to perform image import services
  root_password:
    type: string
  license:
    type: string
  external_network:
    type: string
    constraints:
      - custom_constraint: neutron.network
  default_gateway:
    type: string
    default: None

resources:
  networking:
    # A new resource
    type: OS::Neutron::FourNetworks
    properties:
      external_network: { get_param: external_network }
      sec_group_name: { get_param: sec_group_name }

  two_bigip_devices:
    type: OS::Heat::ResourceGroup
    depends_on: networking
    properties:
      count: 2
      resource_def:
        # A new resource
        type: F5::BIGIP::ClusterableDevice
        properties:
          ve_image: { get_param: ve_cluster_ready_image }
          ve_flavor: { get_param: ve_flavor }
          ssh_key: { get_param: ssh_key }
          use_config_drive: { get_param: use_config_drive }
          open_sec_group: { get_param: sec_group_name }
          admin_password: { get_param: admin_password }
          root_password: { get_param: root_password }
          license: { get_param: license }
          external_network: { get_param: external_network }
          mgmt_network: { get_attr: [networking, mgmt_network_name] }
          ha_network: { get_attr: [networking, ha_network_name] }
          network_1: { get_attr: [networking, client_data_network_name] }
          network_2: { get_attr: [networking, server_data_network_name] }

  standalone_device:
    # A new resource
    type: F5::BIGIP::StandaloneDevice
    depends_on: networking
    properties:
      ve_image: { get_param: ve_standalone_image }
      ve_flavor: { get_param: ve_flavor }
      ssh_key: { get_param: ssh_key }
      use_config_drive: { get_param: use_config_drive }
      open_sec_group: { get_param: sec_group_name }
      admin_password: { get_param: admin_password }
      root_password: { get_param: root_password }
      license: { get_param: license }
      external_network: { get_param: external_network }
      mgmt_network: { get_attr: [networking, mgmt_network_name] }
      network_1: { get_attr: [networking, client_data_network_name] }
      network_2: { get_attr: [networking, server_data_network_name] }

And here is how we launch a stack that utilizes those new resources defined in our environment file. Note that we no longer define all the parameters in the CLI call to the Heat client (with the -P flag) because they are set in our environment file.
heat stack-create -f heat_template_test_with_environment.yaml -e heat_template_test_environment.yaml test_env_stack

Resources:

For more information on Heat Resource Types, along with the possible inputs and outputs: http://docs.openstack.org/developer/heat/template_guide/openstack.html

For more examples of how to prepare a BIG-IP image for booting in OpenStack and for clustering those clusterable instances: https://github.com/F5Networks/f5-openstack-heat

Reference Templates:

Below are the two child templates used in the above examples. We developed these on OpenStack Kilo with a BIG-IP image prepared from our github repo above.

f5_ve_standalone_3_nic.yaml

heat_template_version: 2015-04-30

description: This template deploys a standard f5 standalone VE.

parameters:
  open_sec_group:
    type: string
    default: open_sec_group
  ve_image:
    type: string
    constraints:
      - custom_constraint: glance.image
  ve_flavor:
    type: string
    constraints:
      - custom_constraint: nova.flavor
  use_config_drive:
    type: boolean
    default: false
  ssh_key:
    type: string
    constraints:
      - custom_constraint: nova.keypair
  admin_password:
    type: string
    hidden: true
  root_password:
    type: string
    hidden: true
  license:
    type: string
    hidden: true
  external_network:
    type: string
    default: test
    constraints:
      - custom_constraint: neutron.network
  mgmt_network:
    type: string
    default: test
    constraints:
      - custom_constraint: neutron.network
  network_1:
    type: string
    default: test
    constraints:
      - custom_constraint: neutron.network
  network_1_name:
    type: string
    default: network-1.1
  network_2:
    type: string
    default: test
    constraints:
      - custom_constraint: neutron.network
  network_2_name:
    type: string
    default: network-1.2
  default_gateway:
    type: string
    default: None

resources:
  mgmt_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: mgmt_network }
      security_groups: [{ get_param: open_sec_group }]
  network_1_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: network_1 }
      security_groups: [{ get_param: open_sec_group }]
  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: external_network }
      port_id: { get_resource: mgmt_port }
  network_2_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: network_2 }
      security_groups: [{ get_param: open_sec_group }]
  ve_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: ve_image }
      flavor: { get_param: ve_flavor }
      key_name: { get_param: ssh_key }
      config_drive: { get_param: use_config_drive }
      networks:
        - port: { get_resource: mgmt_port }
        - port: { get_resource: network_1_port }
        - port: { get_resource: network_2_port }
      user_data_format: RAW
      user_data:
        str_replace:
          params:
            __admin_password__: { get_param: admin_password }
            __root_password__: { get_param: root_password }
            __license__: { get_param: license }
            __default_gateway__: { get_param: default_gateway }
            __network_1__: { get_param: network_1 }
            __network_1_name__: { get_param: network_1_name }
            __network_2__: { get_param: network_2 }
            __network_2_name__: { get_param: network_2_name }
          template: |
            {
              "bigip": {
                "f5_ve_os_ssh_key_inject": "true",
                "change_passwords": "true",
                "admin_password": "__admin_password__",
                "root_password": "__root_password__",
                "license": {
                  "basekey": "__license__",
                  "host": "None",
                  "port": "8080",
                  "proxyhost": "None",
                  "proxyport": "443",
                  "proxyscripturl": "None"
                },
                "modules": {
                  "auto_provision": "false",
                  "ltm": "nominal"
                },
                "network": {
                  "dhcp": "true",
                  "selfip_prefix": "selfip-",
                  "vlan_prefix": "network-",
                  "routes": [
                    {
                      "destination": "0.0.0.0/0.0.0.0",
                      "gateway": "__default_gateway__"
                    }
                  ],
                  "interfaces": {
"1.1": { "dhcp": "true", "selfip_allow_service": "default", "selfip_name": "selfip.__network_1_name__", "selfip_description": "Self IP address for BIG-IP __network_1_name__ network", "vlan_name": "__network_1_name__", "vlan_description": "VLAN for BIG-IP __network_1_name__ network traffic", "is_failover": "false", "is_sync": "false", "is_mirror_primary": "false", "is_mirror_secondary": "false" }, "1.2": { "dhcp": "true", "selfip_allow_service": "default", "selfip_name": "selfip.__network_2_name__", "selfip_description": "Self IP address for BIG-IP __network_2_name__ network", "vlan_name": "__network_2_name__", "vlan_description": "VLAN for BIG-IP __network_2_name__ network traffic", "is_failover": "false", "is_sync": "false", "is_mirror_primary": "false", "is_mirror_secondary": "false" } } } } } outputs: ve_instance_name: description: Name of the instance value: { get_attr: [ve_instance, name] } ve_instance_id: description: ID of the instance value: { get_resource: ve_instance } floating_ip: description: The Floating IP address of the VE value: { get_attr: [floating_ip, floating_ip_address] } mgmt_ip: description: The mgmt IP address of f5 ve instance value: { get_attr: [mgmt_port, fixed_ips, 0, ip_address] } mgmt_mac: description: The mgmt MAC address of f5 VE instance value: { get_attr: [mgmt_port, mac_address] } mgmt_port: description: The mgmt port id of f5 VE instance value: { get_resource: mgmt_port } network_1_ip: description: The 1.1 Nonfloating SelfIP address of f5 ve instance value: { get_attr: [network_1_port, fixed_ips, 0, ip_address] } network_1_mac: description: The 1.1 MAC address of f5 VE instance value: { get_attr: [network_1_port, mac_address] } network_1_port: description: The 1.1 port id of f5 VE instance value: { get_resource: network_1_port } network_2_ip: description: The 1.2 Nonfloating SelfIP address of f5 ve instance value: { get_attr: [network_2_port, fixed_ips, 0, ip_address] } network_2_mac: description: The 1.2 MAC address of f5 VE instance value: { get_attr: [network_2_port, mac_address] } network_2_port: description: The 1.2 port id of f5 VE instance value: { get_resource: network_2_port } cluster_ready_ve_4_nic.yaml heat_template_version: 2015-04-30 description: This template deploys a standard f5 VE ready for clustering. parameters: open_sec_group: type: string default: open_sec_group ve_image: type: string label: F5 VE Image description: The image to be used on the compute instance. constraints: - custom_constraint: glance.image ve_flavor: type: string label: F5 VE Flavor description: Type of instance (flavor) to be used for the VE. constraints: - custom_constraint: nova.flavor use_config_drive: type: boolean label: Use Config Drive description: Use config drive to provider meta and user data. default: false ssh_key: type: string label: Root SSH Key Name description: Name of key-pair to be installed on the instances. constraints: - custom_constraint: nova.keypair admin_password: type: string label: F5 VE Admin User Password description: Password used to perform image import services root_password: type: string label: F5 VE Root User Password description: Password used to perform image import services license: type: string label: Primary VE License Base Key description: F5 TMOS License Basekey external_network: type: string label: External Network description: Network for Floating IPs default: test constraints: - custom_constraint: neutron.network mgmt_network: type: string label: VE Management Network description: Management Interface Network. 
    default: test
    constraints:
      - custom_constraint: neutron.network
  ha_network:
    type: string
    label: VE HA Network
    description: HA Interface Network.
    default: test
    constraints:
      - custom_constraint: neutron.network
  network_1:
    type: string
    label: VE Network for the 1.2 Interface
    description: 1.2 TMM network.
    default: test
    constraints:
      - custom_constraint: neutron.network
  network_1_name:
    type: string
    label: VE Network Name for the 1.2 Interface
    description: TMM 1.2 network name.
    default: data1
  network_2:
    type: string
    label: VE Network for the 1.3 Interface
    description: 1.3 TMM Network.
    default: test
    constraints:
      - custom_constraint: neutron.network
  network_2_name:
    type: string
    label: VE Network Name for the 1.3 Interface
    description: TMM 1.3 network name.
    default: data2
  default_gateway:
    type: string
    label: Default Gateway IP
    default: None
    description: Upstream Gateway IP Address for VE instances

resources:
  mgmt_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: mgmt_network }
      security_groups: [{ get_param: open_sec_group }]
  network_1_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: network_1 }
      security_groups: [{ get_param: open_sec_group }]
  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: external_network }
      port_id: { get_resource: mgmt_port }
  network_2_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: network_2 }
      security_groups: [{ get_param: open_sec_group }]
  ha_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: ha_network }
      security_groups: [{ get_param: open_sec_group }]
  ve_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: ve_image }
      flavor: { get_param: ve_flavor }
      key_name: { get_param: ssh_key }
      config_drive: { get_param: use_config_drive }
      networks:
        - port: { get_resource: mgmt_port }
        - port: { get_resource: ha_port }
        - port: { get_resource: network_1_port }
        - port: { get_resource: network_2_port }
      user_data_format: RAW
      user_data:
        str_replace:
          params:
            __admin_password__: { get_param: admin_password }
            __root_password__: { get_param: root_password }
            __license__: { get_param: license }
            __default_gateway__: { get_param: default_gateway }
            __network_1_name__: { get_param: network_1_name }
            __network_2_name__: { get_param: network_2_name }
          template: |
            {
              "bigip": {
                "ssh_key_inject": "true",
                "change_passwords": "true",
                "admin_password": "__admin_password__",
                "root_password": "__root_password__",
                "license": {
                  "basekey": "__license__",
                  "host": "None",
                  "port": "8080",
                  "proxyhost": "None",
                  "proxyport": "443",
                  "proxyscripturl": "None"
                },
                "modules": {
                  "auto_provision": "false",
                  "ltm": "nominal"
                },
                "network": {
                  "dhcp": "true",
                  "selfip_prefix": "selfip-",
                  "routes": [
                    {
                      "destination": "0.0.0.0/0.0.0.0",
                      "gateway": "__default_gateway__"
                    }
                  ],
                  "interfaces": {
                    "1.1": {
                      "dhcp": "true",
                      "selfip_allow_service": "default",
                      "selfip_name": "selfip.HA",
                      "selfip_description": "Self IP address for BIG-IP Cluster HA subnet",
                      "vlan_name": "vlan.HA",
                      "vlan_description": "VLAN for BIG-IP Cluster HA traffic",
                      "is_failover": "true",
                      "is_sync": "true",
                      "is_mirror_primary": "true",
                      "is_mirror_secondary": "false"
                    },
                    "1.2": {
                      "dhcp": "true",
                      "selfip_allow_service": "default",
                      "selfip_name": "selfip.__network_1_name__",
                      "selfip_description": "Self IP address for BIG-IP __network_1_name__",
                      "vlan_name": "__network_1_name__",
                      "vlan_description": "VLAN for BIG-IP __network_1_name__ traffic",
                      "is_failover": "false",
                      "is_sync": "false",
                      "is_mirror_primary": "false",
                      "is_mirror_secondary": "false"
                    },
"selfip_allow_service": "default", "selfip_name": "selfip.__network_2_name__", "selfip_description": "Self IP address for BIG-IP __network_2_name__", "vlan_name": "__network_2_name__", "vlan_description": "VLAN for BIG-IP __network_2_name__ traffic", "is_failover": "false", "is_sync": "false", "is_mirror_primary": "false", "is_mirror_secondary": "false" } } } } } outputs: ve_instance_name: description: Name of the instance value: { get_attr: [ve_instance, name] } ve_instance_id: description: ID of the instance value: { get_resource: ve_instance } mgmt_ip: description: The mgmt IP address of f5 ve instance value: { get_attr: [mgmt_port, fixed_ips, 0, ip_address] } mgmt_mac: description: The mgmt MAC address of f5 VE instance value: { get_attr: [mgmt_port, mac_address] } mgmt_port: description: The mgmt port id of f5 VE instance value: { get_resource: mgmt_port } ha_ip: description: The HA IP address of f5 ve instance value: { get_attr: [ha_port, fixed_ips, 0, ip_address] } ha_mac: description: The HA MAC address of f5 VE instance value: { get_attr: [ha_port, mac_address] } ha_port: description: The ha port id of f5 VE instance value: { get_resource: ha_port } network_1_ip: description: The 1.2 Nonfloating SelfIP address of f5 ve instance value: { get_attr: [network_1_port, fixed_ips, 0, ip_address] } network_1_mac: description: The 1.2 MAC address of f5 VE instance value: { get_attr: [network_1_port, mac_address] } network_1_port: description: The 1.2 port id of f5 VE instance value: { get_resource: network_1_port } network_2_ip: description: The 1.3 Nonfloating SelfIP address of f5 ve instance value: { get_attr: [network_2_port, fixed_ips, 0, ip_address] } network_2_mac: description: The 1.3 MAC address of f5 VE instance value: { get_attr: [network_2_port, mac_address] } network_2_port: description: The 1.3 port id of f5 VE instance value: { get_resource: network_2_port } floating_ip: description: Floating IP address of VE servers value: { get_attr: [ floating_ip, floating_ip_address ] }1.4KViews0likes1CommentOpenStack Primer
OpenStack Primer

Emerging technologies move at a rapid pace. Momentum created through partnerships between researchers, vendors, experts, practitioners, and adopters is vital to the success of emerging technologies. Such technological advances have a curve of adoption that eventually becomes appealing to the masses. However, exactly at such a time, there is a need to explain things in a simple manner. It is vitally important to begin explaining the technological vocabulary that is used for discussion and development within a community of collaborators, which enables self-teaching and self-learning. As a result, those beginning to look at these emerging areas do not get lost and miss out on the massive transformation. OpenStack is going through such a phase. This article is our attempt to provide a simple introduction to OpenStack, leaving the reader with resources to follow up and begin the education. Our objective is to briefly cover the history of OpenStack and help the reader acquire the vocabulary for the language the OpenStack Community often speaks.

In terms of expected audience, this article assumes you are part of an IT organization or a software development organization, and deploy software that is developed by you and/or provided by others (commercial vendor or open source). This article also assumes you have heard about and have some familiarity with cloud software platforms like Amazon Web Services (AWS) and server virtualization technologies, and are aware of the benefits they provide. This article should help you achieve the following goals:

- Begin to understand what OpenStack is about
- Manage an introductory conversation within your company, getting others in your team or other departments interested in OpenStack
- Establish credibility within your organization that you are familiar with OpenStack, and you or your team can begin to dive deep

Introduction

OpenStack (http://www.openstack.org) is a software platform that enables any organization to transform their IT Data Center into programmable infrastructure. OpenStack is primarily a services provisioning layer accessible via REST (http://en.wikipedia.org/wiki/REST) based Application Programming Interfaces (APIs - http://en.wikipedia.org/wiki/Application_programming_interface). A customer can use OpenStack for their in-house Data Center, while also using a Public Cloud like AWS for other needs. It all depends on the business needs and the technology choices a customer will make. Generally, the most significant motivation to deploy OpenStack in-house is to get AWS-like behavior within their own Data Center, wherein a Developer or an IT person can dynamically provision any infrastructure service - e.g. spin up a server with a virtual machine running Linux, create a new network subnet with a specific VLAN, create a load balancer with a virtual IP address, etc. - without having to cut an IT ticket and wait for someone else to perform those operations. The main business goal is self-service provisioning of infrastructure services, which requires a programmatic interface in front of the infrastructure.

OpenStack started within Rackspace, and in collaboration with NASA, the project was launched as an open source initiative. The project is now managed by the OpenStack Foundation with many entities on the Foundation's Board. The OpenStack project development is 100% community driven, with no single vendor or organization influencing the decisions and the roadmap.
All OpenStack software is developed using Python and is available under an open source (Apache 2.0) license. Any customer can download the Community Edition of OpenStack and run it on a Linux platform. Vendors including Canonical, Red Hat, HP, IBM, Cisco, Mirantis, MetaCloud, Persistent Systems, and many others take the OpenStack Community Edition, test it for other Linux variants, and also make it available as a Vendor Edition of OpenStack. Some of these vendors also add their deployment software (e.g. Mirantis FUEL, Canonical JUJU, etc.) to make OpenStack installations easy, and provide systems integration and consulting services. Customers have a choice to use the Community or the Vendor edition of OpenStack along with additional consulting and support services.

F5 joined the OpenStack Foundation as a Silver level sponsor in October 2013. F5 has been engaging with the community since early 2012 and decided to commit to developing OpenStack integrations in early 2014. In subsequent articles we will explain our integrations in detail.

OpenStack Releases

The OpenStack software platform is released twice a year, under a 6-month-long development cycle. The most recent release was called HAVANA, launched in November 2013. The upcoming release in May 2014 is called ICEHOUSE. Notable recent releases were: GRIZZLY, FOLSOM, ESSEX. F5 has committed to developing its integration for the HAVANA release and will update its support for future releases.

OpenStack Services

The OpenStack platform currently supports provisioning of the following services: Compute, Network, Storage, Identity, and Orchestration (and more). The OpenStack development community uses code words for each of these areas. Most of the OpenStack collateral and conversations use these code words (instead of saying Compute, Network, and such). These code words are part of the OpenStack vocabulary, and to become familiar and dive deep on OpenStack you need to be aware of them. The list given below represents the OpenStack project names and the service areas they generally align with. You will also see some related AWS service names in parentheses below, provided for comparison and education purposes only. Not all of the OpenStack services map directly to their AWS counterparts. In some cases the APIs are similar in concept, while in other cases they are different even when solving a similar problem (e.g. AWS Elastic Load Balancing service APIs and OpenStack Load Balancer As a Service APIs are significantly different).

- Neutron = Networking (L2/L3, DHCP, VPN, LB, FW, etc.)
- Nova = Compute (Similar to AWS EC2)
- Cinder = Block Storage (Similar to AWS EBS)
- Swift = Object Store (Similar to AWS S3)
- Keystone = Identity (Similar to AWS IAM)
- Glance = Image Service (Similar to AWS AMIs)
- Ceilometer = Metering (mostly for Billing purposes, not monitoring)
- Heat = Deployment and Auto Scaling (Similar to AWS CloudFormation and Auto Scaling)
- Horizon = Single-pane-of-glass Management GUI (Similar to AWS Console)
- Tempest = Test Framework

As a customer, when you ask your networking vendor about their integration with Neutron, you are asking for their integration with the Network provisioning services of OpenStack. You might also ask a vendor about their support for Nova. That could mean asking if their software can be installed on a VM running in the Compute layer of OpenStack. All these services, such as Neutron and Nova, are programmable via REST APIs.
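As a concrete illustration, the same provisioning can be driven from the command-line clients that wrap those REST APIs; the names and placeholders below are examples only:

# Create a network and a subnet through Neutron
neutron net-create app-net
neutron subnet-create --name app-subnet app-net 10.0.10.0/24

# Boot a server on that network through Nova; <image> and <net-id>
# come from `glance image-list` and `neutron net-list`
nova boot --image <image> --flavor m1.small --nic net-id=<net-id> app-server-01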
Your goal is to complement (or replace) point-and-click manual provisioning with automated API-driven provisioning of infrastructure services. By asking your infrastructure vendors for their OpenStack support, you are seeking confirmation on whether their technologies are available to be programmed over an OpenStack REST API, depending on the service, and in some cases whether their software can run in a VM (e.g. a KVM instance) provisioned using the OpenStack Nova API.

For most of these services, the OpenStack platform supports a Plug-In and Driver architecture. This allows vendors to support their own technology within the OpenStack platform. For example, the OpenStack community distribution supports creating an L2 switch-based network using the open source Open vSwitch software. However, using the Plug-In and Driver architecture, commercial vendors like Cisco have created support for their commercial switching products. As a result, a Cisco customer can now use the OpenStack Neutron L2/L3 Plug-in to provision a new subnet into a Cisco switch that they already had installed in their network - thus making their Cisco networking layer programmable via a standard vendor-neutral REST API. Each vendor can decide the level of interoperability they want to provide between the capabilities they natively support and the capability that OpenStack allows to be programmed through its APIs. The vendor can add extensions to the Plug-ins to expose additional functionality that is not yet part of the OpenStack community ratified API specification. Depending on how the community progresses in its development from release to release, some of these extensions could become part of the standard specifications, at which point they would become the official OpenStack APIs.

In terms of F5's integration with OpenStack, the most important service is Neutron. The community has defined APIs for various network services, including Load Balancer As A Service (LBAAS). Additionally, the Neutron layer also provides default functionality to support services like DHCP and DNS. F5 has committed to developing LBAAS Plug-ins for the OpenStack HAVANA release. When the Plug-ins are deployed with an OpenStack distribution, a customer will be able to create a load balancer instance on a HW or VE BIG-IP and provision the following elements: VIP, Server Pool, Health Monitor, Load Balancing Method. A follow-up blog post will provide details of this integration. F5 BIG-IP and BIG-IQ provide much more functionality than what OpenStack LBAAS allows to be provisioned. F5 will choose to deliver additional capabilities as Plug-In extensions when requested by our customers.
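To give a feel for what that tenant-facing provisioning looks like, these are the kinds of LBaaS v1 calls made through the Neutron CLI once a load balancer plug-in is configured as the service provider; subnet IDs, addresses, and names are placeholders:

neutron lb-pool-create --name web_pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id <member-subnet-id>
neutron lb-member-create --address 10.1.2.10 --protocol-port 80 web_pool
neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 2
neutron lb-healthmonitor-associate <healthmonitor-id> web_pool
neutron lb-vip-create --name web_vip --protocol HTTP --protocol-port 80 --subnet-id <vip-subnet-id> web_pool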
OpenStack Landscape

OpenStack complements, as well as creates open source alternatives to, commercial cloud software platforms. OpenStack creates choice for Enterprises and Service Providers to make provisioning of their existing or new Data Center infrastructure programmable via REST APIs. AWS and other Public Clouds are not necessarily direct competition to the OpenStack cloud software platform used for in-house Data Center programmability. Vendors such as HP and Rackspace operate a Public Cloud using the OpenStack software platform, which creates choice for customers when considering a Public Cloud solution for their business needs.

Conclusion

OpenStack is a rapidly emerging software platform, which is becoming increasingly stable release after release. As a result, we expect F5 customers to consider it for making their Data Center infrastructure programmable via REST APIs. Benefits of OpenStack include its open source nature supporting community development, with an increasing vendor-supported ecosystem. F5 is committed to staying involved within the OpenStack community and will commit to launching integrations and solutions based on customer needs, as they look to integrate F5's BIG-IP and BIG-IQ services into their OpenStack environments. You can visit F5's alliances page to learn about our OpenStack support.

References

http://en.wikipedia.org/wiki/OpenStack
http://www.rackspace.com/cloud/openstack/
http://www.cisco.com/web/solutions/openstack/index.html
http://www.redhat.com/openstack/
http://www8.hp.com/us/en/business-solutions/solution.html?compURI=1421776#.U0hKPtwuNkI
http://www.mirantis.com
http://www.persistent.com/technology/cloud-computing/OpenStack
http://www.metacloud.com
How is SDN disrupting the way businesses develop technology?

You must have read so much about software-defined networking (SDN) by now that you probably think you know it inside and out. However, such a nascent industry is constantly evolving and there are always new aspects to discover and learn about. While much of the focus on SDN has been on the technological benefits it brings, potential challenges are beginning to trouble some SDN watchers. While many businesses acknowledge that the benefits of SDN are too big to ignore, there are challenges to overcome, particularly with the cultural changes that it brings. In fact, according to attendees at the recent Open Networking Summit (ONS), the cultural changes required to embrace SDN outweigh the technological challenges. One example, outlined in this TechTarget piece, is that the (metaphorical) wall separating network operators and software developers needs to be torn down; network operators need coding skills and software developers will need to be able to program networking services into their applications.

That's because SDN represents a huge disruption to how organisations develop technology. With SDN, the speed of service provisioning is dramatically increased; provisioning networks becomes like setting up a VM... a few clicks of the button and you're done. This centralised network provision means the networking element of development is no longer a bottleneck; it's ready and available right when it's needed.

There's another element to consider when it comes to SDN, tech development and its culture. Much of what drives software-defined networking is open source, and dealing with that is something many businesses may not have a lot of experience with. Using open source SDN technologies means a company will have to contribute something back to the community - that's how open source works. But for some that may prove to be a bit of an issue: some SDN users such as banks or telecoms companies may feel protective of their technology and not want its source code to be released to the world. But that is the reality of the open source SDN market, so it is something companies will have to think carefully about. Are the benefits of SDN for tech development worth going down the open source route? That's a question only the companies themselves can answer.

Software-defined networking represents a huge disruption to the way businesses develop technology. It makes things faster, easier and more convenient during the process and from a management and scalability point of view going forward. There will be challenges - there always are when disruption is on the agenda - but if they can be overcome SDN could well usher in a new era of technological development.
Cloud bursting, the hybrid cloud, and why cloud-agnostic load balancers matter

Cloud Bursting and the Hybrid Cloud

When researching cloud bursting, there are many directions Google may take you. Perhaps you come across services for airplanes that attempt to turn cloudy wedding days into memorable events. Perhaps you'd rather opt for a service that helps your IT organization avoid rainy days. Enter cloud bursting ... yes, the one involving computers and networks instead of airplanes. Cloud bursting is a term that has been around in the tech realm for quite a few years. It, in essence, is the ability to allocate resources across various public and private clouds as an organization's needs change. These needs could be economic drivers such as Cloud 2 having lower cost than Cloud 1, or perhaps capacity drivers where additional resources are needed during business hours to handle traffic. For intelligent applications, other interesting things are possible with cloud bursting where, for example, demand in a geographical region suddenly needs capacity that is not local to the primary, private cloud. Here, one can spin up resources to locally serve the demand and provide a better user experience. Nathan Pearce summarizes some of the aspects of cloud bursting in this minute-long video, which is a great resource to remind oneself of some of the nuances of this architecture.

While cloud bursting is a term that is generally accepted by the industry as an "on-demand capacity burst," Lori MacVittie points out that this architectural solution eventually leads to a Hybrid Cloud, where multiple compute centers are employed to serve demand among both private-based and public-based resources, or clouds, all the time. The primary driver for this: practically speaking, there are limitations around how fast data that is critical to one's application (think databases, for example) can be replicated across the internet to different data centers. Thus, the promises of "on-demand" cloud bursting scenarios may be short lived, eventually leaning in favor of multiple "always-on compute capacity centers" as loads increase for a given application. In any case, it is important to understand that multiple locations, across multiple clouds, will ultimately be serving application content in the not-too-distant future.

An example hybrid cloud architecture where services are deployed across multiple clouds. The "application stack" remains the same, using LineRate in each cloud to balance the local application, while a BIG-IP Local Traffic Manager balances application requests across all of the clouds.

Advantages of cloud-agnostic Load Balancing

As one might conclude from the cloud bursting and hybrid cloud discussion above, having multiple clouds running an application creates a need for user requests to be distributed among the resources and for automated systems to be able to control application access and flow. In order to provide the best control over how one's application behaves, it is optimal to use a load balancer to serve requests. No DNS or network routing changes need to be made and clients continue using the application as they always did as resources come online or go offline; many times, too, these load balancers offer advanced functionality alongside the load balancing service that provides additional value to the application. Having a load balancer that operates the same way no matter where it is deployed becomes important when resources are distributed among many locations.
Understanding expectations around configuration, management, reporting, and behavior of a system limits issues for application deployments and discrepancies between how one platform behaves versus another. With a load balancer like F5's LineRate product line, anyone can programmatically manage the servers providing an application to users. Leveraging this programmatic control, application providers have an easy way to spin up and down capacity in any arbitrary cloud, retain a familiar yet powerful feature-set for their load balancer, ultimately redistribute resources for an application, and provide a seamless experience back to the user. No matter where the load balancer deployment is, LineRate can work hand-in-hand with any web service provider, whether considered a cloud or not. Your data, and perhaps more importantly cost-centers, are no longer locked down to one vendor or one location. With the right application logic paired with LineRate Precision's scripting engine, an application can dynamically react to take advantage of market pricing or general capacity needs. Consider the following scenarios where cloud-agnostic load balancers have advantages over vendor-specific ones:

Economic Drivers

- Time-dependent instance pricing: spot instances with much lower cost becoming available at night. Example: my startup's billing system can take advantage of better pricing per unit of work in the public cloud at night versus the private datacenter.
- Multiple vendor instance pricing: Cloud 2 just dropped their high-memory instance pricing lower than Cloud 1's, which is useful for your workload during normal business hours. Example: my application's primary workload is migrated to Cloud 2 with a simple config change.
- Competition: having multiple cloud deployments simultaneously increases competition, and thus your organization's negotiated pricing contracts become more attractive over time.

Computational Drivers

- Traffic Spikes: Someone in marketing just tweeted about our new product. All of a sudden, the web servers that traditionally handled all the loads thrown at them just fine are getting slashdotted by people all around North America placing orders. Instead of having humans react to the load and spin up new instances to handle it - or even worse: doing nothing - your LineRate system and application worked hand-in-hand to spin up a few instances in Microsoft Azure's Texas location and a few more in Amazon's Virginia region. This helps you distribute requests from geographically diverse locations: your existing datacenter in Oregon, the central US Microsoft Cloud, and the east-coast based Amazon Cloud. Orders continue to pour in without any system downtime, or worse: lost customers.
- Compute Orchestration: A mission-critical application in your organization's private cloud unexpectedly needs extra compute power, but needs to stay internal for compliance reasons. Fortunately, your application can spin up public cloud instances and migrate traffic out of the private datacenter without affecting any users or data integrity. Your LineRate instance reaches out to Amazon to boot instances and migrate important data. More importantly, application developers and system administrators don't even realize the application has migrated, since everything behaves exactly the same in the cloud location. Once the cloud systems boot, alerts are made to F5's LTM and LineRate instances that migrate traffic to the new servers, allowing the mission-critical app to compute away. You just saved the day!
The benefit of having a cloud-agnostic load balancing solution for connecting users with an organization's applications is not only a unified user experience, but also a powerful, unified way of controlling the application for its administrators. If all of a sudden an application needs to be moved from, say, a private datacenter with a 100 Mbps connection to a public cloud with a GigE connection, this can easily be done without having to relearn a new load balancing solution. F5's LineRate product is available for bare-metal deployments on x86 hardware and virtual machine deployments, and has recently been released as an Amazon Machine Image (AMI). All of these deployment types leverage the same familiar, powerful tools that LineRate offers: lightweight and scalable load balancing, modern management through its intuitive GUI or the industry-standard CLI, and automated control via its comprehensive REST API. LineRate Point Load Balancer provides hardened, enterprise-grade load balancing and availability services, whereas LineRate Precision Load Balancer adds powerful Node.js programmability, enabling developers and DevOps teams to leverage thousands of Node.js modules to easily create custom controls for application network traffic. Learn about some of LineRate's advanced scripting and functionality here, or try it out for free to see if LineRate is the right cloud-agnostic load balancing solution for your organization.
Deploying OpenStack DevStack on VMware Fusion

For developing and testing the F5 OpenStack agent and driver, I created a DevStack virtual machine on my MacBook Pro using VMware Fusion Pro for OS X version 7.1.3. DevStack is a collection of scripts that allows users to rapidly deploy an OpenStack environment. I run this DevStack virtual machine along with a BIG-IP Virtual Edition (VE) as a self-contained environment for quick development and test. If you would like to install this along with a BIG-IP Virtual Edition, I recommend reading Deploying F5 BIG-IP Virtual Edition on VMware Fusion by Chase Abbott. I will build upon his article and use the same networks so that the reader can integrate the BIG-IP into an OpenStack environment. In order to install both a DevStack VM and an F5 BIG-IP Virtual Edition on the same host, I would recommend using a host with 16GB of RAM.

DevStack Install

- Create VMware networks
- Install Ubuntu 14.04
- Install DevStack

Create Custom API and Provider Networks

If you have followed the instructions on how to install and configure additional networking, you can use vmnet2 and vmnet4 as the OpenStack public and API networks. If you do not wish to install VE and only need to set up VMware networks, then do the following:

- Start VMware Fusion Pro, and select the menu VMware Fusion > Preferences
- Click the Network icon
- Click the lock icon to authenticate and create additional networks
- Click the + icon to create an additional network; for my example I will use vmnet2 and vmnet4.

Select vmnet2 and configure the following (provider) network:

- Select the option, "Allow virtual machines on this network to connect to external networks (using NAT)."
- Select the option, "Connect the host Mac to this network."
- Select "Provide addresses on this network via DHCP."
- In the Subnet IP field, enter 10.128.1.0
- In the Subnet Mask field, enter 255.255.255.0

Select vmnet4 and configure the following (OpenStack API) network:

- Select the option, "Allow virtual machines on this network to connect to external networks (using NAT)."
- Select the option, "Connect the host Mac to this network."
- Select "Provide addresses on this network via DHCP."
- In the Subnet IP field, enter 10.128.20.0
- In the Subnet Mask field, enter 255.255.255.0

Create Ubuntu 14.04 Instance

Download an ISO image of Ubuntu Server 14.04.5 Trusty for the DevStack VM. There are more recent versions of Ubuntu available; however, these may not be supported by the OpenStack Mitaka release. The VM that I create has 2 processor cores and 4GB of RAM -- it is not a very beefy setup, and more resources could be added, but this is good enough for a small development environment. If you want to add a BIG-IP VE on the same development host, you should stay within these constraints.

Provision Processors and Memory

- Start VMware Fusion Pro, select the menu File > New, and click Continue.
- Choose to install from the ubuntu-14.04.5-server-amd64.iso
- Select Customize Settings
- Choose a name for the virtual machine and click Save
- Select Processors & Memory
- Select 2 processor cores from the Processors drop down
- Change the Memory amount to 4096
- Click on Advanced options and select Enable hypervisor applications in this virtual machine

Connect Network Adapters

- Click Network Adapter, and click vmnet2
- Click Show All, and click Add Device
- Click Network Adapter, and Add...
- Click vmnet4, and click Show All
- Close the Settings window and start the install.

Install the Ubuntu 14.04.5 Server VM

- Choose all the defaults during the install until the network configuration.
- Select eth0 as the primary network interface. This will be the management interface.
Install the Ubuntu 14.04.5 Server VM

Choose all the defaults during the install until the network configuration:
- Select eth0 as the primary network interface. This will be the management interface.
- Enter whatever you would like for the hostname.
- Add a new user and a password.

For disk partitioning:
- Select the "Guided – use entire disk and set up LVM" method.
- Select Yes when prompted "Write the changes to disks and configure LVM?"
- Accept the default when prompted "Amount of volume group to use for guided partitioning" and click Continue.
- Select Yes when prompted "Write the changes to disks?"

Choose Continue when prompted for a proxy. When prompted "How do you want to manage upgrades on this system?" select "No automatic updates". Select OpenSSH server in the Software Selection screen and wait for the software to install. Select Yes when prompted "Install the GRUB boot loader to the master boot record?" To finish the install, select Continue.

Configure the Virtual DevStack VM

After the DevStack VM reboots, you'll be presented with a login screen. Log in to the Ubuntu server with the username and password you created during the install, then update and install packages:

$ sudo apt-get update
$ sudo apt-get upgrade -y
$ sudo apt-get install git -y

Reboot the host.

Configure the VM networking

It is a good idea to create static IP addresses on the guest interfaces for the OpenStack service API endpoints. Edit the network configuration, /etc/network/interfaces. I modified mine based on the VM's network adapter configuration.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
    address 10.128.1.128
    netmask 255.255.255.0
    gateway 10.128.1.2

auto eth1
iface eth1 inet static
    address 10.128.20.150
    netmask 255.255.255.0
    network 10.128.20.0

You must also update the resolver configuration, /etc/resolvconf/resolv.conf.d/base:

nameserver 8.8.8.8
search localdomain

Once you have finished, you can either restart networking and resolvconf or just reboot the guest.

Install DevStack

These instructions are taken from the DevStack website: http://docs.openstack.org/developer/devstack

Add the stack user:

$ sudo adduser stack
$ echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee -a /etc/sudoers

Download DevStack

Log in as the stack user (or the user you created when installing Ubuntu) and clone the devstack repository. To work on Mitaka, check out the stable/mitaka branch:

$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack
$ git checkout -b stable/mitaka origin/stable/mitaka
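Since nova will rely on hardware-assisted virtualization inside this guest, it is worth confirming that the VT-x/AMD-V flag made it through before stacking. A minimal check; the cpu-checker package that provides kvm-ok is an extra install, not something DevStack pulls in for you.

# Confirm the virtualization flag is visible inside the Ubuntu guest.
egrep -c '(vmx|svm)' /proc/cpuinfo      # should print a value greater than 0
sudo apt-get install -y cpu-checker     # provides the kvm-ok helper
sudo kvm-ok                             # should report that KVM acceleration can be used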
Create a local.conf

The local.conf file is used as configuration input to the DevStack deployment scripts. It is very configurable, but not well documented. Most of the default values in the configuration file are acceptable; however, if you want to explore the options available you will very likely find yourself delving into the deployment scripts. What I am providing is a sparse configuration to get Keystone, Glance, Nova, Neutron and Neutron-LBaaS running. The devstack repository contains a sample local.conf, devstack/samples/local.conf. Copy the sample configuration into the devstack directory:

$ cp devstack/samples/local.conf devstack

Modify HOST_IP and HOST_IP_IFACE to reference the API endpoint. In the case of this example we use eth1:

HOST_IP=10.128.20.150
HOST_IP_IFACE=eth1

Append a Neutron configuration:

# Neutron
# -------
disable_service n-net
enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-lbaasv2

OVS_ENABLE_TUNNELING=True
NEUTRON_CREATE_INITIAL_NETWORKS=False
Q_USE_SECGROUP=True
PUBLIC_INTERFACE=eth0
PUBLIC_BRIDGE=br-ex
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=$PUBLIC_BRIDGE
PHYSICAL_NETWORK=physnet
PROVIDER_NETWORK_TYPE=flat

Disable swift, tempest and horizon, and pin the noVNC version to 0.6.0:

# Swift
# -----
disable_service s-proxy
disable_service s-object
disable_service s-container
disable_service s-account

# Disable services
disable_service horizon
disable_service tempest

# Fix for console issues on noVNC
NOVNC_BRANCH=v0.6.0

# Make noVNC available on eth0
NOVNCPROXY_URL="http://10.128.1.128:6080/vnc_auto.html"

The swift project can be enabled, but I find it superfluous to manage object storage on a laptop virtual machine. The current version of Horizon exposes a bug in the openstacksdk package that breaks the installation. It can be added back in on a subsequent build, but a change to the Mitaka requirements must be made first; I will add details below. The noVNC branch must be fixed at 0.6.0 or you will experience problems with console access to VMs.

Deploy Stack

Now run the stack script. This will take about 15 minutes the first time, but once the service repositories are cloned subsequent runs will be much shorter.

$ cd devstack
$ ./stack.sh

Congratulations! You should now have a running Mitaka stack.

=========================
DevStack Components Timed
=========================
run_process - 46 secs
test_with_retry - 3 secs
apt-get-update - 6 secs
pip_install - 238 secs
restart_apache_server - 6 secs
wait_for_service - 8 secs
git_timed - 134 secs
apt-get - 101 secs

Teardown and cleanup stack

Once you are finished with the deployment, you should tear down the services and clean up the stack:

$ cd devstack
$ ./unstack.sh
$ ./clean.sh

Install Horizon

In order to get Horizon to work on the current stable/mitaka branch, you will need to tear down your current stack and restack. Once the stack is cleaned up, edit /opt/stack/requirements/upper-constraints.txt and change the upper constraint on openstacksdk. You can then uncomment the disable_service horizon line in local.conf and restack. Change the line:

openstacksdk===0.8.1

to:

openstacksdk===0.9.6
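Whenever you (re)stack, a quick smoke test from the devstack directory confirms that the services and the LBaaSv2 extension are answering. A minimal sketch; it assumes the openrc helper and the CLI clients that DevStack normally installs.

$ source openrc admin admin          # load the admin credentials created by DevStack
$ openstack service list             # keystone, glance, nova and neutron should be listed
$ neutron agent-list                 # L3, DHCP, metadata and OVS agents should show alive
$ neutron lbaas-loadbalancer-list    # empty output still proves LBaaSv2 is wired up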
OpenStack in a backpack – how to create a demo environment for F5 Heat Plugins, part 2

Part 2 – OpenStack installation and testing

After "Part 1 – Host environment preparation" you should have your CentOS 7, 64-bit instance up and running. This article will walk you through the OpenStack RDO installation and testing process. Every command requires Linux root privileges and is issued from the /root/ directory.

If you visited the RDO project website, you might have seen the Packstack quickstart installation instructions, recommending you issue a simple packstack --allinone command. Unfortunately, packstack --allinone does not build out OpenStack networking in a way that gives us external access to the guest machines. It doesn't install the Heat project either. This is why we need to follow a slightly more complex route.

Install and Set Up OpenStack RDO

Install and update the RDO repos:

yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/rdo-release-mitaka-5.noarch.rpm
yum update -y

Install the OpenStack RDO Mitaka packages:

yum install -y centos-release-openstack-mitaka
yum install -y openstack-packstack

I strongly recommend that you generate a Packstack answer file first, review it, correct it if needed, and then apply it. It's less error-prone and it expedites future re-installation (or later installation on a second host).

Generate the Packstack answer file:

packstack --gen-answer-file=my_answer_file.txt \
--allinone --provision-demo=n --os-neutron-ovs-bridge-mappings=extnet:br-ex \
--os-neutron-ovs-bridge-interfaces=br-ex:ens33 --os-neutron-ml2-type-drivers=vxlan,flat \
--os-neutron-ml2-vni-ranges=100:900 --os-neutron-ml2-tenant-network-types=vxlan \
--os-heat-install=y \
--os-neutron-lbaas-install=y \
--default-password=default

NOTE: "ens33" should be replaced with the name of your non-loopback interface. Use "ip addr" to find it. You can remove the --os-neutron-lbaas-install=y \ line if you don't intend to use your portable OpenStack cloud to play with F5 LBaaS as well.

Review the answer file (my_answer_file.txt) and correct it if needed, then apply it:

packstack --answer-file=my_answer_file.txt

Now it's time to take a long coffee or lunch break, because this part takes a while!

Verify that a bridge has been properly created. Your non-loopback interface should be assigned to an OVS bridge...

[root@mitaka network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
DEVICE=ens33
NAME=ens33
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none

... and the new OVS bridge device should take over the IP settings:

[root@mitaka network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEFROUTE=yes
UUID=015b449c-4df4-497a-93fd-64764e80b31c
ONBOOT=yes
IPADDR=10.128.10.3
PREFIX=24
GATEWAY=10.128.10.2
DEVICE=br-ex
NAME=br-ex
DEVICETYPE=ovs
OVSBOOTPROTO=none
TYPE=OVSBridge

Change the virt type from qemu to kvm, otherwise the F5 VE will not boot:

openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm

Restart the nova compute service for your changes to take effect:

openstack-service restart openstack-nova-compute

Mitaka RDO may not take a default DNS forwarder from the /etc/resolv.conf file, so you need to set it manually:

openstack-config --set /etc/neutron/dhcp_agent.ini \
DEFAULT dnsmasq_dns_servers 10.128.10.2
openstack-service restart neutron-dhcp-agent

In this example, 10.128.10.2 is the closest DNS resolver; that should be your VMware workstation or home router (see part 1).
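Before moving on to networking, it is worth a quick health check of the freshly deployed services. A minimal sketch; openstack-status comes from the openstack-utils package, which packstack normally installs.

openstack-status | head -40                       # summary of OpenStack service states
systemctl status neutron-dhcp-agent --no-pager    # confirm the DHCP agent restarted cleanly
ovs-vsctl show | grep -A 2 br-ex                  # br-ex should carry your physical interface (ens33 here)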
There are a couple of CLI authentication methods in OpenStack (e.g. http://docs.openstack.org/developer/python-openstackclient/authentication.html). Let's use the easiest – but not the most secure – one for the purpose of this exercise, which is setting bash environment variables with the username, password, and tenant name. The packstack command has created a flat file containing environment variables that logs you in as the admin user and the admin tenant. You can load those bash environment variables by issuing:

source keystonerc_admin

Set Up OpenStack Networks

Next, we'll configure our networks in OpenStack Neutron. The "openstack" commands will be issued on behalf of the admin user, until you load different environment variables. It's a good time to review the keystonerc_admin file, because soon you will have to create such a file for a regular (non-admin) user.

Load the admin credentials:

source keystonerc_admin

Configure an external provider network:

neutron net-create external_network --provider:network_type flat \
--provider:physical_network extnet --router:external

Configure an external provider subnet with an IP address allocation pool that will be used for OpenStack floating IP addresses:

neutron subnet-create --name public_subnet --enable_dhcp=False \
--allocation-pool=start=10.128.10.30,end=10.128.10.100 \
--gateway=10.128.10.2 external_network 10.128.10.0/24 \
--dns-nameserver=10.128.10.2

In the example given, 10.128.10.0/24 is my external subnet (the one configured during the CentOS installation); 10.128.10.2 is both the default gateway and the DNS resolver.

Please note that an OpenStack floating IP address is a completely different concept than an F5 floating IP address. An F5 floating IP address is an IP that floats between physical or virtual F5 entities in a cluster. An OpenStack floating IP address is a NAT address that is translated from a public IP address to a private (tenant) IP address. In this case, the floating IP range is defined by --allocation-pool=start=10.128.10.30,end=10.128.10.100.

To avoid typos and confusion, let's make further changes via the CLI and review the results in the GUI. In Chrome or Firefox, enter the IP address you assigned to your CentOS host in the address bar: http://<IP-address> (http://10.128.10.3, in my case). NOTE: IE didn't work with the OpenStack Horizon (GUI) Mitaka release on my laptop.

Set up a User, Project, and VM in OpenStack

Next, to make this exercise more realistic, let's create a non-admin tenant and a non-admin user with non-admin privileges.

Create a demo tenant (a.k.a. project):

openstack project create demo

Create a demo user and assign it to the demo tenant:

openstack user create --project demo --password default demo

Create a keystone environment variable file for the demo user that accesses the demo tenant/project:

cp keystonerc_admin keystonerc_demo
sed -i 's/admin/demo/g' keystonerc_demo
source keystonerc_demo

Your bash prompt should now look like [root@mitaka ~(keystone_demo)]#, since from now on all the commands will be executed on behalf of the demo user, within the demo tenant/project. In addition, you should be able to log into the OpenStack Horizon dashboard with the demo/default credentials.
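You can also confirm the demo credentials from the CLI before building the tenant networks; a minimal check, assuming keystonerc_demo sits in your current directory.

source keystonerc_demo
openstack token issue        # should return a token issued to the demo user
openstack network list       # external_network should be visible to the demo tenant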
Create a tenant router. Please double-check that your environment variables are set for the demo user.

neutron router-create router1
neutron router-gateway-set router1 external_network

Create a tenant (internal) management network:

neutron net-create management

Create a tenant (internal) management subnet and attach it to the router:

neutron subnet-create --name management_subnet management 10.0.1.0/24
neutron router-interface-add router1 management_subnet

Your network topology should be similar to that shown below (Network -> Network Topology -> Toggle labels).

Create an ssh keypair and store the private key on your PC. You will use this key for password-less logins to all the guest machines:

openstack keypair create default > demo_default.pem

Store demo_default.pem on your PC. Please remember to chmod 600 demo_default.pem before using it with the ssh -i command.

Create an "allow all" security group:

openstack security group create allow_all
openstack security group rule create --proto icmp \
--src-ip 0.0.0.0/0 --dst-port 0:255 allow_all
openstack security group rule create --proto udp \
--src-ip 0.0.0.0/0 --dst-port 1:65535 allow_all
openstack security group rule create --proto tcp \
--src-ip 0.0.0.0/0 --dst-port 1:65535 allow_all

Again, it's not best practice to use such loose access rules, but the goal of this exercise is to create an easy-to-use demo/test environment.

Download and install a Cirros image (12.6MB):

curl http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img | \
openstack image create --container-format bare \
--disk-format qcow2 "Cirros image"

Check the image status:

+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
| container_format | bare                                                 |
| created_at       | 2016-08-29T19:08:58Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/6811814b-9288-48cc-a15a-9496a14c1145/file |
| id               | 6811814b-9288-48cc-a15a-9496a14c1145                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | Cirros image                                         |
| owner            | 65136fae365b4216837110178a7a3d66                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13287936                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-08-29T19:12:46Z                                 |
| virtual_size     | None                                                 |
| visibility       | private                                              |
+------------------+------------------------------------------------------+

Spin up your first guest VM in your OpenStack environment. Here you will use the previously created ssh key and the allow_all security group. The guest image should be connected to the tenant "management" network:

openstack server create --image "Cirros image" --flavor m1.tiny \
--security-group allow_all --key-name default \
--nic net-id=management Cirros1

Now you should be able to access the Cirros1 console. If everything went well, you should be able to log in to the Cirros1 VM with the cirros/cubswin:) credentials and ping www.f5.com from Cirros1. If your ping works, Cirros1 has a way out. Next, let's create a way in, so we can ssh to the Cirros1 instance from outside. If the console doesn't react to keystrokes, click the blue bar with "Connected (unencrypted)" first.
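If you prefer to stay in the shell, the same checks can be done from the CLI; a minimal sketch using standard openstack client commands.

openstack server list                             # Cirros1 should be ACTIVE with a 10.0.1.x address
openstack console log show Cirros1 | tail -20     # the CirrOS boot banner ends with the login prompt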
Create an OpenStack floating IP address:

openstack ip floating create external_network

Observe the newly created IP address; you will use it in the next step:

+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| fixed_ip    | None                                 |
| id          | 5dff365f-3cc8-4d61-8e7e-f964cef3bb8b |
| instance_id | None                                 |
| ip          | 10.128.10.31                         |
| pool        | external_network                     |
+-------------+--------------------------------------+

Now, the newly created "public" IP address may be attached to any virtual machine, Cirros1 in this case. Attaching an OpenStack floating IP address creates a one-to-one network address translation between the public IP address and the tenant (private) IP address.

Attach the OpenStack floating IP address to your Cirros1:

openstack ip floating add 10.128.10.31 Cirros1

Test the way in: ssh from outside to your Cirros1:

ssh -i demo_default.pem cirros@10.128.10.31

Or the equivalent putty command. Check that you are really on Cirros1:

$ id
uid=1000(cirros) gid=1000(cirros) groups=1000(cirros)

If the test was successful, your basic OpenStack services are up and running. To spare scarce CPU and memory resources, I strongly suggest suspending or removing the Cirros1 VM.

Suspending:

openstack server suspend Cirros1

Removing:

openstack server delete Cirros1

Before you boot or shut down the CentOS host, you should be aware of one CentOS/Mitaka-specific issue: your hardware may be too slow to boot the httpd service within the default 90s. This is why I had to change this timeout:

sed -i \
's/^#DefaultTimeoutStartSec=[[:digit:]]\+s/DefaultTimeoutStartSec=270s/g' \
/etc/systemd/system.conf

Useful troubleshooting commands: openstack-status, systemctl show httpd.service, systemctl status httpd.service.

In the next (and final) article in this series, we'll install the F5 Heat plugins and onboard the F5 qcow2 image using F5 Heat templates.
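Since packstack was run with --os-heat-install=y, Heat should already be answering, which is worth verifying before the next article. A minimal check, assuming the heat CLI plugin that packstack installs:

source keystonerc_demo
openstack stack list       # an empty table means Heat is reachable and no stacks exist yet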
OpenStack in a backpack – how to create a demo environment for F5 Heat Plugins, part 1

Part 1 – Host environment preparation

If you are a network engineer, one day you may be asked to roll out F5 ADC in an OpenStack environment. But hey, OpenStack may look new and scary to you, as it was for me. You've probably learned on DevCentral (https://devcentral.f5.com/s/wiki/openstack.openstack_heat.ashx) that Heat is the primary OpenStack orchestration service, and that Heat and the F5 Heat Plugins are useful for rolling out and configuring F5 BIG-IP Virtual Edition (VE) on top of an OpenStack environment. Let's say that you've decided to familiarize yourself with Heat and the F5 Heat Plugins, but you do not have access to any OpenStack test or development environments, and/or you do not have admin rights to install the F5 Heat Plugins in OpenStack. If so, this series of articles is for you.

Please note that OpenStack is a moving target. At the time I was writing this article, Mitaka was the supported OpenStack release. See https://releases.openstack.org/ for further information.

This is what you need to create your small, single-host OpenStack lab environment:
- Some hardware or a virtual machine with >=4 cores, 16G of RAM and Intel VT-x/EPT or AMD-V/RVI support. In my case, it was an old Dell laptop that the IT department intended to scrap. ;-)
- A decent Internet connection.
- In case of a bare-metal deployment, a home router. In my case it was my old WRT54GL.
- A couple of VE 45-day evaluation licenses; ask your F5 Partner or F5 representatives.

To sum up, it cost me $0 to build my own OpenStack lab environment. This is what you will get:
- A fully functional OpenStack Mitaka environment
- Heat and the F5 Heat Plugins
- F5 VE as a guest machine
- Direct access to the F5 GUI, CLI, and virtual servers from your laptop or any other devices connected to the home router
- The ability to test the F5 VE setup from inside or outside of OpenStack

Naming convention:
[HW] – actions to be done for the bare-metal deployment
[VMware] – actions to be done for the VMware guest machine deployment
no square bracket – do it regardless of whether it's a physical or VMware deployment

Host Preparation

Download and Install the Operating System

Check that Intel VT-x/EPT or AMD-V/RVI is turned on in your BIOS.

Download a CentOS 7 64-bit minimal image: http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso

[HW] Store the CentOS image on a USB stick. The simplest method is to use some other Linux box:

dd if=CentOS-7-x86_64-Minimal-1511.iso of=/dev/sdb

In the above command, /dev/sdb is the USB device; your USB device name will most likely be different. To find out the USB device name, observe /var/log/syslog or /var/log/messages while connecting the USB drive to the Linux machine. Further info at: https://wiki.centos.org/HowTos/InstallFromUSBkey
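Before running dd, it is a good idea to verify the download and double-check which device the USB stick landed on; a minimal sketch using standard tools.

sha256sum CentOS-7-x86_64-Minimal-1511.iso   # compare against the checksum published on the CentOS mirror
lsblk -o NAME,SIZE,MODEL,TRAN                # the USB stick normally shows up with TRAN=usb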
[VMware] Create a custom VMware Workstation virtual machine. Set everything to default except:
- Guest operating system: Red Hat Enterprise Linux 7 64-bit
- Number of processors: 4
- Memory for this virtual machine (RAM): 16384MB (16GB)
- Use network address translation (NAT). You should review and alter the VMware network configuration. Let's assume that the NAT network is 10.128.10.0/24, where the default route and DNS are set to 10.128.10.2; turn off DHCP.
- Maximum disk size: 200GB
- IMPORTANT: Turn on nested VM support: Edit virtual machine settings -> Processors -> enable Virtualize Intel VT-x/EPT or AMD-V/RVI. NOTE: If you see different GUI options, read this article: https://communities.vmware.com/docs/DOC-8970
- For the sake of performance, disable memory page trimming.

Install CentOS. Boot the USB drive [HW] or connect the CentOS .iso image to the VMware guest machine [VMware].

Configure the OS
- Choose English as the language.
- Set your local time zone. Correct time will ease troubleshooting.
- Disable the Security Policy.
- Network settings: set your hostname to mitaka.f5demo.com and configure the network interface:
  - Turn on "Automatically connect to this network when it is available".
  - Configure the IPv4 address, mask, default route and DNS.
  - Disable IPv6.
  - Check that the network interface is switched on.
- [optional] Disable KDUMP (from the main screen).
- Open INSTALLATION DESTINATION from the main screen and choose "Partition disk automatically".
- Hit "Begin Installation".
- Set the root password to "default". You need to hit the "DONE" button twice to make CentOS accept the weak password. A user account is not required.
- CentOS will prompt you to reboot.

Permit ssh root login. Uncomment PermitRootLogin yes in /etc/ssh/sshd_config and restart the ssh service:

systemctl restart sshd.service

Configure the host

Now you should be able to log in to your OpenStack host with an ssh client (e.g., putty or iTerm2), so you can copy/paste the rest of the commands.

Generate an ssh keyset (empty passwords recommended):

ssh-keygen

[HW] By default, CentOS will suspend on a lid close. To turn this off, set HandleLidSwitch=ignore in /etc/systemd/logind.conf. Further details at http://askubuntu.com/questions/360615/ubuntu-server-13-10-now-goes-to-sleep-when-closing-laptop-lid

Turn off Network Manager:

systemctl disable NetworkManager
systemctl stop NetworkManager

Set the hostnames as shown below:

[root@mitaka ~]# cat /etc/hosts | grep mitaka
10.128.10.3 mitaka.f5demo.com mitaka

and

cat /etc/hostname
mitaka.f5demo.com

Fill in /etc/sysconfig/network as depicted:

# Created by anaconda
NETWORKING=yes
HOSTNAME=mitaka.f5demo.com
GATEWAY=10.128.10.2

Where 10.128.10.2 is the gateway IP address.

Switch off selinux:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Set English locales:

[root@openstack01 ~]# cat /etc/environment
LANG=en_US.utf-8
LC_ALL=en_US.utf-8

Update the system:

yum update -y

Reboot the system:

reboot

Next, you should be ready to install OpenStack RDO. Stay tuned for instructions in the next installment of OpenStack in a Backpack!
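After the final reboot, a short check confirms the host is in the state the RDO installer expects; a minimal sketch.

getenforce                           # should print "Disabled"
systemctl is-enabled NetworkManager  # should print "disabled"
hostnamectl | grep -i hostname       # should show mitaka.f5demo.com
ping -c 3 www.f5.com                 # confirms the default route and DNS via 10.128.10.2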
Installing, Configuring and Running Tempest tests against Openstack

The intent of this article is to give detailed information about how to install and configure Tempest and run Tempest tests. Tempest is the integration test suite of OpenStack and is responsible for validating a stack. To achieve this, there is an integration test suite in the tempest project itself and there are also Tempest tests inside each individual project, like neutron-lbaas. In this article, Tempest is configured to run against a live stack that has the F5 OpenStack Agent and the F5 OpenStack LBaaSv2 driver running. For more information about how the F5 OpenStack development team uses Tempest tests, see F5 OpenStack Testing Methodology. Most of the information here is from Tempest - The OpenStack Integration Test Suite.

Installing Tempest

Installing Tempest is the first step. To do that you can create a virtual environment:

pip install virtualenv
# Change to a directory where virtual environment directories can be created.
virtualenv <test-env>
source <test-env>/bin/activate

After installing and activating virtualenv, check out the tempest repo and install it using pip:

git clone http://git.openstack.org/openstack/tempest
pip install tempest/

Installing Tempest will create an /etc/tempest directory under your virtualenv path containing the sample config file packaged with Tempest. After you fill out the contents of tempest.conf and accounts.yaml, copy both files to this directory and run the following command:

export TEMPEST_CONFIG_DIR=<test-env>/etc/tempest/

Running Tempest tests for the neutron_lbaas project

To run the neutron_lbaas Tempest tests, you need to clone the neutron-lbaas project:

git clone https://github.com/openstack/neutron-lbaas.git

You can also use tox for testing. You can install tox using pip:

pip install tox

Configuring the tox.ini file

The root directory of the neutron-lbaas project contains a tox.ini file, and inside it there are multiple envlist options. In order to run tox against LBaaSv2, modify the envlist to include only apiv2:

envlist=apiv2

The first tox run will take about 15 minutes for the neutron_lbaas package, because it creates a virtualenv in the .tox directory inside the project and installs all the dependency packages there. On subsequent runs it activates the existing virtualenv, so it runs much faster.

In this article, I will run the tests using py.test. For this reason, you need to add pytest to the list of deps in the tox.ini file:

deps=<already defined dependencies>
     pytest

Also modify the commands section of [testenv:apiv2] to use pytest:

commands={posargs:py.test}
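With envlist trimmed to apiv2, a plain tox run now executes only the LBaaSv2 API tests; a minimal sketch (the tests themselves will want the tempest.conf described next, while the --collect-only pass just lists what would run).

cd neutron-lbaas/
tox -e apiv2                                     # first run builds the .tox virtualenv, later runs are faster
tox -e apiv2 -- py.test --collect-only | head    # list the discovered tests without running them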
Configuring the tempest.conf file

I added comments before the required fields in tempest.conf to show what their values should look like. Make sure to uncomment (remove the preceding #) the fields you want to include in your tempest.conf file, otherwise the default value will be used.

[auth]
# Fill out the accounts.yaml file with the right credentials
# and save it in the same directory as the tempest.conf file.
#test_accounts_file = <None>
# Change this field to admin.
#admin_username = <None>
# OpenStack environment project name.
#admin_project_name = <None>
# Admin user password in the OpenStack environment.
#admin_password = <None>

[compute]
# This field is required for scenario tests; uncomment the image_ref
# field and pass the image ID of the cirros image in Glance.
#image_ref = <None>
# Pass the flavor_ref from the listed flavors in the OpenStack environment
# for scenario tests.
#flavor_ref = 1
# Network name from the admin tenant that has ports
# in both the router and the external network.
#fixed_network_name = <None>

[identity]
# The uri field needs to be in the following format:
# http://<CONTROLLER_IP>:35357/v2.0
#uri = <None>

[identity-feature-enabled]
# api_v2 needs to be set to true.
# Is the v2 identity API enabled (boolean value)
#api_v2 = true

[network]
# Use the network CIDR of your "fixed_network_name".
#project_network_cidr = 10.100.0.0/16
# Project network mask bits for "fixed_network_name".
#project_network_mask_bits = 28
# This is the ID of your external network.
#public_network_id =
# This is the name of your external network.
#floating_network_name = <None>
# This is the ID of the public router.
#public_router_id =
# DNS server address.
#dns_servers = 8.8.8.8,8.8.4.4
# This is the address of the fixed network.
#default_network = 1.0.0.0/16,2.0.0.0/16

[service_available]
# Neutron is available, so this field needs to be set to true.
# Whether or not neutron is expected to be available (boolean value)
#neutron = false

[validation]
# This is the username of the image whose ID you passed in the image_ref field.
# For the cirros image this is "cirros".
#image_ssh_user = root
# This is the password of the image whose ID you passed in the image_ref field.
# For the cirros image this is "cubswin:)".
#image_ssh_password = password
# Name of the network in the admin tenant that can be used for SSH connections.
network_for_ssh = tempest-mgmt-network

Configuring the accounts.yaml file

Fill in the username, tenant name and password from the OpenStack environment:

username: 'admin'
tenant_name: 'admin'
password: 'changeme'
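Several of the fields above (public_network_id, public_router_id, image_ref, flavor_ref) are IDs you can read straight from the stack; a minimal sketch, assuming admin credentials are sourced (keystonerc_admin here, or whatever your stack provides).

source keystonerc_admin
neutron net-external-list     # ID and name for public_network_id / floating_network_name
neutron router-list           # ID for public_router_id
openstack image list          # image ID for image_ref
openstack flavor list         # value for flavor_ref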
Running Tests

Neutron_lbaas has Tempest API and scenario tests. API tests are responsible for exercising the OpenStack API, whereas scenario tests are "through path" tests, as described in the OpenStack documentation.

After configuring the tempest.conf and accounts.yaml files, you should be able to run a test using the following command:

cd neutron-lbaas/
tox -- py.test -lvv neutron_lbaas/tests/tempest/v2/api/test_health_monitor_admin.py

Now that you have Tempest set up, you can move on to writing your own Tempest tests to check your code. See F5 Tempest Plugin and Writing Tempest Tests for more information.
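The scenario tests live in the same tree as the API tests, so they can be invoked the same way; the scenario path below is an assumption, so verify it against your neutron-lbaas checkout.

cd neutron-lbaas/
tox -e apiv2 -- py.test -lvv neutron_lbaas/tests/tempest/v2/scenario/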
Monitoring BIG-IP on Microsoft's System Center with the Comtrade Management Pack for F5 BIG-IQ

Comtrade has released a Management Pack (MP) for Microsoft System Center Operations Manager (SCOM) that uses F5 BIG-IQ to monitor F5 BIG-IP devices and the applications they are helping deliver. The MP allows users to view all BIG-IP objects and see key information about their performance and health. This management pack will be of great interest to all customers using Microsoft System Center.

What are the requirements for this solution?
- Microsoft System Center Operations Manager 2012, 2012 SP1, or 2012 R2
- F5 BIG-IQ 4.3.0 or BIG-IQ 4.4.0
- The Comtrade F5 BIG-IQ MP requires .NET Framework version 3.5 SP1
- TCP 443 opened to the BIG-IQ devices
- An administrator account in BIG-IQ

What is available in the MP?
- Discovery, visualization and dynamic update of F5 BIG-IQ appliance topology
- Discovery of F5 BIG-IQ appliance objects:
  - BIG-IQ
  - Tenants
  - Catalogs – Applications
  - Virtual Servers
  - BIG-IP Devices
  - Cloud Connectors
  - Nodes
  - CPU
  - Memory
  - Disk Partitions
  - SSL Certificates

What will the MP monitor?
- BIG-IQ (availability, CPU utilization, disk partition available space, disk partition utilization, memory utilization)
- BIG-IP (availability, CPU utilization, disk partition available space, disk partition utilization, memory utilization)
- Cloud Connectors (cloud connector availability)
- Tenants
- iApp Catalogs – Applications (application availability status, application's active member count)
- Virtual Servers (virtual server availability, server-side connection number for virtual server, client-side connection number for virtual server)
- Nodes (availability of tenant nodes, server-side number of connections on nodes for port 80, server-side number of connections on nodes for port general, total number of connections on nodes)
- BIG-IQ SSL certificates (availability and validity)

Statistics
- BIG-IQ (CPU utilization, memory utilization, disk partition free space and utilization)
- BIG-IP (CPU utilization, memory utilization, disk partition free space and utilization)
- Tenants
- Catalogs – Applications (application availability and active members)
- Virtual Servers (server-side connection number on virtual servers, client-side connection number on virtual servers)
- Nodes (server-side connection number on node port 80, server-side connection number on node port general, server-side in-packets on node port general, server-side out-packets on node port general, server-side bits-in on node port general, server-side bits-out on node port general)

Views (Diagram, Alert, State and Dashboard)

How it works – main steps

1. The Comtrade F5 BIG-IQ MP is installed on one SCOM Management Server. The installation provides the MP and the Comtrade MPBIG-IQ Agent. The agent is used for communicating with the REST API of F5 BIG-IQ.
2. The Comtrade MPBIG-IQ Agent is installed on every SCOM Management Server that will participate in BIG-IQ monitoring. SCOM Management Servers are designated through a Resource Pool in SCOM.
3. Create an SNMP-based network device discovery and include the BIG-IQ IP address. Create a new SCOM Resource Pool. Assign the discovery to the Resource Pool.
4. Create a Run As account and enter the account with administrator rights for the BIG-IQ device(s). For distribution, choose "more secure" and add the resource pool.
5. Assign the Run As account to the F5 BIG-IQ Appliance Action Account profile.

With these easy steps you are ready for monitoring.
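Because the Comtrade agent talks to BIG-IQ over its REST API on TCP 443, it is worth confirming reachability and the administrator account before wiring up SCOM. A minimal curl sketch; the login endpoint shown is the standard iControl REST token endpoint, and <bigiq> and the credentials are placeholders, so verify both against your BIG-IQ version.

# Confirm TCP 443 and the admin credentials against the BIG-IQ REST API.
curl -sk https://<bigiq>/mgmt/shared/authn/login \
     -H "Content-Type: application/json" \
     -d '{"username": "admin", "password": "<password>"}'
# A JSON response containing a token confirms the device is reachable and the account is valid.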
Here are some screenshots to give you a more detailed view of the solution:

Figure 1: Diagram View (Topology) of the BIG-IQ infrastructure
Figure 2: BIG-IP appliance and its components being monitored
Figure 3: Tenants & Applications dashboard view
Figure 4: View of all active alerts for BIG-IQ
Figure 5: Alert details offer additional information about the issue
Figure 6: On-demand monitoring – health recalculation
Figure 7: Administration view

Here is also a good video that shows the installation and configuration steps as well as an overview:

Product video: http://www.youtube.com/embed/yAhBk8cSPn0
Product page: www.comtradeproducts.com/f5
Microsoft blog about the MP for F5: http://www.systemcentercentral.com/sneak-peak-at-comtrade-management-pack-for-f5-big-iq/