Evolving Programmability with Pivotal Cloud Foundry
Network automation is critical for repeatability, reduced deployment time, security, and cloud-enabled services. Achieving the nirvana of a fully automated app deployment can be broken into several phases, giving you enough time to adapt existing tools and operations to an automated workflow. We’ll use Pivotal Cloud Foundry (PCF) as an example of how to approach the different phases of building a continuous delivery app platform.
The three phases of evolution are:
- Automating Tasks
- Collaboration / Integration
- Continuous Delivery
To illustrate this evolution we’ll look at Pivotal Cloud Foundry (PCF), an application platform, and a few use cases you might encounter when looking to automate your infrastructure. Although this article focuses on PCF, the same approach applies to other ecosystems such as ACI, NSX, OpenStack, and Kubernetes.
PCF provides a platform that makes it easy for a programmer to create an application/droplet (i.e. Spring Boot) and launch it into the infrastructure, regardless of whether that is VMware vSphere/Photon, OpenStack, AWS, or Azure. The challenge is ensuring app and security best practices (i.e. app monitoring, SSL ciphers) are applied consistently across multiple disparate infrastructures.
For this example, there’s a requirement to automate the process of creating a custom HTTP monitor for each PCF application and routing traffic to the appropriate PCF environment running in an active-standby configuration.
1. Automating Tasks
The first phase of automation involves getting away from the GUI and manual configuration. F5 BIG-IP provides an API that can reduce the risk of fat-fingering a config and provide consistent deployments across environments.
For the PCF example, an operator has to manually create a custom HTTP monitor, a pool with priority groups on the pool members, and a local traffic policy match/action to content-route traffic. Done through the GUI, this is an error-prone process: each step involves multiple custom inputs, roughly 21 in total for this example.
- Create a new monitor [appname]_monitor (5 custom inputs)
a. Custom interval 30, timeout 91
b. Custom receive/send string
- Create a new pool [appname]_pool (3 custom inputs)
a. Custom min-active-members (Active/Standby)
b. Custom monitor (from step 1)
- Add pool member (6 custom inputs, two members)
a. Custom IP, Port
b. Custom priority-group (Active/Standby)
- Create a new policy to route traffic to [appname] to [appname]_pool (7 custom inputs)
a. Custom rule name, criteria, parameter, and value
b. Custom action name, method, target
Using the F5 Python SDK we can create a simple command-line tool, run by an operator, that collapses the four tasks outlined above into a single task requiring just 2 custom inputs.
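To show the general shape of such a tool, here is a minimal sketch of the "create" action using the F5 Python SDK (f5-common-python). The helper derives the BIG-IP settings from the 2 operator inputs; the send/receive strings, priority-group values, and function names are illustrative assumptions, not taken from the actual pcf-phase1.py source.

```python
# Sketch only: maps 2 inputs (app FQDN, member list) to the ~21 settings.
try:
    from f5.bigip import ManagementRoot  # pip install f5-sdk
except ImportError:  # keep the input-mapping logic importable without BIG-IP
    ManagementRoot = None

def build_config(app_fqdn, members):
    """Expand the 2 operator inputs into monitor, pool, and member settings."""
    return {
        "monitor": {
            "name": f"{app_fqdn}_monitor",
            "interval": 30,
            "timeout": 91,
            # Illustrative send/recv strings for a custom HTTP health check
            "send": f"GET / HTTP/1.1\r\nHost: {app_fqdn}\r\n\r\n",
            "recv": "200 OK",
        },
        "pool": {
            "name": f"{app_fqdn}_pool",
            "minActiveMembers": 1,  # active/standby: only top priority group serves
        },
        # Higher priorityGroup = preferred (active) environment
        "members": [
            {"name": m, "priorityGroup": 10 * (len(members) - i)}
            for i, m in enumerate(members)
        ],
    }

def create_app(mgmt, app_fqdn, members):
    """Push the derived config to BIG-IP (policy rule creation omitted)."""
    cfg = build_config(app_fqdn, members)
    mgmt.tm.ltm.monitor.http_s.http.create(partition="Common", **cfg["monitor"])
    pool = mgmt.tm.ltm.pools.pool.create(
        partition="Common", monitor=cfg["monitor"]["name"], **cfg["pool"])
    for member in cfg["members"]:
        pool.members_s.members.create(partition="Common", **member)
```

In use, the operator supplies only the app FQDN and the member list; everything else is derived, which is exactly where the input reduction comes from.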
The following command executes the 4 actions (Source code here):
% python pcf-phase1.py -a create 10.1.1.245 dora.local.pcfdev.io 10.0.2.10:80,10.0.12.10:80
Created pool /Common/dora.local.pcfdev.io_pool
Added member 10.0.12.10:80
Added member 10.0.2.10:80
Created policy rule dora.local.pcfdev.io
You can also add some code to make it easy to delete as well.
% python pcf-phase1.py -a delete 10.1.1.245 dora.local.pcfdev.io
Deleted policy rule dora.local.pcfdev.io
Deleted pool /Common/dora.local.pcfdev.io_pool
Deleted monitor /Common/dora.local.pcfdev.io_monitor
Now you’ve improved the workflow by reducing 4 tasks to 1 (a 75% reduction) and 21 custom inputs to 2 (roughly 90% fewer)! The operator no longer has 4 independent actions that could be performed in the wrong order, or 21 inputs that could be entered incorrectly. If the previous process took 5 minutes per task/input, delivery time drops from 125 minutes (about 2 hours) to 15 minutes. That’s a pretty good ROI.
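Spelling out that back-of-the-envelope math (4 tasks plus 21 inputs at roughly 5 minutes each, versus 1 task plus 2 inputs):

```python
# ROI estimate: each manual task or custom input costs ~5 minutes.
minutes_per_item = 5
manual = (4 + 21) * minutes_per_item      # 4 tasks + 21 inputs = 125 minutes
automated = (1 + 2) * minutes_per_item    # 1 task + 2 inputs = 15 minutes
saved_pct = round((1 - automated / manual) * 100)
print(manual, automated, saved_pct)       # 125 15 88
```

That is roughly an 88% reduction in operator time per deployment, before even counting the rework avoided by eliminating mis-ordered steps and typos.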
2. Collaboration / Integration
In phase 1, staff time spent generating configurations is reduced, but the process still involves a manual step: retrieving from PCF the list of applications that need to be served. The second phase solves this problem by communicating directly with the PCF API.
Utilizing the PCF API, we can remove the need to specify the appname and instead query PCF directly. We’ll also set a “special” environment variable, “F5”, to distinguish apps that should be published from ones that should be kept private. This lets app developers declare their desired app state without needing to engage the network team to make the configuration.
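A sketch of that query, listing applications through the Cloud Controller v2 API and keeping only those that opted in via the "F5" environment variable. UAA authentication and result pagination are simplified assumptions here, and the function names are illustrative rather than taken from pcf-phase2.py:

```python
try:
    import requests  # only needed for the live API call
except ImportError:
    requests = None

def fetch_apps(cc_url, token):
    """GET /v2/apps from the Cloud Controller (single page for brevity)."""
    resp = requests.get(f"{cc_url}/v2/apps",
                        headers={"Authorization": f"bearer {token}"})
    resp.raise_for_status()
    return resp.json()["resources"]

def published_apps(resources):
    """Names of apps whose environment sets F5 to a truthy value."""
    return [r["entity"]["name"] for r in resources
            if (r["entity"].get("environment_json") or {}).get("F5")]
```

The filtering step is where developer intent enters the workflow: an app with no F5 variable simply never reaches the BIG-IP configuration code.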
% python pcf-phase2.py -a create 10.1.1.245 10.0.2.10:80,10.0.12.10:80
Created pool /Common/dora.local.pcfdev.io_pool
Added member 10.0.12.10:80
Added member 10.0.2.10:80
Created policy rule dora.local.pcfdev.io
Created pool /Common/dora2.local.pcfdev.io_pool
Added member 10.0.12.10:80
Added member 10.0.2.10:80
Created policy rule dora2.local.pcfdev.io
This removes the overhead of keeping track of applications and empowers the developer to determine whether an application should be published or kept internal.
3. Continuous Delivery
In the previous example, a custom solution was developed to meet a specific business requirement. Ideally this could be generalized into common templates for different requirements, building up a service catalog of these options.
Using iWorkflow we can create a service catalog that includes the predefined settings we want. An F5 architect/SME selects the desired tasks/inputs in the iWorkflow GUI to expose an API endpoint that other forms of automation can consume, moving the burden of customization from the automation script to iWorkflow.
The following is an example of customizing step 3, content routing: using a host header as the matching criteria and selecting a pool when the host header matches.
Creating an iApp template as the iWorkflow administrator. The administrator specifies mandatory/preset inputs and selectively enables tenant-editable inputs.
This is built leveraging an iApp that can be found here: https://github.com/0xHiteshPatel/appsvcs_integration_iapp.
After configuring a service template you can also create an application using the iWorkflow tenant interface as shown below or using the REST API.
Creating an iApp from iWorkflow as a tenant. The tenant is restricted to only the inputs exposed by the administrator.
The following code performs the same function as phase 2, but the script is now focused only on providing the inputs; the task of executing them in the correct order is handled by iWorkflow.
% python pcf-phase3.py 10.1.1.246 10.0.2.10,10.0.12.10 \
    -a create_iapp \
    --iapp dora_app_v1.0 \
    --service_name "dora_template_v2.0"
Created iApp dora_app_v1.0
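To give a feel for what a create_iapp action might send under the hood, here is a hedged sketch. The iWorkflow tenant endpoint path, the template reference format, and the variable names below are assumptions for illustration only; the real inputs depend on the service template the administrator defined.

```python
try:
    import requests  # only needed for the live API call
except ImportError:
    requests = None

def iapp_payload(iapp_name, service_template, members):
    """Only tenant-editable inputs are supplied; ordering and the rest
    of the configuration live in the iWorkflow service template."""
    return {
        "name": iapp_name,
        "tenantTemplateReference": {
            # Hypothetical reference path; the actual URI comes from iWorkflow
            "link": ("https://localhost/mgmt/cm/cloud/tenant/templates/iapp/"
                     + service_template)
        },
        "vars": [{"name": "pool__addr", "value": m} for m in members],
    }

def create_iapp(iwf_host, tenant, auth, payload):
    # Assumed tenant service endpoint; self-signed certs are common on iWorkflow
    url = f"https://{iwf_host}/mgmt/cm/cloud/tenants/{tenant}/services/iapp"
    return requests.post(url, json=payload, auth=auth, verify=False)
```

Note how small the script's responsibility has become: it builds a handful of tenant inputs and posts them, while sequencing and validation stay in iWorkflow.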
View of iApp as seen by iWorkflow administrator
In the previous example, a change to the load-balancing method (i.e. round-robin to least-connection-member) would require modifying the Python script and manually going back to identify resources created with the previous version of the script. With an iApp, the deployment is versioned, and you can migrate the configuration by updating the service template that the iApp uses.
% python pcf-phase3.py 10.1.1.246 10.0.2.10,10.0.12.10 \
    -a update_iapp_service \
    --iapp dora_app_v1.0 \
    --service_name "dora_template_v2.0"
Updated iApp dora_app_v1.0 Service to dora_template_v2.0
iApp as seen on BIG-IP. Note the load balancing is now least connection
To review, the three phases of evolution are:
- Automating Tasks
- Collaboration / Integration
- Continuous Delivery
Phase 1, Automating Tasks, reduces the number of tasks performed manually and the number of custom inputs. Phase 2, Collaboration/Integration, reduces manual input further by pulling data directly from external sources/APIs; focus can move back to delivering business value instead of being mired in a process of multiple ticket requests and change-management windows. Phase 3, Continuous Delivery, extends the automation to the consumer, providing self-service capabilities in a consistent and secure manner.
For more information about programmability be sure to check out the following related F5 resources: