Scenarios to automate BIG-IP with Puppet

DevOps has become a core practice for businesses pursuing digital transformation strategies, whether to venture into new revenue streams or simply to remain competitive. It's hard, though. One of the biggest challenges DevOps practitioners face is the disparity between the production infrastructure and that of dev/test. Many network and application services are still deployed only in the production environment. For a complete DevOps practice, these teams need a way to turn what they manage into "code".

To create that infrastructure-as-code platform, Puppet provides a common, model-driven language. By combining Puppet's extensibility with F5 programmability through the iControl APIs, F5 and Puppet have put together a solution that treats BIG-IP application services as "code" and shifts their integration left in the CI/CD process. With Puppet as the common tool, we get a single interface for deploying and configuring applications, application services, and the other pieces of the application infrastructure.
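Because BIG-IP does not run a Puppet agent itself, the modules are typically driven through Puppet's "puppet device" mechanism, which proxies the manifests to the BIG-IP over the iControl API. As a rough sketch (the device name, credentials, and management address below are placeholders, not values from this article), the device.conf entry on the proxy host might look like:

[bigip-a.f5.local]
type f5
url https://admin:admin@10.1.1.245/

A subsequent run of puppet device --verbose would then apply manifests like the ones in the use cases below to the BIG-IP.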

Next, let me take you through some of the scenarios and how customers are using this joint solution.

Use case 1: Application rollout and update

Generally, people who manage F5 devices are dealing with large enterprise applications such as Microsoft Exchange, Oracle EBS, SAP HANA, and a variety of large internal web-based applications.

Imagine an average enterprise with 500+ applications under management: an army of resources is needed not only to manage the devices, but to provision, configure, and manage every policy for each application. Furthermore, it's not only about deployment, but also about updates. When an app is upgraded or a fix is introduced, its associated application service policies will likely need updating too. And as organizations update more frequently, configuring the services and policies in BIG-IP that deliver applications is not trivial.

The Puppet F5 modules offer a way to automate operations like creating and updating the BIG-IP objects (server nodes, pools, virtual servers) that a typical application deployment requires.

Example:

f5_node { '/Common/web_server_1':
   ensure                          => 'present',
   address                         => '10.1.20.11',
   description                     => 'Web Server Node 1',
   availability_requirement        => 'all',
   health_monitors                 => ['/Common/icmp'],
}

f5_node { '/Common/web_server_2':
   ensure                          => 'present',
   address                         => '10.1.20.12',
   description                     => 'Web Server Node 2',
   availability_requirement        => 'all',
   health_monitors                 => ['/Common/icmp'],
}

f5_node { '/Common/web_server_3':
   ensure                          => 'present',
   address                         => '10.1.20.13',
   description                     => 'Web Server Node 3',
   availability_requirement        => 'all',
   health_monitors                 => ['/Common/icmp'],
}

f5_pool { '/Common/web_pool':
   ensure                          => 'present',
   members                         => [
        { name => '/Common/web_server_1', port => '80', },
        { name => '/Common/web_server_2', port => '80', },
        { name => '/Common/web_server_3', port => '80', },
   ],
   availability_requirement        => 'all',
   health_monitors                 => ['/Common/http_head_f5'],
   require                         => [
        F5_node['/Common/web_server_1'],
        F5_node['/Common/web_server_2'],
        F5_node['/Common/web_server_3'],
   ],
}

f5_virtualserver { '/Common/http_vs':
   ensure                          => 'present',
   provider                        => 'standard',
   default_pool                    => '/Common/web_pool',
   destination_address             => '10.1.10.240',
   destination_mask                => '255.255.255.255',
   http_profile                    => '/Common/http',
   service_port                    => '80',
   protocol                        => 'tcp',
   source                          => '0.0.0.0/0',
   source_address_translation      => 'automap',
   require                         => F5_pool['/Common/web_pool'],
}


Use case 2: Onboarding and HA Clustering

DevOps for network and application services is not only about making configuration changes at high frequency, but also about scaling operations. The ability to write a configuration once and apply it across a large number of systems may not seem as relevant to physical infrastructure, but as virtual and cloud deployments become more common, it has a real ripple effect on overall infrastructure responsiveness.

When it comes to deployment of multiple physical or virtual BIG-IP devices, organizations can use Puppet F5 modules to automate all the initial BIG-IP onboarding tasks such as device licensing, DNS and NTP settings, internal and external VLANs, self-IPs, and route domains. In addition, Puppet F5 modules automate the entire process of High Availability (HA) clustering.

Example:

f5_license { '/Common/license':
  registration_key => "xxxxx-xxxxx-xxxxx-xxxxx-xxxxxxx"
}

f5_root { '/Common/root':
  old_password => 'default',
  new_password => 'default',
}

f5_user { 'admin':
  ensure   => 'present',
  password => 'admin',
}

f5_globalsetting { '/Common/globalsetting':
  hostname  => "bigip-a.f5.local",
  gui_setup => "disabled",
}

f5_dns { '/Common/dns':
  name_servers => ["4.2.2.2", "8.8.8.8"],
  search       => ["localhost","f5.local"],
}

f5_ntp { '/Common/ntp':
  servers  => ['0.pool.ntp.org', '1.pool.ntp.org'],
  timezone => 'UTC',
}


Use case 3: Deploying consistent policies across different environments using F5 iApps

In a typical CI/CD pipeline, an organization has many different environments (development/test/production), and each environment is a replica of the others. Imagine, if you will, a new application added to one environment that needs to be replicated to the other environments quickly and securely. In addition, we also need to make sure the application brought online adheres to all the traffic rules and security policies.

F5 iApps templates encapsulate all of the configuration objects an application deployment requires, ensuring consistent application policies across environments.

Example:

f5_iapp { '/Common/MicrosoftLync.app/MicrosoftLync':
  ensure    => 'present',
  tables    => {
    'director_ip__snatpool_members'                    => [],
    'director_ip_server_pools__servers'                => [],
    'edge_external_ip__snatpool_members'               => [],
    'edge_external_ip_reverse_proxy__snatpool_members' => [],
    'edge_external_ip_server_pools__access_servers'    => [],
    'edge_external_ip_server_pools__av_servers'        => [],
    'edge_external_ip_server_pools__conf_servers'      => [],
    'edge_internal_ip__snatpool_members'               => [],
    'edge_internal_ip_reverse_proxy__snatpool_members' => [],
    'edge_internal_ip_server_pools__servers'           => [],
    'front_end_ip__snatpool_members'                   => [],
    'front_end_ip_mediation_server_pools__servers'     => [],
    'front_end_ip_server_pools__servers'               => [
      { 'addr' => '100.1.1.1', 'connection_limit' => '0' },
    ],
  },
  template  => '/Common/f5.microsoft_lync_server_2010',
  variables => {
    'director_ip__deploying_director_ip'                      => 'No',
    'edge_external_ip__deploying_edge_external_ip'            => 'No',
    'edge_internal_ip__deploying_edge_internal_ip'            => 'No',
    'edge_internal_ip_reverse_proxy__deploying_reverse_proxy' => 'No',
    'front_end_ip__addr'                                      => '1.1.1.1',
    'front_end_ip__cert'                                      => '/Common/default.crt',
    'front_end_ip__deploying_front_end_ip'                    => 'Yes',
    'front_end_ip__deploying_mediation'                       => 'No',
    'front_end_ip__key'                                       => '/Common/default.key',
    'front_end_ip__sip_monitoring'                            => 'No',
    'front_end_ip__snat'                                      => 'No',
    'front_end_ip__snatpool'                                  => 'No',
    'front_end_ip_server_pools__lb_method_choice'             => 'least-connections-node',
  },
}


Use case 4: Removing configuration drift

Another issue with infrastructure and application services is configuration drift. Consider situations where an admin makes out-of-band temporary changes, or a new requirement forces you to modify the configuration. As a result, the state of the BIG-IP devices deviates, or drifts, from the baseline.

To help combat this configuration drift, Puppet can run frequently to keep BIG-IP systems in line with their desired state, automatically reverting temporary manual changes back to the declared BIG-IP device configuration.
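In practice, drift enforcement is simply a matter of running Puppet against the device on a schedule. As a sketch (the device name, interval, and binary path are placeholders), a cron entry on the proxy host could re-apply the desired state every 30 minutes:

*/30 * * * * /opt/puppetlabs/bin/puppet device --target bigip-a.f5.local --verbose

Each run compares the BIG-IP's current configuration against the manifests and converges any out-of-band changes back to the declared state.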

Conclusion

These are just some of the use cases that can be tackled using our F5 modules, and the Puppet F5 1.6.0 release includes several more. To view a complete list of the BIG-IP modules available in the Puppet F5 1.6.0 release, click here.

Continue on to the following articles in this series as we dive into the architecture and deployment of this F5 Puppet solution.

F5 will be at PuppetConf in San Francisco on October 10-12, showing BIG-IP automation with Puppet. Hope to see you all there.


Published Oct 05, 2017
Version 1.0


2 Comments

  • BB16:

    Good article. Please explain how to implement this on the device/box, and also how we can work with automation on the BIG-IP F5. Thank you.


  • Can you please advise how I can use this module with Hiera data? I'm trying to leverage Hiera data to pass in values to manage a pair of F5 modules.

    Another question: how can I pass in the name of the node as a variable in Hiera? I tried to use "node $nodename" and it failed to interpret the data.

    In addition, whenever I try to use a $variable as the value of a parameter, the value is null, e.g. hostname => $node1_hostname, whereas it only works with hostname => 'host.a.b.c'.