Docker to BIG-IP: Create/Update/Delete pools of Docker containers
Problem this snippet solves:
A Python script that uses the Docker and BIG-IP APIs to create/update/delete pools of Docker containers. It relies on "magical" strings (a container naming convention) to map containers to pools. See the associated DevCentral article: https://devcentral.f5.com/articles/connecting-docker-to-big-ip
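For illustration: the "magical" string is simply the part of the container name before the first dash, so two hypothetical containers named web-01 and web-02 both land in pool docker_web_pool. A minimal sketch of the convention used in the script below (the container names here are made up):

# pool name is derived from the container name prefix (text before the first '-')
def pool_name_for(container_name):
    return 'docker_%s_pool' % (container_name.split('-')[0])

print pool_name_for('web-01')  # docker_web_pool
print pool_name_for('web-02')  # docker_web_pool -- same pool, so both become members
print pool_name_for('db-01')   # docker_db_pool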
How to use this snippet:
Run on a host that has access to both the remote Docker API and the BIG-IP REST API.
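Before running, fill in the placeholder variables at the top of the script. For example (all addresses below are illustrative documentation values, not real hosts):

BIGIP_ADDRESS = '192.0.2.10'                # BIG-IP management address
BIGIP_USER = 'admin'
BIGIP_PASS = 'admin'
DOCKER_HOSTS = ['tcp://192.0.2.20:2375',    # remote Docker API endpoints
                'tcp://192.0.2.21:2375']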
Code :
from docker import Client
import requests, json

# define program-wide variables
BIGIP_ADDRESS = '[Address of BIG-IP]'
BIGIP_USER = '[Admin User]'
BIGIP_PASS = '[Admin Password]'
DOCKER_HOSTS = ['[List of Docker Hosts]']
HTTP_PORT = '80'
HTTP_PROTOCOL = '%s/tcp' % (HTTP_PORT)

# uncomment to silence the warnings caused by verify=False below
#requests.packages.urllib3.disable_warnings()

clients = []
pools = {}
data_group = {}

#
# functions
#
def create_pool(bigip, name, members):
    payload = {}

    # convert member format to the list of dicts the REST API expects
    payload_members = [{'name': member} for member in members]

    # define pool
    payload['name'] = name
    payload['description'] = 'built by docker_to_f5_bigip.py'
    payload['loadBalancingMode'] = 'least-connections-member'
    payload['monitor'] = 'http'
    payload['members'] = payload_members

    req = bigip.post('%s/ltm/pool' % BIGIP_URL_BASE, data=json.dumps(payload))

def update_pool(bigip, name, members):
    payload = {}

    # convert member format
    payload_members = [{'name': member} for member in members]

    payload['name'] = name
    payload['members'] = payload_members

    req = bigip.patch('%s/ltm/pool/%s' % (BIGIP_URL_BASE, name), data=json.dumps(payload))

# update data group
def update_dg(bigip, name, data_group):
    payload = {}
    payload['records'] = [{'data': r[1], 'name': r[0]} for r in data_group.items()]
    req = bigip.patch('%s/ltm/data-group/internal/%s' % (BIGIP_URL_BASE, name), data=json.dumps(payload))

#
# connect to docker hosts
#
for host in DOCKER_HOSTS:
    try:
        cli = Client(host)
        cli.info()
    except Exception:
        print "failed to connect to", host
        continue
    clients.append(cli)

containers = {}

#
# grab info about containers
#
for cli in clients:
    tmp = [c['Id'] for c in cli.containers()]
    for cid in tmp:
        details = cli.inspect_container(cid)
        containers[cid[:12]] = {'Name': details['Name'][1:],
                                'IPv4': details['NetworkSettings']['IPAddress'],
                                'Ports': details['NetworkSettings']['Ports'].keys(),
                                }

#
# build list of HTTP services
#
for cnt in containers.values():
    ports = cnt['Ports']
    if HTTP_PROTOCOL in ports:
        ip_port = '%s:%s' % (cnt['IPv4'], HTTP_PORT)
        con_name = cnt['Name']
        # "magical" string: pool name is derived from the container name prefix
        pool_name = 'docker_%s_pool' % (con_name.split('-')[0])
        pool = pools.get(pool_name, [])
        pool.append(ip_port)
        pools[pool_name] = pool
        data_group[cnt['Name']] = ip_port

# REST session for BIG-IP that all other requests will use
bigip = requests.session()
bigip.auth = (BIGIP_USER, BIGIP_PASS)
bigip.verify = False
bigip.headers.update({'Content-Type': 'application/json'})

# Requests requires a full URL for every request; define the base URL globally here
BIGIP_URL_BASE = 'https://%s/mgmt/tm' % BIGIP_ADDRESS

#
# grab all pool names
#
req = bigip.get('%s/ltm/pool' % BIGIP_URL_BASE)
pool_json = req.json()
pool_names = [a['name'] for a in pool_json['items'] if a['name'].startswith('docker_')]

# reconcile BIG-IP state against the desired state built from Docker
local_pools = set(pool_names)
remote_pools = set(pools.keys())

to_delete = local_pools - remote_pools
to_add = remote_pools - local_pools
to_update = remote_pools & local_pools

for pname in to_delete:
    req = bigip.delete('%s/ltm/pool/%s' % (BIGIP_URL_BASE, pname))

for pname in to_add:
    create_pool(bigip, pname, pools[pname])

for pname in to_update:
    update_pool(bigip, pname, pools[pname])

update_dg(bigip, 'dg_docker_container', data_group)
# strip the 'docker_' prefix and '_pool' suffix to build the pool data group keys
update_dg(bigip, 'dg_docker_pool', dict((a[7:-5], a) for a in pools.keys()))
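As a quick sanity check after a run, the same REST session can list the docker_ pools the script manages. A minimal sketch, reusing bigip and BIGIP_URL_BASE from the script above:

req = bigip.get('%s/ltm/pool' % BIGIP_URL_BASE)
for p in req.json().get('items', []):
    if p['name'].startswith('docker_'):
        print p['name']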
Tested this on version:
11.5
Published Sep 01, 2015
Version 1.0
Eric_Chen
Employee
Joined May 16, 2013
- Marcello_de_Sal (Nimbostratus): Hi there, there are a few improvements I'd like to propose for this: 1. Why don't you parameterize this script so that the Docker pull is provided as a parameter and/or environment variable? (A sketch of this follows below.) 2. Why don't you Dockerize it and publish it on Docker Hub? That way there is a universal "Docker" way to deploy the script on any given host, and other users can benefit from it. 3. Add examples of using a software-defined load balancer such as HAProxy so we can reuse existing Dockerized applications with this setup. I work at a big F5 customer and I'm helping the Network team get familiar with Docker (and getting myself familiar with F5)... thanks, Marcello
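A minimal sketch of suggestion 1, reading the settings from environment variables instead of hard-coding them (the variable names are illustrative):

import os

BIGIP_ADDRESS = os.environ['BIGIP_ADDRESS']
BIGIP_USER = os.environ.get('BIGIP_USER', 'admin')
BIGIP_PASS = os.environ['BIGIP_PASS']
# comma-separated list, e.g. "tcp://192.0.2.20:2375,tcp://192.0.2.21:2375"
DOCKER_HOSTS = os.environ['DOCKER_HOSTS'].split(',')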
- Marcello_de_Sal (Nimbostratus): Another suggestion: 4. The variable "DOCKER_HOSTS" could be retrieved from a discovery service such as Consul, etcd, or Eureka. That way, the list of IP addresses would be dynamic. (A sketch of this follows below.) * http://technologyconversations.com/2015/09/08/service-discovery-zookeeper-vs-etcd-vs-consul/ * Practical example of scaling WordPress: http://agiletesting.blogspot.com/2014/11/service-discovery-with-consul-and.html
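A minimal sketch of that idea against Consul's HTTP catalog API; the service name "docker" and the local agent address are assumptions:

import requests

# ask a local Consul agent for every node advertising the (hypothetical) "docker" service
resp = requests.get('http://127.0.0.1:8500/v1/catalog/service/docker')
DOCKER_HOSTS = ['tcp://%s:%s' % (n['Address'], n['ServicePort']) for n in resp.json()]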
- Eric_Chen (Employee): All these suggestions are great and would make for a great continuation of the previous article (just need to find the time!). Some newer developments with Swarm, etc. would be good to look at as well. I'd be interested in hearing about your progress. Feel free to also reach out to your local F5 account team to discuss the topic/requirements as well. Thanks, Eric
- Marcello_de_Sal (Nimbostratus): Hi Eric, our local F5 Network team does NOT have much experience with scripting... really basic scripts only, and anything more elaborate is requested from F5 as a service... As nobody on the team has experience with Docker, I'm all alone on this with a couple of other engineers who are not part of Network, but we see a growing need for different deployment strategies, especially because we are looking at Docker orchestration with both infrastructure (Mesos, Nomad) and container management (Rancher, OpenShift, Tectonic, Kubernetes)... Is there a way to make a service request where you could help bootstrap our F5 Network team (Intuit) with Docker?
- Eric_Chen (Employee): Marcello, I'm reaching out to you via alternate channels so we can take the conversation offline/out of the comments. Thanks.
- R0d_78689 (Nimbostratus): Marcello, take a look at www.DCHQ.io for Docker orchestration, management, and much more. To get started quickly, download DCHQ On-Premise from here: http://dchq.co/dchq-on-premise.html and follow the installation instructions in the video; it is very easy to install. You should create a VM with at least 2 CPU cores and 14 GB of RAM. Once you have DCHQ up and running on this VM, sign in to DCHQ and create a cluster. Add two bare-metal servers (or VMs) to the cluster, each with at least 12 GB of RAM and 4 vCPU cores, running Ubuntu 14.04 or CentOS 6.7. If F5 releases a Docker image for F5 BIG-IP, you can certainly bootstrap that image and its deployment along with the application/web servers, database, and any other service like SOLR, REDIS, etc. As long as there is a Docker image for it, you can bootstrap entire stacks in minutes. Additionally, you can create your own Docker images and integrate them into the templating system DCHQ uses, which is YAML based. Have fun! Eric, thank you for this awesome article. Looking forward to seeing an F5 Docker image on Docker Hub soon. Regards, Rod