Creating a Docker Container to Run AS3 Declarations
This guide will take you through some very basic Docker, Python, and F5 AS3 configuration to create a single-function container that will update a pre-determined BIG-IP using an AS3 declaration stored on GitHub. While it's far from production ready, it might serve as a basis for more complex configurations, plus it illustrates nicely some technology you can use to automate BIG-IP configuration using AS3, Python and containers. I'm starting with a running BIG-IP - in this case a VE running on the Google Cloud Platform, with the AS3 worker installed and provisioned, plus a couple of webservers listening on different ports.

First we're going to need a host running Docker. Fire up an instance on the platform of your choice - in this example I'm using Ubuntu 18.04 LTS on the Google Cloud Platform - that's purely from familiarity - anything that can run Docker will do. The install process is well documented but looks a bit like this:

$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io

It's worth adding your user to the docker group to avoid repeatedly forgetting to type sudo (or is that just me?):

$ sudo usermod -aG docker $USER

Next let's test it's all working:

$ docker run hello-world

Next let's take a look at the AS3 declaration. As you might expect from me by now, it's the most basic version - a simple HTTP app with two pool members. The beauty of the AS3 model, of course, is that it doesn't matter how complex your declaration is, the implementation is always the same. So you could take a much more involved declaration and, just by changing the file the Python script uses, get a more complex configuration.

{
  "class": "AS3",
  "action": "deploy",
  "persist": true,
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "id": "urn:uuid:33045210-3ab8-4636-9b2a-c98d22ab915d",
    "label": "Sample 1",
    "remark": "Simple HTTP Service with Round-Robin Load Balancing",
    "Sample_01": {
      "class": "Tenant",
      "A1": {
        "class": "Application",
        "template": "http",
        "serviceMain": {
          "class": "Service_HTTP",
          "virtualAddresses": ["10.138.0.4"],
          "pool": "web_pool"
        },
        "web_pool": {
          "class": "Pool",
          "monitors": ["http"],
          "members": [
            { "servicePort": 8080, "serverAddresses": ["10.138.0.3"] },
            { "servicePort": 8081, "serverAddresses": ["10.138.0.3"] }
          ]
        }
      }
    }
  }
}
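If you want to sanity-check the declaration before wiring up any code, you can POST it straight to the AS3 endpoint with curl. This is a minimal sketch, assuming the same BIG-IP management address, port, and credentials used later in this article, and that the declaration has been saved locally as payload.json:

$ curl -sku admin:admin -H "Content-Type: application/json" \
    -X POST https://10.138.0.4:8443/mgmt/shared/appsvcs/declare \
    -d @payload.json

A 200 response with "message":"success" for the tenant indicates the declaration was applied.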
Now we need some Python code to fire up our request. The code below is absolutely a minimum viable set that's been written for simplicity and clarity and does minimal error checking. There are more ways to improve it than lines of code in it, but it will get you started.

# Python code to run an AS3 declaration
import requests
import os
from requests.auth import HTTPBasicAuth

# Get rid of annoying insecure requests warning
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

# Declaration location
GITDRC = 'https://raw.githubusercontent.com/RuncibleSpoon/as3/master/declarations/payload.json'
IP = '10.138.0.4'
PORT = '8443'
USER = os.environ['XUSER']
PASS = os.environ['XPASS']
URLBASE = 'https://' + IP + ':' + PORT
TESTPATH = '/mgmt/shared/appsvcs/info'
AS3PATH = '/mgmt/shared/appsvcs/declare'

print("########### Fetching Declaration ###########")
d = requests.get(GITDRC)

# Check we have connectivity and AS3 is installed
print('########### Checking that AS3 is running on ', IP, ' #########')
url = URLBASE + TESTPATH
r = requests.get(url, auth=HTTPBasicAuth(USER, PASS), verify=False)
if r.status_code == 200:
    data = r.json()
    if data["version"]:
        print('AS3 version is ', data["version"])
        print('########## Running Declaration #############')
        url = URLBASE + AS3PATH
        headers = {
            'content-type': 'application/json',
            'accept': 'application/json'
        }
        r = requests.post(url, auth=HTTPBasicAuth(USER, PASS), verify=False,
                          data=d.text, headers=headers)
        print('Status Code:', r.status_code, '\n', r.text)
else:
    print('AS3 test to ', IP, 'failed: ', r.text)

This simple Python code will pull down an AS3 declaration from GitHub using the 'requests' Python library and the GITDRC variable, connect to a specific BIG-IP, test that it's running AS3 (see here for AS3 setup instructions), and then apply the declaration. It will give you some tracing output, but that's about it. There are a couple of things to note about IPs, users, and passwords:

IP = '10.138.0.4'
PORT = '8443'
USER = os.environ['XUSER']
PASS = os.environ['XPASS']

As you can see, I've set the IP and port statically, and the username and password are pulled in from environment variables in the container. We'll talk more about the environment variables below, but this is more a way to illustrate your options than design advice.

Now we need to build a container to run it in. Containers are relatively easy to build with just a Dockerfile and a few more files in a directory. Here's the Dockerfile:

FROM python:3

WORKDIR /usr/src/app

ARG Username=admin
ENV XUSER=$Username
ARG Password=admin
ENV XPASS=$Password

# The line below is not actually used - see the comments - but it is probably a better way
ARG DecURL=https://raw.githubusercontent.com/RuncibleSpoon/as3/master/declarations/payload.json
ENV Declaration=$DecURL

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENTRYPOINT [ "python", "./as3.py" ]

You can see a couple of ARG and ENV statements; these simply set the environment variables that we're (somewhat arbitrarily) using in the Python script. Furthermore, we're going to override them in the build command later. It's worth noting this isn't a way to obfuscate passwords: they are exposed by a simple $ docker image history command that will expose all sorts of things about the build of the container, including the environment variables passed to it. This can be overcome by a multi-stage build - but proper secret management is something you should explore - and comment below if you'd like some examples.
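One hedged alternative (an assumption on my part, not part of the original walkthrough) is to skip the build args entirely and pass the credentials at run time instead, since docker run -e overrides any ENV values baked into the image:

$ docker run --tty --rm \
    -e XUSER=admin \
    -e XPASS='MySecretPassword' \
    runciblespoon/as3_python:A

The values then live only in the container's runtime environment (and your shell history), not in the image layers shown by docker image history.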
What's this requirements.txt file mentioned in the Dockerfile? It's just a manifest for pip, listing the Python packages we need:

# This file is used by pip to install required python packages
# Usage: pip install -r requirements.txt
# requests package
requests==2.21.0

With our Dockerfile, requirements.txt and as3.py files in a directory, we're ready to build a container - in this case I'm going to pass some environment variables into the build to be incorporated in the image - and replace the ones we have set in the Dockerfile:

$ export XUSER=admin
$ export XPASS=admin

Build the container (the -t flag names and tags your container, more of which later):

$ docker build -t runciblespoon/as3_python:A --build-arg Username=$XUSER --build-arg Password=$XPASS .

The first time you do this there will be some lag as files for the python:3 source container are downloaded and cached, but once it has run you should be able to see your image:

$ docker image list
REPOSITORY                 TAG      IMAGE ID       CREATED              SIZE
runciblespoon/as3_python   A        819dfeaad5eb   About a minute ago   936MB
python                     3        954987809e63   42 hours ago         929MB
hello-world                latest   fce289e99eb9   3 months ago         1.84kB

Now we are ready to run the container; maybe this is a good time for a 'nothing up my sleeve' moment - here is the state of the BIG-IP beforehand. Now let's run the container from our image. The --tty flag attaches a pseudo terminal for output and --rm deletes the container afterwards:

$ docker run --tty --rm runciblespoon/as3_python:A
########### Fetching Declaration ###########
########### Checking that AS3 is running on  10.138.0.4  #########
AS3 version is  3.10.0
########## Running Declaration #############
Status Code: 200
{"results":[{"message":"success","lineCount":23,"code":200,"host":"localhost","tenant":"Sample_01","runTime":929}],"declaration":{"class":"ADC","schemaVersion":"3.0.0","id":"urn:uuid:33045210-3ab8-4636-9b2a-c98d22ab915d","label":"Sample 1","remark":"Simple HTTP Service with Round-Robin Load Balancing","Sample_01":{"class":"Tenant","A1":{"class":"Application","template":"http","serviceMain":{"class":"Service_HTTP","virtualAddresses":["10.138.0.4"],"pool":"web_pool"},"web_pool":{"class":"Pool","monitors":["http"],"members":[{"servicePort":8080,"serverAddresses":["10.138.0.3"]},{"servicePort":8081,"serverAddresses":["10.138.0.3"]}]}}},"updateMode":"selective","controls":{"archiveTimestamp":"2019-04-26T18:40:56.861Z"}}}

Success, by the looks of things. Let's check the BIG-IP: running our container has pulled down the AS3 declaration and applied it to the BIG-IP. This same container can now be run repeatedly - and only the AS3 declaration stored in git (or anywhere else your container can get it from) needs to change.

So now that you have this container running locally, you might want to put it somewhere. Docker Hub is a good choice and lets you create one private repository for free. Remember this container image has credentials baked in, so keep it safe and private. Now for the reason for the -t runciblespoon/as3_python:A flag earlier: my Docker Hub user is "runciblespoon" and my private repository is as3_python. So now all I need to do is log in to Docker Hub and push my image there:

$ docker login
$ docker push runciblespoon/as3_python:A

Now I can go to any other host that runs Docker, log in to Docker Hub and run my container:

$ docker login
$ docker run --tty --rm runciblespoon/as3_python:A
Unable to find image 'runciblespoon/as3_python:A' locally
A: Pulling from runciblespoon/as3_python
...
########### Fetching Declaration ###########
...

Docker will pull down my container from my private repo and run it, using the AS3 declaration I've specified. If I want to change my config, I just change the declaration and run it again. Hopefully this article gives you a starting point to develop your own containers, Python scripts, or AS3 declarations. I'd be interested in what more you would like to see - please ask away in the comments section.
AS3 add another VS to existing tenant

I have deployed the sample AS3 script to create a VS with pool and pool members from here:

{
  "class": "AS3",
  "action": "deploy",
  "persist": true,
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "id": "urn:uuid:33045210-3ab8-4636-9b2a-c98d22ab915d",
    "label": "Sample 1",
    "remark": "Simple HTTP Service with Round-Robin Load Balancing",
    "AS1": {
      "class": "Tenant",
      "A1": {
        "class": "Application",
        "template": "generic",
        "MyVS1": {
          "class": "Service_HTTP",
          "virtualAddresses": ["10.0.1.11"],
          "pool": "web_pool_1"
        },
        "web_pool_1": {
          "class": "Pool",
          "monitors": ["http"],
          "members": [
            { "servicePort": 80, "serverAddresses": ["192.0.1.10", "192.0.1.11"] }
          ]
        }
      }
    }
  }
}

Now I want to add another VS to the same tenant (same partition), but when I edit the above script and deploy this:

{
  "class": "AS3",
  "action": "deploy",
  "persist": true,
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "id": "urn:uuid:33045210-3ab8-4636-9b2a-c98d22ab915d",
    "label": "Sample 1",
    "remark": "Simple HTTP Service with Round-Robin Load Balancing",
    "AS1": {
      "class": "Tenant",
      "A1": {
        "class": "Application",
        "template": "generic",
        "MyVS2": {
          "class": "Service_HTTP",
          "virtualAddresses": ["10.0.1.12"],
          "pool": "web_pool_2"
        },
        "web_pool_2": {
          "class": "Pool",
          "monitors": ["http"],
          "members": [
            { "servicePort": 80, "serverAddresses": ["192.0.1.12", "192.0.1.13"] }
          ]
        }
      }
    }
  }
}

It replaces the old configuration and I only have MyVS2. How can I add MyVS2 to the current configuration without losing MyVS1?
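Because AS3 is declarative, each POST describes the complete desired state of the tenants it contains, so the second declaration overwrites the first. A minimal sketch of one way around this - assembled from the two declarations above rather than taken from an answer in the thread - is to keep both services in a single declaration and always POST the full document:

{
  "class": "AS3",
  "action": "deploy",
  "persist": true,
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "id": "urn:uuid:33045210-3ab8-4636-9b2a-c98d22ab915d",
    "AS1": {
      "class": "Tenant",
      "A1": {
        "class": "Application",
        "template": "generic",
        "MyVS1": { "class": "Service_HTTP", "virtualAddresses": ["10.0.1.11"], "pool": "web_pool_1" },
        "web_pool_1": { "class": "Pool", "monitors": ["http"], "members": [{ "servicePort": 80, "serverAddresses": ["192.0.1.10", "192.0.1.11"] }] },
        "MyVS2": { "class": "Service_HTTP", "virtualAddresses": ["10.0.1.12"], "pool": "web_pool_2" },
        "web_pool_2": { "class": "Pool", "monitors": ["http"], "members": [{ "servicePort": 80, "serverAddresses": ["192.0.1.12", "192.0.1.13"] }] }
      }
    }
  }
}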
DELETE method with AS3 is too powerful !

Am I the only one totally freaking out about the fact that with AS3, you just have to send a DELETE method to mgmt/shared/appsvcs/declare and everything is gone?? All your production system could be wiped off that easily... From my understanding it's mandatory to have the administrator privilege to use AS3, and administrators can access all the partitions; so you cannot even create users that would be allowed to manage only specific partitions... It's all or nothing. In my opinion the least you should do is to get rid of this dangerous default behavior, and instead use the keyword "ALL" to remove all tenants...

==========================
Extract from the doc:

Use DELETE to remove configurations for one or more declared Tenants from the target ADC. If you do not specify any Tenants, DELETE removes all of them, which is to say, it removes the entire declared configuration. Indicate the target device and Tenants to remove by appending elements to the main AS3 URL path (/mgmt/shared/appsvcs/declare). By default (just the main URL) DELETE removes all Tenants from target localhost.

DELETE examples:
DELETE https://192.0.2.10/mgmt/shared/appsvcs/declare removes all tenants
DELETE https://192.0.2.10/mgmt/shared/appsvcs/declare/T1,T2,T5 removes Tenants T1, T2, and T5, leaving the rest of the most recent declared configuration for localhost in place
==========================

Does anyone agree, or have a suggestion to add some security?
How to modify HTTP URI using ILX Plugin (v13+)

I am trying to implement an ILX Plugin for modifying certain HTTP parameters before forwarding the request to the server. I am able to modify the headers using setHeaders. However, I am unable to find a method to modify the URI.

e.g. /a/b/c?p=np (transform to) /catch-all?p=np

Is there a method I can use to modify the URI?
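For comparison, the same rewrite is a one-liner in a plain iRule (not an ILX plugin, so this is a swapped-in alternative rather than the ILX answer the poster was after):

when HTTP_REQUEST {
    # Replace the path but keep the original query string
    HTTP::uri "/catch-all?[HTTP::query]"
}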
AS3 declaration

In all the example declarations I've seen so far, it lists the virtual server name as serviceMain, and if I deviate from that by giving it my own virtual server name like testme123.example.com-80 it complains about not using serviceMain. How can we supply a different VS name in an AS3 declaration? Here is the error message. I used a Python requests call to send the declaration. I'm using a Simple HTTP AS3 declaration.

('Status Code:', 422, '\n', u'{"code":422,"errors":["/Sample_01/A1: should have required property \'serviceMain\'"],"declarationFullId":"","message":"declaration is invalid"}')
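The Sample_01 declaration in the error uses the http template, which expects the main service to be named serviceMain. A hedged sketch of the usual workaround - consistent with the "AS3 add another VS to existing tenant" example above - is to switch the Application to the generic template, which lets you pick the service name yourself (this is only the tenant portion, to be wrapped in a full AS3/ADC declaration like the earlier examples; whether the dots in that particular name pass schema validation is worth verifying):

{
  "Sample_01": {
    "class": "Tenant",
    "A1": {
      "class": "Application",
      "template": "generic",
      "testme123.example.com-80": {
        "class": "Service_HTTP",
        "virtualAddresses": ["10.138.0.4"],
        "pool": "web_pool"
      },
      "web_pool": {
        "class": "Pool",
        "monitors": ["http"],
        "members": [{ "servicePort": 80, "serverAddresses": ["10.138.0.3"] }]
      }
    }
  }
}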
question of limitation and expiration for rest api token

Now I cannot log in to the LTM via the REST API; I think the number of tokens for my account has reached the maximum. Here is the error:

login myhost fail b'{"code":401,"message":"remoteSender:http://localhost:8100/shared/authn/login, method:POST ","originalRequestBody":"{\\"username\\":\\"user\\",\\"loginProviderName\\":\\"tmos\\",\\"generation\\":0,\\"lastUpdateMicros\\":0}","referer":"ipaddress","restOperationId":124004534,"kind":":resterrorresponse"}'

Here are my questions:
Is there any limit on the number of REST API tokens a user can request? I have seen it said that one user can only have 100 tokens.
How can I check the existing tokens via the GUI or CLI, since I cannot log in to the device via the REST API?
How can I tell how long it will be before a token expires? Is there any way to delete a token?
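If you still have console or SSH access to the BIG-IP, the active tokens can be inspected and pruned through the token worker. This is a hedged sketch using endpoints I believe exist on recent TMOS versions (verify against your version's iControl REST documentation); the credentials and token value are placeholders:

# List the tokens issued on this box (run locally, so basic auth still works)
$ curl -sku admin:admin https://localhost/mgmt/shared/authz/tokens | python -m json.tool

# Delete one stale token by its "token" value from the listing above
$ curl -sku admin:admin -X DELETE https://localhost/mgmt/shared/authz/tokens/ABCDEFGHIJKLMNOP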
CIS and Kubernetes - Part 1: Install Kubernetes and Calico

Welcome to this series to see how to:
Install Kubernetes and Calico (Part 1)
Deploy F5 Container Ingress Services (F5 CIS) to tie the applications lifecycle to our application services (Part 2)

Here is the setup of our lab environment:
BIG-IP Version: 15.0.1
Kubernetes components: Ubuntu 18.04 LTS

We consider that your BIG-IPs are already set up and running:
Licensed and set up as a cluster
The networking setup is already done

Part 1: Install Kubernetes and Calico

Setup our systems before installing Kubernetes

Step1: Update our systems and install docker
To run containers in Pods, Kubernetes uses a container runtime. We will use docker and follow the recommendation provided here.
As root on ALL Kubernetes components (Master and Node):

# Install packages to allow apt to use a repository over HTTPS
apt-get -y update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
# Add Docker apt repository.
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
# Install Docker CE.
apt-get -y update && apt-get install -y docker-ce=18.06.2~ce~3-0~ubuntu
# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart docker.
systemctl daemon-reload
systemctl restart docker

We may do a quick test to ensure docker runs as expected:

docker run hello-world

Step2: Setup Kubernetes tools (kubeadm, kubelet and kubectl)
To setup Kubernetes, we will leverage the following tools:
kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line util to talk to your cluster.
As root on ALL Kubernetes components (Master and Node):

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF | tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get -y update

We can review which version of Kubernetes is supported with F5 Container Ingress Services here. At the time of this article, the latest supported version is v1.13.4. We'll make sure to install this specific version with our following step:

apt-get install -qy kubelet=1.13.4-00 kubeadm=1.13.4-00 kubectl=1.13.4-00 kubernetes-cni=0.6.0-00
apt-mark hold kubelet kubeadm kubectl

Install Kubernetes

Step1: Setup Kubernetes with kubeadm
We will follow the steps provided in the documentation here.
As root on the MASTER node (make sure to update the api server address to reflect your master node IP):

kubeadm init --apiserver-advertise-address=10.1.20.20 --pod-network-cidr=192.168.0.0/16

Note: SAVE somewhere the kubeadm join command. It is needed to "assimilate" the node later. In my example, it looks like the following (YOURS WILL BE DIFFERENT):

kubeadm join 10.1.20.20:6443 --token rlbc20.va65z7eauz89mmuv --discovery-token-ca-cert-hash sha256:42eca5bf49c645ff143f972f6bc88a59468a30276f907bf40da3bcf5127c0375

Now you should NOT be ROOT anymore. Go back to your non-root user.
Since I use Ubuntu, I'll use the default "ubuntu" user. Run the following commands as highlighted in the screenshot above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step2: Install the networking component of Kubernetes
The last step is to setup the network related to our k8s infrastructure. In our kubeadm init command, we used --pod-network-cidr=192.168.0.0/16 in order to be able to set up the network leveraging Calico, as documented here.

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

You may monitor the deployment by running the command:

kubectl get pods --all-namespaces

After some time (<1 min), everything should have a "Running" status. Make sure that CoreDNS also started properly. If everything is up and running, we have our master setup properly and can go to the node to setup k8s on it.

Step3: Add the Node to our Kubernetes Cluster
Now that the master is setup properly, we can assimilate the node. You need to retrieve the "kubeadm join ..." command that you received at the end of the "kubeadm init ..." command. You must run the following command as ROOT on the Kubernetes NODE (remember that you got a different hash and token, the command below is an example):

kubeadm join 10.1.20.20:6443 --token rlbc20.va65z7eauz89mmuv --discovery-token-ca-cert-hash sha256:42eca5bf49c645ff143f972f6bc88a59468a30276f907bf40da3bcf5127c0375

We can check the status of our node by running the following command on our MASTER (ubuntu user):

kubectl get nodes

Both components should have a "Ready" status. The last step is to setup Calico between our BIG-IPs and our Kubernetes cluster.

Setup Calico
We need to setup Calico on our BIG-IPs and k8s components. We will setup our environment with the following AS Number: 64512.

Step1: BIG-IPs Calico setup
F5 has documented this procedure here. We will use our self IPs on the internal network. Therefore we need to make sure of the following:
The self IP has a port lockdown setup of "Allow All"
Or add a TCP custom port to the self IP: TCP port 179
You need to allow BGP on the default route domain 0 on your BIG-IPs. Connect to the BIG-IP GUI and go into Network > Route Domain. Click on Route Domain "0" and allow BGP. Click on "Update".
Once this is done, connect via SSH and get into a bash shell on both BIG-IPs. Run the following commands:

#access the IMI Shell
imish
#Switch to enable mode
enable
#Enter configuration mode
config terminal
#Setup route bgp with AS Number 64512
router bgp 64512
#Create BGP Peer group
neighbor calico-k8s peer-group
#assign peer group as BGP neighbors
neighbor calico-k8s remote-as 64512
#we need to add all the peers: the other BIG-IP, our k8s components
neighbor 10.1.20.20 peer-group calico-k8s
neighbor 10.1.20.21 peer-group calico-k8s
#on BIG-IP1, run
neighbor 10.1.20.12 peer-group calico-k8s
#on BIG-IP2, run
neighbor 10.1.20.11 peer-group calico-k8s
#save configuration
write
#exit
end

You can review your setup with the command:

show ip bgp neighbors

Note: your other BIG-IP should be identified with a router ID and have a BGP state of "Active". The k8s nodes won't have a router ID since BGP hasn't been set up on those nodes yet. Keep your BIG-IP SSH sessions open.
We'll re-use the imish terminal once our k8s components have Calico set up.

Step2: Kubernetes Calico setup
On the MASTER node (not as root), we need to retrieve the calicoctl binary:

curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.10.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin

We need to setup calicoctl as explained here:

sudo mkdir /etc/calico

Create a file /etc/calico/calicoctl.cfg with your preferred editor (you'll need sudo privileges). This file should contain the following:

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/home/ubuntu/config"

Note: you may have to change the path specified by the kubeconfig parameter based on the user you use to run kubectl commands.

To make sure that calicoctl is properly setup, run the command:

calicoctl get nodes

You should get a list of your Kubernetes nodes. Now we can work on our Calico/BGP configuration as documented here. On the MASTER node:

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 64512
EOF

Note: Because we set nodeToNodeMeshEnabled to True, the k8s nodes will receive the same config.

We may now setup our BIG-IP BGP peers. Replace the peerIP value with the IP of your BIG-IPs:

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-global-bigip1
spec:
  peerIP: 10.1.20.11
  asNumber: 64512
EOF

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-global-bigip2
spec:
  peerIP: 10.1.20.12
  asNumber: 64512
EOF

Review your setup with the command:

calicoctl get bgpPeer

If you go back to your BIG-IP SSH connections, you may check that your Kubernetes nodes now have a router ID in your BGP configuration:

imish
show ip bgp neighbors

Summary
So far we have:
Setup Kubernetes
Setup Calico between our BIG-IPs and our Kubernetes cluster
In the next article, we will setup F5 Container Ingress Services (F5 CIS).
Installing and running iControl extensions in isolated GCP VPCs

BIG-IP instances launched on Google Cloud Platform usually need access to the internet to retrieve extensions, install DO and AS3 declarations, and get to any other run-time assets pulled from public URLs during boot. This allows decoupling of BIG-IP releases from the libraries and extensions that enhance GCP deployments, and is generally a good thing.

What if the BIG-IP doesn't have access to the internet?
Best practices for Google Cloud recommend that VMs are deployed with the minimal set of access requirements. For some that means that egress to the internet is restricted too:
BIG-IP VMs do not have public IP addresses.
A NAT Gateway or NATing VM is not present in the VPC.
Default VPC network routes to the internet have been removed.

If you have a private artifact repository available in the VPC, supporting libraries and onboarding resources could be added there and retrieved during initialization as needed, or you could also create customized BIG-IP images that have the supporting libraries pre-installed (see BIG-IP image generator for details). Both those methods solve the problem of installing run-time components without internet access, but Cloud Failover Extension, AS3 Service Discovery, and Telemetry Streaming must be able to make calls to GCP APIs, and GCP APIs are presented as endpoints on the public internet. For example, Cloud Failover Extension will not function correctly out of the box when the BIG-IP instances are not able to reach the internet directly or via a NAT, because the extension must have access to Storage APIs for shared-state persistence, and to Compute APIs to update network resources. If the BIG-IP is deployed without functioning routes to the internet, CFE cannot function as expected.

Figure 1: BIG-IP VMs (1) cannot reach public API endpoints (2) because routes to the internet (3) are removed

Given that constraint, how can we make CFE work in truly isolated VPCs where internet access is prohibited?

Private Google Access
Enabling Private Google Access on each VPC subnet that may need to access Google Cloud APIs changes the underlying SDN so that the CIDRs for restricted.googleapis.com (or private.googleapis.com †) will be routed without going through the internet. When combined with a private DNS zone which shadows all googleapis.com lookups to use the chosen protected endpoint range, the VPC networks effectively have access to all GCP APIs. The steps to do so are simple:
1. Enable Private Google Access on each VPC subnet where a GCP API call may be sourced.
2. Create a Cloud DNS private zone for googleapis.com that contains two records: a CNAME for *.googleapis.com that responds with restricted.googleapis.com, and an A record for restricted.googleapis.com that resolves to each host in 199.36.153.4/30.
3. Create a custom route on each VPC network for 199.36.153.4/30 with the next hop set to the internet gateway.

With this configuration in place, any VMs that are attached to the VPC networks associated with this private DNS zone will automatically try to use the 199.36.153.4/30 endpoints for all GCP API calls without code changes, and the custom route will allow Private Google Access to function correctly.

Automating with Terraform and Google Cloud Foundation Toolkit ‡
While you can perform the steps to enable private API access manually, it is always better to have a repeatable and reusable approach that can be automated as part of your infrastructure provisioning.
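For readers who want to try the manual route before automating it, the three steps above map roughly onto the following gcloud commands. This is a sketch only: the project, subnet, network, and zone names are invented for illustration, and the exact flags should be checked against the current gcloud documentation.

# 1. Turn on Private Google Access for an existing subnet
gcloud compute networks subnets update my-subnet \
    --region us-west1 \
    --enable-private-ip-google-access

# 2. Create a private DNS zone shadowing googleapis.com (the CNAME and A records
#    are then added with gcloud dns record-sets commands or in the Cloud Console)
gcloud dns managed-zones create googleapis \
    --dns-name googleapis.com. \
    --visibility private \
    --networks my-vpc \
    --description "Shadow googleapis.com to restricted.googleapis.com"

# 3. Route the restricted API range at the default internet gateway
gcloud compute routes create restricted-apis \
    --network my-vpc \
    --destination-range 199.36.153.4/30 \
    --next-hop-gateway default-internet-gateway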
My tool of choice for infrastructure automation is Hashicorp's Terraform, together with Google's Cloud Foundation Toolkit, a set of Terraform modules that can create and configure GCP resources. By combining Google's modules with my own BIG-IP modules, we can build a repeatable solution for isolated VPC deployments; just change the variable definitions to deploy to development, testing/QA, and production.

Cloud Failover Example

Figure 2: Private Google Access (1), custom DNS (2), and custom routes (3) combine to enable API access (4) without public internet access

A fully-functional example that builds out the infrastructure shown in figure 2 can be found in my GitHub repo f5-google-bigip-isolated-vpcs. When executed, Terraform will create three network VPCs that lack the default-internet egress route, but have a custom route defined to allow traffic to the restricted.googleapis.com CIDR. A Cloud DNS private zone will be created to override wildcard googleapis.com lookups with restricted.googleapis.com, and the private zone will be enabled on all three VPC networks. A pair of BIG-IPs are instantiated with CFE enabled and configured to use a dedicated CFE bucket for state management. An IAP-enabled bastion host with tinyproxy allows for SSH and GUI access to the BIG-IPs (see the repo's README for full details on how to connect).

Once logged in to the active BIG-IP, you can verify that the instances do not have access to the internet, and you can verify that CFE is functioning correctly by forcing the active instance to standby. Almost immediately you can see that the other BIG-IP instance has become the active instance.

Notes
† Private vs Restricted access
GCP supports two protected endpoint options: private and restricted. Both allow access to GCP API endpoints without traversing the public internet, but restricted is integrated with VPC Service Controls. If you need access to a GCP API that is unsupported by VPC Service Controls, you can choose private access and change steps 2 and 3 above to use private.googleapis.com and 199.36.153.8/30 instead.

‡ Prefer Google Deployment Manager?
My colleague Gert Wolfis has written a similar article that focuses on using GDM templates for BIG-IP deployment. You can find his article at https://devcentral.f5.com/s/articles/Deploy-BIG-IP-on-GCP-with-GDM-without-Internet-access.
Telemetry streaming - One click deploy using Ansible

In this article we will focus on using Ansible to enable and install Telemetry Streaming (TS) and associated dependencies.

Telemetry streaming
The F5 BIG-IP is a full proxy architecture, which essentially means that the BIG-IP LTM completely understands the end-to-end connection, enabling it to be an endpoint and originator of client and server side connections. This empowers the BIG-IP to have traffic statistics from the client to the BIG-IP and from the BIG-IP to the server, giving the user the entire view of their network statistics. To gain meaningful insight, you must be able to gather your data and statistics (telemetry) into a useful place. Telemetry Streaming is an extension designed to declaratively aggregate, normalize, and forward statistics and events from the BIG-IP to a consumer application. You can learn more about telemetry streaming here, but let's get to Ansible.

Enable and Install using Ansible
The Ansible playbook below performs the following tasks:
Grab the latest Application Services 3 (AS3) and Telemetry Streaming (TS) versions
Download the AS3 and TS packages and install them on BIG-IP using a role
Deploy AS3 and TS declarations on BIG-IP using a role from Ansible galaxy
If AVR logs are needed for TS, provision the BIG-IP AVR module and configure AVR to point to TS

Prerequisites
Supported on BIG-IP 14.1+ versions
If AVR is required to be configured, make sure there is enough memory for the module to be enabled along with all the other BIG-IP modules that are provisioned in your environment
The TS data is being pushed to Azure Log Analytics (modify it to use your own consumer). If Azure logs are being used, change your TS json file with the correct workspace ID and shared key
Ansible is installed on the host from where the scripts are run
The following files are present in the directory:
Variable file (vars.yml)
TS poller and listener setup (ts_poller_and_listener_setup.declaration.json)
Declare logging profile (as3_ts_setup_declaration.json)
Ansible playbook (ts_workflow.yml)

Get started
Download the following roles from Ansible galaxy.

ansible-galaxy install f5devcentral.f5app_services_package --force

This role performs a series of steps needed to download and install RPM packages on the BIG-IP that are a part of the F5 automation toolchain. Read through the prerequisites for the role before installing it.

ansible-galaxy install f5devcentral.atc_deploy --force

This role deploys the declaration using the RPM package installed above. Read through the prerequisites for the role before installing it.

By default, roles get installed into the /etc/ansible/role directory.

Next copy the below contents into a file named vars.yml. Change the variable file to reflect your environment.

# BIG-IP MGMT address and username/password
f5app_services_package_server: "xxx.xxx.xxx.xxx"
f5app_services_package_server_port: "443"
f5app_services_package_user: "*****"
f5app_services_package_password: "*****"
f5app_services_package_validate_certs: "false"
f5app_services_package_transport: "rest"

# URI from where latest RPM version and package will be downloaded
ts_uri: "https://github.com/F5Networks/f5-telemetry-streaming/releases"
as3_uri: "https://github.com/F5Networks/f5-appsvcs-extension/releases"

# If AVR module logs needed then set to 'yes' else leave it as 'no'
avr_needed: "no"

# Virtual servers in your environment to assign the logging profiles (If AVR set to 'yes')
virtual_servers:
  - "vs1"
  - "vs2"

Next copy the below contents into a file named ts_poller_and_listener_setup.declaration.json.
{ "class": "Telemetry", "controls": { "class": "Controls", "logLevel": "debug" }, "My_Poller": { "class": "Telemetry_System_Poller", "interval": 60 }, "My_Consumer": { "class": "Telemetry_Consumer", "type": "Azure_Log_Analytics", "workspaceId": "<<workspace-id>>", "passphrase": { "cipherText": "<<sharedkey>>" }, "useManagedIdentity": false, "region": "eastus" } } Next copy the below contents into a file named as3_ts_setup_declaration.json { "class": "ADC", "schemaVersion": "3.10.0", "remark": "Example depicting creation of BIG-IP module log profiles", "Common": { "Shared": { "class": "Application", "template": "shared", "telemetry_local_rule": { "remark": "Only required when TS is a local listener", "class": "iRule", "iRule": "when CLIENT_ACCEPTED {\n node 127.0.0.1 6514\n}" }, "telemetry_local": { "remark": "Only required when TS is a local listener", "class": "Service_TCP", "virtualAddresses": [ "255.255.255.254" ], "virtualPort": 6514, "iRules": [ "telemetry_local_rule" ] }, "telemetry": { "class": "Pool", "members": [ { "enable": true, "serverAddresses": [ "255.255.255.254" ], "servicePort": 6514 } ], "monitors": [ { "bigip": "/Common/tcp" } ] }, "telemetry_hsl": { "class": "Log_Destination", "type": "remote-high-speed-log", "protocol": "tcp", "pool": { "use": "telemetry" } }, "telemetry_formatted": { "class": "Log_Destination", "type": "splunk", "forwardTo": { "use": "telemetry_hsl" } }, "telemetry_publisher": { "class": "Log_Publisher", "destinations": [ { "use": "telemetry_formatted" } ] }, "telemetry_traffic_log_profile": { "class": "Traffic_Log_Profile", "requestSettings": { "requestEnabled": true, "requestProtocol": "mds-tcp", "requestPool": { "use": "telemetry" }, "requestTemplate": "event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\"" } } } } } NOTE: To better understand the above declarations check out our clouddocs page: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/telemetry-system.html Next copy the below contents into a file named ts_workflow.yml - name: Telemetry streaming setup hosts: localhost connection: local any_errors_fatal: true vars_files: vars.yml tasks: - name: Get latest AS3 RPM name action: shell wget -O - {{as3_uri}} | grep -E rpm | head -1 | cut -d "/" -f 7 | cut -d "=" -f 1 | cut -d "\"" -f 1 register: as3_output - debug: var: as3_output.stdout_lines[0] - set_fact: as3_release: "{{as3_output.stdout_lines[0]}}" - name: Get latest AS3 RPM tag action: shell wget -O - {{as3_uri}} | grep -E rpm | head -1 | cut -d "/" -f 6 register: as3_output - debug: var: as3_output.stdout_lines[0] - set_fact: as3_release_tag: "{{as3_output.stdout_lines[0]}}" - name: Get latest TS RPM name action: shell wget -O - {{ts_uri}} | grep -E rpm | head -1 | cut -d "/" -f 7 | cut -d "=" -f 1 | cut -d "\"" -f 1 register: ts_output - debug: var: ts_output.stdout_lines[0] - set_fact: ts_release: "{{ts_output.stdout_lines[0]}}" - name: Get latest TS RPM tag action: shell wget -O - {{ts_uri}} | grep -E rpm | head -1 | cut -d "/" -f 6 register: ts_output - debug: var: ts_output.stdout_lines[0] - set_fact: ts_release_tag: "{{ts_output.stdout_lines[0]}}" - name: Download and Install AS3 and TS RPM ackages to BIG-IP using role include_role: name: f5devcentral.f5app_services_package vars: f5app_services_package_url: "{{item.uri}}/download/{{item.release_tag}}/{{item.release}}?raw=true" 
        f5app_services_package_path: "/tmp/{{item.release}}"
      loop:
        - {uri: "{{as3_uri}}", release_tag: "{{as3_release_tag}}", release: "{{as3_release}}"}
        - {uri: "{{ts_uri}}", release_tag: "{{ts_release_tag}}", release: "{{ts_release}}"}
    - name: Deploy AS3 and TS declaration on the BIG-IP using role
      include_role:
        name: f5devcentral.atc_deploy
      vars:
        atc_method: POST
        atc_declaration: "{{ lookup('template', item.file) }}"
        atc_delay: 10
        atc_retries: 15
        atc_service: "{{item.service}}"
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
      loop:
        - {service: "AS3", file: "as3_ts_setup_declaration.json"}
        - {service: "Telemetry", file: "ts_poller_and_listener_setup_declaration.json"}
    # If AVR logs need to be enabled
    - name: Provision BIG-IP with AVR
      bigip_provision:
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
        module: "avr"
        level: "nominal"
      when: avr_needed == "yes"
    - name: Enable AVR logs using tmsh commands
      bigip_command:
        commands:
          - modify analytics global-settings { offbox-protocol tcp offbox-tcp-addresses add { 127.0.0.1 } offbox-tcp-port 6514 use-offbox enabled }
          - create ltm profile analytics telemetry-http-analytics { collect-geo enabled collect-http-timing-metrics enabled collect-ip enabled collect-max-tps-and-throughput enabled collect-methods enabled collect-page-load-time enabled collect-response-codes enabled collect-subnets enabled collect-url enabled collect-user-agent enabled collect-user-sessions enabled publish-irule-statistics enabled }
          - create ltm profile tcp-analytics telemetry-tcp-analytics { collect-city enabled collect-continent enabled collect-country enabled collect-nexthop enabled collect-post-code enabled collect-region enabled collect-remote-host-ip enabled collect-remote-host-subnet enabled collected-by-server-side enabled }
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
      when: avr_needed == "yes"
    - name: Assign TCP and HTTP profiles to virtual servers
      bigip_virtual_server:
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
        name: "{{item}}"
        profiles:
          - http
          - telemetry-http-analytics
          - telemetry-tcp-analytics
      loop: "{{virtual_servers}}"
      when: avr_needed == "yes"

Now execute the playbook:

ansible-playbook ts_workflow.yml

Verify
Login to the BIG-IP UI. Go to the menu iApps -> Package Management LX.
Both the f5-telemetry and f5-appsvcs RPMs should be present.
Login to the BIG-IP CLI and check the restjavad logs present at /var/log for any TS errors.
Login to your consumer where the logs are being sent to and make sure the consumer is receiving the logs.

Conclusion
The Telemetry Streaming (TS) extension is very powerful and is capable of sending much more information than described above. Take a look at the complete list of logs as well as consumer applications supported by TS over on CloudDocs: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/using-ts.html
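As an extra check from the command line, you can ask the Telemetry Streaming worker for its version directly. This is a hedged example using the TS info endpoint described in the TS documentation; substitute your own BIG-IP management address and credentials:

# Query the TS worker; a healthy install returns a small JSON document that includes the installed TS version
$ curl -sku admin:admin https://<big-ip-mgmt-address>/mgmt/shared/telemetry/info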