iControl REST Cookbook - Virtual Server (ltm virtual)
This cookbook lists selected ready-to-use iControl REST curl commands for virtual-server related resources. Each recipe consists of the curl command, its tmsh equivalent, and sample output. The following curl options are used throughout:

-s : Suppress the progress meter. Handy when you want to pipe the output.
-k : Allow "insecure" SSL connections.
-u : Specify the user ID and password. To start, use the "admin" account that you normally use to access the Configuration Utility. When you specify the password at the same time, concatenate the two with ":", e.g., admin:admin.
-X <method> : Specify the HTTP method. When omitted, the default is GET. In the REST framework, POST means create (tmsh create), PUT means overwriting the existing resource with the data sent, and PATCH means merging the data sent into the existing resource (tmsh modify).
-H <header> : Specify a request header. When you send data (POST, PATCH, PUT), you need to tell the server that the data is in JSON format, i.e., -H "Content-Type: application/json".
-d 'data' : The JSON data to send. Note that you need to quote the entire JSON blob, and each "name":"value" pair must be quoted. When you have nested quotes, make sure you escape (\) them.

Get information of the virtual <vs>

tmsh list ltm virtual <vs>

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>

Sample Output

{ kind: 'tm:ltm:virtual:virtualstate', name: 'vs', fullPath: 'vs', generation: 1109, selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs?ver=12.1.0', addressStatus: 'yes', autoLasthop: 'default', cmpEnabled: 'yes', connectionLimit: 0, description: 'TestData', destination: '/Common/192.168.184.226:80', enabled: true, gtmScore: 0, ipProtocol: 'tcp', mask: '255.255.255.255', mirror: 'disabled', mobileAppTunnel: 'disabled', nat64: 'disabled', pool: '/Common/vs-pool', poolReference: { link: 'https://localhost/mgmt/tm/ltm/pool/~Common~vs-pool?ver=12.1.0' }, rateLimit: 'disabled', rateLimitDstMask: 0, rateLimitMode: 'object', rateLimitSrcMask: 0, serviceDownImmediateAction: 'none', source: '0.0.0.0/0', sourceAddressTranslation: { type: 'automap' }, sourcePort: 'preserve', synCookieStatus: 'not-activated', translateAddress: 'enabled', translatePort: 'enabled', vlansDisabled: true, vsIndex: 4, rules: [ '/Common/irule' ], rulesReference: [ { link: 'https://localhost/mgmt/tm/ltm/rule/~Common~iRuleTest?ver=12.1.0' } ], policiesReference: { link: 'https://localhost/mgmt/tm/ltm/virtual/~Common~vs/policies?ver=12.1.0', isSubcollection: true }, profilesReference: { link: 'https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles?ver=12.1.0', isSubcollection: true } }

Get only a specific field of the virtual <vs>

The naming convention for the parameters is slightly different from the ones in tmsh, so look for the familiar names in the GET response above. The example below queries the Default Pool (pool).
tmsh list ltm virtual <vs> pool

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>?options=pool

Sample Output

{ kind: 'tm:ltm:virtual:virtualstate', name: 'vs', fullPath: 'vs', generation: 1, selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs?options=pool&ver=12.1.1', pool: '/Common/vs-pool', poolReference: { link: 'https://localhost/mgmt/tm/ltm/pool/~Common~vs-pool?ver=12.1.1' } }

Get all the information of the virtual <vs>

Unlike the tmsh equivalent, an iControl REST GET does not return the configuration information of the attached policies and profiles. To see them, use expandSubcollections.

tmsh list ltm virtual <vs>

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>?expandSubcollections=true

Sample Output

{ "addressStatus": "yes", "autoLasthop": "default", "cmpEnabled": "yes", "connectionLimit": 0, "destination": "/Common/192.168.184.240:80", "enabled": true, "fullPath": "vs", "generation": 291, "gtmScore": 0, "ipProtocol": "tcp", "kind": "tm:ltm:virtual:virtualstate", "mask": "255.255.255.255", "mirror": "disabled", "mobileAppTunnel": "disabled", "name": "vs", "nat64": "disabled", "policiesReference": { "isSubcollection": true, "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/policies?ver=13.1.0" }, "pool": "/Common/CentOS-all80", "poolReference": { "link": "https://localhost/mgmt/tm/ltm/pool/~Common~CentOS-all80?ver=13.1.0" }, "profilesReference": { "isSubcollection": true, "items": [ { "context": "all", "fullPath": "/Common/http", "generation": 291, "kind": "tm:ltm:virtual:profiles:profilesstate", "name": "http", "nameReference": { "link": "https://localhost/mgmt/tm/ltm/profile/http/~Common~http?ver=13.1.0" }, "partition": "Common", "selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles/~Common~http?ver=13.1.0" }, { "context": "all", "fullPath": "/Common/tcp", "generation": 287, "kind": "tm:ltm:virtual:profiles:profilesstate", "name": "tcp", "nameReference": { "link": "https://localhost/mgmt/tm/ltm/profile/tcp/~Common~tcp?ver=13.1.0" }, "partition": "Common", "selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles/~Common~tcp?ver=13.1.0" } ], "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles?ver=13.1.0" }, "rateLimit": "disabled", "rateLimitDstMask": 0, "rateLimitMode": "object", "rateLimitSrcMask": 0, "selfLink": "https://localhost/mgmt/tm/ltm/virtual/vs?expandSubcollections=true&ver=13.1.0", "serviceDownImmediateAction": "none", "source": "0.0.0.0/0", "sourceAddressTranslation": { "type": "automap" }, "sourcePort": "preserve", "synCookieStatus": "not-activated", "translateAddress": "enabled", "translatePort": "enabled", "vlansDisabled": true, "vsIndex": 2 }

Get stats of the virtual <vs>

tmsh show ltm virtual <vs>

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>/stats

Sample Output

{ kind: 'tm:ltm:virtual:virtualstats', generation: 1109, selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs/stats?ver=12.1.0', entries: { 'https://localhost/mgmt/tm/ltm/virtual/vs/~Common~vs/stats': { nestedStats: { kind: 'tm:ltm:virtual:virtualstats', selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs/~Common~vs/stats?ver=12.1.0', entries: { 'clientside.bitsIn': { value: 12880 }, 'clientside.bitsOut': { value: 34592 }, 'clientside.curConns': { value: 0 }, 'clientside.evictedConns': { value: 0 }, 'clientside.maxConns': { value: 2 }, 'clientside.pktsIn': { value: 26 }, 'clientside.pktsOut': { value: 26 }, 'clientside.slowKilled': { value: 0 }, 'clientside.totConns': { value: 6 }, cmpEnableMode: { description: 'all-cpus' }, cmpEnabled: {
description: 'enabled' }, csMaxConnDur: { value: 37 }, csMeanConnDur: { value: 29 }, csMinConnDur: { value: 17 }, destination: { description: '192.168.184.226:80' }, 'ephemeral.bitsIn': { value: 0 }, 'ephemeral.bitsOut': { value: 0 }, 'ephemeral.curConns': { value: 0 }, 'ephemeral.evictedConns': { value: 0 }, 'ephemeral.maxConns': { value: 0 }, 'ephemeral.pktsIn': { value: 0 }, 'ephemeral.pktsOut': { value: 0 }, 'ephemeral.slowKilled': { value: 0 }, 'ephemeral.totConns': { value: 0 }, fiveMinAvgUsageRatio: { value: 0 }, fiveSecAvgUsageRatio: { value: 0 }, tmName: { description: '/Common/vs' }, oneMinAvgUsageRatio: { value: 0 }, 'status.availabilityState': { description: 'available' }, 'status.enabledState': { description: 'enabled' }, 'status.statusReason': { description: 'The virtual server is available' }, syncookieStatus: { description: 'not-activated' }, 'syncookie.accepts': { value: 0 }, 'syncookie.hwAccepts': { value: 0 }, 'syncookie.hwSyncookies': { value: 0 }, 'syncookie.hwsyncookieInstance': { value: 0 }, 'syncookie.rejects': { value: 0 }, 'syncookie.swsyncookieInstance': { value: 0 }, 'syncookie.syncacheCurr': { value: 0 }, 'syncookie.syncacheOver': { value: 0 }, 'syncookie.syncookies': { value: 0 }, totRequests: { value: 4 } } } } } }

Change one of the configuration options of the virtual <vs>

The command below changes the Description field of the virtual ("description" in both tmsh and iControl REST).

tmsh modify ltm virtual <vs> description "Hello World!"

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs> \
  -X PATCH -H "Content-Type: application/json" \
  -d '{"description": "Hello World!"}'

Sample Output

{ kind: 'tm:ltm:virtual:virtualstate', name: 'vs', ... description: 'Hello World!', <==== Changed. ... }

Disable the virtual <vs>

The command syntax is the same as above: to change the state of a virtual from "enabled" to "disabled", send "disabled":true. To enable the virtual, use "enabled":true. Note that the Boolean values true/false do not require quotation marks.

tmsh modify ltm virtual <vs> disabled

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs> \
  -X PATCH -H "Content-Type: application/json" \
  -d '{"disabled": true}'

Sample Output

{ kind: 'tm:ltm:virtual:virtualstate', name: 'vs', fullPath: 'vs', ... disabled: true, <== Changed ... }

Add another iRule to <vs>

When the virtual has iRules already attached, you need to send the existing ones along with the additional one. For example, to add /Common/testRule2 to a virtual that already has /Common/testRule1, specify both in an array (square brackets). Note that the /Common/testRule2 iRule object must already exist.

tmsh modify ltm virtual <vs> rules {testRule1 testRule2}

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs> \
  -X PATCH -H "Content-Type: application/json" \
  -d '{"rules": ["/Common/testRule1", "/Common/testRule2"] }'

Sample Output

{ kind: 'tm:ltm:virtual:virtualstate', name: 'vs', fullPath: 'vs', ... rules: [ '/Common/testRule1', '/Common/testRule2' ], <== Changed rulesReference: [ { link: 'https://localhost/mgmt/tm/ltm/rule/~Common~testRule1?ver=12.1.1' }, { link: 'https://localhost/mgmt/tm/ltm/rule/~Common~testRule2?ver=12.1.1' } ], ... }

Create a new virtual <vs>

You can create a skeleton virtual by specifying only the Destination Address and Mask. The remaining parameters, such as profiles, are set to their defaults. You can later modify the parameters by PATCH-ing.
tmsh create ltm virtual <vs> destination <ip:port> mask <ip>

curl -sku admin:admin -X POST -H "Content-Type: application/json" \
  -d '{"name": "vs", "destination":"192.168.184.230:80", "mask":"255.255.255.255"}' \
  https://<host>/mgmt/tm/ltm/virtual

Sample Output

{ kind: 'tm:ltm:virtual:virtualstate', name: 'vs', partition: 'Common', fullPath: '/Common/vs', ... destination: '/Common/192.168.184.230:80', <== Created ... mask: '255.255.255.255', <== Created ... }

Create a new virtual <vs> with a lot of parameters

You can specify all the essential parameters upon creation. This example creates a new virtual with a pool, a default persistence profile, profiles, an iRule, and source address translation. The call fails if any of the parameters conflict. For example, you cannot specify Cookie Persistence without also specifying appropriate profiles: if you do not specify any profile, the virtual falls back to the default fastL4, which is not compatible with Cookie Persistence.

tmsh create ltm virtual <vs> destination <ip:port> mask <ip> pool <pool> persist replace-all-with { cookie } profiles add { tcp http clientssl } rules { <rule> } source-address-translation { type automap }

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual -H "Content-Type: application/json" -X POST -d '{"name": "vs",
  "destination": "10.10.10.10:10",
  "mask": "255.255.255.255",
  "pool": "CentOS-all80",
  "persist": [ {"name": "cookie"} ],
  "profilesReference": {"items": [ {"context": "all", "name": "http"}, {"context": "all", "name": "tcp"}, {"context": "clientside", "name": "clientssl"}] },
  "rules": [ "ShowVersion" ],
  "sourceAddressTranslation": {"type": "automap"} }'

Sample Output

{ "addressStatus": "yes", "autoLasthop": "default", "cmpEnabled": "yes", "connectionLimit": 0, "destination": "/Common/10.10.10.10:10", "enabled": true, "fullPath": "/Common/test", "generation": 592, "gtmScore": 0, "ipProtocol": "tcp", "kind": "tm:ltm:virtual:virtualstate", "mask": "255.255.255.255", "mirror": "disabled", "mobileAppTunnel": "disabled", "name": "vs", "nat64": "disabled", "partition": "Common", "persist": [ { "name": "cookie", "nameReference": { "link": "https://localhost/mgmt/tm/ltm/persistence/cookie/~Common~cookie?ver=13.1.0" }, "partition": "Common", "tmDefault": "yes" } ], "policiesReference": { "isSubcollection": true, "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~test/policies?ver=13.1.0" }, "pool": "/Common/CentOS-all80", "poolReference": { "link": "https://localhost/mgmt/tm/ltm/pool/~Common~CentOS-all80?ver=13.1.0" }, "profilesReference": { "isSubcollection": true, "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~test/profiles?ver=13.1.0" }, "rateLimit": "disabled", "rateLimitDstMask": 0, "rateLimitMode": "object", "rateLimitSrcMask": 0, "rules": [ "/Common/ShowVersion" ], "rulesReference": [ { "link": "https://localhost/mgmt/tm/ltm/rule/~Common~ShowVersion?ver=13.1.0" } ], "selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~test?ver=13.1.0", "serviceDownImmediateAction": "none", "source": "0.0.0.0/0", "sourceAddressTranslation": { "type": "automap" }, "sourcePort": "preserve", "synCookieStatus": "not-activated", "translateAddress": "enabled", "translatePort": "enabled", "vlansDisabled": true, "vsIndex": 52 }

Delete a virtual <vs>

tmsh delete ltm virtual <vs>

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs> -X DELETE

Sample Output

No output (just 200 OK and no response body).

References

- curl.1, the man page
- curl Releases and Downloads ...
  including the port for Windows
- Jason Rahm's "Demystifying iControl REST" series (DevCentral) -- this is Part 1 of 7 at the time of this article
- iControl REST API reference (DevCentral)
- iControl® REST API User Guide (DevCentral) -- the link is for 12.1; search for the older versions.
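All of the recipes above use curl, but they translate directly to any language with an HTTP client. Below is a minimal python sketch of two of the recipes using the requests library; the host address, credentials, and virtual server name are placeholders, not values from the cookbook environment.

# Minimal sketch of two cookbook recipes via python requests.
# Host, credentials, and the virtual server name are placeholders.
import json
import requests

requests.packages.urllib3.disable_warnings()

b = requests.session()
b.auth = ('admin', 'admin')
b.verify = False
b.headers.update({'Content-Type': 'application/json'})

base = 'https://192.168.1.245/mgmt/tm/ltm/virtual'

# Get only the pool field of the virtual "vs" (tmsh list ltm virtual vs pool)
resp = b.get('%s/~Common~vs?options=pool' % base).json()
print(resp['pool'])

# Change the description (tmsh modify ltm virtual vs description "Hello World!")
resp = b.patch('%s/~Common~vs' % base,
               data=json.dumps({'description': 'Hello World!'})).json()
print(resp['description'])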
Demystifying iControl REST Part 7 - Understanding Transactions

iControl REST. It’s iControl SOAP’s baby brother, introduced back in TMOS version 11.4 as an early access feature but released fully in version 11.5. Several articles on basic usage have been written about the REST interface, so the intent here isn’t basic use, but rather to demystify some of the finer details of using the API. A few months ago, community member spirrello asked a question in Q&A about how to update a tcp profile on a virtual. He was using bigsuds, the python wrapper for the SOAP interface. For the REST interface on this particular object, this is easy: just use the PUT method and supply the payload mapping the updated profile. But for SOAP, this requires a transaction. There are some changes to BIG-IP via the REST interface, however, like updating an ssl cert or key, that likewise will require a transaction to accomplish. In this article, I’ll show you how to use transactions with the REST interface.

The Fine Print

From the iControl REST user guide, the life cycle of a transaction progresses through three phases:

Creation - This phase occurs when the transaction is created using a POST command.
Modification - This phase occurs when commands are added to the transaction, or changes are made to the sequence of commands in the transaction.
Commit - This phase occurs when iControl REST runs the transaction.

To create a transaction, POST to /tm/transaction:

POST https://192.168.25.42/mgmt/tm/transaction
{}

Response:

{ "transId":1389812351, "state":"STARTED", "timeoutSeconds":30, "kind":"tm:transactionstate", "selfLink":"https://localhost/mgmt/tm/transaction/1389812351?ver=11.5.0" }

Note the transId, the state, and the timeoutSeconds. You'll need the transId to add or re-sequence commands within the transaction, and the transaction will expire after 30 seconds if no commands are added. You can list all transactions, or the details of a specific transaction, with a GET request:

GET https://192.168.25.42/mgmt/tm/transaction
GET https://192.168.25.42/mgmt/tm/transaction/transId

To add a command to the transaction, you use the normal method URIs, but include the X-F5-REST-Coordination-Id header. This example creates a pool with a single member:

POST https://192.168.25.42/mgmt/tm/ltm/pool
X-F5-REST-Coordination-Id:1389812351
{ "name":"tcb-xact-pool", "members": [ {"name":"192.168.25.32:80","description":"First pool for transactions"} ] }

Not a great example, because there is no need for a transaction here, but we'll roll with it! There are several other methods for interrogating the transaction itself; see the user guide for details. Now we can commit the transaction. To do that, you reference the transaction id in the URI, remove the X-F5-REST-Coordination-Id header, and use the PATCH method with the payload key/value state: VALIDATING.

PATCH https://192.168.25.42/mgmt/tm/transaction/1389812351
{ "state":"VALIDATING" }

That's all there is to it! Now that you've seen the nitty gritty details, let's take a look at some code samples.

Roll Your Own

In this example, I need to update an ssl key and certificate. If you try to update just the cert or just the key, BIG-IP will complain that they do not match, so you need to update both at the same time. Assuming you are writing all your code from scratch, this is all it takes in python. Note that on line 21 I post with an empty payload, and then on line 23, I add the header with the transaction id. I make my modifications and then on line 31, I remove the header, and finally on line 32, I patch to the transaction id with the appropriate payload.
import json
import requests

btx = requests.session()
btx.auth = (f5_user, f5_password)
btx.verify = False
btx.headers.update({'Content-Type':'application/json'})

urlb = 'https://{0}/mgmt/tm'.format(f5_host)

domain = 'mydomain.local_sslobj'
chain = 'mychain_sslobj'

try:
    key = btx.get('{0}/sys/file/ssl-key/~Common~{1}'.format(urlb, domain))
    cert = btx.get('{0}/sys/file/ssl-cert/~Common~{1}'.format(urlb, domain))
    chain = btx.get('{0}/sys/file/ssl-cert/~Common~{1}'.format(urlb, 'chain'))

    if (key.status_code == 200) and (cert.status_code == 200) and (chain.status_code == 200):
        # use a transaction
        txid = btx.post('{0}/transaction'.format(urlb), json.dumps({})).json()['transId']
        # set the X-F5-REST-Coordination-Id header with the transaction id
        btx.headers.update({'X-F5-REST-Coordination-Id': txid})

        # make modifications (keyparams/certparams/chainparams payloads are defined elsewhere)
        modkey = btx.put('{0}/sys/file/ssl-key/~Common~{1}'.format(urlb, domain), json.dumps(keyparams))
        modcert = btx.put('{0}/sys/file/ssl-cert/~Common~{1}'.format(urlb, domain), json.dumps(certparams))
        modchain = btx.put('{0}/sys/file/ssl-cert/~Common~{1}'.format(urlb, 'le-chain'), json.dumps(chainparams))

        # remove header and patch to commit the transaction
        del btx.headers['X-F5-REST-Coordination-Id']
        cresult = btx.patch('{0}/transaction/{1}'.format(urlb, txid), json.dumps({'state':'VALIDATING'})).json()
except Exception as e:
    print(e)

A Little Help from a Friend

The f5-common-python library was released a few months ago to relieve you of a lot of the busy work in building requests. This is great, especially for transactions. To simplify the above code down to just the transaction steps, consider:

# use a transaction
txid = btx.post('{0}/transaction'.format(urlb), json.dumps({})).json()['transId']
# set the X-F5-REST-Coordination-Id header with the transaction id
btx.headers.update({'X-F5-REST-Coordination-Id': txid})

# do stuff here

# remove header and patch to commit the transaction
del btx.headers['X-F5-REST-Coordination-Id']
cresult = btx.patch('{0}/transaction/{1}'.format(urlb, txid), json.dumps({'state':'VALIDATING'})).json()

With the library, it's simplified to:

tx = b.tm.transactions.transaction
with TransactionContextManager(tx) as api:
    # do stuff here
    api.do_stuff

Yep, it's that simple. So if you haven't checked out the f5-common-python library, I highly suggest you do! I'll be writing about how to get started using it next week, and perhaps a follow up on how to contribute to it as well, so stay tuned!
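For quick reference, here is the raw-requests transaction lifecycle condensed into one self-contained, runnable sketch. The host, credentials, and the pool payload are placeholders borrowed from the examples earlier in this article.

# Self-contained sketch of the transaction lifecycle shown above.
# Host, credentials, and the pool payload are placeholders.
import json
import requests

requests.packages.urllib3.disable_warnings()

b = requests.session()
b.auth = ('admin', 'admin')
b.verify = False
b.headers.update({'Content-Type': 'application/json'})
urlb = 'https://192.168.25.42/mgmt'

# Phase 1: create the transaction
txid = b.post('%s/tm/transaction' % urlb, json.dumps({})).json()['transId']

# Phase 2: queue commands by adding the X-F5-REST-Coordination-Id header
# (header values must be strings, so cast the transId)
b.headers.update({'X-F5-REST-Coordination-Id': str(txid)})
payload = {'name': 'tcb-xact-pool',
           'members': [{'name': '192.168.25.32:80',
                        'description': 'First pool for transactions'}]}
b.post('%s/tm/ltm/pool' % urlb, json.dumps(payload))

# Phase 3: remove the header and commit
del b.headers['X-F5-REST-Coordination-Id']
result = b.patch('%s/tm/transaction/%s' % (urlb, txid),
                 json.dumps({'state': 'VALIDATING'})).json()
print(result['state'])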
Working with subsets of data-group records via iControl REST

The BIG-IP iControl REST interface method for data-groups does not define the records as a subcollection (like pool members). This is problematic for many because the records are just a list attribute in the data-group object. This means that if you want to add, modify, or delete a single record in a data-group, you have to replace the entire list. A short while back I was on a call with iRule extraordinaire John Alam, and he was showing me a management tool he was working on where he could change individual records in a data-group via REST. I was intrigued, so we dug into the details, and I was floored at how simple the solution to this problem is!

UPDATE: Chris L mentioned in the comments below that working with subsets IS possible without the tmsh script, as he learned in this thread in the Q&A section. The normal endpoint (/mgmt/tm/ltm/data-group/internal/yourDGname) works just fine, but instead of trying to change a subset of the records attribute, which only results in the replace-all-with behavior, you can use the options query parameter and then pass the normal tmsh records command data as arguments. An example of this request would be to PATCH with an empty json payload ({}) to the url https://{{host}}/mgmt/tm/ltm/data-group/internal/mydg?options=records%20modify%20%7B%20k3%20%7Bdata%20v3%20%7D%20%7D (without the encoding, that query value format is "records modify { k3 { data v3 } }"). As this article is still a good learning exercise on how to use tmsh scripts with iControl REST, I'll keep the article as is, but an updated script for the specific problem we're solving can be found in this gist on Github.

Enter the tmsh script!

Even though the iControl REST interface doesn't treat data-group records as individual objects, the tmsh cli does. So if you can create a tmsh script to manage the local manipulation of the records, pass your record data into that script, and execute it from REST, well, that's where the gold is, people. Let's start with the tmsh script, written by John Alam but modified very slightly by me.

cli script dgmgmt {
    proc script::init {} {
    }
    proc script::run {} {
        set record_data [lindex $tmsh::argv 3]
        switch [lindex $tmsh::argv 1] {
            "add-record" {
                tmsh::modify ltm data-group internal [lindex $tmsh::argv 2] type string records add $record_data
                puts "Record [lindex $tmsh::argv 3] added."
            }
            "modify-record" {
                tmsh::modify ltm data-group internal [lindex $tmsh::argv 2] type string records modify $record_data
                puts "Record changed [lindex $tmsh::argv 3]."
            }
            "delete-record" {
                tmsh::modify ltm data-group internal [lindex $tmsh::argv 2] type string records delete $record_data
                puts "Record [lindex $tmsh::argv 3] deleted."
            }
            "save" {
                tmsh::save sys config
                puts "Config saved."
            }
        }
    }
    proc script::help {} {
    }
    proc script::tabc {} {
    }
    total-signing-status not-all-signed
}

This script is installed on the BIG-IP and is a regular object in the BIG-IP configuration, stored in the bigip_script.conf file. Four arguments are passed. The first (arg 0) is always the script name. The other args we pass to the script are:

arg 1 - action. Are we adding, modifying, or deleting records?
arg 2 - data-group name
arg 3 - data-group records to be changed

The commands are pretty straightforward. Notice, however, that the record data at the tail end of each of those commands is just the data passed to the script, so the required tmsh format is left to the remote side of this transaction.
Since I’m writing that side of the solution, that’s ok, but if I were to put my best practices hat on, the record formatting work should really be done in the tmsh script, so that all I have to do on the remote side is pass the key/value data.

Executing the script!

Now that we have a shiny new tmsh script for the BIG-IP, we have two issues:

1. We need to install that script on the BIG-IP in order to use it
2. We need to be able to run that script remotely, and pass data to it

This is where you grab your programming language of choice and go at it! For me, that would be python. And I’ll be using the BIGREST SDK to interact with BIG-IP. Let’s start with the program flow:

if __name__ == "__main__":
    args = build_parser()
    b = instantiate_bigip(args.host, args.user)
    if not b.exist("/mgmt/tm/cli/script/dgmgmt"):
        print(
            "\n\tThe data-group management tmsh script is not yet on your system, installing..."
        )
        deploy_tmsh_script(b)
        sleep(2)
    if not b.exist(f"/mgmt/tm/ltm/data-group/internal/{args.datagroup}"):
        print(
            f"\n\tThe {args.datagroup} data-group doesn't exist. Please specify an existing data-group.\n"
        )
        sys.exit()
    cli_arguments = format_records(args.action, args.datagroup, args.dgvalues)
    dg_update(b, cli_arguments)
    dg_listing(b, args.datagroup)

This is a cli script, so we need to create a parser to handle the arguments. After collecting the data, we instantiate BIG-IP. Next, we check for the existence of the tmsh script on BIG-IP and install it if it is not present. We then format the record data and proceed to supply that output as arguments when we make the REST call to run the tmsh script. Finally, we print the results. This last step is probably not something you'd want to do for large data sets, but it's included here for validation. Now, let's look at each step of the flow.

The imports and tmsh script

# Imports used in this script
from bigrest.bigip import BIGIP
from time import sleep
import argparse
import getpass
import sys

# The tmsh script
DGMGMT_SCRIPT = 'proc script::init {} {\n}\n\nproc script::run {} {\nset record_data [lindex $tmsh::argv 3]\n\n' \
    'switch [lindex $tmsh::argv 1] {\n "add-record" {\n tmsh::modify ltm data-group internal ' \
    '[lindex $tmsh::argv 2] type string records add $record_data\n ' \
    'puts "Record [lindex $tmsh::argv 3] added."\n }\n "modify-record" {\n ' \
    'tmsh::modify ltm data-group internal [lindex $tmsh::argv 2] type string records modify' \
    ' $record_data\n puts "Record changed [lindex $tmsh::argv 3]."\n }\n "delete-record" {\n' \
    ' tmsh::modify ltm data-group internal [lindex $tmsh::argv 2] type string records delete' \
    ' $record_data\n puts "Record [lindex $tmsh::argv 3] deleted."\n }\n "save" {\n ' \
    ' tmsh::save sys config\n puts "Config saved."\n }\n}\n}\n' \
    'proc script::help {} {\n}\n\nproc script::tabc {} {\n}\n'

These are defined at the top of the script and are necessary for the functions defined in the sections below. You could move the tmsh script into a file and load it, but it's small enough that it doesn't clutter the code, and keeping it inline means not having to manage multiple files.
The parser

def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("host", help="BIG-IP IP/FQDN")
    parser.add_argument("user", help="BIG-IP Username")
    parser.add_argument(
        "action", help="add | modify | delete", choices=["add", "modify", "delete"]
    )
    parser.add_argument("datagroup", help="Data-Group name you wish to change")
    parser.add_argument(
        "dgvalues", help='Key or KV Pairs, in this format: "k1,k2,k3=v3,k4=v4,k5"'
    )
    return parser.parse_args()

This is probably the least interesting part, but I'm including it here to be thorough. The one thing to note is the cli format for supplying the key/value pairs for the data-group records. I could have also added an alternate option to load a file instead, but I'll leave that as an exercise for future development. If you supply no arguments, or the optional -h/--help, you'll get the help message:

% python dgmgmt.py -h
usage: dgmgmt.py [-h] host user {add,modify,delete} datagroup dgvalues

positional arguments:
  host                 BIG-IP IP/FQDN
  user                 BIG-IP Username
  {add,modify,delete}  add | modify | delete
  datagroup            Data-Group name you wish to change
  dgvalues             Key or KV Pairs, in this format: "k1,k2,k3=v3,k4=v4,k5"

optional arguments:
  -h, --help           show this help message and exit

Instantiation

def instantiate_bigip(host, user):
    pw = getpass.getpass(prompt=f"\n\tWell hello {user}, please enter your password: ")
    try:
        obj = BIGIP(host, user, pw)
    except Exception as e:
        print(f"Failed to connect to {args.host} due to {type(e).__name__}:\n")
        print(f"{e}")
        sys.exit()
    return obj

I don't like typing out my passwords on the cli, so I use getpass here to ask for it after I kick off the script. You'll likely want to add an argument for the password if you automate this script with any of your tooling. This function makes a request to BIG-IP and builds a local python object to be used for future requests.

Uploading the tmsh script

def deploy_tmsh_script(bigip):
    try:
        cli_script = {"name": "dgmgmt", "apiAnonymous": DGMGMT_SCRIPT}
        bigip.create("/mgmt/tm/cli/script", cli_script)
    except Exception as e:
        print(f"Failed to create the tmsh script due to {type(e).__name__}:\n")
        print(f"{e}")
        sys.exit()

Because tmsh scripts are BIG-IP objects, we don't have to interact with the file system. It's just a simple object creation, like creating a pool. I have taken the liberty of hardcoding the script name to limit the number of arguments required on the cli, but you can change that if you so desire by either changing the name in the script or adding arguments.

Formatting the records

def format_records(action, name, records):
    recs = ""
    for record in records.split(","):
        x = record.split("=")
        record_key = x[0]
        if len(x) == 1 and action != 'modify':
            recs += f"{record_key} "
        elif len(x) == 1 and action == 'modify':
            recs += f'{record_key} {{ data \\\"\\\" }} '
        elif len(x) == 2:
            record_value = x[1]
            recs += f'{record_key} {{ data \\\"{record_value}\\\" }} '
        else:
            raise ValueError("Max record items is 2: key or key/value pair.")
    return f"{action}-record {name} '{{ {recs} }}'"

This is the function I spent the most time ironing out. As I pointed out earlier, it would be better handled in the tmsh script, but since that work was already completed by John, I focused on the python side of things. A few things that I fleshed out in testing that I didn't consider while making it work the first time:

Escaping all the special characters that make iControl REST unhappy.
Handling whitespace in the data value. This requires quotes around the data value.
Modifying a key by removing an existing value. This requires you to provide an empty data reference.

Executing the script

def dg_update(bigip, cli_args):
    try:
        dg_mods = {"command": "run", "name": "/Common/dgmgmt", "utilCmdArgs": cli_args}
        bigip.command("/mgmt/tm/cli/script", dg_mods)
    except Exception as e:
        print(f"Failed to modify the data-group due to {type(e).__name__}:\n")
        print(f"{e}")
        sys.exit()

With all the formatting out of the way, the update itself is anticlimactic. iControl REST requires a json payload for the command, which runs the cli script. The cli arguments for that script are passed in the utilCmdArgs attribute.

Validating the results

def dg_listing(bigip, dgname):
    dg = b.load(f'/mgmt/tm/ltm/data-group/internal/{dgname}')
    print(f'\n\t{args.datagroup}\'s updated record set: ')
    for i in dg.properties['records']:
        print(f'\t\tkey: {i["name"]}, value: {i["data"]}')
    print('\n\n')

And finally, we bask in the validity of our updates! Like the update function, this one doesn't have much to do. It grabs the data-group contents from BIG-IP and prints out each of the key/value pairs. As I indicated earlier, this may not be desirable on large data sets. You could modify the function by passing the keys you changed, comparing them to the full results returned from BIG-IP, and printing only the updates, but I'll leave that as another exercise for future development.

This is a cool workaround to the non-subcollection problem with data-groups that I wish I'd thought of years ago! The full script is in the codeshare. I hope you got something out of this article; drop a comment below and let me know!
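And if you are on a version that supports the options query parameter approach described in the update at the top of this article, no tmsh script is needed at all. Below is a minimal sketch of that request in python; the host, credentials, data-group name, and the k3/v3 record are placeholders, and the requests library handles the URL-encoding of the braces for you.

# Sketch of the options query parameter approach from the update above.
# Host, credentials, data-group name, and the k3/v3 record are placeholders.
import json
import requests

requests.packages.urllib3.disable_warnings()

b = requests.session()
b.auth = ('admin', 'admin')
b.verify = False
b.headers.update({'Content-Type': 'application/json'})

uri = 'https://192.168.1.245/mgmt/tm/ltm/data-group/internal/mydg'
# Unencoded, the query value is: records modify { k3 { data v3 } }
params = {'options': 'records modify { k3 { data v3 } }'}

# Empty json payload; the work is done by the options argument
resp = b.patch(uri, params=params, data=json.dumps({}))
print(resp.status_code)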
Demystifying iControl REST Part 6: Token-Based Authentication

iControl REST. It’s iControl SOAP’s baby brother, introduced back in TMOS version 11.4 as an early access feature but released fully in version 11.5. Several articles on basic usage have been written on iControl REST, so the intent here isn’t basic use, but rather to demystify some of the finer details of using the API. This article will cover the details of how to retrieve and use an authentication token from the BIG-IP using iControl REST and the python programming language. This token is used in place of basic authentication on API calls, which is a requirement for external authentication. Note that for configuration changes, version 12.0 or higher is required, as earlier versions will trigger an unauthorized error. The tacacs config in this article depends on a version that I am no longer able to get installed on a modern linux flavor. Instead, try this Dockerized tacacs+ server for your testing.

The Fine Print

The details of the token provider are here in the wiki. We’ll focus on a provider not listed there: tmos. This provider instructs the API interface to use the provider that is configured in tmos. For this article, I’ve configured a tacacs server and the BIG-IP with custom remote roles as shown below to show BIG-IP version 12’s iControl REST support for remote authentication and authorization. Details of how this configuration works can be found in the tacacs+ article I wrote a while back.

BIG-IP tacacs+ configuration

auth remote-role {
    role-info {
        adm {
            attribute F5-LTM-User-Info-1=adm
            console %F5-LTM-User-Console
            line-order 1
            role %F5-LTM-User-Role
            user-partition %F5-LTM-User-Partition
        }
        mgr {
            attribute F5-LTM-User-Info-1=mgr
            console %F5-LTM-User-Console
            line-order 2
            role %F5-LTM-User-Role
            user-partition %F5-LTM-User-Partition
        }
    }
}
auth remote-user { }
auth source {
    type tacacs
}
auth tacacs system-auth {
    debug enabled
    protocol ip
    secret $M$Zq$T2SNeIqxi29CAfShLLqw8Q==
    servers { 172.16.44.20 }
    service ppp
}

Tacacs+ Server configuration

id = tac_plus {
    debug = PACKET AUTHEN AUTHOR
    access log = /var/log/access.log
    accounting log = /var/log/acct.log
    host = world {
        address = ::/0
        prompt = "\nAuthorized Access Only!\nTACACS+ Login\n"
        key = devcentral
    }
    group = adm {
        service = ppp {
            protocol = ip {
                set F5-LTM-User-Info-1 = adm
                set F5-LTM-User-Console = 1
                set F5-LTM-User-Role = 0
                set F5-LTM-User-Partition = all
            }
        }
    }
    group = mgr {
        service = ppp {
            protocol = ip {
                set F5-LTM-User-Info-1 = mgr
                set F5-LTM-User-Console = 1
                set F5-LTM-User-Role = 100
                set F5-LTM-User-Partition = all
            }
        }
    }
    user = user_admin {
        password = clear letmein00
        member = adm
    }
    user = user_mgr {
        password = clear letmein00
        member = mgr
    }
}

Basic Requirements

Before we look at code, however, let’s take a look at the json payload requirements, followed by response data from a query using Chrome’s Advanced REST Client plugin. First, since we are sending a json payload, we need to add the Content-Type: application/json header to the query. The payload we are sending with the POST looks like this:

{
  "username": "remote_auth_user",
  "password": "remote_auth_password",
  "loginProviderName": "tmos"
}

You submit the same remote authentication credentials in the initial basic authentication as well; there is no need to have access to the default admin account credentials.
A successful query for a token returns data like this:

{ username: "user_admin", loginReference: { link: "https://localhost/mgmt/cm/system/authn/providers/tmos/1f44a60e-11a7-3c51-a49f-82983026b41b/login" }, token: { uuid: "4d1bd79f-dca7-406b-8627-3ad262628f31", name: "5C0F982A0BF37CBE5DE2CB8313102A494A4759E5704371B77D7E35ADBE4AAC33184EB3C5117D94FAFA054B7DB7F02539F6550F8D4FA25C4BFF1145287E93F70D", token: "5C0F982A0BF37CBE5DE2CB8313102A494A4759E5704371B77D7E35ADBE4AAC33184EB3C5117D94FAFA054B7DB7F02539F6550F8D4FA25C4BFF1145287E93F70D", userName: "user_admin", user: { link: "https://localhost/mgmt/cm/system/authn/providers/tmos/1f44a60e-11a7-3c51-a49f-82983026b41b/users/34ba3932-bfa3-4738-9d55-c81a1c783619" }, groupReferences: [ { link: "https://localhost/mgmt/cm/system/authn/providers/tmos/1f44a60e-11a7-3c51-a49f-82983026b41b/user-groups/21232f29-7a57-35a7-8389-4a0e4a801fc3" } ], timeout: 1200, startTime: "2015-11-17T19:38:50.415-0800", address: "172.16.44.1", partition: "[All]", generation: 1, lastUpdateMicros: 1447817930414518, expirationMicros: 1447819130415000, kind: "shared:authz:tokens:authtokenitemstate", selfLink: "https://localhost/mgmt/shared/authz/tokens/4d1bd79f-dca7-406b-8627-3ad262628f31" }, generation: 0, lastUpdateMicros: 0 }

Among many other fields, you can see the token field with a very long hexadecimal token. That’s what we need to authenticate future API calls.

Requesting the token programmatically

In order to request the token, you first have to supply basic auth credentials like normal. This is currently required to access the /mgmt/shared/authn/login API location. The basic workflow is as follows (with line numbers from the code below in parentheses):

1. Make a POST request to BIG-IP with a basic authentication header and a json payload containing the username, password, and the login provider (9-16, 41-47)
2. Remove the basic authentication (49)
3. Add the token from the POST response to the X-F5-Auth-Token header (50)
4. Continue further requests like normal; in this example, we’ll create a pool to verify read/write privileges (1-6, 52-53)
And here’s the code (in python) to make that happen:

def create_pool(bigip, url, pool):
    payload = {}
    payload['name'] = pool

    pool_config = bigip.post(url, json.dumps(payload)).json()
    return pool_config


def get_token(bigip, url, creds):
    payload = {}
    payload['username'] = creds[0]
    payload['password'] = creds[1]
    payload['loginProviderName'] = 'tmos'

    token = bigip.post(url, json.dumps(payload)).json()['token']['token']
    return token


if __name__ == "__main__":
    import os, requests, json, argparse, getpass

    requests.packages.urllib3.disable_warnings()

    parser = argparse.ArgumentParser(description='Remote Authentication Test - Create Pool')
    parser.add_argument("host", help='BIG-IP IP or Hostname')
    parser.add_argument("username", help='BIG-IP Username')
    parser.add_argument("poolname", help='Name of the pool to create')
    args = vars(parser.parse_args())

    hostname = args['host']
    username = args['username']
    poolname = args['poolname']

    print "%s, enter your password: " % args['username'],
    password = getpass.getpass()

    url_base = 'https://%s/mgmt' % hostname
    url_auth = '%s/shared/authn/login' % url_base
    url_pool = '%s/tm/ltm/pool' % url_base

    b = requests.session()
    b.headers.update({'Content-Type':'application/json'})
    b.auth = (username, password)
    b.verify = False

    token = get_token(b, url_auth, (username, password))
    print '\nToken: %s\n' % token

    b.auth = None
    b.headers.update({'X-F5-Auth-Token': token})

    response = create_pool(b, url_pool, poolname)
    print '\nNew Pool: %s\n' % response

Running this script from the command line, we get the following response:

FLD-ML-RAHM:scripts rahm$ python remoteauth.py 172.16.44.15 user_admin myNewestPool1
Password:
user_admin, enter your password:
Token: 2C61FE257C7A8B6E49C74864240E8C3D3592FDE9DA3007618CE11915F1183BF9FBAF00D09F61DE15FCE9CAB2DC2ACC165CBA3721362014807A9BF4DEA90BB09F

New Pool: {u'generation': 453, u'minActiveMembers': 0, u'ipTosToServer': u'pass-through', u'loadBalancingMode': u'round-robin', u'allowNat': u'yes', u'queueDepthLimit': 0, u'membersReference': {u'isSubcollection': True, u'link': u'https://localhost/mgmt/tm/ltm/pool/~Common~myNewestPool1/members?ver=12.0.0'}, u'minUpMembers': 0, u'slowRampTime': 10, u'minUpMembersAction': u'failover', u'minUpMembersChecking': u'disabled', u'queueTimeLimit': 0, u'linkQosToServer': u'pass-through', u'queueOnConnectionLimit': u'disabled', u'fullPath': u'myNewestPool1', u'kind': u'tm:ltm:pool:poolstate', u'name': u'myNewestPool1', u'allowSnat': u'yes', u'ipTosToClient': u'pass-through', u'reselectTries': 0, u'selfLink': u'https://localhost/mgmt/tm/ltm/pool/myNewestPool1?ver=12.0.0', u'serviceDownAction': u'none', u'ignorePersistedWeight': u'disabled', u'linkQosToClient': u'pass-through'}

You can test this out in the Chrome Advanced REST Client plugin, or from the command line with curl or any other language supporting REST clients as well; I just use python for the examples, well, because I like it. I hope you all are digging into iControl REST! What questions do you have? What else would you like clarity on? Drop a comment below.
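One housekeeping detail worth adding: as the response above shows, a token defaults to a 1200-second timeout. A sketch for extending and then cleaning up a token is below; it assumes the token variable returned by get_token() in the script above, the host address is a placeholder, and the 36000-second maximum is my understanding of the cap rather than something stated in this article.

# Sketch: extending and cleaning up an auth token after get_token() above.
# The host is a placeholder; 36000 seconds is assumed to be the maximum.
import json
import requests

requests.packages.urllib3.disable_warnings()

b = requests.session()
b.verify = False
b.headers.update({'Content-Type': 'application/json'})
b.headers.update({'X-F5-Auth-Token': token})   # token from get_token()

token_uri = 'https://172.16.44.15/mgmt/shared/authz/tokens/%s' % token

# Extend the default 1200-second timeout
b.patch(token_uri, json.dumps({'timeout': 36000}))

# Delete the token when finished, rather than leaving it to expire
b.delete(token_uri)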
Demystifying iControl REST Part 1 - Understanding the request URI

iControl REST. It’s iControl SOAP’s baby brother, introduced back in TMOS version 11.4 as an early access feature but released fully in version 11.5. Several articles on basic usage have been written on iControl REST (see the resources at the bottom of this article), so the intent here isn’t basic use, but rather to demystify some of the finer details of using the API. This article is the first of a four part series and will cover how the URI path plays a role in how the API functions.

Examining the REST interface URI

With iControl SOAP, all the interfaces and methods are defined in the WSDLs. With REST, the bulk of the interface and method is part of the URI. Take for example a request to get the list of iRules on a BIG-IP:

https://x.x.x.x/mgmt/tm/ltm/rule

With a simple GET request with the appropriate headers and credentials (see below for curl and python examples), this breaks down to the Base URI, the Module URI, and the Sub-Module URI.

#python example
>>> import requests
>>> import json
>>> b = requests.session()
>>> b.auth = ('admin', 'admin')
>>> b.verify = False
>>> b.headers.update({'Content-Type':'application/json'})
>>> b.get('https://172.16.44.128/mgmt/tm/ltm/rule')

#curl example
curl -k -u admin:admin -H "Content-Type: application/json" -X GET https://172.16.44.128/mgmt/tm/ltm/rule

Beyond the Sub-Module URI is the component URI:

https://x.x.x.x/mgmt/tm/ltm/rule/testlog

The component URI is the spot that snags most beginners. When creating a new configuration object on an F5, it is fairly obvious which URI to use for the REST call. If you were creating a new virtual server, it would be /mgmt/tm/ltm/virtual, while a new pool would be /mgmt/tm/ltm/pool. Thus a REST call to create a new pool on an LTM with the IP address 172.16.44.128 would look like the following:
#python example
>>> import requests
>>> import json
>>> b = requests.session()
>>> b.auth = ('admin', 'admin')
>>> b.verify = False
>>> b.headers.update({'Content-Type':'application/json'})
>>> pm = ['192.168.25.32:80', '192.168.25.33:80']
>>> payload = { }
>>> payload['kind'] = 'tm:ltm:pool:poolstate'
>>> payload['name'] = 'tcb-pool'
>>> payload['members'] = [ {'kind': 'ltm:pool:members', 'name': member } for member in pm]
>>> b.post('https://172.16.44.128/mgmt/tm/ltm/pool', data=json.dumps(payload))

#curl example
curl -k -u admin:admin -H "Content-Type: application/json" -X POST -d \
'{"name":"tcb-pool","members":[
{"name":"192.168.25.32:80","description":"first member"},
{"name":"192.168.25.33:80","description":"second member"} ] }' \
https://172.16.44.128/mgmt/tm/ltm/pool

The call would create the pool "tcb-pool" with members at 192.168.25.32:80 and 192.168.25.33:80. All other aspects of the pool would be the defaults. Thus this pool would be created in the Common partition, have the Round Robin load balancing method, and no monitor. At first glance, some programmers would then modify the pool "tcb-pool" with a command like the following (same payload in python; the .text attribute is added to see the error response on the requests object):

#python example
>>> b.put('https://172.16.44.128/mgmt/tm/ltm/pool', data=json.dumps(payload)).text

#return data
u'{"code":403,"message":"Operation is not supported on component /ltm/pool.","errorStack":[]}'

#curl example
curl -k -u admin:admin -H "Content-Type: application/json" -X PUT -d \
'{"name":"tcb-pool","members":[
{"name":"192.168.25.32:80","description":"first member"},
{"name":"192.168.25.33:80","description":"second member"} ] }' \
https://172.16.44.128/mgmt/tm/ltm/pool

#return data
{"code":403,"message":"Operation is not supported on component /ltm/pool.","errorStack":[]}

You can see that because these use the sub-module URI used to create the pool, they return a 403 error. The command fails because one is trying to modify a specific pool at the generic pool level. Now that the pool exists, one must use the URI that specifies the pool. Thus, the correct command would be:

#python example
>>> b.put('https://172.16.44.128/mgmt/tm/ltm/pool/~Common~tcb-pool', data=json.dumps(payload))

#curl example
curl -k -u admin:admin -H "Content-Type: application/json" -X PUT -d \
'{"members":[
{"name":"192.168.25.32:80","description":"first member"},
{"name":"192.168.25.33:80","description":"second member"} ] }' \
https://172.16.44.128/mgmt/tm/ltm/pool/~Common~tcb-pool

We add the ~Common~ in front of the pool name because it is in the Common partition. However, this would also work with https://172.16.44.128/mgmt/tm/ltm/pool/tcb-pool. It is just good practice to explicitly insert the partition name, since not all configuration objects will be in the default Common partition. Because we specify the pool in the URI, it is no longer necessary to have the "name" key/value pair. In practice, programmers usually correctly modify items such as virtual servers and pools. However, we encounter this confusion much more often in configuration items that are ifiles. This may be because the creation of configuration items that are ifiles is a 3-step process. For instance, in order to create an external data group, one would first scp the file to 172.16.44.128:/config/filestore/data_mda_1, then issue two REST commands:

curl -sk -u admin:admin -H "Content-Type: application/json" -X POST -d '{"name":"data_mda_1","type":"string","source-path":"file:///config/filestore/data_mda_1"}' https://172.16.44.128/mgmt/tm/sys/file/data-group

curl -sk -u admin:admin -H "Content-Type: application/json" -X POST -d '{"name":"dg_mda","external-file-name":"/Common/data_mda_1"}' https://172.16.44.128/mgmt/tm/ltm/data-group/external/

To update the external data group, many programmers first try something like the following:

curl -sk -u admin:admin -H "Content-Type: application/json" -X POST -d '{"name":"data_mda_2","type":"string","source-path":"file:///config/filestore/data_mda_2"}' https://172.16.44.128/mgmt/tm/sys/file/data-group

curl -sk -u admin:admin -H "Content-Type: application/json" -X PUT -d '{"name":"dg_mda","external-file-name":"/Common/data_mda_2"}' https://172.16.44.128/mgmt/tm/ltm/data-group/external/

The first command works because we are creating a new ifile object.
However, the second command fails because we are trying to modify a specific external data group at the generic external data group level. The proper command is:

curl -sk -u admin:admin -H "Content-Type: application/json" -X PUT -d '{"external-file-name":"/Common/data_mda_2"}' https://172.16.44.128/mgmt/tm/ltm/data-group/external/dg_mda

The python code gets a little more complex with the data-group examples, so I've uploaded it to the codeshare here. Much thanks to Pat Chang for the bulk of the content in this article. Stay tuned for part 2, where we'll cover sub-collections and how to use them.
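One small convenience worth carrying around: component URIs encode the partition with tildes, so a tiny helper function saves some typing. This is a sketch of my own, not code from the codeshare, and the pool path below is a placeholder.

# Sketch: convert a "/Partition/name" path into the "~Partition~name" form
# used in component URIs. The pool path below is a placeholder.
def to_rest_path(full_path):
    # '/Common/tcb-pool' -> '~Common~tcb-pool'
    return full_path.replace('/', '~')

# Example: build the component URI for a pool in a non-default partition
pool_uri = 'https://172.16.44.128/mgmt/tm/ltm/pool/%s' % to_rest_path('/Dev/tcb-pool')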
Demystifying iControl REST Part 5: Transferring Files

iControl REST. It’s iControl SOAP’s baby brother, introduced back in TMOS version 11.4 as an early access feature but released fully in version 11.5. Several articles on basic usage have been written on iControl REST, so the intent here isn’t basic use, but rather to demystify some of the finer details of using the API. This article will cover the details on how to transfer files to/from the BIG-IP using iControl REST and the python programming language. (Note: this functionality requires 12.0+.)

The REST File Transfer Worker

The file transfer worker allows a client to transfer files through a series of GET operations for downloads and POST operations for uploads. The Content-Range header is used for both as a means to chunk the content. The worker listens on the following interfaces:

Description | Method | URI | File Location
Download a File | GET | /mgmt/cm/autodeploy/software-image-downloads/ | /shared/images/
Upload an Image File | POST | /mgmt/cm/autodeploy/software-image-uploads/ | /shared/images/
Upload a File | POST | /mgmt/shared/file-transfer/uploads/ | /var/config/rest/downloads/
Download a QKView | GET | /mgmt/shared/file-transfer/qkview-downloads/ | /var/tmp/
Download a UCS | GET | /mgmt/shared/file-transfer/ucs-downloads/ | /var/local/ucs/
Upload ASM Policy | POST | /mgmt/tm/asm/file-transfer/uploads/ | /var/ts/var/rest/
Download ASM Policy | GET | /mgmt/tm/asm/file-transfer/downloads/ | /var/ts/var/rest/

Binary and text files are supported. The magic in the transfer is the Content-Range header, which has the following format:

Content-Range: start-end/filesize

where start and end are the chunk's delimiters in the file and filesize is, well, the file size. Any file larger than 1M needs to be chunked with this header, as that limit is enforced by the worker. This is done to avoid potential denial-of-service attacks and out-of-memory errors. There are benefits to chunking as well:

Accurate progress bars
Resuming interrupted downloads
Random access to file content possible

Uploading a File

The function is shown below. Note that whereas the Content-Type is normally application/json with the REST API, with file transfers it changes to application/octet-stream. The workflow for the function works like this (line numbers in parentheses):

1. Set the chunk size (3)
2. Set the Content-Type header (4-6)
3. Open the file (7)
4. Get the filename (apart from the path) from the absolute path (8)
5. If the extension is .iso (an image), put it in /shared/images; otherwise it goes in /var/config/rest/downloads (9-12)
6. Disable ssl warnings in requests (required with my version: 2.8.1. YMMV) (14)
7. Set the total file size for use with the Content-Range header (15)
8. Set the start variable to 0 (17)
9. Begin the loop to iterate through the file and upload it in chunks (19)
10. Read data from the file; if there is no more data, break the loop (20-22)
11. Set the current bytes read; if less than the chunk size, this is the last chunk, so set the end to the size from step 7. Otherwise, add the current bytes length to the start value and set that as the end (24-28)
12. Set the Content-Range header value and then add it to the headers (30-31)
13. Make the POST request, uploading the content chunk (32-36)
14. Increment the start value by the current bytes content length (38)

def _upload(host, creds, fp):

    chunk_size = 512 * 1024
    headers = {
        'Content-Type': 'application/octet-stream'
    }
    fileobj = open(fp, 'rb')
    filename = os.path.basename(fp)
    if os.path.splitext(filename)[-1] == '.iso':
        uri = 'https://%s/mgmt/cm/autodeploy/software-image-uploads/%s' % (host, filename)
    else:
        uri = 'https://%s/mgmt/shared/file-transfer/uploads/%s' % (host, filename)

    requests.packages.urllib3.disable_warnings()
    size = os.path.getsize(fp)

    start = 0

    while True:
        file_slice = fileobj.read(chunk_size)
        if not file_slice:
            break

        current_bytes = len(file_slice)
        if current_bytes < chunk_size:
            end = size
        else:
            end = start + current_bytes

        content_range = "%s-%s/%s" % (start, end - 1, size)
        headers['Content-Range'] = content_range
        requests.post(uri,
                      auth=creds,
                      data=file_slice,
                      headers=headers,
                      verify=False)

        start += current_bytes

Downloading a File

Downloading is very similar, but there are some differences. Here is the part of the workflow that differs, followed by the code:

1. The URI is set to the downloads worker. The only supported download directory at this time is /shared/images (8)
2. Open the local file so received data can be written to it (11)
3. Make the request (22-26)
4. If the response code is 200 and the size is greater than 0, increment the current bytes and write the data to file; otherwise exit the loop (28-40)
5. Set the value of the returned Content-Range header to crange, and if this is the initial pass (size is 0), set the file size to the size variable (42-46)
6. If the file is smaller than the chunk size, adjust the chunk size down to the total file size and continue (51-55)
7. Do the math to get ready to download the next chunk (57-62)
This whole effort was sparked by a use case in Q&A, so I had to deliver the goods with more than just moving files around. The complete script is linked at the bottom, but there are a few steps required to get to a clientssl certificate: Upload the key & certificate Create the file object for key/cert Create the clientssl profile You know how to do step 1 now. Step 2 is to create the file object for the key and certificate. After a quick test to see which file is the certificate, you set both files, build the payload, then make the POST requests to bind the uploaded files to the file object. def create_cert_obj(bigip, b_url, files): f1 = os.path.basename(files[0]) f2 = os.path.basename(files[1]) if f1.endswith('.crt'): certfilename = f1 keyfilename = f2 else: keyfilename = f1 certfilename = f2 certname = f1.split('.')[0] payload = {} payload['command'] = 'install' payload['name'] = certname # Map Cert to File Object payload['from-local-file'] = '/var/config/rest/downloads/%s' % certfilename bigip.post('%s/sys/crypto/cert' % b_url, json.dumps(payload)) # Map Key to File Object payload['from-local-file'] = '/var/config/rest/downloads/%s' % keyfilename bigip.post('%s/sys/crypto/key' % b_url, json.dumps(payload)) return certfilename, keyfilename Notice we return the key/cert filenames so they can be used for step 3 to establish the clientssl profile. In this example, I name the file object and the clientssl profile to the name of the certfilename (minus the extension) but you can alter this to allow the objects names to be provided. To build the profile, just create the payload with the custom key/cert and make the POST request and you are done! def create_ssl_profile(bigip, b_url, certname, keyname): payload = {} payload['name'] = certname.split('.')[0] payload['cert'] = certname payload['key'] = keyname bigip.post('%s/ltm/profile/client-ssl' % b_url, json.dumps(payload)) Much thanks to Tim Rupp who helped me get across the finish line with some counting and rest worker errors we were troubleshooting on the download function. Get the Code Upload a File Download a File Upload Cert/Key & Build a Clientssl Profile8.7KViews4likes45CommentsF5 in AWS Part 4 - Orchestrating BIG-IP Application Services with Open-Source tools
F5 in AWS Part 4 - Orchestrating BIG-IP Application Services with Open-Source tools

Updated for Current Versions and Documentation

Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

The following post references code hosted at F5's Github repository f5networks/aws-deployments. This code provides a demonstration of using open-source tools to configure and orchestrate BIG-IP. Full documentation for F5 BIG-IP cloud work can be found at Cloud Docs: F5 Public Cloud Integrations.

So far we have talked about AWS networking basics, how to run BIG-IP in a VPC, and highly-available deployment footprints. In this post, we'll move on to my favorite topic, orchestration. By this point, you probably have several VMs running in AWS. You've lost track of which configuration is set up on which VM, and you have found yourself slowly going mad as you toggle between the AWS web portal and several SSH windows. I call this 'point-and-click' purgatory. Let's be blunt: why would you move to cloud without realizing the benefits of automation, of which cloud is a large enabler? If you remember our second article, we mentioned CloudFormation templates as a great way to deploy a standardized set of resources (perhaps BIG-IP + the additional virtualized network resources) in EC2. This is a great start, but we need to configure these resources once they have started, and we need a way to define and execute workflows which will run across a set of hosts, perhaps even hosts which are external to the AWS environment. Enter the use of open-source configuration management and workflow tools that have been popularized by the software development community.

Open-source configuration management and AWS APIs

Lately, I have been playing with Ansible, which is a python-based, agentless workflow engine for IT automation. By agentless, I mean that you don't need to install an agent on hosts under management. Ansible, like the other tools, provides a number of libraries (or "modules") which provide the ability to manage a diverse collection of remote systems. These modules are typically implemented through the use of API calls, often over HTTP. Out of the box, Ansible comes with several modules for managing resources in AWS. While the EC2 libraries provided are useful for basic orchestration use cases, we decided it would be easier to atomically manage sets of resources using the CloudFormation module. In doing so, we were able to deploy entire CloudFormation stacks which would include items like VPCs, networking elements, BIG-IP, app servers, etc. Underneath the covers, the Ansible CloudFormation module and our own project use a python module to interact with AWS service endpoints.

Ansible provides some basic modules for managing BIG-IP configuration resources. These, along with libraries for similar tools, can be found here:

Ansible
Puppet
SaltStack

In the rest of this post, I'll discuss some work colleagues and I have done to automate BIG-IP deployments in AWS using Ansible. While we chose to use Ansible, we readily admit that Puppet, Chef, Salt, and whatever else you use are all appropriate choices for implementing deployment and configuration management workflows for your network. Each has its upsides and downsides, and different tools may lend themselves to different use cases for your infrastructure. Browse the web to figure out which tool is right for you.
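To make the deploy-a-stack-from-python idea concrete, here is a minimal sketch using the boto3 AWS library. The stack name, template file, and parameters are placeholders, and this is an illustration of the general pattern rather than code from the aws-deployments project.

# Minimal sketch: deploying a CloudFormation stack from python with boto3.
# Stack name, template path, and parameters are placeholders.
import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')

with open('bigip-stack.json') as f:
    template_body = f.read()

cfn.create_stack(
    StackName='demo-bigip-stack',
    TemplateBody=template_body,
    Parameters=[
        {'ParameterKey': 'instanceType', 'ParameterValue': 'm3.xlarge'},
    ],
)

# Block until the stack finishes building
waiter = cfn.get_waiter('stack_create_complete')
waiter.wait(StackName='demo-bigip-stack')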
Using Standardized BIG-IP Interfaces

Speaking of APIs, for years F5 has provided the ability to programmatically configure BIG-IP using iControlSOAP. As the audiences performing automation work have matured, so have the weapons of choice. The new hot ticket is REST (Representational State Transfer), and guess what, BIG-IP has a REST interface (you can probably figure out what it is called). Together, iControlSOAP and iControlREST give you the power to manage nearly every configuration element and feature of BIG-IP. These interfaces become extremely powerful when you combine them with your favorite open-source configuration management tool and a cloud that allows you to spin up and down compute and networking resources.

In the project described below, we have also made use of iApps via iControlREST as a way to create a standard virtual server configuration with the correct policies and profiles (a short sketch of such a call appears at the end of this article). The documentation in Github describes this in detail, but our approach shows how iApps provide a strongly supported approach for managing network policy across engineering teams. For example, imagine that a team of software engineers has written a framework to deploy applications. You can package the network policy into iApps for various types of apps, and pass these to the teams writing the deployment framework.

Implementing a Service Catalog

To pull the above concepts together, a colleague and I put together the aws-deployments project. The goal was to build a simple service catalog which would enable a user to deploy a containerized application in EC2 with BIG-IP network services sitting in front. This is example code that is not supported by F5 support, but a proof of concept that shows how you can fully automate production-like deployments in AWS. Some highlights of the project include:

Use of iControlREST and iControlSOAP within Ansible playbooks to set up advanced topologies of BIG-IP in AWS.
Automated deployment of a basic ASM web application firewall policy to protect a vulnerable web app (Hackazon).
Use of iApps to manage virtual server configurations, including the WAF policy mentioned above.

Figure 1 - Generic Architecture for automating application deployments in public or private cloud

On examination of the code, you will see that we provide the opportunity to provision all the deployment models outlined in our earlier post (a single standalone VE, standalone BIG-IP VEs striped across availability zones, clusters within an availability zone, etc.). We used Ansible and the interfaces on BIG-IP to orchestrate the workflows associated with these deployment models. To perform the clustering step, we have used the iControlSOAP interface on BIG-IP. The final set of technologies used is depicted in Figure 2.

Figure 2 - Technologies used in the aws-deployments project on Github

Read the Code and Test It Yourself

All the code I have mentioned is available at f5networks/aws-deployments. We encourage you to download and run the code for yourself. Setting up a development environment with the necessary dependencies is easy: we have packaged all the dependencies for use with either Vagrant or Docker as development tools. The instructions for either of these approaches can be found in the README.md or in the /docs directory.

The following video shows an end-to-end usage example. (Keep in mind that the code has been updated since this video was produced.)
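And here is the promised sketch of driving an iApp over iControlREST from Python. This is not code from the project; the service name and the variable answers are illustrative, and a real deployment of the stock f5.http template needs the complete set of variables and tables the template expects. It just shows the call shape against the standard application service endpoint:

import json
import requests

s = requests.session()
s.auth = ('admin', 'admin')
s.verify = False   # lab use only
s.headers.update({'Content-Type': 'application/json'})

payload = {
    'name': 'demo-service',                    # illustrative service name
    'template': '/Common/f5.http',             # stock HTTP iApp template
    'variables': [
        {'name': 'pool__addr', 'value': '10.0.0.30'},
        {'name': 'pool__port', 'value': '80'},
    ],
}
r = s.post('https://<host>/mgmt/tm/sys/application/service',
           json.dumps(payload))
print r.status_code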
At the end of the day, our goal for this work was to collect customer feedback. Please provide some by leaving a comment below, or by filing 'pull requests' or 'issues' in Github. In the next few weeks, we will be updating the project to include the Hackazon app mentioned above, to show how to cluster BIG-IP across availability zones, and to deploy an ASM policy with an iApp. Have fun!

F5 in AWS Part 5 - Cloud-init, Single-NIC, and Auto Scale Out in BIG-IP
Updated for Current Versions and Documentation

Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

The following article covers features and examples in the 12.1 AWS Marketplace release, discussed in the following documentation:

Amazon Web Services: Single NIC BIG-IP VE
Amazon Web Services: Auto Scaling BIG-IP VE

You can find the BIG-IP Hourly and BYOL releases in the Amazon marketplace here. BIG-IP utility billing images are available, which makes it a great time to talk about some of the functionality.

So far in Chris's series, we have discussed some of the highly-available deployment footprints of BIG-IP in AWS and how these might be orchestrated. Several of these footprints leverage BIG-IP's Device Service Clustering (DSC) technology for configuration management across devices and also lend themselves to multi-app or multi-tenant configurations in a shared-service model. But what if you want to deploy BIG-IP on a per-app or per-tenant basis, in a horizontally scalable footprint that plays well with the concepts of elasticity and immutability in cloud? Today we have just the option for you. Before highlighting these scalable deployment models in AWS, we need to cover cloud-init and single-NIC configurations: two important additions to BIG-IP that enable an Auto Scaling topology.

Elasticity

Elasticity is obviously one of the biggest promises/benefits of cloud. By leveraging cloud, we are essentially tapping into the "unlimited" (at least relative to our own datacenters) resources large cloud providers have built. In actual practice, this means adopting new methodologies and approaches to truly deliver on it.

Immutability

In the traditional operational model of datacenters, everything was "actively" managed. Physical infrastructure still tends to lend itself to active management, but even the virtualized services and applications running on top of the infrastructure were actively managed: servers were patched, code was upgraded live and in place, etc. However, achieving true elasticity, where things spin up or down and are more ephemeral in nature, requires a new approach. Instead of trying to patch or upgrade individual instances, the approach is to treat them as disposable, which means focusing more on the build process itself (e.g., Netflix's famous "Building with Legos" approach). Yes, the concept of golden images/snapshots has existed since the early days of virtualization, but cloud, with self-service, automation and auto scale functionality, forced this to a new level. Operations focus shifted more towards a consistent and repeatable "build" or "packaging" effort, with the final goal of creating instances that didn't need to be touched, logged into, etc. In the specific context of AWS's Auto Scale groups, that means modifying the Auto Scale group's "launch config", and the steps for creating new instances involve either referencing an entirely new image ID or a modification to a cloud-init config.

Cloud-init

What is it? First, let's talk about cloud-init as it is used with most Linux distributions. Most of you who are evaluating or operating in the cloud have heard of it. For those who haven't, cloud-init is an industry standard for bootstrapping machines at startup. It provides a simple domain-specific language for common infrastructure provisioning tasks.
You can read the official docs here. For the average Linux or systems engineer, cloud-init is used to perform tasks such as installing a custom package, updating yum repositories or installing certificates to perform final customizations to a "base" or "golden" image. For example, the Security team might create an approved hardened base image, and various Dev teams would use cloud-init to customize the image so it booted up with an 'identity', if you will: an application server with an Apache webserver running, or a database server with MySQL provisioned.

Let's start with the most basic "Hello World" use of cloud-init: passing in User Data (in this case a simple bash script). If launching an instance via the AWS Console, on the Configure Instance page, navigate down to the "Advanced Details" section:

Figure 1: User Data input field - bash

However, User Data is limited to 16KB, and one of the real powers of cloud-init comes from extending functionality past this boundary and providing a standardized way to provision, or, ahem, "initialize" instances. Instead of using unwieldy bash scripts that probed whether it was an Ubuntu or a Redhat instance and used this OS method or that OS method (ex. apt-get vs. rpm) to install a package, configure users, DNS settings, mount a drive, etc., you could pass a YAML file starting with #cloud-config that did a lot of this heavy lifting for you.

Figure 2: User Data input field - cloud-config

Similar to one of the benefits of Chef, Puppet, Salt or Ansible, it provided a reliable OS or distribution abstraction; but where those approaches require external orchestration to configure instances, this is orchestrated internally from the very first boot, which is more conducive to the "immutable" workflow.

NOTE: cloud-init also complements and helps bootstrap those tools for more advanced or sophisticated workflows (ex. installing Chef/Puppet to keep long-running, non-immutable services under operation/management and to prevent configuration drift).

This brings us to another important distinction. Cloud-init originated as a project from Canonical (Ubuntu) and was designed for general-purpose OSs. The BIG-IP's OS (TMOS), however, is a highly customized, hardened OS, so most of the modules don't strictly apply. Much of the BIG-IP's configuration is consumed via its APIs (tmsh, iControl REST, etc.) and stored in its database, MCPD. We can still achieve some of the benefits of having cloud-init, but instead we will mostly leverage the simple bash processor.

So when Auto Scaling BIG-IPs, there are a couple of approaches:

1) Creating a custom image as described in the official documentation.
2) Providing a cloud-init configuration. This is a lighter-weight approach in that it doesn't require the customization work above.
3) Using a combination of the two: creating a custom image and leveraging cloud-init. For example, you may create a custom image with ASM provisioned and SSL certs/keys installed, and use cloud-init to configure additional environment-specific elements.

Disclaimer: packaging is an art; just look at the rise of Docker and new operating systems. Ideally, the more you bake into the image upfront, the more predictable it will be and the faster it deploys. However, the less you build in, the more flexible you can be. Things like installing libraries, compiling, etc. are usually worth building into the image upfront.
However, the BIG-IP is already a hardened image, and things like installing libraries are neither required nor recommended, so the task is more about addressing the last, lighter-weight configuration steps. That said, depending on your priorities and objectives, installing sensitive keying material, setting credentials, pre-provisioning modules, etc. might make good candidates for investing in building custom images.

Using Cloud-init with CloudFormation templates

Remember when we talked about how you could use CloudFormation templates in a previous post to set up BIG-IP in a standard way? Because the CloudFormation service by itself only gave us the ability to lay down the EC2/VPC infrastructure, we were still left with the remaining 80% of the work to do; we needed an external agent or service (in our case Ansible) to configure the BIG-IP and application services. Now, with cloud-init on the BIG-IP (using version 12.0 or later), we can perform that last 80% for you.

Using Cloud-init with BIG-IP

As you can imagine, there's quite a lot you can do with just that one simple bash script shown above. More interestingly, though, we also installed AWS's CloudFormation helper scripts (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html) to help extend cloud-init and unlock a larger, more powerful set of AWS functionality. So when used with CloudFormation, our User Data simply changes to executing the following AWS CloudFormation helper script instead:

"UserData": {
    "Fn::Base64": {
        "Fn::Join": [ "", [
            "#!/bin/bash\n",
            "/opt/aws/apitools/cfn-init-1.4-0.amzn1/bin/cfn-init -v -s ",
            { "Ref": "AWS::StackId" },
            " -r ",
            "Bigip1Instance",
            " --region ",
            { "Ref": "AWS::Region" },
            "\n"
        ] ]
    }
}

This allows us to do things like obtaining variables passed in from the CloudFormation environment, grabbing various information from the metadata service, creating or downloading files, running a particular sequence of commands, etc., so that once BIG-IP has finished booting, our entire application delivery service is up and running. For more information, this page discusses how metadata is attached to an instance using CloudFormation templates: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-commands.

Example BYOL and Utility CloudFormation Templates

We've posted several examples on Github to get you started:
https://github.com/f5networks/f5-aws-cloudformation

In just a few short clicks, you can have an entire BIG-IP deployment up and running. The two examples below will launch an entire reference stack complete with VPCs, subnets, routing tables, a sample webserver, etc., and show the use of cloud-init to bootstrap a BIG-IP. Cloud-init is used to configure interfaces, Self-IPs, database variables, a simple virtual server, and, in the case of the BYOL instance, to license BIG-IP.
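Incidentally, the same sort of User Data can also be passed directly at launch time outside of CloudFormation. Here is a minimal boto3 sketch; the AMI ID, key name and script body are placeholders:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# First-boot customization script; BIG-IP runs this via its bash processor
userdata = """#!/bin/bash
echo 'hello from first boot' >> /tmp/firstboot.log
"""

ec2.run_instances(
    ImageId='ami-xxxxxxxx',        # a BIG-IP marketplace AMI (placeholder)
    InstanceType='m3.xlarge',
    KeyName='my-ssh-key',          # placeholder
    MinCount=1,
    MaxCount=1,
    UserData=userdata
)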
Let's take a closer look at the BIG-IP resource created in one of these templates to see what's going on here:

"Bigip1Instance": {
  "Metadata": {
    "AWS::CloudFormation::Init": {
      "config": {
        "files": {
          "/tmp/firstrun.config": {
            "content": {
              "Fn::Join": [ "", [
                "#!/bin/bash\n",
                "HOSTNAME=`curl http://169.254.169.254/latest/meta-data/hostname`\n",
                "TZ='UTC'\n",
                "BIGIP_ADMIN_USERNAME='", { "Ref": "BigipAdminUsername" }, "'\n",
                "BIGIP_ADMIN_PASSWORD='", { "Ref": "BigipAdminPassword" }, "'\n",
                "MANAGEMENT_GUI_PORT='", { "Ref": "BigipManagementGuiPort" }, "'\n",
                "GATEWAY_MAC=`ifconfig eth0 | egrep HWaddr | awk '{print tolower($5)}'`\n",
                "GATEWAY_CIDR_BLOCK=`curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/${GATEWAY_MAC}/subnet-ipv4-cidr-block`\n",
                "GATEWAY_NET=${GATEWAY_CIDR_BLOCK%/*}\n",
                "GATEWAY_PREFIX=${GATEWAY_CIDR_BLOCK#*/}\n",
                "GATEWAY=`echo ${GATEWAY_NET} | awk -F. '{ print $1\".\"$2\".\"$3\".\"$4+1 }'`\n",
                "VPC_CIDR_BLOCK=`curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/${GATEWAY_MAC}/vpc-ipv4-cidr-block`\n",
                "VPC_NET=${VPC_CIDR_BLOCK%/*}\n",
                "VPC_PREFIX=${VPC_CIDR_BLOCK#*/}\n",
                "NAME_SERVER=`echo ${VPC_NET} | awk -F. '{ print $1\".\"$2\".\"$3\".\"$4+2 }'`\n",
                "POOLMEM='", { "Fn::GetAtt": [ "Webserver", "PrivateIp" ] }, "'\n",
                "POOLMEMPORT=80\n",
                "APPNAME='demo-app-1'\n",
                "VIRTUALSERVERPORT=80\n",
                "CRT='default.crt'\n",
                "KEY='default.key'\n"
              ] ]
            },
            "group": "root",
            "mode": "000755",
            "owner": "root"
          },
          "/tmp/firstrun.utils": {
            "group": "root",
            "mode": "000755",
            "owner": "root",
            "source": "http://cdn.f5.com/product/templates/utils/firstrun.utils"
          },
          "/tmp/firstrun.sh": {
            "content": {
              "Fn::Join": [ "", [
                "#!/bin/bash\n",
                ". /tmp/firstrun.config\n",
                ". /tmp/firstrun.utils\n",
                "FILE=/tmp/firstrun.log\n",
                "if [ ! -e $FILE ]\n",
                " then\n",
                "     touch $FILE\n",
                "     nohup $0 0<&- &>/dev/null &\n",
                "     exit\n",
                "fi\n",
                "exec 1<&-\n",
                "exec 2<&-\n",
                "exec 1<>$FILE\n",
                "exec 2>&1\n",
                "date\n",
                "checkF5Ready\n",
                "echo 'starting tmsh config'\n",
                "tmsh modify sys ntp timezone ${TZ}\n",
                "tmsh modify sys ntp servers add { 0.pool.ntp.org 1.pool.ntp.org }\n",
                "tmsh modify sys dns name-servers add { ${NAME_SERVER} }\n",
                "tmsh modify sys global-settings gui-setup disabled\n",
                "tmsh modify sys global-settings hostname ${HOSTNAME}\n",
                "tmsh modify auth user admin password \"'${BIGIP_ADMIN_PASSWORD}'\"\n",
                "tmsh save /sys config\n",
                "tmsh modify sys httpd ssl-port ${MANAGEMENT_GUI_PORT}\n",
                "tmsh modify net self-allow defaults add { tcp:${MANAGEMENT_GUI_PORT} }\n",
                "if [[ \"${MANAGEMENT_GUI_PORT}\" != \"443\" ]]; then tmsh modify net self-allow defaults delete { tcp:443 }; fi \n",
                "tmsh mv cm device bigip1 ${HOSTNAME}\n",
                "tmsh save /sys config\n",
                "checkStatusnoret\n",
                "sleep 20 \n",
                "tmsh save /sys config\n",
                "tmsh create ltm pool ${APPNAME}-pool members add { ${POOLMEM}:${POOLMEMPORT} } monitor http\n",
                "tmsh create ltm policy uri-routing-policy controls add { forwarding } requires add { http } strategy first-match legacy\n",
                "tmsh modify ltm policy uri-routing-policy rules add { service1.example.com { conditions add { 0 { http-uri host values { service1.example.com } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 1 } }\n",
                "tmsh modify ltm policy uri-routing-policy rules add { service2.example.com { conditions add { 0 { http-uri host values { service2.example.com } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 2 } }\n",
                "tmsh modify ltm policy uri-routing-policy rules add { apiv2 { conditions add { 0 { http-uri path starts-with values { /apiv2 } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 3 } }\n",
                "tmsh create ltm virtual /Common/${APPNAME}-${VIRTUALSERVERPORT} { destination 0.0.0.0:${VIRTUALSERVERPORT} mask any ip-protocol tcp pool /Common/${APPNAME}-pool policies replace-all-with { uri-routing-policy { } } profiles replace-all-with { tcp { } http { } } source 0.0.0.0/0 source-address-translation { type automap } translate-address enabled translate-port enabled }\n",
                "tmsh save /sys config\n",
                "date\n",
                "# typically want to remove firstrun.config after first boot\n",
                "# rm /tmp/firstrun.config\n"
              ] ]
            },
            "group": "root",
            "mode": "000755",
            "owner": "root"
          }
        },
        "commands": {
          "b-configure-Bigip": {
            "command": "/tmp/firstrun.sh\n"
          }
        }
      }
    }
  },
  "Properties": {
    "ImageId": { "Fn::FindInMap": [ "BigipRegionMap", { "Ref": "AWS::Region" }, { "Ref": "BigipPerformanceType" } ] },
    "InstanceType": { "Ref": "BigipInstanceType" },
    "KeyName": { "Ref": "KeyName" },
    "NetworkInterfaces": [
      {
        "Description": "Public or External Interface",
        "DeviceIndex": "0",
        "NetworkInterfaceId": { "Ref": "Bigip1ExternalInterface" }
      }
    ],
    "Tags": [
      { "Key": "Application", "Value": { "Ref": "AWS::StackName" } },
      { "Key": "Name", "Value": { "Fn::Join": [ "", [ "BIG-IP: ", { "Ref": "AWS::StackName" } ] ] } }
    ],
    "UserData": {
      "Fn::Base64": {
        "Fn::Join": [ "", [
          "#!/bin/bash\n",
          "/opt/aws/apitools/cfn-init-1.4-0.amzn1/bin/cfn-init -v -s ",
          { "Ref": "AWS::StackId" },
          " -r ",
          "Bigip1Instance",
          " --region ",
          { "Ref": "AWS::Region" },
          "\n"
        ] ]
      }
    }
  },
  "Type": "AWS::EC2::Instance"
}

The above may look like a lot at first, but at a high level we start by creating some files "inline" as well as "sourcing" some files from a remote location.
/tmp/firstrun.config – Here we create a file inline, laying down variables from the CloudFormation stack deployment itself and even the metadata service (http://169.254.169.254/latest/meta-data/). Take a look at the "Ref" stanzas. When this file is laid down on the BIG-IP disk itself, those variables will be interpolated and contain the actual contents. The idea here is to try to keep config and execution separate.

/tmp/firstrun.utils – These are just some helper functions to help with initial provisioning. We use those to determine when the BIG-IP is ready for a particular configuration step (ex. after a licensing or provisioning step). Note that instead of creating the file inline like the config file above, we simply "source" or download the file from a remote location.

/tmp/firstrun.sh – This file is created inline as well, and it is where it all really comes together. The first thing we do is load the config variables from firstrun.config and the helper functions from firstrun.utils. We then create a separate log file (/tmp/firstrun.log) to capture the output of this particular script; capturing the output of these various commands just helps with debugging runs. Then we run a function called checkF5Ready (which we loaded from that helper function file) to make sure BIG-IP's database is up and ready to accept a configuration. The rest may look more familiar, and it is where most of the user customization takes place. We use variables from the config file to configure the BIG-IP using familiar methods like tmsh and iControl REST. Technically, you could lay down an entire config file (like an SCF) and load it instead; we use tmsh here for simplicity. The possibilities are endless though.

Disclaimer: the specific implementation above will certainly be optimized and evolve, but the most important takeaway is that we can now leverage cloud-init and AWS's helper libraries to bootstrap the BIG-IP into a working configuration from the very first boot!

Debugging Cloud-init

What if something goes wrong? Where do you look for more information?
The first place you might look is in the various cloud-init logs in /var/log (cloud-init.log, cfn-init.log, cfn-wire.log). Below is an example from the CFTs above:

[admin@ip-10-0-0-205:NO LICENSE:Standalone] log # tail -150 cfn-init.log
2016-01-11 10:47:59,353 [DEBUG] CloudFormation client initialized with endpoint https://cloudformation.us-east-1.amazonaws.com
2016-01-11 10:47:59,353 [DEBUG] Describing resource BigipEc2Instance in stack arn:aws:cloudformation:us-east-1:452013943082:stack/as-testing-byol-bigip/07c962d0-b893-11e5-9174-500c217b4a62
2016-01-11 10:47:59,782 [DEBUG] Not setting a reboot trigger as scheduling support is not available
2016-01-11 10:47:59,790 [INFO] Running configSets: default
2016-01-11 10:47:59,791 [INFO] Running configSet default
2016-01-11 10:47:59,791 [INFO] Running config config
2016-01-11 10:47:59,792 [DEBUG] No packages specified
2016-01-11 10:47:59,792 [DEBUG] No groups specified
2016-01-11 10:47:59,792 [DEBUG] No users specified
2016-01-11 10:47:59,792 [DEBUG] Writing content to /tmp/firstrun.config
2016-01-11 10:47:59,792 [DEBUG] No mode specified for /tmp/firstrun.config
2016-01-11 10:47:59,793 [DEBUG] Writing content to /tmp/firstrun.sh
2016-01-11 10:47:59,793 [DEBUG] Setting mode for /tmp/firstrun.sh to 000755
2016-01-11 10:47:59,793 [DEBUG] Setting owner 0 and group 0 for /tmp/firstrun.sh
2016-01-11 10:47:59,793 [DEBUG] Running command b-configure-BigIP
2016-01-11 10:47:59,793 [DEBUG] No test for command b-configure-BigIP
2016-01-11 10:47:59,840 [INFO] Command b-configure-BigIP succeeded
2016-01-11 10:47:59,841 [DEBUG] Command b-configure-BigIP output: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 40 0 40 0 0 74211 0 --:--:-- --:--:-- --:--:-- 40000
2016-01-11 10:47:59,841 [DEBUG] No services specified
2016-01-11 10:47:59,844 [INFO] ConfigSets completed
2016-01-11 10:47:59,851 [DEBUG] Not clearing reboot trigger as scheduling support is not available
[admin@ip-10-0-0-205:NO LICENSE:Standalone] log #

If trying out the example templates above, you can inspect the various files mentioned. In addition to checking for their general presence:

/tmp/firstrun.config – make sure variables were passed as you expected.
/tmp/firstrun.utils – make sure the file exists and was downloaded.
/tmp/firstrun.log – see if any obvious errors were output.

It may also be worth checking the AWS CloudFormation console to make sure you passed the parameters you were expecting.

Single-NIC

Another one of the important building blocks introduced with 12.0 on the AWS and Azure Virtual Editions is the ability to run BIG-IP with just a single network interface. Typically, BIG-IPs were deployed in a multi-interface model, where interfaces were attached to an out-of-band management network and one or more traffic (or "data-plane") networks. But, as we know, cloud architectures scale by requiring simplicity, especially at the network level. To this day, some clouds can only support instances with a single IP on a single NIC. In AWS's case, although they do support multiple NICs/multiple IPs, some of their services like ELB only point to the first IP address of the first NIC. So this single-NIC configuration makes it not only possible but also dramatically easier to deploy in these various architectures.

How this works: we can now attach just one interface to the instance, and BIG-IP will start up, recognize this, and use DHCP to configure the necessary settings on that interface.
Underneath the hood, the following DB keys will be set:

admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list sys db provision.1nic one-line
sys db provision.1nic { value "enable" }
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list sys db provision.1nicautoconfig one-line
sys db provision.1nicautoconfig { value "enable" }

provision.1nic = allows both management and data-plane traffic to use the same interface
provision.1nicautoconfig = uses the address from DHCP to configure a VLAN, Self-IP and default gateway

For example, the network objects configured automatically:

admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net vlan
net vlan internal {
    if-index 112
    interfaces {
        1.0 { }
    }
    tag 4094
}
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net self
net self self_1nic {
    address 10.0.1.65/24
    allow-service {
        default
    }
    traffic-group traffic-group-local-only
    vlan internal
}
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net route
net route default {
    gw 10.0.1.1
    network default
}

Note: Traffic Management Shell and the Configuration Utility (GUI) are still available on ports 22 and 443 respectively. If you want to run the management GUI on a higher port (for instance, if you don't have the BIG-IPs behind a Port Address Translation service (like ELB) and want to run an HTTPS virtual on 443), use the following commands:

tmsh modify sys httpd ssl-port 8443
tmsh modify net self-allow defaults add { tcp:8443 }
tmsh modify net self-allow defaults delete { tcp:443 }

WARNING: Now that management and data plane run on the same interface, make sure to modify your Security Groups to restrict access to SSH and whatever port you use for the management GUI to trusted networks.

UPDATE: On Single-NIC, Device Service Clustering currently only supports Configuration Syncing (Network Failover is restricted for now due to BZ-606032).

In general, the single-NIC model lends itself better to single-tenant or per-app deployments, where you need the advanced services from BIG-IP like content routing policies, iRules scripting, WAF, etc., but don't necessarily care to maintain a management subnet in the deployment and are just optimizing or securing a single application. By single tenant we also mean single management domain, as you're typically running everything through a single wildcard virtual (ex. 0.0.0.0:0, 0.0.0.0:80, 0.0.0.0:443, etc.) vs. giving each tenant its own virtual server (usually with its own IP and configuration) to manage. However, you can still technically run multiple applications behind this virtual with a policy or iRule, where you make traffic-steering decisions based on L4-L7 content (SNI, hostname headers, URIs, etc.). In addition, if the BIG-IPs are sitting behind a Port Address Translation service, it is also possible to stack virtual services on ports instead. Ex.

0.0.0.0:444 = Virtual Service 1
0.0.0.0:445 = Virtual Service 2
0.0.0.0:446 = Virtual Service 3

We'll let you get creative here….

BIG-IP Auto Scale

Finally, the last component of Auto Scaling BIG-IPs involves building scaling policies via CloudWatch alarms. In addition to the built-in EC2 metrics in CloudWatch, BIG-IP can report its own set of metrics, shown below:

Figure 3: CloudWatch metrics to scale BIG-IPs based on traffic load.
This can be configured with the following tmsh commands on any version 12.0 or later build:

tmsh modify sys autoscale-group autoscale-group-id ${BIGIP_ASG_NAME}
tmsh load sys config merge file /usr/share/aws/metrics/aws-cloudwatch-icall-metrics-config

These commands tell BIG-IP to push the above metrics to a custom "Namespace", on which we can roll up data via standard aggregation functions (min, max, average, sum). This namespace (based on the Auto Scale group name) will appear as a row in the "Custom Metrics" dropdown on the left sidebar in CloudWatch (left side of Figure 3). Once these BIG-IP or EC2 metrics have been populated, CloudWatch alarms can be built, and these alarms are available for use in Auto Scaling policies for the BIG-IP Auto Scale group. (Amazon provides some nice instructions here, and there is a short sketch of building an alarm programmatically at the end of this section.)

Auto Scaling Pool Members and Service Discovery

If you are scaling your ADC tier, you are likely also going to be scaling your application. There are two options for discovering pool members in your application's Auto Scale group:

1) FQDN Nodes. For example, in a typical sandwich deployment, your application members might also be sitting behind an internal ELB, so you would simply point your FQDN node at the ELB's DNS. For more information, please see:
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm-implementations-12-1-0/25.html?sr=56133323

2) BIG-IP's AWS Auto Scale Pool Member Discovery feature (introduced in v12.0). This feature polls the Auto Scale Group via API calls and populates the pool based on its membership. For more information, please see:
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-ve-autoscaling-amazon-ec2-12-1-0/2.html

Putting it all together

The high-level steps for Auto Scaling BIG-IP include the following:

1. Optionally* creating an ElasticLoadBalancer group which will direct traffic to BIG-IPs in your Auto Scale group once they become operational. Otherwise, you will need Global Server Load Balancing (GSLB).
2. Creating a launch configuration in EC2 (referencing either a custom image ID and/or using cloud-init scripts as described above).
3. Creating an Auto Scale group using this launch configuration.
4. Creating CloudWatch alarms using the EC2 or custom metrics reported by BIG-IP.
5. Creating scaling policies for your BIG-IP Auto Scale group using the alarms above. You will want to create both scale-up and scale-down policies.

Here are some things to keep in mind when Auto Scaling BIG-IP:

● BIG-IP must run in a single-interface configuration with a wildcard listener (as we talked about earlier). This is required because we don't know what IP address the BIG-IP will get.
● Auto Scale Groups themselves consist of utility instances of BIG-IP.
● The scale-up time for BIG-IP is about 12-20 minutes depending on what is configured or provisioned. While this may seem like a long time, creating the right scaling policies (polling intervals, thresholds, unit of scale) makes this a non-issue.
● This deployment model lends itself toward the themes of stateless, horizontal scalability and immutability embraced by the cloud. Currently, the config on each device is updated once and only once, at device startup. The only way to change the config is through the creation of a new image or a modification to the launch configuration. Stay tuned for a clustered deployment which is managed in a more traditional operational approach.
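Here is that sketch: a minimal boto3 example of a scale-out alarm against a custom BIG-IP-reported metric. The namespace, metric name, and scaling-policy ARN are all placeholders; check your own CloudWatch console for the exact names your Auto Scale group reports:

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='bigip-scale-out',
    Namespace='my-bigip-asg',            # custom namespace = Auto Scale group name
    MetricName='throughput-per-sec',     # hypothetical BIG-IP-reported metric
    Statistic='Average',
    Period=300,                          # pick polling intervals to match scale-up time
    EvaluationPeriods=1,
    Threshold=100000000.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:autoscaling:...']   # your scale-out policy ARN
)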
If interested in exploring Auto Scale further, we have put together some examples of scaling the BIG-IP tier here:
https://github.com/f5devcentral/f5-aws-autoscale

* The above Github repository also provides some examples of incorporating BYOL instances in the deployment, so you can leverage BYOL instances for your static load and Auto Scale instances for your dynamic load. See the READMEs for more information.

CloudFormation templates with Auto Scaled BIG-IP and Application

Need some ideas on how you might leverage these solutions? Now that you can completely deploy a solution with a single template, how about building service catalog items for your business units that deploy different BIG-IP services (LB, WAF), or a managed service built on top of AWS that can be easily deployed each time you on-board a new customer?

Building Application Delivery Services from Templates using the REST API (for People Like Me).
I've talked before about the value of Application Policies (AKA Service Templates, AKA iApp Templates). So has my esteemed colleague Nathan Pearce. We think they are pretty important, especially if you're looking to deploy complex application delivery services using automation tools. There are quite a lot of resources available to help you do this, including a great guide from Chris Mutzel, and some cool Python scripts and tools from the shy and retiring Hitesh Patel on Github. These are great, and if you are up to speed on iControl and REST then stop reading this and head to one of those places. They are clever over there.

However, to steal a line: "I'm not a smart man" (that's probably why I've ended up as an explainer, rather than a true creator). I needed some more basic information and examples before I moved on to these. I thought it might be worth documenting my journey, if nothing else for the amusement of others.

First, REST: I read up on the BIG-IP REST API here. There is a lot of useful information there. Now our service template implementation: iApp Templates and their resultant Application Services. For the purposes of this first blog we are going to be using about the most basic template, HTTP. This template simply creates a standard L7 VIP with a pool of servers at the backend. Jump on a BIG-IP and take a look; it's pretty simple:

We are going to stick with straight HTTP, port 80 for now. This is just to understand the template and how to create a service using it via the REST API. To keep this as simple as possible we'll be using the Chrome Advanced REST Client. You can use other tools like curl or the REST client of your choice, of course.

The first thing we are going to do is to create a new application service on the BIG-IP using the GUI. What? Yes, I know that seems backwards, but first we need to see what a deployed service looks like when we query it using REST. If anyone blagged their way through Visual Basic for Applications by mainly recording macro steps and then editing the resulting code, you know what I'm talking about. So here we have the resulting service. Note that the pool members are dummies: they don't exist, so the monitor will mark them as down.

OK, so what does this look like if you query for it using REST? Let's find out by executing a GET in the REST client with this URL:

https://<my-bigip>/mgmt/tm/sys/application/service/~Common~test1.app~test1?expandSubcollections=true

So what does all that mean? Well, the first bit is obvious, even to me. "https://<my-bigip>" is the address of the BIG-IP; not much to ponder there. Now we have the path to the API section that gets us to application services, "/mgmt/tm/sys/application/service/". There is a good post explaining how this works here, and full documentation for the URLs/modules is in the REST user guide.

Now we get down to our deployed service, "/~Common~test1.app~test1". This one is maybe a little harder to get. First you need to know that some of our resource names have a "/" in their name; to get around that, we simply replace "/" with "~" in the URL. If you were to query all the application services on the BIG-IP using

https://<my-bigip>/mgmt/tm/sys/application/service/

you would be able to see the key-value pair "fullPath": "/Common/test1.app/test1" — that's what we are representing with our "~Common~test1.app~test1". (If you are using the Advanced REST client, display the response using the JSON tab to get a nice color-coded output.)
I'm sure there is an excellent architectural reason for why this is the full path, and one day I'll find the motivation to go and discover it. But today is not that day; today I want to make something work.

Finally, there is the query string "?expandSubcollections=true". This simply expands any objects that are part of the application service but are also objects in their own right, known as subcollections. A great example is a pool member of a pool: the pool is an object with its own properties (such as load balancing algorithm) that also contains a collection of pool members that each have their own properties (e.g. address, port, connection limit). In this instance it actually makes no difference, but it may well do so later, so we will leave it in. Subcollections are nicely explained here.

Still with me? Great. Let's take a look at the output of the GET:

{
  "kind": "tm:sys:application:service:servicestate",
  "name": "test1",
  "partition": "Common",
  "subPath": "test1.app",
  "fullPath": "/Common/test1.app/test1",
  "generation": 188,
  "selfLink": "https://localhost/mgmt/tm/sys/application/service/~Common~test1.app~test1?expandSubcollections=true&ver=12.0.0",
  "deviceGroup": "none",
  "inheritedDevicegroup": "true",
  "inheritedTrafficGroup": "true",
  "strictUpdates": "enabled",
  "template": "/Common/f5.http",
  "templateReference": { "link": "https://localhost/mgmt/tm/sys/application/template/~Common~f5.http?ver=12.0.0" },
  "templateModified": "no",
  "trafficGroup": "/Common/traffic-group-1",
  "trafficGroupReference": { "link": "https://localhost/mgmt/tm/cm/traffic-group/~Common~traffic-group-1?ver=12.0.0" },
  "tables": [
    { "name": "basic__snatpool_members" },
    { "name": "net__snatpool_members" },
    { "name": "optimizations__hosts" },
    { "name": "pool__hosts", "columnNames": [ "name" ], "rows": [ { "row": [ "test.test.com" ] } ] },
    { "name": "pool__members", "columnNames": [ "addr", "port", "connection_limit" ], "rows": [ { "row": [ "10.0.0.10", "80", "0" ] } ] },
    { "name": "server_pools__servers" }
  ],
  "variables": [
    { "name": "client__http_compression", "encrypted": "no", "value": "/#create_new#" },
    { "name": "monitor__monitor", "encrypted": "no", "value": "/#create_new#" },
    { "name": "monitor__response", "encrypted": "no", "value": "none" },
    { "name": "monitor__uri", "encrypted": "no", "value": "/" },
    { "name": "net__client_mode", "encrypted": "no", "value": "wan" },
    { "name": "net__server_mode", "encrypted": "no", "value": "lan" },
    { "name": "pool__addr", "encrypted": "no", "value": "10.0.1.28" },
    { "name": "pool__pool_to_use", "encrypted": "no", "value": "/#create_new#" },
    { "name": "pool__port", "encrypted": "no", "value": "80" },
    { "name": "ssl__mode", "encrypted": "no", "value": "no_ssl" },
    { "name": "ssl_encryption_questions__advanced", "encrypted": "no", "value": "no" },
    { "name": "ssl_encryption_questions__help", "encrypted": "no", "value": "hide" }
  ]
}

OK, so now we just need to do a quick match-up with the properties we created via the GUI to see what everything means.

{
  "kind": "tm:sys:application:service:servicestate",
  "name": "test1",
  "partition": "Common",
  "subPath": "test1.app",
  "fullPath": "/Common/test1.app/test1",
  "generation": 188,
  "selfLink": "https://localhost/mgmt/tm/sys/application/service/~Common~test1.app~test1?

"test1" – that's the name of our application service. Easy. Now let's ignore some other bits until we get to another section we recognize.

{ "name": "pool__members",
  "columnNames": [ "addr", "port", "connection_limit" ],
  "rows": [ { "row": [ "10.0.0.10", "80", "0" ] } ] }

This is the line that defines the pool members. Because there can be more than one, we define a list of columns and then the rows afterwards.
After that there are lines that instruct the iApp to create new monitors, set the TCP profiles, turn compression on or off, etc. Now we get to another important line:

{ "name": "pool__addr", "encrypted": "no", "value": "10.0.1.28" }

That's the virtual server address; the port is next:

{ "name": "pool__port", "encrypted": "no", "value": "80" }

Why, you are asking, is this the 'pool_address' and not 'VS_address'? Honestly, I have no idea. I could find out, but that won't get either of us anywhere, and I have a stack of books to read and records to listen to.

So actually, there we have it: just a few key lines that define a simple iApp service. That's probably enough to create a new one, right? So let's try doing a POST with this body:

{
  "kind": "tm:sys:application:service:servicestate",
  "name": "test2",
  "partition": "Common",
  "subPath": "test2.app",
  "fullPath": "/Common/test2.app/test2",
  "generation": 485,
  "selfLink": "https://localhost/mgmt/tm/sys/application/service/~Common~test2.app~test2?ver=12.0.0",
  "deviceGroup": "none",
  "inheritedDevicegroup": "true",
  "inheritedTrafficGroup": "false",
  "strictUpdates": "enabled",
  "template": "/Common/f5.http",
  "templateReference": { "link": "https://localhost/mgmt/tm/sys/application/template/~Common~f5.http?ver=12.0.0" },
  "templateModified": "no",
  "trafficGroup": "/Common/traffic-group-1",
  "trafficGroupReference": { "link": "https://localhost/mgmt/tm/cm/traffic-group/~Common~traffic-group-1?ver=12.0.0" },
  "tables": [
    { "name": "basic__snatpool_members" },
    { "name": "net__snatpool_members" },
    { "name": "optimizations__hosts" },
    { "name": "pool__hosts", "columnNames": [ "name" ], "rows": [ { "row": [ "test2.test.com" ] } ] },
    { "name": "pool__members", "columnNames": [ "addr", "port", "connection_limit" ], "rows": [ { "row": [ "10.0.0.20", "80", "0" ] } ] },
    { "name": "server_pools__servers" }
  ],
  "variables": [
    { "name": "client__http_compression", "encrypted": "no", "value": "/#create_new#" },
    { "name": "monitor__monitor", "encrypted": "no", "value": "/#create_new#" },
    { "name": "monitor__response", "encrypted": "no", "value": "none" },
    { "name": "monitor__uri", "encrypted": "no", "value": "/" },
    { "name": "net__client_mode", "encrypted": "no", "value": "wan" },
    { "name": "net__server_mode", "encrypted": "no", "value": "lan" },
    { "name": "pool__addr", "encrypted": "no", "value": "10.0.0.30" },
    { "name": "pool__pool_to_use", "encrypted": "no", "value": "/#create_new#" },
    { "name": "pool__port", "encrypted": "no", "value": "80" },
    { "name": "ssl__mode", "encrypted": "no", "value": "no_ssl" },
    { "name": "ssl_encryption_questions__advanced", "encrypted": "no", "value": "no" },
    { "name": "ssl_encryption_questions__help", "encrypted": "no", "value": "hide" }
  ]
}

POST this, and your return should be similar to the GET we did before, but with your new variables. Check the GUI and see what we have:

Looks like we've just created a new service. Go us. Let's expand out the test2 application service:

There you go: we have created a new configuration from a template in a fairly easy manner. Now I know that the Advanced REST client is not a scalable enterprise automation tool, but it exposes the heart of how this works, and now you might be in a better position to automate with the tool of your choice (see the short requests sketch at the end of this article). Next I'm going to work out how to do an update to the config and redeploy it. Stay tuned.

(Note - I'm sorry about the typo-riddled earlier version of this article. There was a combination of technological and meat-bag failures. As I say below - "I'm not a smart man".
Thanks to those that pointed out my many errors, and no thanks to my blog-writing app.)
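As promised, here is a rough sketch of the same GET-then-POST flow using Python's requests library. It is purely illustrative; it reuses the demo values from the walkthrough, and you would need to change at least the name and pool__addr so the new virtual doesn't collide with the old one:

import json
import requests

s = requests.session()
s.auth = ('admin', 'admin')
s.verify = False   # lab use only
s.headers.update({'Content-Type': 'application/json'})
base = 'https://<my-bigip>/mgmt/tm/sys/application/service'

# GET the existing service to use as a starting point
body = s.get('%s/~Common~test1.app~test1' % base).json()

# Strip instance-specific fields and rename before re-POSTing
for key in ('fullPath', 'subPath', 'selfLink', 'generation'):
    body.pop(key, None)
body['name'] = 'test2'
# remember to adjust pool__addr in body['variables'] here

r = s.post(base, json.dumps(body))
print r.status_code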
20 Lines or Less: SAML SP Issuer, Python!, and Rewriting

What could you do with your code in 20 Lines or Less? That's the question we like to ask from, for, and of (feel free to insert your favorite preposition here) the DevCentral community, and every time we do, we go looking to find cool new examples that show just how flexible and powerful iRules and other F5 API code can be without getting in over your head. Thus was born the 20LoL (20 Lines or Less) series many moons ago. Over the years we've highlighted hundreds of iRules examples, and a growing number of iControl samples, all of which do downright cool things in less than 21 lines of code.

Deploy an iRule with the new F5 SDK

This time last month Lori posted an article announcing a new SDK for Python. You might be thinking things like "but I love bigsuds, it's awesome!" or, if you are Joe, "python sucks," but unlike bigsuds, which interacts with the SOAP interface on BIG-IP, the f5-common-python SDK interacts with the REST interface. This simplifies end-user code tremendously, as you don't need to roll your own modules. For example, taking an iRule from a local file and deploying it to a range of BIG-IPs takes a scant few lines of code that doesn't come near our 20-line limit.

from f5.bigip import BigIP

with open('proc.tcl', 'r') as irule:
    mycode = irule.read()

for i in range(1, 30):
    host = '172.16.%d.5' % i
    b = BigIP(host, 'admin', 'admin')
    try:
        newrule = b.ltm.rules.rule.create(name='proctest',
                                          partition='Common',
                                          apiAnonymous=mycode)
        print newrule.raw
    except Exception, e:
        print e

SP Issuer Extraction from APM SAML IdP

APM doesn't expose any detail about the SAML SP Issuer when authentication requests hit APM acting as an IdP during an SP-initiated SAMLRequest. This iRule, when applied to a SAML IdP-enabled virtual server, will extract the assertion request, decode it, and present the SAML SP Issuer ID as the session variable %{session.saml.request.issuer} within APM. This comes in real handy when performing authorization of the resource, and could help avoid having APM perform a TCP connection reset when a SAML resource isn't authorized.

when CLIENT_ACCEPTED {
    ACCESS::restrict_irule_events disable
}
when HTTP_REQUEST {
    if { [HTTP::path] equals "/saml/idp/profile/redirectorpost/sso" } {
        if { [HTTP::method] equals "POST" } {
            set content_length [HTTP::header value Content-Length]
            HTTP::collect $content_length
        }
    }
}
when HTTP_REQUEST_DATA {
    set payload_data [URI::decode [HTTP::payload]]
    if { $payload_data contains "SAMLRequest" } {
        set SAMLdata [b64decode [URI::query "?$payload_data" "SAMLRequest"]]
        set SAML_Issuer_loc [string first "saml:issuer" [string tolower $SAMLdata]]
        set SAML_Issuer_start [expr {[string first ">" $SAMLdata $SAML_Issuer_loc] + 1}]
        set SAML_Issuer_end [expr {[string first "<" $SAMLdata $SAML_Issuer_start] - 1}]
        ACCESS::session data set session.saml.request.issuer [string range $SAMLdata $SAML_Issuer_start $SAML_Issuer_end]
    }
}

Changing Narrow Scope

Ken B had a couple of specific requirements for his scenario: change the server and path from the initial request, but retain the query. Sometimes the scale seems so much larger than it actually is, and once you get to a solution, like this one provided by Stanislas, you can really see the elegance and simplicity of iRules.

when HTTP_REQUEST {
    if {([HTTP::path] equals "/fubarpath/AMSDocViewer.asp") && ([HTTP::host] equals "server1")} {
        HTTP::respond 302 noserver Location "http://server2/apppath/ImageEnablingLauncher.aspx?[HTTP::query]"
    }
}

And there it is, another edition of 20 Lines or Less, the power of iRules on full display. Happy coding out there, and I look forward to seeing what excellent contributions might make the next edition!
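Bonus: if you want to poke at the same SAML extraction off-box, here is a rough Python equivalent of what the IdP iRule above does with a POSTed SAMLRequest. This assumes the HTTP-POST binding (plain base64; the redirect binding would additionally need deflate decompression), and the sample value is a placeholder:

import base64
import re

# URL-decoded SAMLRequest form field from an SP-initiated POST (placeholder)
samlrequest_param = '...'

xml = base64.b64decode(samlrequest_param)
match = re.search(r'<saml:Issuer[^>]*>([^<]+)<', xml, re.IGNORECASE)
if match:
    print match.group(1)   # the SP issuer / entity ID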