iControl REST

AS3 Best Practice
Introduction

AS3 is a declarative API that uses JSON key-value pairs to describe a BIG-IP configuration. From virtual IP to virtual server, to the members, pools, and nodes required, AS3 provides a simple, readable format in which to describe a configuration. Once you've got the configuration, all that's needed is to POST it to the BIG-IP, where the AS3 extension will happily accept it and execute the commands necessary to turn it into a fully functional, deployed BIG-IP configuration.

If you are new to AS3, start with the following references:

Products - Automation and orchestration toolchain (f5.com; product information)
Application Services 3 Extension Documentation (clouddocs; API documentation and guides)
F5 Application Services 3 Extension (AS3) (GitHub; source repository)

This article describes some considerations for deploying AS3 configurations efficiently.

Architecture

In the TMOS space, the services that AS3 provides are processed by a daemon named 'restnoded'. It relies on the existing BIG-IP framework for deploying declarations. The framework consists of httpd, restjavad and icrd_child as depicted below (the numbers in parentheses are listening TCP port numbers). These processes are also used by other services. For example, restjavad is a gateway for all iControl REST requests, and is used by a number of services on BIG-IP and BIG-IQ. When an interaction between any of the processes fails, the AS3 operation fails. The failures stem from lack of resources, timeouts, data exceeding predefined thresholds, resource contention among the services, and more. To complete AS3 operations successfully, it is advised to follow the Best Practice outlined below.

Best Practice

Your single source of truth is your declaration

Refrain from overwriting AS3-deployed BIG-IP configurations by other means such as TMSH, the GUI, or direct iControl REST calls. Once you start to use the AS3 declarative model, the source of truth for your device's configuration is your declaration, not the BIG-IP configuration files. Although AS3 tries to take the locally stored BIG-IP configuration into account as much as it can, a discrepancy between the declaration and the current configuration on BIG-IP may cause AS3 to perform less efficiently or error unexpectedly. When you wish to change a section of a tenant (e.g., a pool name change), modify the declaration and submit it.

Keep the number of applications in one tenant to a minimum

AS3 processes each tenant separately. Having too many applications (virtual servers) in a single tenant (partition) results in a lengthy poll when determining the current configuration. In extreme cases (thousands of virtuals), the action may time out. When you want to deploy a thousand or more applications on a single device, consider chunking the work for AS3 by spreading the applications across multiple tenants (say, 100 applications per tenant).

AS3 tenant access behavior mirrors BIG-IP partition behavior. A virtual in a non-Common partition cannot gain access to another partition's pool, and in the same way, an AS3 application does not have access to a pool or profile in another tenant. In order to share configuration across tenants, AS3 allows configuration of the "Shared" application within the "Common" tenant. AS3 avoids race conditions while configuring /Common/Shared by processing additions first and deletions last, as shown below. This dual process may cause some additional delay in declaration handling.
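To make the tenant/application structure concrete, here is a minimal sketch of POSTing a declaration with Python. The hostname, credentials, and tenant/application names are hypothetical, and the declaration is trimmed to the bare minimum; consult the AS3 schema reference for the full set of properties.

import json
import requests

# Hypothetical device and credentials
HOST = 'https://192.0.2.10'

# A bare-bones declaration: one tenant containing one HTTP application
declaration = {
    "class": "AS3",
    "action": "deploy",
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "Tenant_A": {
            "class": "Tenant",
            "App_1": {
                "class": "Application",
                "template": "http",
                "serviceMain": {
                    "class": "Service_HTTP",
                    "virtualAddresses": ["198.51.100.10"],
                    "pool": "web_pool"
                },
                "web_pool": {
                    "class": "Pool",
                    "members": [{"servicePort": 80,
                                 "serverAddresses": ["10.0.1.11"]}]
                }
            }
        }
    }
}

# POST the whole declaration; AS3 computes and applies the differences
resp = requests.post(HOST + '/mgmt/shared/appsvcs/declare',
                     auth=('admin', 'admin'),
                     json=declaration,
                     verify=False)  # lab only - use valid certificates in production
print(resp.status_code, json.dumps(resp.json(), indent=2))

Keeping each tenant to a manageable size, as described above, simply means splitting the applications across several Tenant-class objects in this structure.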
Overwrite rather than patching (POSTing is a more efficient practice than PATCHing)

AS3 is a stateless machine and is idempotent. It polls BIG-IP for its full configuration, performs a current-vs-desired state comparison, and generates an optimal set of REST calls to fill the differences. When the initial state of BIG-IP is blank, the poll time is negligible. This is why initial configuration with AS3 is often quicker than subsequent changes, especially when the tenant contains a large number of applications.

AS3 provides the means to partially modify using PATCH (see AS3 API Methods Details), but do not expect PATCH changes to be performant. AS3 processes each PATCH by (1) performing a GET to obtain the last declaration, (2) patching that declaration, and (3) POSTing the entire declaration to itself. A PATCH of one pool member is therefore slower than a POST of your entire tenant configuration. If you decide to use PATCH, make sure that the tenant configuration is a manageable size.

Note: Using PATCH to make a surgical change is convenient, but using PATCH over POST breaks the declarative model. Your declaration should be your single source of truth. If you include PATCH, the source of truth becomes "POST this file, then apply one or more PATCH declarations."

Get the latest version

AS3 is evolving rapidly with new features that customers have been wishing for, along with fixes for known issues. Visit the AS3 section of the F5 Networks GitHub. The Issues section shows what features and fixes have been incorporated. For BIG-IQ, check K54909607: BIG-IQ Centralized Management compatibility with F5 Application Services 3 Extension and F5 Declarative Onboarding for compatibility with BIG-IQ versions before installation.

Use administrator

Use a user with the administrator role when you submit your declaration to a target BIG-IP device. You may find your role insufficient to manipulate BIG-IP objects that are included in your declaration. Even one unauthorized item will cause the entire operation to fail and roll back. See the following articles for more on BIG-IP users and roles.

Manual Chapter: User Roles (12.x)
Manual Chapter: User Roles (13.x)
Manual Chapter: User Roles (14.x)
Prerequisites and Requirements (clouddocs AS3 document)

Use Basic Authentication for a large declaration

You can choose either Basic Authentication (HTTP Authorization header) or Token-Based Authentication (F5 proprietary X-F5-Auth-Token) for accessing BIG-IP. While Basic Authentication can be used any time, a token obtained for Token-Based Authentication expires after 1,200 seconds (20 minutes). While AS3 does re-request a new token upon expiry, it requires time to perform the operation, which may cause AS3 to slow down. Also, the number of tokens for a user is limited to 100 (since 13.1), so if you happen to have other iControl REST players (such as BIG-IQ or your custom iControl REST scripts) using Token-Based Authentication for the same user, AS3 may not be able to obtain the next token, and your request will fail. See the following articles for more on Token-Based Authentication.

Demystifying iControl REST Part 6: Token-Based Authentication (DevCentral article)
iControl REST Authentication Token Management (DevCentral article)
Authentication and Authorization (clouddocs AS3 document)
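As a quick illustration of the Basic Authentication option discussed above, the sketch below retrieves the current declaration with a plain Authorization header rather than a token. The hostname and credentials are placeholders.

import requests

# Hypothetical device; Basic Authentication avoids the 20-minute token
# lifetime and the 100-tokens-per-user limit described above
resp = requests.get('https://192.0.2.10/mgmt/shared/appsvcs/declare',
                    auth=('admin', 'admin'),  # sent as an HTTP Authorization header
                    verify=False)             # lab only

# 200 returns the last declaration; 204 means nothing is deployed yet
print(resp.status_code)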
Choose the best window for deployment

AS3 (the restnoded daemon) is a Control Plane process. It competes against other Control Plane processes such as monpd and iRules LX (node.js) for CPU/memory resources. AS3 uses the iControl REST framework for manipulating BIG-IP resources. This implies that its operation is impacted by any processes that use httpd (e.g., the GUI), restjavad, icrd_child and mcpd. If you have resource-hungry processes that run periodically (e.g., avrd), you may want to run your AS3 declaration during some other time window. See the following K articles for a list of these processes.

K89999342: BIG-IP Daemons (12.x)
K05645522: BIG-IP Daemons (v13.x)
K67197865: BIG-IP Daemons (v14.x)
K14020: BIG-IP ASM daemons (11.x - 15.x)
K14462: Overview of BIG-IP AAM daemons (11.x - 15.x)

Workarounds

If you experience issues such as a timeout on restjavad, it is possible that your AS3 operation ran into resource issues. If, after reviewing the Best Practice above, you are still unable to alleviate the problem, you may be able to temporarily fix it by applying the following tactics.

Increase the restjavad memory allocation

The memory size of restjavad can be increased with the following tmsh sys db commands:

tmsh modify sys db provision.extramb value <value>
tmsh modify sys db restjavad.useextramb value true

The provision.extramb db key changes the maximum Java heap memory to (192 + <value> * 8 / 10) MB. The default value is 0. For example, a <value> of 1000 yields a maximum heap of 192 + 1000 * 8 / 10 = 992 MB. After changing the memory size, you need to restart restjavad:

tmsh restart sys service restjavad

See the following article for more on the memory allocation: K26427018: Overview of Management provisioning

Increase the number of icrd_child processes

restjavad spawns a number of icrd_child processes depending on the load. The maximum number of icrd_child processes can be configured in /etc/icrd.conf. Please consult F5 Support for details. See the following article for more on icrd_child process verbosity: K96840770: Configuring the log verbosity for iControl REST API related to icrd_child

Decrease the verbosity levels of restjavad and icrd_child

Writing log messages to the file system is not free of charge. Writing an unnecessarily large volume of messages to files increases I/O wait, which results in slowness of processes. If you have raised the verbosity levels of restjavad and/or icrd_child, consider rolling back to the default levels. See the following article for methods to change the verbosity level: K15436: Configuring the verbosity for restjavad logs on the BIG-IP system
Demystifying iControl REST Part 5: Transferring Files

iControl REST. It's iControl SOAP's baby brother, introduced back in TMOS version 11.4 as an early access feature but released fully in version 11.5. Several articles on basic usage have been written on iControl REST, so the intent here isn't basic use, but rather to demystify some of the finer details of using the API. This article will cover the details of how to transfer files to/from the BIG-IP using iControl REST and the Python programming language. (Note: this functionality requires 12.0+.)

The REST File Transfer Worker

The file transfer worker allows a client to transfer files through a series of GET operations for downloads and POST operations for uploads. The Content-Range header is used for both as a means to chunk the content. The worker listens on the following interfaces (method, URI, and the file location on the BIG-IP):

Download a File: GET /mgmt/cm/autodeploy/software-image-downloads/ (file location: /shared/images/)
Upload an Image File: POST /mgmt/cm/autodeploy/software-image-uploads/ (file location: /shared/images/)
Upload a File: POST /mgmt/shared/file-transfer/uploads/ (file location: /var/config/rest/downloads/)
Download a QKView: GET /mgmt/shared/file-transfer/qkview-downloads/ (file location: /var/tmp/)
Download a UCS: GET /mgmt/shared/file-transfer/ucs-downloads/ (file location: /var/local/ucs/)
Upload ASM Policy: POST /mgmt/tm/asm/file-transfer/uploads/ (file location: /var/ts/var/rest/)
Download ASM Policy: GET /mgmt/tm/asm/file-transfer/downloads/ (file location: /var/ts/var/rest/)

Binary and text files are supported. The magic in the transfer is the Content-Range header, which has the following format:

Content-Range: start-end/filesize

where start/end are the chunk's delimiters in the file and filesize is, well, the file size. Any file larger than 1MB needs to be chunked with this header, as that limit is enforced by the worker. This is done to avoid potential denial of service attacks and out of memory errors (a quick sketch of the chunk math follows the list below). There are benefits to chunking as well:

Accurate progress bars
Resuming interrupted downloads
Random access to file content possible
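Since the Content-Range math drives everything else in this article, it helps to see it in isolation. The helper below is a minimal sketch (not part of the original functions) that yields the header value for each chunk of a given file size, using the same inclusive start-end convention the upload code uses:

def content_ranges(filesize, chunk_size=512 * 1024):
    """Yield Content-Range header values covering a file of filesize bytes."""
    start = 0
    while start < filesize:
        # The end delimiter is inclusive, hence the -1
        end = min(start + chunk_size, filesize) - 1
        yield '%s-%s/%s' % (start, end, filesize)
        start = end + 1

# A 1.2MB file split into 512KB chunks:
for header in content_ranges(1258291):
    print(header)   # 0-524287/1258291, 524288-1048575/1258291, 1048576-1258290/1258291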
Uploading a File

The function is shown below. Note that whereas normally with the REST API the Content-Type is application/json, with file transfers that changes to application/octet-stream. The workflow for the function works like this (code line numbers in parentheses):

Set the chunk size (3)
Set the Content-Type header (4-6)
Open the file (7)
Get the filename (apart from the path) from the absolute path (8)
If the extension is an .iso file (image), put it in /shared/images, otherwise it'll go in /var/config/rest/downloads (9-12)
Disable ssl warnings in requests (required with my version: 2.8.1. YMMV) (14)
Set the total file size for use with the Content-Range header (15)
Set the start variable to 0 (17)
Begin the loop to iterate through the file and upload in chunks (19)
Read data from the file and if there is no more data, break the loop (20-22)
Set the current bytes read; if less than the chunk size, then this is the last chunk, so set the end to the size from step 7. Otherwise, add the current bytes length to the start value and set that as the end. (24-28)
Set the Content-Range header value and then add that to the header (30-31)
Make the POST request, uploading the content chunk (32-36)
Increment the start value by the current bytes content length (38)

def _upload(host, creds, fp):
    chunk_size = 512 * 1024
    headers = {
        'Content-Type': 'application/octet-stream'
    }
    fileobj = open(fp, 'rb')
    filename = os.path.basename(fp)
    if os.path.splitext(filename)[-1] == '.iso':
        uri = 'https://%s/mgmt/cm/autodeploy/software-image-uploads/%s' % (host, filename)
    else:
        uri = 'https://%s/mgmt/shared/file-transfer/uploads/%s' % (host, filename)

    requests.packages.urllib3.disable_warnings()
    size = os.path.getsize(fp)
    start = 0

    while True:
        file_slice = fileobj.read(chunk_size)
        if not file_slice:
            break

        current_bytes = len(file_slice)
        if current_bytes < chunk_size:
            end = size
        else:
            end = start + current_bytes

        content_range = "%s-%s/%s" % (start, end - 1, size)
        headers['Content-Range'] = content_range
        requests.post(uri, auth=creds, data=file_slice, headers=headers, verify=False)

        start += current_bytes

Downloading a File

Downloading is very similar, but there are some differences. Here is the workflow that is different, followed by the code. Note that the local path where the file will be downloaded to is given as part of the filename.

URI is set to the downloads worker. The only supported download directory at this time is /shared/images. (8)
Open the local file so received data can be written to it (11)
Make the request (22-26)
If the response code is 200 and if size is greater than 0, increment the current bytes and write the data to file, otherwise exit the loop (28-40)
Set the value of the returned Content-Range header to crange and, if initial size (0), set the file size to the size variable (42-46)
If the file is smaller than the chunk size, adjust the chunk size down to the total file size and continue (51-55)
Do the math to get ready to download the next chunk (57-62)

def _download(host, creds, fp):
    chunk_size = 512 * 1024
    headers = {
        'Content-Type': 'application/octet-stream'
    }
    filename = os.path.basename(fp)
    uri = 'https://%s/mgmt/cm/autodeploy/software-image-downloads/%s' % (host, filename)
    requests.packages.urllib3.disable_warnings()

    with open(fp, 'wb') as f:
        start = 0
        end = chunk_size - 1
        size = 0
        current_bytes = 0

        while True:
            content_range = "%s-%s/%s" % (start, end, size)
            headers['Content-Range'] = content_range
            #print headers
            resp = requests.get(uri, auth=creds, headers=headers, verify=False, stream=True)

            if resp.status_code == 200:
                # If the size is zero, then this is the first time through the
                # loop and we don't want to write data because we haven't yet
                # figured out the total size of the file.
                if size > 0:
                    current_bytes += chunk_size
                    for chunk in resp.iter_content(chunk_size):
                        f.write(chunk)

                # Once we've downloaded the entire file, we can break out of
                # the loop
                if end == size:
                    break

            crange = resp.headers['Content-Range']

            # Determine the total number of bytes to read
            if size == 0:
                size = int(crange.split('/')[-1]) - 1

                # If the file is smaller than the chunk size, BIG-IP will
                # return an HTTP 400. So adjust the chunk_size down to the
                # total file size...
                if chunk_size > size:
                    end = size

                # ...and pass on the rest of the code
                continue

            start += chunk_size

            if (current_bytes + chunk_size) > size:
                end = size
            else:
                end = start + chunk_size - 1
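For completeness, calling these helpers might look like the following. This is a minimal sketch: the host, credentials, and file paths are hypothetical, and the imports shown are the ones the functions above rely on.

import os
import requests

# Hypothetical BIG-IP and credentials
HOST = '172.16.44.15'
CREDS = ('admin', 'admin')

# Upload an image; anything not ending in .iso lands in /var/config/rest/downloads
_upload(HOST, CREDS, '/home/user/BIGIP-12.1.0.0.0.1434.iso')

# Download it again; the local path doubles as the remote filename lookup
_download(HOST, CREDS, '/tmp/BIGIP-12.1.0.0.0.1434.iso')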
Now you know how to upload and download files. Let's do something with it!

A Use Case - Upload Cert & Key to BIG-IP and Create a Clientssl Profile!

This whole effort was sparked by a use case in Q&A, so I had to deliver the goods with more than just moving files around. The complete script is linked at the bottom, but there are a few steps required to get to a clientssl certificate:

Upload the key & certificate
Create the file object for key/cert
Create the clientssl profile

You know how to do step 1 now. Step 2 is to create the file object for the key and certificate. After a quick test to see which file is the certificate, you set both files, build the payload, then make the POST requests to bind the uploaded files to the file object.

def create_cert_obj(bigip, b_url, files):
    f1 = os.path.basename(files[0])
    f2 = os.path.basename(files[1])
    if f1.endswith('.crt'):
        certfilename = f1
        keyfilename = f2
    else:
        keyfilename = f1
        certfilename = f2
    certname = f1.split('.')[0]

    payload = {}
    payload['command'] = 'install'
    payload['name'] = certname

    # Map Cert to File Object
    payload['from-local-file'] = '/var/config/rest/downloads/%s' % certfilename
    bigip.post('%s/sys/crypto/cert' % b_url, json.dumps(payload))

    # Map Key to File Object
    payload['from-local-file'] = '/var/config/rest/downloads/%s' % keyfilename
    bigip.post('%s/sys/crypto/key' % b_url, json.dumps(payload))

    return certfilename, keyfilename

Notice we return the key/cert filenames so they can be used for step 3 to establish the clientssl profile. In this example, I name the file object and the clientssl profile after the certfilename (minus the extension), but you can alter this to allow the object names to be provided. To build the profile, just create the payload with the custom key/cert and make the POST request and you are done!

def create_ssl_profile(bigip, b_url, certname, keyname):
    payload = {}
    payload['name'] = certname.split('.')[0]
    payload['cert'] = certname
    payload['key'] = keyname
    bigip.post('%s/ltm/profile/client-ssl' % b_url, json.dumps(payload))
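Tying the three steps together might look something like the sketch below. The bigip session object and b_url base are assumptions on my part (the complete linked script builds its own; here a requests.Session stands in), as are the host, credentials, and file paths:

import requests

HOST = '172.16.44.15'                  # hypothetical BIG-IP
b_url = 'https://%s/mgmt/tm' % HOST    # assumed base URL for the tm endpoints

# A requests.Session standing in for the linked script's bigip object;
# create_cert_obj also needs the os and json imports from that script
bigip = requests.Session()
bigip.auth = ('admin', 'admin')
bigip.verify = False
bigip.headers.update({'Content-Type': 'application/json'})

files = ('/home/user/www.crt', '/home/user/www.key')  # hypothetical cert/key pair

# Step 1: upload both files; Step 2: bind them to file objects; Step 3: build the profile
for f in files:
    _upload(HOST, bigip.auth, f)
certname, keyname = create_cert_obj(bigip, b_url, files)
create_ssl_profile(bigip, b_url, certname, keyname)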
Much thanks to Tim Rupp, who helped me get across the finish line with some counting and REST worker errors we were troubleshooting on the download function.

Get the Code

Upload a File
Download a File
Upload Cert/Key & Build a Clientssl Profile

Getting Started with the f5-common-python SDK

If you have dabbled with python and iControl over the years, you might be familiar with some of my other "Getting Started with ..." articles on python libraries. I started my last, on Bigsuds, this way:

I imagine the progression for you, the reader, will be something like this in the first six- or seven-hundred milliseconds after reading the title: Oh cool! Wait, what? Don't we already have like two libraries for python? Really, a third library for python?

It's past time to update those numbers, as the fourth library in our python support evolution, the f5-common-python SDK, has been available since March of last year! I still love Bigsuds, but it only supports the iControl SOAP interface. The f5-common-python SDK is under continuous development in support of the iControl REST interface, and like Bigsuds, does a lot of the API heavy lifting for you so you can just focus on the logic of bending BIG-IP configuration to your will. Not all endpoints are supported yet, but please feel free to open an issue on the GitHub repo if there's something missing you need for your project. In this article, I'll cover the basics of installing the SDK and how to utilize the core functionality.

Installing the SDK

This section is going to be really short, as the SDK is uploaded to PyPI after each release, though you can clone the GitHub project and run the development branch with the latest features if you so desire. I'd recommend installing in a virtual environment to keep your system python uncluttered, but YMMV.

pip install f5-sdk

A simple one-liner and we're done! Moving on...

Instantiating BIG-IP

The first thing you'll want to do with your shiny new toy is authenticate to the BIG-IP. You can use basic or token authentication to do so. I disable the certificate security warnings on my test boxes, but the first two lines in the sample code below are not necessary if you are using valid certificates.

>>> import requests
>>> requests.packages.urllib3.disable_warnings()
>>> from f5.bigip import ManagementRoot
>>> # Basic Authentication
>>> b = ManagementRoot('ltm3.test.local', 'admin', 'admin')
>>> # Token Authentication
>>> b = ManagementRoot('ltm3.test.local', 'admin', 'admin', token=True)
>>> b.tmos_version
u'12.1.0'

The b object has credentials attached and various other attributes as well, such as the tmos_version attribute shown above. This is the root object you'll use (of course you don't have to call it b, you can call it plutoWillAlwaysBeAPlanetToMe if you want to, but that's a lot more typing) for all the modules you might interact with on the system.

Nomenclature

The method mappings are tied to the tmsh and REST URL ids. Consider the tmsh command tmsh list /ltm pool. In the URL, this would be https://ip/mgmt/tm/ltm/pool. For the SDK, at the collection level the command would be b.tm.ltm.pools. It's plural here because we are signifying the collection. If there is a collection already ending in an s, like the subcollection of a pool in members, it would be addressed as members_s. This will be more clear as we work through examples in later articles, but I wanted to provide a little guidance before moving on.
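To make the members_s convention concrete before moving on, here's a quick sketch. It assumes an existing pool named mypool in the Common partition and uses the b object from above; the load/get_collection methods are covered in the next sections.

# "members_s" addresses the members subcollection of a pool;
# each item within it is an individual pool member resource
pool = b.tm.ltm.pools.pool.load(name='mypool', partition='Common')
for member in pool.members_s.get_collection():
    print(member.name)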
Working with Collections

There are two types of collections (well, three if you include subcollections, but we'll cover those in a later article): organizing collections and collections. An organizing collection is a superset of other collections. For example, the ltm or net module listing would be an organizing collection, whereas ltm/pool or net/vlan would be collections. To retrieve either type, you use the get_collection method as shown below, with abbreviated output.

# The LTM Organizing Collection
>>> for x in b.tm.ltm.get_collection():
...     print x
...
{u'reference': {u'link': u'https://localhost/mgmt/tm/ltm/auth?ver=12.1.0'}}
{u'reference': {u'link': u'https://localhost/mgmt/tm/ltm/data-group?ver=12.1.0'}}
{u'reference': {u'link': u'https://localhost/mgmt/tm/ltm/dns?ver=12.1.0'}}

# The Net/Vlan Collection:
>>> vlans = b.tm.net.vlans.get_collection()
>>> for vlan in vlans:
...     print vlan.name
...
vlan10
vlan102
vlan103

Working with Named Resources

A named resource, like a pool, vip, or vlan, is a fully configurable object for which the CURDLE methods are supported. These methods are:

create()
update()
refresh()
delete()
load()
exists()

Let's work through all these methods with a pool object.

>>> b.tm.ltm.pools.pool.exists(name='mypool2017', partition='Common')
False
>>> p1 = b.tm.ltm.pools.pool.create(name='mypool2017', partition='Common')
>>> p2 = b.tm.ltm.pools.pool.load(name='mypool2017', partition='Common')
>>> p1.loadBalancingMode = 'least-connections-member'
>>> p1.update()
>>> assert p1.loadBalancingMode == p2.loadBalancingMode
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError
>>> p2.refresh()
>>> assert p1.loadBalancingMode == p2.loadBalancingMode
>>> p1.delete()
>>> b.tm.ltm.pools.pool.exists(name='mypool2017', partition='Common')
False

Notice in line 1, I am looking to see if the pool called mypool2017 exists, to which I get a return value of False. So I can go ahead and create that pool as shown in line 3. In line 4, I load the same pool so I have two local python objects (p1, p2) that reference the same BIG-IP pool (mypool2017.) In line 5, I update the load balancing algorithm from the default of round robin to least connections member. But at this point, only the local python object has been updated. To update the BIG-IP, in line 6 I apply that method to the object. Now if I assert the LB algorithm between the local p1 and p2 python objects as shown in line 7, it fails, because we have updated p1, but p2 is still as it was when I initially loaded it. Refreshing p2 as shown in line 11 will update it (the local python object, not the BIG-IP pool.) Now I assert again in line 12, and it does not fail. As this was just an exercise, I delete the new pool (could be done on p1 or p2 since they reference the same BIG-IP object) in line 13, and a quick check to see if it exists in line 14 returns false. The great thing is that even though the endpoints change from pool to virtual to rule and so on, the methods used for them do not.

Next Steps

This is just the tip of the iceberg! There is much more to cover, so come back for the next installment, where we'll cover unnamed resources and commands. If you can't wait, feel free to dig into the SDK documentation.
iControl REST Cookbook - Virtual Server (ltm virtual)

This cookbook lists selected ready-to-use iControl REST curl commands for virtual-server related resources. Each recipe consists of the curl command, its tmsh equivalent, and sample output. In this cookbook, the following curl options are used.

-s : Suppress progress meter. Handy when you want to pipe the output.
-k : Allows "insecure" SSL connections.
-u : Specify user ID and password. For the start, you should use the "admin" account that you normally use to access the Configuration Utility. When you specify the password at the same time, concatenate with ":", e.g., admin:admin.
-X <method> : Specify the HTTP method. When omitted, the default is GET. In the REST framework, POST means create (tmsh create), PATCH means overwriting the existing resource with the data sent (tmsh modify), and PUT is for merging (ditto).
-H <Header> : Specify the request header. When you send (POST, PATCH, PUT) data, you need to tell the server that the data is in JSON format, i.e., -H "Content-Type: application/json".
-d 'data' : The JSON data to send. Note that you need to quote the entire JSON blob, and each "name":"value" pair must be quoted. When you have nested quotes, make sure you escape (\) them.

Get information of the virtual <vs>

tmsh list ltm <vs>

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>

Sample Output

{
  kind: 'tm:ltm:virtual:virtualstate',
  name: 'vs',
  fullPath: 'vs',
  generation: 1109,
  selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs?ver=12.1.0',
  addressStatus: 'yes',
  autoLasthop: 'default',
  cmpEnabled: 'yes',
  connectionLimit: 0,
  description: 'TestData',
  destination: '/Common/192.168.184.226:80',
  enabled: true,
  gtmScore: 0,
  ipProtocol: 'tcp',
  mask: '255.255.255.255',
  mirror: 'disabled',
  mobileAppTunnel: 'disabled',
  nat64: 'disabled',
  pool: '/Common/vs-pool',
  poolReference: { link: 'https://localhost/mgmt/tm/ltm/pool/~Common~vs-pool?ver=12.1.0' },
  rateLimit: 'disabled',
  rateLimitDstMask: 0,
  rateLimitMode: 'object',
  rateLimitSrcMask: 0,
  serviceDownImmediateAction: 'none',
  source: '0.0.0.0/0',
  sourceAddressTranslation: { type: 'automap' },
  sourcePort: 'preserve',
  synCookieStatus: 'not-activated',
  translateAddress: 'enabled',
  translatePort: 'enabled',
  vlansDisabled: true,
  vsIndex: 4,
  rules: [ '/Common/irule' ],
  rulesReference: [ { link: 'https://localhost/mgmt/tm/ltm/rule/~Common~iRuleTest?ver=12.1.0' } ],
  policiesReference: { link: 'https://localhost/mgmt/tm/ltm/virtual/~Common~vs/policies?ver=12.1.0', isSubcollection: true },
  profilesReference: { link: 'https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles?ver=12.1.0', isSubcollection: true }
}

Get only a specific field of the virtual <vs>

The naming convention for the parameters is slightly different from the ones on tmsh, so look for the familiar names in the GET response above. The example below queries the Default Pool (pool).
tmsh list ltm <vs> pool

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>?options=pool

Sample Output

{
  kind: 'tm:ltm:virtual:virtualstate',
  name: 'vs',
  fullPath: 'vs',
  generation: 1,
  selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs?options=pool&ver=12.1.1',
  pool: '/Common/vs-pool',
  poolReference: { link: 'https://localhost/mgmt/tm/ltm/pool/~Common~vs-pool?ver=12.1.1' }
}

Get all the information of the virtual <vs>

Unlike the tmsh equivalent, iControl REST GET does not return the configuration information of the attached policies and profiles. To see them, use expandSubcollections.

tmsh list ltm <vs>

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>?expandSubcollections=true

Sample Output

{
  "addressStatus": "yes",
  "autoLasthop": "default",
  "cmpEnabled": "yes",
  "connectionLimit": 0,
  "destination": "/Common/192.168.184.240:80",
  "enabled": true,
  "fullPath": "vs",
  "generation": 291,
  "gtmScore": 0,
  "ipProtocol": "tcp",
  "kind": "tm:ltm:virtual:virtualstate",
  "mask": "255.255.255.255",
  "mirror": "disabled",
  "mobileAppTunnel": "disabled",
  "name": "vs",
  "nat64": "disabled",
  "policiesReference": {
    "isSubcollection": true,
    "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/policies?ver=13.1.0"
  },
  "pool": "/Common/CentOS-all80",
  "poolReference": {
    "link": "https://localhost/mgmt/tm/ltm/pool/~Common~CentOS-all80?ver=13.1.0"
  },
  "profilesReference": {
    "isSubcollection": true,
    "items": [
      {
        "context": "all",
        "fullPath": "/Common/http",
        "generation": 291,
        "kind": "tm:ltm:virtual:profiles:profilesstate",
        "name": "http",
        "nameReference": {
          "link": "https://localhost/mgmt/tm/ltm/profile/http/~Common~http?ver=13.1.0"
        },
        "partition": "Common",
        "selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles/~Common~http?ver=13.1.0"
      },
      {
        "context": "all",
        "fullPath": "/Common/tcp",
        "generation": 287,
        "kind": "tm:ltm:virtual:profiles:profilesstate",
        "name": "tcp",
        "nameReference": {
          "link": "https://localhost/mgmt/tm/ltm/profile/tcp/~Common~tcp?ver=13.1.0"
        },
        "partition": "Common",
        "selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles/~Common~tcp?ver=13.1.0"
      }
    ],
    "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles?ver=13.1.0"
  },
  "rateLimit": "disabled",
  "rateLimitDstMask": 0,
  "rateLimitMode": "object",
  "rateLimitSrcMask": 0,
  "selfLink": "https://localhost/mgmt/tm/ltm/virtual/vs?expandSubcollections=true&ver=13.1.0",
  "serviceDownImmediateAction": "none",
  "source": "0.0.0.0/0",
  "sourceAddressTranslation": {
    "type": "automap"
  },
  "sourcePort": "preserve",
  "synCookieStatus": "not-activated",
  "translateAddress": "enabled",
  "translatePort": "enabled",
  "vlansDisabled": true,
  "vsIndex": 2
}

Get stats of the virtual <vs>

tmsh show ltm <vs>

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>/stats

Sample Output

{
  kind: 'tm:ltm:virtual:virtualstats',
  generation: 1109,
  selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs/stats?ver=12.1.0',
  entries: {
    'https://localhost/mgmt/tm/ltm/virtual/vs/~Common~vs/stats': {
      nestedStats: {
        kind: 'tm:ltm:virtual:virtualstats',
        selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs/~Common~vs/stats?ver=12.1.0',
        entries: {
          'clientside.bitsIn': { value: 12880 },
          'clientside.bitsOut': { value: 34592 },
          'clientside.curConns': { value: 0 },
          'clientside.evictedConns': { value: 0 },
          'clientside.maxConns': { value: 2 },
          'clientside.pktsIn': { value: 26 },
          'clientside.pktsOut': { value: 26 },
          'clientside.slowKilled': { value: 0 },
          'clientside.totConns': { value: 6 },
          cmpEnableMode: { description: 'all-cpus' },
          cmpEnabled: {
            description: 'enabled' },
          csMaxConnDur: { value: 37 },
          csMeanConnDur: { value: 29 },
          csMinConnDur: { value: 17 },
          destination: { description: '192.168.184.226:80' },
          'ephemeral.bitsIn': { value: 0 },
          'ephemeral.bitsOut': { value: 0 },
          'ephemeral.curConns': { value: 0 },
          'ephemeral.evictedConns': { value: 0 },
          'ephemeral.maxConns': { value: 0 },
          'ephemeral.pktsIn': { value: 0 },
          'ephemeral.pktsOut': { value: 0 },
          'ephemeral.slowKilled': { value: 0 },
          'ephemeral.totConns': { value: 0 },
          fiveMinAvgUsageRatio: { value: 0 },
          fiveSecAvgUsageRatio: { value: 0 },
          tmName: { description: '/Common/vs' },
          oneMinAvgUsageRatio: { value: 0 },
          'status.availabilityState': { description: 'available' },
          'status.enabledState': { description: 'enabled' },
          'status.statusReason': { description: 'The virtual server is available' },
          syncookieStatus: { description: 'not-activated' },
          'syncookie.accepts': { value: 0 },
          'syncookie.hwAccepts': { value: 0 },
          'syncookie.hwSyncookies': { value: 0 },
          'syncookie.hwsyncookieInstance': { value: 0 },
          'syncookie.rejects': { value: 0 },
          'syncookie.swsyncookieInstance': { value: 0 },
          'syncookie.syncacheCurr': { value: 0 },
          'syncookie.syncacheOver': { value: 0 },
          'syncookie.syncookies': { value: 0 },
          totRequests: { value: 4 }
        }
      }
    }
  }
}

Change one of the configuration options of the virtual <vs>

The command below changes the Description field of the virtual ("description" in tmsh and iControl REST).

tmsh modify ltm virtual <vs> description "Hello World!"

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs> \
  -X PATCH -H "Content-Type: application/json" \
  -d '{"description": "Hello World!"}'

Sample Output

{
  kind: 'tm:ltm:virtual:virtualstate',
  name: 'vs',
  ...
  description: 'Hello World!',   <==== Changed.
  ...
}

Disable the virtual <vs>

The command syntax is the same as above: to change the state of a virtual from "enabled" to "disabled", send "disabled":true. For enabling the virtual, use "enabled":true. Note that the Boolean type true/false does not require quotations.

tmsh modify ltm virtual <vs> disabled

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs> \
  -X PATCH -H "Content-Type: application/json" \
  -d '{"disabled": true}'

Sample Output

{
  kind: 'tm:ltm:virtual:virtualstate',
  name: 'vs',
  fullPath: 'vs',
  ...
  disabled: true,   <== Changed
  ...
}

Add another iRule to <vs>

When the virtual has iRules already attached, you need to send the existing ones too along with the additional one. For example, to add /Common/testRule2 to the virtual with /Common/testRule1, specify both in an array (square brackets). Note that the /Common/testRule2 iRule object should already be created.

tmsh modify ltm virtual <vs> rules {testRule1 testRule2}

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs> \
  -X PATCH -H "Content-Type: application/json" \
  -d '{"rules": ["/Common/testRule1", "/Common/testRule2"] }'

Sample Output

{
  kind: 'tm:ltm:virtual:virtualstate',
  name: 'vs',
  fullPath: 'vs',
  ...
  rules: [ '/Common/test1', '/Common/test2' ],   <== Changed
  rulesReference: [
    { link: 'https://localhost/mgmt/tm/ltm/rule/~Common~test1?ver=12.1.1' },
    { link: 'https://localhost/mgmt/tm/ltm/rule/~Common~test2?ver=12.1.1' }
  ],
  ...
}
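If you are scripting these recipes rather than typing them at a shell, they translate directly to Python. As one hedged example, here is the description change from the modify recipe above, with a placeholder host and credentials:

import requests

# Same operation as the curl PATCH recipe above; requests sets the
# Content-Type: application/json header automatically when json= is used
resp = requests.patch('https://192.0.2.10/mgmt/tm/ltm/virtual/vs',
                      auth=('admin', 'admin'),
                      json={'description': 'Hello World!'},
                      verify=False)  # lab only
print(resp.json()['description'])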
Create a new virtual <vs>

You can create a skeleton virtual by specifying only Destination Address and Mask. The remaining parameters, such as profiles, are set to defaults. You can later modify the parameters by PATCH-ing.

tmsh create ltm virtual <vs> destination <ip:port> mask <ip>

curl -sku admin:admin -X POST -H "Content-Type: application/json" \
  -d '{"name": "vs", "destination":"192.168.184.230:80", "mask":"255.255.255.255"}' \
  https://<host>/mgmt/tm/ltm/virtual

Sample Output

{
  kind: 'tm:ltm:virtual:virtualstate',
  name: 'vs',
  partition: 'Common',
  fullPath: '/Common/vs',
  ...
  destination: '/Common/192.168.184.230:80',   <== Created
  ...
  mask: '255.255.255.255',   <== Created
  ...
}

Create a new virtual <vs> with a lot of parameters

You can specify all the essential parameters upon creation. This example creates a new virtual with pool, default persistence profile, profiles, iRule, and source address translation. The call fails if any of the parameters conflict. For example, you cannot specify "Cookie Persistence" without specifying appropriate profiles. If you do not specify any profile, it falls back to the default fastL4, which is not compatible with Cookie Persistence.

tmsh create ltm virtual <vs> destination <ip:port> mask <ip> pool <pool> persist replace-all-with { cookie } profiles add { tcp http clientssl } rules { <rule> } source-address-translation { type automap }

curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual -H "Content-Type: application/json" -X POST -d '{"name": "vs", \
  "destination": "10.10.10.10:10", \
  "mask": "255.255.255.255", \
  "pool": "CentOS-all80", \
  "persist": [ {"name": "cookie"} ], \
  "profilesReference": {"items": [ {"context": "all", "name": "http"}, {"context": "all", "name": "tcp"}, {"context": "clientside", "name": "clientssl"}] }, \
  "rules": [ "ShowVersion" ], \
  "sourceAddressTranslation": {"type": "automap"} }'

Sample Output

{
  "addressStatus": "yes",
  "autoLasthop": "default",
  "cmpEnabled": "yes",
  "connectionLimit": 0,
  "destination": "/Common/10.10.10.10:10",
  "enabled": true,
  "fullPath": "/Common/test",
  "generation": 592,
  "gtmScore": 0,
  "ipProtocol": "tcp",
  "kind": "tm:ltm:virtual:virtualstate",
  "mask": "255.255.255.255",
  "mirror": "disabled",
  "mobileAppTunnel": "disabled",
  "name": "vs",
  "nat64": "disabled",
  "partition": "Common",
  "persist": [
    {
      "name": "cookie",
      "nameReference": {
        "link": "https://localhost/mgmt/tm/ltm/persistence/cookie/~Common~cookie?ver=13.1.0"
      },
      "partition": "Common",
      "tmDefault": "yes"
    }
  ],
  "policiesReference": {
    "isSubcollection": true,
    "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~test/policies?ver=13.1.0"
  },
  "pool": "/Common/CentOS-all80",
  "poolReference": {
    "link": "https://localhost/mgmt/tm/ltm/pool/~Common~CentOS-all80?ver=13.1.0"
  },
  "profilesReference": {
    "isSubcollection": true,
    "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~test/profiles?ver=13.1.0"
  },
  "rateLimit": "disabled",
  "rateLimitDstMask": 0,
  "rateLimitMode": "object",
  "rateLimitSrcMask": 0,
  "rules": [ "/Common/ShowVersion" ],
  "rulesReference": [
    {
      "link": "https://localhost/mgmt/tm/ltm/rule/~Common~ShowVersion?ver=13.1.0"
    }
  ],
  "selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~test?ver=13.1.0",
  "serviceDownImmediateAction": "none",
  "source": "0.0.0.0/0",
  "sourceAddressTranslation": { "type": "automap" },
  "sourcePort": "preserve",
  "synCookieStatus": "not-activated",
  "translateAddress": "enabled",
  "translatePort": "enabled",
  "vlansDisabled": true,
  "vsIndex": 52
}

Delete a virtual <vs>

tmsh delete ltm virtual <vs>

curl -sku admin:admin https://192.168.226.55/mgmt/tm/ltm/virtual/<vs> -X DELETE

Sample Output

No output (just 200 OK and no response body)
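Putting a few of these recipes together, a create-verify-delete round trip in Python might look like the following sketch (the host and credentials are placeholders):

import requests

BASE = 'https://192.0.2.10/mgmt/tm/ltm/virtual'
AUTH = ('admin', 'admin')

# Create a skeleton virtual, confirm it exists, then remove it again
requests.post(BASE, auth=AUTH, verify=False,
              json={'name': 'vs', 'destination': '192.168.184.230:80',
                    'mask': '255.255.255.255'})
print(requests.get(BASE + '/vs', auth=AUTH, verify=False).json()['destination'])
requests.delete(BASE + '/vs', auth=AUTH, verify=False)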
References

curl.1 the man page
curl Releases and Downloads ... including the port for Windows
Jason Rahm's "Demystifying iControl REST" series (DevCentral) -- This is Part I of 7 at the time of this article.
iControl REST API reference (DevCentral)
iControl® REST API User Guide (DevCentral) -- Link is for 12.1. Search for the older versions.

Generate private key w/ CSR via iControl REST
Problem this snippet solves:

Generate a private key with a CSR.

How to use this snippet:

To create a private key with a CSR via iControl REST, POST to:

https://10.1.1.165/mgmt/tm/sys/crypto/key

Use the data below as your payload. The name field must end in .key or you will get a false 404!

Code:

{
  "name": "www.testing.com.key",
  "commonName": "www.testing.com",
  "keySize": "4096",
  "keyType": "rsa-private",
  "options": [{"gen-csr": "www.testing.com"}],
  "organization": "Let It Snow Corp.",
  "ou": "Ice Engineering",
  "city": "Calhoun",
  "state": "AZ",
  "admin-email-address": "jerry@letit.snow",
  "email-address": "beth@letit.snow",
  "subject-alternative-name": "DNS:www.testing.com",
  "challenge-password": "myP4ssword"
}
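A minimal sketch of POSTing that payload with Python requests follows; the credentials are placeholders and the payload is trimmed to a subset of the fields above for brevity:

import requests

payload = {
    'name': 'www.testing.com.key',        # must end in .key, per the note above
    'commonName': 'www.testing.com',
    'keySize': '4096',
    'keyType': 'rsa-private',
    'options': [{'gen-csr': 'www.testing.com'}],
}

# Placeholder credentials; the host matches the URL in the snippet above
resp = requests.post('https://10.1.1.165/mgmt/tm/sys/crypto/key',
                     auth=('admin', 'admin'), json=payload, verify=False)
print(resp.status_code)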
Tested this on version: 13.0

Using Cryptonice to check for HTTPS misconfigurations in devsecops workflows

Co-Author: Katie Newbold, F5 Labs Intern, Summer 2020

A huge thanks to Katie Newbold, lead developer on the Cryptonice project, for her amazing work and patience as I constantly moved the goal posts for this project.

F5 Labs recently published the article Introducing the Cryptonice HTTPS Scanner. Cryptonice is aimed at making it easy for everyone to scan for and diagnose problems with HTTPS configurations. It is provided as a command line tool and Python library that allows a user to examine the TLS protocols and ciphers, certificate information, web application headers and DNS records for one or more supplied domain names. You can read more about why Cryptonice was released over at F5 Labs, but it basically boils down to a few simple reasons. Primarily, many people fire-and-forget their HTTPS configurations, which means they become out of date, and therefore weak, over time. In addition, other protocols, such as DNS, can be used to improve upon the strength of TLS, but few sites make use of them. Finally, with an increasing shift to automation (i.e. devsecops) it's important to integrate TLS testing into the application lifecycle.

How do I use Cryptonice?

Since the tool is available as an executable, a Python script, and a Python library, there are a number of ways and means in which you might use Cryptonice. For example:

The executable may be useful for those that do not have Python 3 installed and who want to perform occasional ad-hoc scans against internal or external websites
The Python script may be installed alongside other Python tools which could allow an internal security team to perform regular and scriptable scanning of internal sites
The Python library could be used within devops automation workflows to check for valid certificates, protocols and ciphers when new code is pushed into dev or production environments

The aforementioned F5 Labs article provides a quick overview of how to use the command line executable and Python script. But this is DevCentral, after all, so let's focus on how to use the Python library in your own code.

Using Cryptonice in your own code

Cryptonice can output results to the console but, since we're coding, we'll focus on the detailed JSON output that it produces. Since it collects all scan and test results into a Python dictionary, this variable can be read directly, or your code may wish to read in the completed JSON output. More on this later.

First off we'll need to install the Cryptonice library. With Python 3 installed, we simply use the pip command to download and install it, along with its dependencies.

pip install cryptonice

Installing Cryptonice using pip will also install the dependent libraries: cffi, cryptography, dnspython, http-client, ipaddress, nassl, pycurl, pycparser, six, sslyze, tls-parser, and urllib3. You may see a warning about the cryptography library installation if you have a version that is greater than 2.9, but cryptonice will still function. This warning is generated because the sslyze package currently requires the cryptography library version to be between 2.6 and 2.9.
Creating a simple Cryptonice script

An example script (sample_script.py) is included in the GitHub repository. In this example, the script reads in a fully formatted JSON file called sample_scan.json from the command line (see below) and outputs the results into a JSON file whose filename is based on the site being scanned. The only Cryptonice module that needs to be imported in this script is scanner. The JSON input is converted to a dictionary object and sent directly to the scanner_driver function, where the output is written to a JSON file through the writeToJSONFile function.

from cryptonice import scanner
import argparse
import json

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("input_file", help="JSON input file of scan commands")
    args = parser.parse_args()
    input_file = args.input_file

    with open(input_file) as f:
        input_data = json.load(f)

    output_data, hostname = scanner.scanner_driver(input_data)
    if output_data is None and hostname is None:
        print("Error with input - scan was not completed")

if __name__ == "__main__":
    main()

In our example, above, the scanner_driver function is being passed the necessary dictionary object, which is created from the JSON file supplied as a command line parameter. Alternatively, the dictionary object could be created dynamically in your own code. It must, however, contain the same key/value pairs as our sample input file, below.

This is what the JSON input file must look like:

{
  "id": string,
  "port": int,
  "scans": [string],
  "tls_params": [string],
  "http_body": boolean,
  "force_redirect": boolean,
  "print_out": boolean,
  "generate_json": boolean,
  "targets": [string]
}

If certain parameters (such as "scans", "tls_parameters", or "targets") are excluded completely, the program will abort early and print an error message to the console.
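Rather than reading the commands from a file, you can also build that dictionary inline and hand it straight to scanner_driver. A minimal sketch follows; the target domain and the particular scan/option choices are just illustrations:

from cryptonice import scanner

# An input dictionary matching the JSON schema above
scan_commands = {
    'id': 'adhoc',
    'port': 443,
    'scans': ['TLS', 'DNS'],
    'tls_params': ['certificate_info'],
    'http_body': False,
    'force_redirect': True,
    'print_out': False,
    'generate_json': True,
    'targets': ['example.com'],
}

output_data, hostname = scanner.scanner_driver(scan_commands)
print(hostname)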
Mimicking command line input

If you would like to mimic the command line input in your own code, you could write a function that accepts a domain name via a command line parameter and runs a suite of scans as defined in your variable default_dict:

from cryptonice.scanner import writeToJSONFile, scanner_driver
import argparse

default_dict = {'id': 'default',
                'port': 443,
                'scans': ['TLS', 'HTTP', 'HTTP2', 'DNS'],
                'tls_params': ["certificate_info", "ssl_2_0_cipher_suites", "ssl_3_0_cipher_suites",
                               "tls_1_0_cipher_suites", "tls_1_1_cipher_suites", "tls_1_2_cipher_suites",
                               "tls_1_3_cipher_suites", "http_headers"],
                'http_body': False,
                'print_out': True,
                'generate_json': True,
                'force_redirect': True
                }

def main():
    parser = argparse.ArgumentParser(description="Supply commands to cryptonice")
    parser.add_argument("domain", nargs='+', help="Domain name to scan", type=str)
    args = parser.parse_args()
    domain_name = args.domain
    if not domain_name:
        parser.error('domain (like www.google.com or f5.com) is required')

    input_data = default_dict
    input_data.update({'targets': domain_name})

    output_data, hostname = scanner_driver(input_data)
    if output_data is None and hostname is None:
        print("Error with input - scan was not completed")

if __name__ == "__main__":
    main()

Using the Cryptonice JSON output

Full documentation for the Cryptonice JSON output will shortly be available on the Cryptonice ReadTheDocs pages, and while many of the key/value pairs will be self-explanatory, let's take a look at some of the more useful ones.

TLS protocols and ciphers

The tls_scan block contains detailed information about the protocols, ciphers and certificates discovered as part of the handshake with the target site. This can be used to check for expired or expiring certificates, to ensure that old protocols (such as SSLv3) are not in use, and to view recommendations. cipher_suite_supported shows the cipher suite preferred by the target webserver. This is typically the best (read: most secure) one available to modern clients. Similarly, highest_tls_version_supported shows the latest available version of the TLS protocol for this site. In this example, cert_recommendations is blank, but if a certificate were untrusted or expired this would be a quick place to check for any urgent action that should be taken.

The dns section shows results for cryptographically relevant DNS records, for example Certificate Authority Authorization (CAA) and DKIM (found in the TXT records). In our example, below, we can see a dns_recommendations entry which suggests implementing DNS CAA since no such records can be found for this domain.

{
  "scan_metadata": {
    "job_id": "test.py",
    "hostname": "example.com",
    "port": 443,
    "node_name": "Cocumba",
    "http_to_https": true,
    "status": "Successful",
    "start": "2020-07-13 14:31:09.719227",
    "end": "2020-07-13 14:31:16.939356"
  },
  "http_headers": {
    "Connection": {},
    "Headers": {},
    "Cookies": {}
  },
  "tls_scan": {
    "hostname": "example.com",
    "ip_address": "104.127.16.98",
    "cipher_suite_supported": "TLS_AES_256_GCM_SHA384",
    "client_authorization_requirement": "DISABLED",
    "highest_tls_version_supported": "TLS_1_3",
    "cert_recommendations": {},
    "certificate_info": {
      "leaf_certificate_has_must_staple_extension": false,
      "leaf_certificate_is_ev": false,
      "leaf_certificate_signed_certificate_timestamps_count": 3,
      "leaf_certificate_subject_matches_hostname": true,
      "ocsp_response": {
        "status": "SUCCESSFUL",
        "type": "BasicOCSPResponse",
        "version": 1,
        "responder_id": "17D9D6252267F931C24941D93036448C6CA91FEB",
        "certificate_status": "good",
        "hash_algorithm": "sha1",
        "issuer_name_hash": "21F3459A18CAA6C84BDA1E3962B127D8338A7C48",
        "issuer_key_hash": "37D9D6252767F931C24943D93036448C2CA94FEB",
        "serial_number": "BB72FE903FA2B374E1D06F9AC9BC69A2"
      },
      "ocsp_response_is_trusted": true,
      "certificate_0": {
        "common_name": "*.example.com",
        "serial_number": "147833492218452301349329569502825345612",
        "public_key_algorithm": "RSA",
        "public_key_size": 2048,
        "valid_from": "2020-01-17 00:00:00",
        "valid_until": "2022-01-16 23:59:59",
        "days_left": 552,
        "signature_algorithm": "sha256",
        "subject_alt_names": ["www.example.com"],
        "certificate_errors": {
          "cert_trusted": true,
          "hostname_matches": true
        }
      }
    },
    "ssl_2_0": {
      "preferred_cipher_suite": null,
      "accepted_ssl_2_0_cipher_suites": []
    },
    "ssl_3_0": {
      "preferred_cipher_suite": null,
      "accepted_ssl_3_0_cipher_suites": []
    },
    "tls_1_0": {
      "preferred_cipher_suite": null,
      "accepted_tls_1_0_cipher_suites": []
    },
    "tls_1_1": {
      "preferred_cipher_suite": null,
      "accepted_tls_1_1_cipher_suites": []
    },
    "tls_1_2": {
      "preferred_cipher_suite": "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "accepted_tls_1_2_cipher_suites": [
        "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
        "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
        "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384",
        "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
        "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
        "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
        "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA"
      ]
    },
    "tls_1_3": {
      "preferred_cipher_suite": "TLS_AES_256_GCM_SHA384",
      "accepted_tls_1_3_cipher_suites": [
        "TLS_CHACHA20_POLY1305_SHA256",
        "TLS_AES_256_GCM_SHA384",
        "TLS_AES_128_GCM_SHA256",
        "TLS_AES_128_CCM_SHA256",
        "TLS_AES_128_CCM_8_SHA256"
      ]
    },
    "tests": {
      "compression_supported": false,
      "accepts_early_data": false,
      "http_headers": {
        "strict_transport_security_header": {
          "preload": false,
          "include_subdomains": true,
          "max_age": 15768000
        }
      }
    },
    "scan_information": {},
    "tls_recommendations": {}
  },
  "dns": {
    "Connection": "example.com",
    "dns_recommendations": {
      "Low-CAA": "Consider creating DNS CAA records to prevent accidental or malicious certificate issuance."
    },
    "records": {
      "A": ["104.127.16.98"],
      "CAA": [],
      "TXT": [],
      "MX": []
    }
  },
  "http2": {
    "http2": false
  }
}
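Once you have that JSON in hand, gating a devsecops pipeline on it takes only a few lines of dictionary walking. A small sketch (the key paths are taken from the output above; the output filename and the 30-day threshold are arbitrary choices of mine):

import json

with open('example.com.json') as f:   # hypothetical output filename
    report = json.load(f)

# Key paths as seen in the sample output above
cert = report['tls_scan']['certificate_info']['certificate_0']
if cert['days_left'] < 30:
    raise SystemExit('Certificate expires soon - failing the build')

# Surface any recommendations Cryptonice generated
for source in ('tls_recommendations', 'cert_recommendations'):
    print(report['tls_scan'].get(source, {}))
print(report['dns'].get('dns_recommendations', {}))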
Advanced Cryptonice use - Calling specific modules

There are 6 files in Cryptonice that are necessary for its functioning in other code. scanner.py and checkport.py live in the Cryptonice folder, and getdns.py, gethttp.py, gethttp2.py and gettls.py all live in the cryptonice/modules folder. A full scan is run out of the scanner_driver function in scanner.py, which generates a dictionary object based on the commands it receives via the input parameter dictionary. scanner_driver is modularized, allowing you to call as many or as few of the modules as needed for your purposes. However, if you would like to customize the use of the cryptonice library further, individual modules can be selected and run as well. You may choose to call the scanner_driver function and have access to all the modules in one location, or you could call on certain modules while excluding others. Here is an example of a function that calls the tls_scan function in modules/gettls.py to specifically make use of the TLS code and none of the other modules.

from cryptonice.modules.gettls import tls_scan

def tls_info():
    ip_address = "172.217.12.196"
    host = "www.google.com"
    commands = ["certificate_info"]
    port = 443

    tls_data = tls_scan(ip_address, host, commands, port)
    cert_0 = tls_data.get("certificate_info").get("certificate_0")

    # Print certificate information
    print(f'Common Name:\t\t\t{cert_0.get("common_name")}')
    print(f'Public Key Algorithm:\t\t{cert_0.get("public_key_algorithm")}')
    print(f'Public Key Size:\t\t{cert_0.get("public_key_size")}')
    if cert_0.get("public_key_algorithm") == "EllipticCurvePublicKey":
        print(f'Curve Algorithm:\t\t{cert_0.get("curve_algorithm")}')
    print(f'Signature Algorithm:\t\t{cert_0.get("signature_algorithm")}')

if __name__ == "__main__":
    tls_info()

Getting Started

The modularity of Cryptonice makes it a versatile tool to be used within other projects. Whether you want to use it to test the strength of an internal website using the command line tool or integrate the modules into another project, Cryptonice provides a detailed and easy way to capture certificate information, HTTP headers, DNS restrictions, TLS configuration and more. We plan to add additional modules to query certificate transparency logs, test for protocols such as HTTP/3, and produce detailed output with guidance on how to improve your cryptographic posture on the web. This is version 1.0, and we encourage the submission of bugs and enhancements to our GitHub page to provide fixes and new features so that everyone may benefit from them.

The Cryptonice code and binary releases are maintained on the F5 Labs GitHub pages. Full documentation is currently being added to our ReadTheDocs page and the Cryptonice library is available on PyPi:

F5 Labs overview: https://www.f5.com/labs/cryptonice
Source and releases: https://github.com/F5-Labs/cryptonice
PyPi library: https://pypi.org/project/cryptonice
Documentation: https://cryptonice.readthedocs.io
import f5!

We are super excited to announce a preview of the F5 SDK, a set of client tools which facilitate consuming some of our popular APIs/Services, and currently consists of the following:

f5-sdk-python - a new python client library
f5-cli - a remote CLI that is built off the f5-sdk-python

Getting existential, as the old saying goes, the most valuable commodity in the world is time. Our short time here on earth is finite, the average lifespan is only around 70-71 years (according to Wikipedia), and if we're not doing everything we can to try to save you every sliver of time possible, we're not doing one of our fiduciary duties as a technology company. As software developers know, code is the ultimate embodiment of saving time (after all, a "for loop" is nothing but a modern marvel that repeats something dutifully to save time). So, as a convenience to developers and system integrators, vendors/platforms provide Software Development Kits (SDKs) to help them save even more time on that last mile. Developers/integrators also hate having to re-invent the wheel (which often translates to code "not related" to their business logic) and look to leverage something that already exists. In hardware, you may leverage drivers. In software, you import libraries/modules. In this case, these SDK client libraries provide objects in their own language to help abstract you from low-level implementation details like having to know the service's exact URLs, authentication mechanisms, transport required (ex. REST, SOAP, GRPC), file download/uploads, etc. and let you focus on your business logic. Here at F5, we rely on SDKs like the Broadcom SDK, Octeon SDK, Intel DPDK (for hardware), Cloud SDKs (for software), etc. and the age-old value of offering an SDK hasn't gone anywhere. SDKs are getting released and making headlines every day, e.g. https://www.engadget.com/2020/01/24/boston-dynamics-robotic-dog-spot-sdk/

We've actually had an unofficial SDK for a while with a project called f5-common-python. It was a "common" library for the core underlying component in one of our integrations (the LBaaS driver for OpenStack) but, when looking at it from a more current end-user's perspective, there were some significant enough changes we wanted to implement that mandated a fresh start. Nevertheless, it gathered a fair amount of traction, and the value of a generic reusable client library was a great pattern we wanted to officially expand upon.

And now what's this CLI? I thought it was all about the API now and clients didn't matter? Yes, the main value is obviously in the API itself since that is what scales and where you deliver your main value proposition. However, there is still a client user experience (UX) to your APIs, and doing everything you can to optimize/enhance the overall client experience is key to firing on all cylinders. SDKs/CLIs are like the client browser UX to your service ... but for programmatic access and automation. From articles like the one below, you can see some of the traditional values of having a CLI component.

https://www.zdnet.com/article/good-news-for-developers-the-cli-is-back/

"CLI is arguably better for ad hoc tasks or quick exploratory operations since it is more human-readable."
Source: https://nordicapis.com/the-return-of-the-cli-clis-being-used-by-api-related-companies/

The familiar CLI UX is a common entry point for everyone, whether it is developers testing/exploring your API or network/system admins automating common tasks.

And don't you already have a CLI called TMSH?
Well, this is a little different. Everyone today knows how useful *remote* CLIs like AWS's aws-cli, Azure's az, Google's gcloud, OpenStack's osc, Kubernetes' kubectl, etc. are in consuming and developing against a service or platform. Instead of SSH'ing to a single device, remote CLIs use transports/protocols like HTTP or GRPC to talk to a service's remote APIs. The f5-cli, inspired by popular public cloud shells (which are all built on python SDKs), will focus on providing the same ease and convenience to some of our most popular APIs/Services and aims to be demand/use-case driven (high-usage APIs vs. exposing the entire portfolio of APIs). For example, the Automation Tool Chain APIs. For some of you that may have not heard about them yet, those are some of the powerful declarative APIs that allow you to provision entire configurations in a single API call / JSON payload:

Application Services 3 (AS3) = Virtual Services
Declarative Onboarding (DO) = System related config like licensing, provisioning, networking, clustering, etc.
Telemetry Streaming (TS) = Analytics, Logging

Configurations like these would normally require 10s to 100s of imperative API calls. As with any API(s), however, there is some amount of overhead (authentication, async transaction handling, best practices, etc.) in consuming them that can be optimized and captured via client libraries. For example, the Automation Tool Chain APIs.

Probably the most common use case: "I want to create a Virtual Service" (using AS3). Putting the CLI in debug, you can see some of the low-level details the SDK is handling:

Managing Authentication (dependencies on other APIs)
Managing Tasks for Async Operations
Retry Attempt logic
Even convenience functions like installing the required extension package (an RPM) if not already installed

> export F5_SDK_LOG_LEVEL=DEBUG
> f5 bigip extension service create --component as3 --install-component --declaration as3.json

user1@desktop: f5-cli $ f5 bigip extension service create --component as3 --declaration as3.json --install-component
2020-03-10 00:17:49,779 - f5sdk.bigip.mgmt_client - DEBUG: Performing ready check using port 8443
2020-03-10 00:17:49,810 - f5sdk.bigip.mgmt_client - DEBUG: Logging in using user + password
2020-03-10 00:17:49,810 - f5sdk.bigip.mgmt_client - DEBUG: Getting authentication token
2020-03-10 00:17:49,814 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/authn/login
2020-03-10 00:17:50,009 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:17:50,010 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: PATCH /mgmt/shared/authz/tokens/EEPEH4IXVKX7TVP27NECH76VQJ
2020-03-10 00:17:50,131 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:17:50,134 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/iapp/package-management-tasks
2020-03-10 00:17:50,252 - f5sdk.utils.http_utils - DEBUG: HTTP response: 202 Accepted
2020-03-10 00:17:50,253 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/iapp/package-management-tasks/11fc9258-a2c2-495b-8687-2e427fd64091
2020-03-10 00:17:50,363 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:17:59,798 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:01,605 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:03,814 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:03,815 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:05,678 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:05,680 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:07,905 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:07,908 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:09,642 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:09,644 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:11,707 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:11,709 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:13,915 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:13,918 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:16,050 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:16,052 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:18,042 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:18,044 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:20,623 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:20,625 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:22,823 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:22,826 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:24,864 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:24,866 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:26,624 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:26,625 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:28,922 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:28,924 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:31,068 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:31,070 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/file-transfer/uploads/f5-appsvcs-3.17.1-1.noarch.rpm
2020-03-10 00:18:31,503 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:31,508 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/iapp/package-management-tasks
2020-03-10 00:18:32,618 - f5sdk.utils.http_utils - DEBUG: HTTP response: 202 Accepted
2020-03-10 00:18:32,618 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/iapp/package-management-tasks/a7c839c3-b340-4497-a520-437a237aef30
2020-03-10 00:18:32,727 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:33,733 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/iapp/package-management-tasks/a7c839c3-b340-4497-a520-437a237aef30
2020-03-10 00:18:33,865 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:33,866 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/appsvcs/declare
2020-03-10 00:18:37,066 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/appsvcs/declare
2020-03-10 00:18:37,877 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/iapp/package-management-tasks
2020-03-10 00:18:37,998 - f5sdk.utils.http_utils - DEBUG: HTTP response: 202 Accepted
2020-03-10 00:18:37,999 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: GET /mgmt/shared/iapp/package-management-tasks/093dfa5a-eb52-4002-a3d8-88d3edf4ad71
2020-03-10 00:18:38,116 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK
2020-03-10 00:18:38,125 - f5sdk.utils.http_utils - DEBUG: Making HTTP request: POST /mgmt/shared/appsvcs/declare
2020-03-10 00:18:56,120 - f5sdk.utils.http_utils - DEBUG: HTTP response: 200 OK

{
  "declaration": {
    "Sample_app_sec_Tenant": {
      "HTTPS_Service": {
        "Pool1": {
          "class": "Pool",
          "members": [
            {
              "serverAddresses": ["10.0.1.11"],
              "servicePort": 80
            }
          ],
          "monitors": ["http"]
        },
        "WAFPolicy": {
          "class": "WAF_Policy",
          "ignoreChanges": true,
          "url": "https://raw.githubusercontent.com/f5devcentral/f5-asm-policy-templates/master/owasp_ready_template/owasp-no-auto-tune-v1.1.xml"
        },
        "class": "Application",
        "serviceMain": {
          "class": "Service_HTTPS",
          "policyWAF": { "use": "WAFPolicy" },
          "pool": "Pool1",
          "redirect80": false,
          "serverTLS": { "bigip": "/Common/clientssl" },
          "snat": "auto",
          "virtualAddresses": ["0.0.0.0"]
        },
        "template": "https"
      },
      "class": "Tenant"
    },
    "class": "ADC",
    "controls": {
      "archiveTimestamp": "2020-03-10T00:18:55.759Z"
    },
    "id": "autogen_3a78cbd8-7aa4-4fb6-8db9-3e458f583513",
    "label": "ASM_VS1",
    "remark": "ASM_VS1",
    "schemaVersion": "3.0.0",
    "updateMode": "selective"
  },
  "results": [
    {
      "code": 200,
      "host": "localhost",
      "lineCount": 28,
      "message": "success",
      "runTime": 16658,
      "tenant": "Sample_app_sec_Tenant"
    }
  ]
}
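The log also shows one of those conveniences in slow motion: the package install endpoint is asynchronous. The POST to /mgmt/shared/iapp/package-management-tasks returns 202 Accepted plus a task ID, which the client then polls until the task completes. As a rough illustration of that polling pattern in plain Python: the endpoint and request body follow the iControl LX packaging API visible in the log, while the exact terminal status strings are an assumption to verify on your version.

# Sketch of the async task polling the SDK/CLI automates for you.
import time
import requests

BIGIP = "https://192.0.2.10"  # placeholder

def wait_for_task(session, task_id, timeout=60):
    # Poll a package-management task until it reaches a terminal state.
    url = f"{BIGIP}/mgmt/shared/iapp/package-management-tasks/{task_id}"
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = session.get(url, verify=False).json()
        if task.get("status") in ("FINISHED", "FAILED"):  # assumed values
            return task
        time.sleep(1)
    raise TimeoutError(f"task {task_id} did not complete in {timeout}s")

session = requests.Session()
session.headers["X-F5-Auth-Token"] = "<token from /mgmt/shared/authn/login>"

# Kick off the RPM install (the RPM reaches /var/config/rest/downloads via
# the chunked upload POSTs visible in the log), then poll the task ID.
resp = session.post(
    f"{BIGIP}/mgmt/shared/iapp/package-management-tasks",
    json={
        "operation": "INSTALL",
        "packageFilePath": "/var/config/rest/downloads/f5-appsvcs-3.17.1-1.noarch.rpm",
    },
    verify=False,
)
print(wait_for_task(session, resp.json()["id"]))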
That's actually quite a bit of work offloaded to accelerate getting you up and going on an API. As our portfolio expands to offer more and more products and services, this will hopefully save our users some extra time.

DISCLAIMER: it's in public preview and we have many enhancements already planned.

F5 SDK Documentation: https://clouddocs.f5.com/sdk/f5-sdk-python/ Github repo: https://github.com/f5devcentral/f5-sdk-python
F5 CLI Documentation: https://clouddocs.f5.com/sdk/f5-cli/ Github repo: https://github.com/f5devcentral/f5-cli

Please take a look soon and provide any feedback by filing an issue on the Github repos.

is there a way to download / export the actual Key / RSA Certificate files from BIG-IP, using the iControl REST?

Hi all, I know there is a way to upload and import a key/cert to the F5, either fromFile or fromUrl. I also know that there is a way to download files from /mgmt/tm/asm/file-transfer/downloads/fooFile.txt using iControl REST. Is there a way to download/export the actual key/certificate files from BIG-IP using the iControl REST service? If not directly, is there any way to export the key/cert under F5_IP:/ts/var/rest/ and then download those files using the download REST call?
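If a direct export endpoint isn't available on your version, one common workaround is to read the file out of the filestore with the /mgmt/tm/util/bash endpoint and capture it from the JSON response. Below is a minimal sketch with Python requests; note that filestore file names carry version suffixes, so list the directory first, and the certificate name used here is hypothetical.

# Sketch: read a certificate out of the filestore via /mgmt/tm/util/bash.
# Requires an administrator account. The filestore file name below is
# hypothetical; real names carry version suffixes such as
# ':Common:mycert.crt_12345_1', so list the directory first.
import requests

BIGIP = "https://192.0.2.10"   # placeholder
AUTH = ("admin", "admin")      # placeholder credentials

def run_bash(cmd):
    # util/bash runs a shell command and returns stdout in 'commandResult'
    resp = requests.post(
        f"{BIGIP}/mgmt/tm/util/bash",
        auth=AUTH,
        json={"command": "run", "utilCmdArgs": f"-c '{cmd}'"},
        verify=False,
    )
    resp.raise_for_status()
    return resp.json().get("commandResult", "")

# 1. Find the stored file name for the certificate you want
print(run_bash("ls /config/filestore/files_d/Common_d/certificate_d/"))

# 2. Read the PEM and save it locally (keys live under certificate_key_d)
pem = run_bash("cat /config/filestore/files_d/Common_d/certificate_d/:Common:mycert.crt_12345_1")
with open("mycert.crt", "w") as f:
    f.write(pem)

The obvious caveat: a private key exported this way crosses the management plane inside a JSON response, so protect the transport and the output accordingly.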
F5 APM VPN Support For Microsoft O365 Split-Tunneling

We ran into a significant issue with remote VPN client performance when our Microsoft Office products moved to the O365 cloud offering. Our limitation of "no split-tunneling" per corporate policy prevented our users from establishing connectivity to their geographically preferable O365 cloud. Instead, their traffic would route back to the corporate F5 APM VPN BIG-IP and then out to the internet: a much longer path, and real-time services such as Teams/Skype calls suffered greatly. Other vendors, such as ForcePoint (Websense) and McAfee, were also having issues with this and released O365-specific patches that permit better performance through various rules and methods. Our F5 APM VPN was the bottleneck, and we had to address this quickly. Approval was granted to permit ONLY O365 products to be split-tunneled.

Luckily, Microsoft has fielded this question/requirement many times and they had a ready answer: https://docs.microsoft.com/en-us/office365/enterprise/urls-and-ip-address-ranges

Unfortunately, there are more than 500 IPv4 networks alone. Many are overlapping, and some can be combined into a supernet. Not pretty, but workable. Using node.js, we developed a script that pulls down the Microsoft IPv4 space, performs a CIDR clean on the networks, logs into the F5 BIG-IP, pushes the Network Access exclude IP list, and then applies the Access Policy in one shot. You can see the repo here (a rough Python equivalent of this flow is sketched at the end of this article): https://github.com/adamingle/f5O365SplitTunnelUpdateScript

If you'd like to use the repo, please note the "settings.json" file; you will need to update it according to the README.md. Additionally, you will need to configure the allowable/tunneled traffic for the Network Access on the VPN. If you only specify the exclusion space, there will be no inclusion space and no traffic will traverse the tunnel:

Enable split tunneling by checking the "Use split tunneling for traffic" radio button
Add ALL networks to the "IPV4 LAN Address Space" with the IP Address 0.0.0.0 and Mask 0.0.0.0
Specify a wildcard/asterisk for the "DNS Address Space"

After you have split tunneling enabled on your Network Access Lists in F5 APM and have correctly modified the "settings.json" file of your local f5O365SplitTunnelUpdateScript repo, you should be able to execute your O365 split-tunneling address exclusion changes. Use Jenkins or another automation tool to run the script automatically.

Definitely worth a watch: https://channel9.msdn.com/Events/Ignite/2015/BRK3141

*This has been tested/used successfully with the Edge 7.1.7.1 client on v13.1.1
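The repo above is the tested, node.js implementation. Purely to illustrate the flow it automates (fetch Microsoft's published ranges, collapse the CIDRs, push the exclude list), here is a rough Python sketch. The Microsoft endpoints web service is the one documented at the link above; the APM attribute name and payload shape (addressSpaceExcludeSubnet with subnet entries) and the Network Access resource name are assumptions to verify against /mgmt/tm/apm/resource/network-access on your BIG-IP version.

# Illustrative Python take on the node.js script's flow. The APM
# attribute name/shape below is an assumption from the tmsh schema;
# the node.js repo above is the tested implementation.
import ipaddress
import uuid
import requests

BIGIP = "https://192.0.2.10"
AUTH = ("admin", "admin")           # placeholders
NETWORK_ACCESS = "~Common~O365_NA"  # hypothetical resource name

# 1. Pull the published O365 ranges from Microsoft's endpoints service
ms = requests.get(
    "https://endpoints.office.com/endpoints/worldwide",
    params={"clientrequestid": str(uuid.uuid4())},
).json()

# 2. Collect the IPv4 prefixes and collapse overlaps into supernets
v4 = set()
for svc in ms:
    for ip in svc.get("ips", []):
        net = ipaddress.ip_network(ip, strict=False)
        if net.version == 4:
            v4.add(net)
excludes = [{"subnet": str(n)} for n in ipaddress.collapse_addresses(v4)]

# 3. Push the exclude list onto the Network Access resource
resp = requests.patch(
    f"{BIGIP}/mgmt/tm/apm/resource/network-access/{NETWORK_ACCESS}",
    auth=AUTH,
    json={"addressSpaceExcludeSubnet": excludes},  # assumed attribute
    verify=False,
)
resp.raise_for_status()
# Remember to apply the Access Policy afterwards, as the article notes.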
Manage Infrastructure and Services Lifecycle with Terraform and Ansible + Demo

Working as a Solution Architect for F5, I often need access to a lab environment. 'Traditionally', the method to implement a lab was to leverage tools like Vagrant, VMware, or others. A lab environment on a laptop is limited by its computing capacities (CPU/memory/disk/...). Today we are often asked to show how we can integrate our solutions with many different tools (orchestration solutions, version control systems, CI servers, containerized environments, ...). Unless your laptop is a powerful one, it's difficult to build such an environment and have it run smoothly. If the lab requirements were too demanding for my laptop, I would access one of our lab facilities to do my work. This approach itself is fine but brings some challenges:

If you travel like I do, latency can become a hindrance and be frustrating.
Lab facilities leverage "shared resources", which means you may face issues due to conflicting IP addresses, switch misconfiguration, maintenance operations, ...
Some resources may already be reserved/used by a fellow colleague and not be available.

You may also face other constraints that make both deployment models difficult:

The need to share access to the lab. Not easy when it runs on your laptop or in a private cloud that is not always open to the outside world.
People may need to be able to replicate your lab in their own environment.
Stability/time needed for maintenance: using a lab over and over will make it messy. At some point, you'll reach a stage where you want to create a "new" environment that is clean and "trustworthy" (until you play with it too much again).

I'm sure I've missed other constraints, but you get the idea: maintaining a lab and using it in a collaborative manner is challenging. Luckily, it's easier today to achieve those objectives: leverage public cloud!

Public cloud gives you access to "unlimited" computing services over the internet that can be automated and orchestrated. With public cloud, you have access to an API allowing you to spin up a new environment with all the relevant tools deployed. This way, you can go straight to work (after enjoying a nice cup of coffee/tea while your infrastructure is being deployed!). Once your work is done, you can destroy the environment and save money. When you need a lab again, you'll be able to spin up a new, clean environment in a matter of minutes and be confident that it's a "healthy" one.

When working on automation/orchestration of public cloud environments, I see two dominant tools: Terraform and Ansible.

https://www.terraform.io

Terraform is an open source command line tool that can be used to provision an infrastructure on dozens of different platforms and services (AWS, Azure, ...). One of the strengths of Terraform is that it is declarative: you specify the expected "state" of your infrastructure and Terraform takes care of all the underlying complexities (Does it need to be provisioned? Should I update the settings of a component? Which components should be created first? Do we need to delete resources that are not required anymore? ...). Terraform stores the "state" of your infrastructure and configuration to be more efficient in its work.

https://www.ansible.com

Ansible is a provisioning and configuration management tool. It is designed to automate application deployments. One of the strengths of Ansible is that it doesn't require any "agents" to run on the targeted systems. Ansible works by leveraging "modules". Those modules are consumed to define the "state" of the targeted systems.
They are usually executed over SSH (by default).

So how do we leverage those tools to have a lab available on demand? In the following demo, we will:

Leverage Terraform to manage the lifecycle of a new AWS environment: manage a dedicated VPC with external/internal subnets, Ubuntu instances, and the F5 solution. In addition to deploying our infrastructure, it will generate the relevant files for Ansible (an inventory file to know the IPs of our systems, and Ansible variable files to know how to configure the F5 solution with AS3).
Use Ansible to manage the configuration of our systems: update our Ubuntu instances, install the NGINX web service on our Ubuntu instances, and deploy a standard F5 configuration to load balance our web application with AS3 (a minimal sketch of this AS3 step is shown after the links below).

Here is a summary for the demo:

Demo time!

By leveraging tools like Terraform or Ansible (you can achieve the same results with other tools), it is easy to handle the lifecycle of an infrastructure and the services running on top of it. This is what people call IaC (Infrastructure as Code).

Useful links:
- If you want to learn more about the setup of this demo, it is posted on Github: here
- F5 provides a list of templates to automate deployment in public cloud. They are available here: AWS Templates, Azure Templates, GCP Templates
- F5 Application Services 3 (AS3) documentation/examples: here
- If you want to learn more about our API and how to automate/orchestrate F5 solutions (free training): F5 A&O Training
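If you only want to poke at the AS3 piece of the demo, the service deployment ultimately reduces to a single POST of a JSON declaration to /mgmt/shared/appsvcs/declare (the same endpoint visible in the SDK debug log earlier on this page). A minimal sketch follows; the host, credentials, and declaration values are illustrative rather than the demo repo's actual variables.

# Minimal AS3 deployment: one POST of a JSON declaration.
# Host, credentials, and declaration values are illustrative.
import requests

BIGIP = "https://192.0.2.10"
AUTH = ("admin", "admin")

declaration = {
    "class": "AS3",
    "action": "deploy",
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "demo_tenant": {
            "class": "Tenant",
            "web_app": {
                "class": "Application",
                "template": "http",
                "serviceMain": {
                    "class": "Service_HTTP",
                    "virtualAddresses": ["10.0.1.100"],
                    "pool": "nginx_pool",
                },
                "nginx_pool": {
                    "class": "Pool",
                    "monitors": ["http"],
                    "members": [{
                        "servicePort": 80,
                        "serverAddresses": ["10.0.2.11", "10.0.2.12"],
                    }],
                },
            },
        },
    },
}

resp = requests.post(
    f"{BIGIP}/mgmt/shared/appsvcs/declare",
    auth=AUTH, json=declaration, verify=False,
)
print(resp.status_code, resp.json().get("results"))

In the demo itself, Ansible performs this same POST using the variable files Terraform generated, which is what makes the whole chain repeatable end to end.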