Create iRule REST error: Found invalid JSON body in the request
Greetings, I saw a few other forum posts about the same error, but I was not able to figure out what is wrong with the command below (I changed the admin credentials and F5 name to post here):

curl -sku admin:admin -H "Content-Type: application/json" -X POST https://sampleF5name.test.com/mgmt/tm/ltm/rule -d '{"name":"f5RESTSampleRule", "apiAnonymous":"when CLIENT_ACCEPTED {\n node 172.28.0.41 \n}" }'

When I run that command, I get the following back:

{ "code": 400, "message": "Found invalid JSON body in the request.", "errorStack": [], "apiError": 1 }

I ran my JSON through a JSON validator and there were no issues with it, so my assumption is that I am passing something to the F5 that is not valid, but I am not sure what it is. We already have a few rules like this set up, but they were created through the UI. Any help would be greatly appreciated! I assume I'm missing something simple here. BIG-IP v15.1.5 (Build 0.0.10)
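One way to rule out shell quoting and escaping problems (an assumption about the cause, not a confirmed fix) is to put the JSON body in a file and have curl read it, so the REST endpoint receives exactly the bytes the validator saw. The file name below is just a placeholder:

# Write the JSON body to a file, exactly as validated
cat > rule.json <<'EOF'
{
  "name": "f5RESTSampleRule",
  "apiAnonymous": "when CLIENT_ACCEPTED {\n node 172.28.0.41 \n}"
}
EOF

# --data-binary sends the file verbatim, so the shell never re-interprets the quotes or backslashes
curl -sku admin:admin -H "Content-Type: application/json" \
  -X POST https://sampleF5name.test.com/mgmt/tm/ltm/rule \
  --data-binary @rule.json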
F5 SSLO Unified Configuration API Quick Introduction

Introduction

Prior to BIG-IQ 8.0, you had to use the BIG-IQ graphical user interface (GUI) to configure F5 SSL Orchestrator (SSLO) topologies and their dependencies. Starting with BIG-IQ 8.0, a new unified, supported, and documented REST API endpoint was created to simplify SSLO configuration workflows. The aim is to simplify the configuration of F5 SSLO using standardized API calls. You are now able to store the configuration in your versioning tool (Git, SVN, etc.) and easily integrate the configuration of F5 SSLO into your automation and pipeline tools.

For more information about F5 SSLO, please refer to this introductory video. An overview of F5 SSL Orchestrator is provided in K1174564. As a reminder, the BIG-IQ API reference documentation can be found here. Documentation for the Access Simplified Workflow can be found here.

The figure below shows a possible use for the SSLO Unified API. A few shortcuts are taken in the figure as it is meant to illustrate the advantage of the simplified workflow.

Example Configuration

For the configuration, the administrator needs to:
-Create a JSON blurb or payload that will be sent to the BIG-IQ API
-Authenticate to the BIG-IQ API
-Send the payload to the BIG-IQ
-Ensure that the workflow completes successfully

The following provides a step-by-step configuration of SSLO leveraging the API. In practice, the steps may be automated and included in the pipeline used to deploy the application, leveraging the enterprise tooling and processes in place.

1.- Authenticate to the API

API interactions with the BIG-IQ API require the use of a token. The initial REST call should look like the following:

REST Endpoint: /mgmt/shared/authn/login
HTTP Method: POST
Headers:
-content-type: application/json
Content:
{
  "username": "",
  "password": "",
  "loginProviderName": ""
}

Example:

POST https://10.0.0.1/mgmt/shared/authn/login HTTP/1.1
Headers:
content-type: application/json
Content:
{
  "username": "username",
  "password": "complicatedPassword!",
  "loginProviderName": "RadiusServer"
}

The call above authenticates the user to the API. The result of a successful authentication is a response from the BIG-IQ API containing a token.

2.- Push the configuration to BIG-IQ

The headers and HTTP request should look like the following:

URI: mgmt/cm/sslo/api/topology
HTTP Method: POST
Headers:
-content-type: application/json
-X-F5-Auth-Token: [token obtained from the authentication process above]

To send the configuration to the BIG-IQ, you will need to send the following payload. The JSON blurb is divided into multiple parts for readability - the full concatenated text is available in the attached file.
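Putting the two calls above together, a minimal curl sketch of the flow might look like the following. The BIG-IQ hostname, credentials, payload file name, and the use of jq to pull the token out of the login response are assumptions for illustration, not part of the documented workflow:

# 1. Authenticate and capture the token (jq is used here only for brevity)
TOKEN=$(curl -sk -H "Content-Type: application/json" \
  -X POST https://bigiq.example.com/mgmt/shared/authn/login \
  -d '{"username":"admin","password":"complicatedPassword!","loginProviderName":"RadiusServer"}' \
  | jq -r '.token.token')

# 2. Push the SSLO topology payload (assembled as described below) to BIG-IQ
curl -sk -H "Content-Type: application/json" \
  -H "X-F5-Auth-Token: ${TOKEN}" \
  -X POST https://bigiq.example.com/mgmt/cm/sslo/api/topology \
  -d @sslo_topology.json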
Start by defining a new topology with the following characteristics:

Name: "sslo_NewTopology"
Listening on the "/Common/VLAN_TAP" VLAN
The topology is of type "topology_l3_outbound"
The SSL settings defined below are named: "ssloT_NewSsl_Dec"
The policy is called: "ssloP_NewPolicy_Dec"

The JSON payload starts with the following:

{
  "template": {
    "TOPOLOGY": {
      "name": "sslo_NewTopology",
      "ingressNetwork": {
        "vlans": [
          { "name": "/Common/VLAN_TAP" }
        ]
      },
      "type": "topology_l3_outbound",
      "sslSetting": "ssloT_NewSsl_Dec",
      "securityPolicy": "ssloP_NewPolicy_Dec"
    },

The SSL settings used above are defined in the following JSON, which creates a new profile with default values:

    "SSL_SETTINGS": {
      "name": "ssloT_NewSsl_Dec"
    },

The security policy is configured as follows:

Name: ssloP_NewPolicy_Dec
Function: introduces a pinning policy doing a policy lookup - matching requests are bypassed (no SSL decryption) with the associated service chain "ssloSC_NewServiceChain_Dec" that is defined further down below.

    "SECURITY_POLICY": {
      "name": "ssloP_NewPolicy_Dec",
      "rules": [
        {
          "mode": "edit",
          "name": "Pinners_Rule",
          "action": "allow",
          "operation": "AND",
          "conditions": [
            {
              "type": "SNI Category Lookup",
              "options": { "category": [ "Pinners" ] }
            },
            {
              "type": "SSL Check",
              "options": { "ssl": true }
            }
          ],
          "actionOptions": {
            "ssl": "bypass",
            "serviceChain": "ssloSC_NewServiceChain_Dec"
          }
        },
        {
          "mode": "edit",
          "name": "All Traffic",
          "action": "allow",
          "isDefault": true,
          "operation": "AND",
          "actionOptions": { "ssl": "intercept" }
        }
      ]
    },

The service chain configuration is defined below to forward the traffic to the "ssloS_ICAP_Dec" service. This is done with the following JSON:

    "SERVICE_CHAIN": {
      "ssloSC_NewServiceChain_Declarative": {
        "name": "ssloSC_NewServiceChain_Dec",
        "orderedServiceList": [
          { "name": "ssloS_ICAP_Dec" }
        ]
      }
    },

The "ssloS_ICAP_Dec" service is defined with the JSON below, with IP 3.3.3.3 on port 1344:

    "SERVICE": {
      "ssloS_ICAP_Declarative": {
        "name": "ssloS_ICAP_Dec",
        "customService": {
          "name": "ssloS_ICAP_Dec",
          "serviceType": "icap",
          "loadBalancing": {
            "devices": [
              { "ip": "3.3.3.3", "port": "1344" }
            ]
          }
        }
      }
    }
  },

The configuration will be deployed to the target defined below:

  "targetList": [
    {
      "type": "DEVICE",
      "name": "my.bigip.internal"
    }
  ]
}

After the HTTP POST, the BIG-IQ will respond with a transaction id. A sample of what it looks like is given below:

{
  […]
  "id": "edc17b06-8d97-47e1-9a78-3d47d2db70a6",
  "status": "STARTED",
  […]
}

You can check on the status of the deployment task by submitting a request as follows:
-HTTP GET method
-Authenticated with the use of the custom authentication header X-F5-Auth-Token
-Sent to the BIG-IQ URI: GET mgmt/cm/sslo/tasks/api/{{status_id}} HTTP/1.1
-With the Content-Type header set to: application/json

Once the status of the task changes to FINISHED, the configuration is successfully completed. You can now check the F5 SSLO interface to make sure the new topology has been created. The BIG-IQ interface will show the new topology as depicted in the example below:

The new topology has been deployed to the BIG-IP automatically. You can connect to the BIG-IP to verify; the interface should look like the one depicted below:

Congratulations, you now have successfully deployed a fully functional topology that your users can start using.
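To automate the status check described above, a small polling loop can be used. This is a sketch only - it assumes the BIG-IQ hostname and ${TOKEN} variable from the earlier example, that the task id from the POST response has been captured into ${TASK_ID}, and that jq is available to read the "status" field:

# Poll the SSLO deployment task until it is no longer in the STARTED state
while true; do
  STATUS=$(curl -sk -H "Content-Type: application/json" \
    -H "X-F5-Auth-Token: ${TOKEN}" \
    "https://bigiq.example.com/mgmt/cm/sslo/tasks/api/${TASK_ID}" | jq -r '.status')
  echo "Deployment task status: ${STATUS}"
  if [ "${STATUS}" = "FINISHED" ]; then
    echo "Deployment completed successfully"
    break
  elif [ "${STATUS}" != "STARTED" ]; then
    echo "Deployment ended with unexpected status: ${STATUS}" >&2
    break
  fi
  sleep 10
done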
Note that you can also use the BIG-IQ REST API to delete the items that were just created. This is done by sending HTTP DELETE requests to the different API endpoints for the topology, service chain, SSL settings, and so on. For example, for the configuration above, you would send HTTP DELETE requests to the following URIs:

-For the topology: /mgmt/cm/sslo/api/topology/sslo_NewTopology_Dec
-For the service chain: /mgmt/cm/sslo/api/service-chain/ssloSC_NewServiceChain_Dec
-For the SSL settings: /mgmt/cm/sslo/api/ssl/ssloT_NewSsl_Dec

All the requests listed above need to be sent to the BIG-IQ system's management IP address with the following two headers:

-content-type: application/json
-X-F5-Auth-Token: [value of the authentication token obtained during authentication]

A sketch of these cleanup requests is shown below.
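As a hedged sketch only - reusing the hypothetical BIG-IQ hostname and ${TOKEN} variable from the earlier examples - the cleanup could look like this:

# Delete the topology, service chain, and SSL settings created above
for uri in \
  /mgmt/cm/sslo/api/topology/sslo_NewTopology_Dec \
  /mgmt/cm/sslo/api/service-chain/ssloSC_NewServiceChain_Dec \
  /mgmt/cm/sslo/api/ssl/ssloT_NewSsl_Dec; do
  curl -sk -H "Content-Type: application/json" \
    -H "X-F5-Auth-Token: ${TOKEN}" \
    -X DELETE "https://bigiq.example.com${uri}"
done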
Conclusion

BIG-IQ makes it easier to manage SSLO topologies thanks to its REST API. You can now make supported, standardized API calls to the BIG-IQ to create and modify topologies and deploy the changes directly to BIG-IP.

iHealth API Part 4 - A Little More Code

In the last chapter, we wrote a quick bash script that used some tools to produce diagnostics summaries for qkviews. Now that you have some experience with the API, maybe you want to do a little bit more. In this article, we're going to explore the data in a little more detail to give you an idea of the kinds of things you can do with the API.

Generally speaking, the API methods (sometimes called 'calls') are broken into two broad categories: "group" or "collection" methods, and methods that deal with a single qkview ("instance methods"). In this article, we're going to cover some of the collection methods and how to manage your qkviews.

In previous articles, we've restrained ourselves from going hog-wild and kept to the nice safe methods that are essentially read-only. That is, we just ask for information from the API, and then consume it and manipulate it locally, without making any changes to the data on the server. This is a nice safe way to get a handle on how the API works, without accidentally deleting anything, or changing an attribute of a QKView without meaning to. It's time to take the training wheels off, call in the goats, and go a little wild. Today, we'll be modifying an attribute of a QKView, or if we're feeling really crazy, we'll modify two of them, maybe even in the same HTTP request. My palms are sweating just writing this...

Before we get that crazy, let's take a minute and talk about some collection methods first, and get that out of the way. Collection methods can be very handy, and very powerful, as they allow you to deal with your QKViews en masse. There are three collection methods that we'll talk about. The first one is a way to get a list of all your QKView IDs. This makes it possible to iterate over all the QKViews in your iHealth account while hunting for something. Remember how we got our list of goats in a barn? Similarly:

GET qkview-analyzer/api/qkviews

will return a list of qkview IDs for every qkview in your iHealth account.

What if we decide we want to get rid of all our QKViews? We could GET the list of IDs and then iterate through them, DELETEing each one, but a collection method is much easier:

DELETE qkview-analyzer/api/qkviews

will wipe all the qkviews from your account. Use with caution; this is the 'nuke from orbit' option. We won't be using it in our projects for this article series, but I thought I'd mention it, as it is very handy for some situations.

Another collection method that we *will* be using is this:

POST qkview-analyzer/api/qkviews

This method allows us to add to our collection of qkviews. Since we're using POST, remember that each time this is executed (even with the exact same qkview file), a new qkview object is created on the server. So you'll only need to do it once for a given qkview file, unless of course, you accidentally DELETE it from your collection, and want to add it again. I've never done anything like that, I'm just speaking hypothetically.

So now, armed with those collection methods and with knowledge of our previous scripts that allowed us to examine QKView metadata, let's build up a little script that combines collection methods with some instance methods. Now, the sample qkview won't suffice for this exercise, as it is meant for demonstration purposes and is used system-wide (and is thus read-only). For this exercise today, we'll be using a real qkview. In this article, we're going to write two scripts.
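Before getting to the scripts, here is a quick sketch of what those three collection methods look like as raw curl calls. This is an assumption-heavy illustration: ${CURL_AUTH} stands in for whatever authentication options your iHealth session needs (the real scripts below handle authentication themselves), and the file path is a placeholder:

BASE="https://ihealth-api.f5.com/qkview-analyzer/api"

# List the IDs of every qkview in the account
curl -s ${CURL_AUTH} "${BASE}/qkviews"

# Nuke from orbit: delete every qkview in the account (use with caution!)
curl -s ${CURL_AUTH} -X DELETE "${BASE}/qkviews"

# Add a new qkview to the collection (multipart form upload)
curl -s ${CURL_AUTH} -F "qkview=@/path/to/my.qkview" "${BASE}/qkviews"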
The first will perform an upload of a qkview file to iHealth, using many of the techniques that we learned in our first scripting session.

Upload QKView script (note that all the scripts referenced in this series are included, but the one needed for this exercise is upload-qkview.sh)

Notice how the authentication parts all look the same? What a great opportunity to refactor these scripts and build up an iHealth API library that our other scripts could use. Extra credit if you take that on! So, using the same authentication methods, we need to be able to specify the location of the qkview file that we want to upload. Here is a sample qkview file for you to download and use for these exercises, or feel free to pull a qkview off your own gear and use that. AskF5 explains how to get a QKView.

Here is where the collection method to add a qkview to our collection happens:

function upload_qkview {
    path="$1"
    CURL_CMD="${CURL} ${ACCEPT_HEADER} ${CURL_OPTS} -F 'qkview=@${path}' -D /dev/stdout https://ihealth-api.f5.com/qkview-analyzer/api/qkviews"
    [[ $DEBUG ]] && echo "${CURL_CMD}" >&2
    out="$(eval "${CURL_CMD}")"
    if [[ $? -ne 0 ]]; then
        error "Couldn't upload qkview at ${path}"
    fi
    location=$(echo "${out}" | grep -e '^Location:' | tr -d '\r\n')
    transformed=${location/Location: /}
    echo "${transformed}"
}

The upload will finish with an HTTP 302 redirect that points us to the new qkview. After we upload, we cannot just immediately ask for data about the qkview, as there is a certain amount of processing that happens before we can start grilling the server about it. While the server is busy doing all the prep work it needs to do in order to give you back data about the qkview, the server doesn't mind if we politely ask it how things are going up there in the cloud. To do this, we use a GET request, and the server will respond with an HTTP status code that tells our script how things are going:

our code: "ready yet?"
server : "not yet"
our code: "ready yet?"
server : "not yet"

sort of like kids on a car trip asking "are we there yet?" over and over and over again. Only instead of the driver getting enraged and turning up the radio to drown them out, our server just responds politely "not yet" until it's done processing.

our code: "ready yet?"
server : "yup, ready!"

Then our code can go about its business. In the programming trades, this is called 'polling'. We poll the server until the server gives us the answer we want.

our code: GET qkview-analyzer/api/qkviews/
server : HTTP 202
our code: GET qkview-analyzer/api/qkviews/
server : HTTP 202
our code: GET qkview-analyzer/api/qkviews/
server : HTTP 200

So that's how to add a qkview to your collection. Of course, you might not get an HTTP 202 or 200 back; you might get something else, in which case something went wrong in either the upload or the processing.
At that point, we should also bail out and return an error to the runner of the script:

function wait_for_state {
    url="$1"
    count=0
    CURL_CMD="${CURL} ${ACCEPT_HEADER} ${CURL_OPTS} -w \"%{http_code}\" ${url}"
    [[ $DEBUG ]] && echo "${CURL_CMD}" >&2
    _status=202
    time_passed=0
    while [[ "$_status" -eq 202 ]] && [[ $count -lt ${POLL_COUNT} ]]; do
        _status="$(eval "${CURL_CMD}")"
        count=$((count + 1))
        time_passed=$((count * POLL_WAIT))
        [[ $VERBOSE ]] && echo -ne "waiting (${time_passed} seconds and counting)\r" >&2
        sleep ${POLL_WAIT}
    done
    printf "\nFinished in %s seconds\n" "${time_passed}" >&2
    if [[ "$_status" -eq 200 ]]; then
        [[ $VERBOSE ]] && echo "Success - qkview is ready"
    elif [[ ${count} -ge ${POLL_COUNT} ]]; then
        error "Timed out waiting for qkview to process"
    else
        error "Something went wrong with qkview processing, status: ${_status}"
    fi
}

The change-description.sh script (included in the zip linked above) will allow you to update the description field of any number of qkviews with a given chassis serial number. This script uses both a collection method (list all my qkview IDs) and an instance method on several qkviews to update some metadata associated with each qkview.

We've introduced yet another verb into our working lexicon: PUT. We use PUT here because it's a modification on a qkview that, no matter how many times we perform it, will result in the same qkview state. This is called idempotency. Unlike, say, our POST above in our upload script, which results in new qkviews every time you run it, this PUT may change the state of the qkview the first time, but subsequent identical PUT requests won't change the state further.
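For reference, here is a rough sketch of the kind of PUT request change-description.sh ends up making for each matching qkview. The qkview ID, the ${CURL_AUTH} placeholder, and the exact shape of the JSON body are illustrative assumptions rather than something taken from the script or the API documentation:

# Hypothetical qkview ID and payload shape -- check the API docs for the exact field names
QKVIEW_ID="1234567"
curl -s ${CURL_AUTH} \
  -H "Content-Type: application/json" \
  -X PUT \
  -d '{"description": "core switch pair - datacenter A"}' \
  "https://ihealth-api.f5.com/qkview-analyzer/api/qkviews/${QKVIEW_ID}"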
So now we can submit QKViews, get diagnostic results, and change metadata about the QKView. So what else could we possibly do? The rest of this article series will explore the data in the API and how to retrieve and process it.

Running BASH commands via REST API

I am trying to run bash commands via the REST API but am getting an error. When trying to use the following syntax, I am getting a 403 even when running with Admin authentication...

GET: https://F5LTM/mgmt/tm/util/bash

Output: {"code":403,"message":"Operation is not allowed on component /util/bash.","errorStack":[]}

Does anyone know if this is possible, or have any syntax examples of how to run bash commands? I assume you need to submit a POST request, but I am not sure how to structure the syntax in the body of the request and cannot find any examples.
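For what it's worth, the util/bash endpoint is normally driven with a POST rather than a GET. A hedged example is below - the hostname, credentials, and command are placeholders, and the account still needs sufficient rights (including Advanced Shell access) for this to be allowed:

# Run a bash command through iControl REST (POST, not GET)
curl -sku admin:admin -H "Content-Type: application/json" \
  -X POST https://F5LTM/mgmt/tm/util/bash \
  -d '{"command": "run", "utilCmdArgs": "-c \"echo hello from bash\""}'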
BIG-IP 11.4.1 Build 651.0 Hotfix HF5

Hello, we have BIG-IP 11.4.1 Build 651.0 Hotfix HF5 devices in our lab and we would like to start using the REST API. We know that v11.5 is already released and in a stable state, but can you let us know whether it's okay to start using the REST API on v11.4.1 with HF5? Are there any definitive steps to enable or trigger the REST API service in F5 LTM v11.4.1? Any further help would be greatly appreciated. Thank you!
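If it helps while testing, a quick way to see whether the REST API is answering on a given unit is a simple authenticated GET (hostname and credentials below are placeholders, and the set of endpoints available will depend on the software version):

# List LTM virtual servers via iControl REST as a connectivity check
curl -sku admin:admin https://bigip.example.com/mgmt/tm/ltm/virtual

# Or just list the top-level tm organizing collection
curl -sku admin:admin https://bigip.example.com/mgmt/tm/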