From Terra Incognita to API Incognita: What Captain Cook Teaches Us About API Discovery
When I was young, my parents often took me on UK seaside holidays, including a trip to Whitby, which was known for its kippers (smoked herring), jet (a semi-precious stone), and its then-vibrant fishing industry. Whitby was also where Captain Cook, the famous British explorer, learned seamanship. During the 1760s, Cook surveyed the coasts of Newfoundland and Labrador, creating precise charts of harbors, anchorages, and dangerous waters such as Trinity Bay and the Grand Banks. His work, crucial for fishing and navigation, demonstrated exceptional skill and established his reputation, leading to his Pacific expeditions. Cook's charts, featuring detailed observations and navigational hazards, were so accurate they remained in use well into the 20th century.

This made me think about how similar cartography and API discovery are. API discovery is about mapping and finding unknown and undocumented APIs. When you know which APIs you're actually putting out there, you're much less likely to run into problems that could sink your application rollout like a ship that runs aground, or leave you dead in the water like a vessel that's lost both mast and rudder in stormy seas. This inspired me to demonstrate F5's process for discovering cloud SaaS APIs using a simple REST API, named in honor of Captain Cook and his Newfoundland expedition.

Roughly speaking, this is my architecture: a simple REST API running in AWS, which I call the "Cook API". An overview of the API interface is as follows:

```json
"title": "Captain Cook's Newfoundland Mapping API",
"description": "Imaginary Rest API for testing API Discovery",
"available_endpoints": [
  {"path": "/api/charts", "methods": ["GET", "POST"], "description": "Access and create mapping charts"},
  {"path": "/api/charts/<chart_id>", "methods": ["GET"], "description": "Get details of a specific chart"},
  {"path": "/api/hazards", "methods": ["GET"], "description": "Get current navigation hazards"},
  {"path": "/api/journal", "methods": ["GET"], "description": "Access Captain Cook's expedition journal"},
  {"path": "/api/vessels", "methods": ["GET"], "description": "Get information about expedition vessels"},
  {"path": "/api/vessels/<vessel_id>", "methods": ["GET"], "description": "Get status of a specific vessel"},
  {"path": "/api/resources", "methods": ["GET", "POST"], "description": "Manage expedition resources"}
],
```

I set up a container running on a server in AWS that hosts my REST API. To test my API, I create a partial swagger file that represents only a subset of the APIs that the container is advertising:

```yaml
openapi: 3.0.0
info:
  title: Captain Cook's Newfoundland Mapping API (Simplified)
  description: >
    A simplified version of the API. This documentation shows only
    the Charts and Vessels endpoints.
  version: 1.0.0
  contact:
    name: API Support
    email: support@example.com
  license:
    name: MIT
    url: https://opensource.org/licenses/MIT
servers:
  - url: http://localhost:1234
    description: Development server
  - url: http://your-instance-ip
    description: Instance
tags:
  - name: Charts
    description: Mapping charts created by Captain Cook
  - name: Vessels
    description: Information about expedition vessels
paths:
  /:
    get:
      summary: API welcome page and endpoint documentation
      description: Provides an overview of the available endpoints and API capabilities
      responses:
        '200':
          description: Successful operation
          content:
            application/json:
              schema:
                type: object
                properties:
                  title:
                    type: string
                    example: "Captain Cook's Newfoundland Mapping API"
                  description:
                    type: string
                  available_endpoints:
                    type: array
                    items:
                      type: object
                      properties:
                        path:
                          type: string
                        methods:
                          type: array
                          items:
                            type: string
                        description:
                          type: string
  /api/charts:
    get:
      summary: Get all mapping charts
      description: Retrieves a list of all mapping charts created during the Newfoundland expedition
      tags:
        - Charts
      responses:
        '200':
          description: Successful operation
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
                    example: "success"
                  data:
                    type: object
                    additionalProperties:
                      $ref: '#/components/schemas/Chart'
    post:
      summary: Create a new chart
      description: Adds a new mapping chart to the collection
      tags:
        - Charts
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/ChartInput'
      responses:
        '201':
          description: Chart created successfully
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
                    example: "success"
                  message:
                    type: string
                    example: "Chart added"
                  id:
                    type: string
                    example: "6"
        '400':
          description: Invalid input
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
  /api/charts/{chartId}:
    get:
      summary: Get a specific chart
      description: Retrieves a specific mapping chart by its ID
      tags:
        - Charts
      parameters:
        - name: chartId
          in: path
          required: true
          description: ID of the chart to retrieve
          schema:
            type: string
      responses:
        '200':
          description: Successful operation
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
                    example: "success"
                  data:
                    $ref: '#/components/schemas/Chart'
        '404':
          description: Chart not found
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
  /api/vessels:
    get:
      summary: Get all expedition vessels
      description: Retrieves information about all vessels involved in the expedition
      tags:
        - Vessels
      responses:
        '200':
          description: Successful operation
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
                    example: "success"
                  data:
                    type: array
                    items:
                      $ref: '#/components/schemas/VesselBasic'
  /api/vessels/{vesselId}:
    get:
      summary: Get a specific vessel
      description: Retrieves detailed information about a specific vessel by its ID
      tags:
        - Vessels
      parameters:
        - name: vesselId
          in: path
          required: true
          description: ID of the vessel to retrieve
          schema:
            type: string
      responses:
        '200':
          description: Successful operation
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
                    example: "success"
                  data:
                    $ref: '#/components/schemas/VesselDetailed'
        '404':
          description: Vessel not found
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
components:
  schemas:
    Chart:
      type: object
      properties:
        name:
          type: string
          example: "Trinity Bay"
        completed:
          type: boolean
          example: true
        date:
          type: string
          format: date
          example: "1763-06-14"
        landmarks:
          type: integer
          example: 12
        risk_level:
          type: string
          enum: [Low, Medium, High]
          example: "Medium"
    ChartInput:
      type: object
      required:
        - name
      properties:
        name:
          type: string
          example: "St. Mary Bay"
        completed:
          type: boolean
          example: false
        date:
          type: string
          format: date
          example: "1767-05-20"
        landmarks:
          type: integer
          example: 7
        risk_level:
          type: string
          enum: [Low, Medium, High, Unknown]
          example: "Medium"
    VesselBasic:
      type: object
      properties:
        id:
          type: string
          example: "HMS_Grenville"
        type:
          type: string
          example: "Survey Sloop"
        crew:
          type: integer
          example: 18
        status:
          type: string
          enum: [Active, In-port, Damaged, Repairs]
          example: "Active"
    VesselDetailed:
      allOf:
        - $ref: '#/components/schemas/VesselBasic'
        - type: object
          properties:
            current_position:
              type: string
              example: "LAT: 48.2342°N, LONG: -53.4891°W"
            heading:
              type: string
              example: "Northeast"
            weather_conditions:
              type: string
              enum: [Favorable, Challenging, Dangerous]
              example: "Favorable"
    Error:
      type: object
      properties:
        status:
          type: string
          example: "error"
        message:
          type: string
          example: "Resource not found"
```

The process to set up API discovery in F5 Distributed Cloud is very simple:

- Create an origin pool that points to my upstream REST API.
- Create a load balancer in Distributed Cloud and associate the origin pool with it.
- Enable API Definition and import my partial swagger file as my API inventory (some screenshots below).
- Enable API Discovery, and select Enable from Redirect Traffic.
- Run a shell script that tests my API (a sketch of such a script follows this list).
- Take a break or do something else while the API Discovery capabilities populate the dashboard. It can take several hours for API Discovery results to show up in the XC Security Dashboard.
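The test script itself isn't shown in the original post, so here is a minimal sketch of the kind of traffic generator that would do the job. The hostname is a hypothetical placeholder for the Distributed Cloud load balancer fronting the container, and the paths come straight from the endpoint overview above. Traffic has to flow through the load balancer for API Discovery to see it.

```bash
#!/usr/bin/env bash
# Hypothetical traffic generator for the Cook API.
# API_HOST is a placeholder: point it at the XC load balancer, not the container.
API_HOST="https://cook-api.example.com"

# Hit every endpoint, including the ones left out of the partial swagger file,
# so API Discovery sees the full traffic picture.
for path in /api/charts /api/charts/1 /api/hazards /api/journal \
            /api/vessels /api/vessels/HMS_Grenville /api/resources; do
  curl -sk -o /dev/null -w "%{http_code} ${path}\n" "${API_HOST}${path}"
done

# Send some write traffic too, so POST endpoints show up in the dashboard.
curl -sk -X POST -H 'Content-Type: application/json' \
     -d '{"name": "St. Mary Bay", "risk_level": "Medium"}' \
     "${API_HOST}/api/charts"
```

Run it in a loop for a while; discovery works on observed traffic, so the more representative the requests, the better the inventory it builds.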
Results

Well, as predicted, API discovery found that my swagger file represents only a subset of my APIs: it discovered an additional four APIs that were not included in the swagger file. Distributed Cloud describes these as "shadow" APIs, that is, APIs that you may not have known about. API Discovery also found that sensitive data is being returned by a couple of the APIs.

What Now?

If this were a real-world situation, you would have to review what was found, paying special attention to APIs that may be returning sensitive data. Each of these "shadow" APIs could pose a security risk, so you should review every one of them. The good news is that we are now using Distributed Cloud, and we can use it to protect our APIs. It is very easy to allow only those APIs that your project team is actually using. For the APIs that you are exposing through the platform, you can:

- Implement JWT authentication if none exists and authentication is required.
- Configure rate limiting.
- Add a WAF policy.
- Implement a bot protection policy.
- Continually log and monitor your API traffic.

You should also update your API inventory to include the entirety of the APIs that the application provides, and expose only the APIs that are actually being used. All of these things are simple to set up in F5 Distributed Cloud.

Conclusion

You need effective API management and discovery. Detecting "shadow" APIs is crucial to preventing sensitive data exposure. Much like Captain Cook charting unknown territories, the process of uncovering APIs previously hidden in the system demands precision and vigilance. Cook's expeditions needed detailed maps and careful navigation to avoid hidden dangers; modern API management needs tools that can accurately map and monitor every endpoint. By embracing this meticulous approach, we can not only safeguard sensitive data but also steer our digital operations toward a more secure and efficient future.

To quote a pirate who happens to be an API security expert and likes a haiku:

Know yer API seas,
Map each endpoint 'fore ye sail—
Blind waters sink ships.

F5 SSLO Unified Configuration API Quick Introduction
Introduction

Prior to the introduction of BIG-IQ 8.0, you had to use the BIG-IQ graphical user interface (GUI) to configure F5 SSL Orchestrator (SSLO) topologies and their dependencies. Starting with BIG-IQ 8.0, a new unified, supported, and documented REST API endpoint was created to simplify SSLO configuration workflows. The aim is to simplify the configuration of F5 SSLO using standardized API calls. You are now able to store the configuration in your versioning tool (Git, SVN, etc.) and easily integrate the configuration of F5 SSLO into your automation and pipeline tools.

For more information about F5 SSLO, please refer to this introductory video. An overview of F5 SSL Orchestrator is provided in K1174564. As a reminder, the BIG-IQ API reference documentation can be found here. Documentation for the Access Simplified Workflow can be found here.

The figure below shows a possible use for the SSLO Unified API. A few shortcuts are taken in the figure, as it is meant to illustrate the advantage of the simplified workflow.

Example Configuration

For the configuration, the administrator needs to:

- Create a JSON blurb or payload that will be sent to the BIG-IQ API
- Authenticate to the BIG-IQ API
- Send the payload to the BIG-IQ
- Ensure that the workflow completes successfully

The following aims to provide a step-by-step configuration of SSLO leveraging the API. In practice, the steps may be automated and included in the pipeline used to deploy the application, leveraging the enterprise tooling and processes in place.

1.- Authenticate to the API

Interactions with the BIG-IQ API require the use of a token. The initial REST call should look like the following:

```
REST Endpoint: /mgmt/shared/authn/login
HTTP Method: POST
Headers:
  content-type: application/json
Content:
{
  "username": "",
  "password": "",
  "loginProviderName": ""
}
```

Example:

```
POST https://10.0.0.1/mgmt/shared/authn/login HTTP/1.1
content-type: application/json

{
  "username": "username",
  "password": "complicatedPassword!",
  "loginProviderName": "RadiusServer"
}
```

The call above authenticates the specified user to the API. The result of a successful authentication is a response from the BIG-IQ API containing a token.

2.- Push the configuration to BIG-IQ

The headers and HTTP request should look like the following:

```
URI: /mgmt/cm/sslo/api/topology
HTTP Method: POST
Headers:
  content-type: application/json
  X-F5-Auth-Token: [token obtained from the authentication process above]
```

To send the configuration to the BIG-IQ, you will need to send the following payload. The JSON blurb is divided into multiple parts for readability; the full concatenated text is available in the attached file.
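Before walking through the payload piece by piece, here is a minimal end-to-end sketch of the two calls. It assumes jq is installed, the full JSON blurb from this article is saved as payload.json, and the token sits at .token.token in the login response (true of the BIG-IQ systems I have seen, but verify against your own response).

```bash
#!/usr/bin/env bash
# Sketch: authenticate to BIG-IQ, then push the SSLO topology payload.
BIGIQ="10.0.0.1"                 # BIG-IQ management address
BIGIQ_USER="username"            # placeholder credentials
BIGIQ_PASS="complicatedPassword!"

# Step 1: authenticate and capture the token from the JSON response.
TOKEN=$(curl -sk -H 'Content-Type: application/json' \
  -d "{\"username\": \"${BIGIQ_USER}\", \"password\": \"${BIGIQ_PASS}\", \"loginProviderName\": \"RadiusServer\"}" \
  "https://${BIGIQ}/mgmt/shared/authn/login" | jq -r '.token.token')

# Step 2: push the configuration; the response carries the task id and status.
curl -sk -H 'Content-Type: application/json' \
  -H "X-F5-Auth-Token: ${TOKEN}" \
  -d @payload.json \
  "https://${BIGIQ}/mgmt/cm/sslo/api/topology"
```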
Start by defining a new topology with the following characteristics:

- Name: "sslo_NewTopology"
- Listening on the "/Common/VLAN_TAP" VLAN
- The topology is of type "topology_l3_outbound"
- The SSL settings defined below, named "ssloT_NewSsl_Dec"
- The security policy, called "ssloP_NewPolicy_Dec"

The JSON payload starts with the following:

```json
{
  "template": {
    "TOPOLOGY": {
      "name": "sslo_NewTopology",
      "ingressNetwork": {
        "vlans": [
          {
            "name": "/Common/VLAN_TAP"
          }
        ]
      },
      "type": "topology_l3_outbound",
      "sslSetting": "ssloT_NewSsl_Dec",
      "securityPolicy": "ssloP_NewPolicy_Dec"
    },
```

The SSL settings used above are defined in the following JSON, which creates a new profile with default values:

```json
    "SSL_SETTINGS": {
      "name": "ssloT_NewSsl_Dec"
    },
```

The security policy is configured as follows:

- Name: ssloP_NewPolicy_Dec
- Function: introduces a pinning policy doing a policy lookup; matching requests are bypassed (no SSL decryption) with the associated service chain "ssloSC_NewServiceChain_Dec", which is defined further below.

```json
    "SECURITY_POLICY": {
      "name": "ssloP_NewPolicy_Dec",
      "rules": [
        {
          "mode": "edit",
          "name": "Pinners_Rule",
          "action": "allow",
          "operation": "AND",
          "conditions": [
            {
              "type": "SNI Category Lookup",
              "options": {
                "category": [
                  "Pinners"
                ]
              }
            },
            {
              "type": "SSL Check",
              "options": {
                "ssl": true
              }
            }
          ],
          "actionOptions": {
            "ssl": "bypass",
            "serviceChain": "ssloSC_NewServiceChain_Dec"
          }
        },
        {
          "mode": "edit",
          "name": "All Traffic",
          "action": "allow",
          "isDefault": true,
          "operation": "AND",
          "actionOptions": {
            "ssl": "intercept"
          }
        }
      ]
    },
```

The service chain configuration below forwards the traffic to the "ssloS_ICAP_Dec" service:

```json
    "SERVICE_CHAIN": {
      "ssloSC_NewServiceChain_Declarative": {
        "name": "ssloSC_NewServiceChain_Dec",
        "orderedServiceList": [
          {
            "name": "ssloS_ICAP_Dec"
          }
        ]
      }
    },
```

The "ssloS_ICAP_Dec" service is defined in the JSON below, with IP 3.3.3.3 and port 1344:

```json
    "SERVICE": {
      "ssloS_ICAP_Declarative": {
        "name": "ssloS_ICAP_Dec",
        "customService": {
          "name": "ssloS_ICAP_Dec",
          "serviceType": "icap",
          "loadBalancing": {
            "devices": [
              {
                "ip": "3.3.3.3",
                "port": "1344"
              }
            ]
          }
        }
      }
    }
  },
```

The configuration will be deployed to the target defined below:

```json
  "targetList": [
    {
      "type": "DEVICE",
      "name": "my.bigip.internal"
    }
  ]
}
```

After the HTTP POST, the BIG-IQ responds with a transaction id. A sample of what it looks like is given below:

```json
{
  [...]
  "id": "edc17b06-8d97-47e1-9a78-3d47d2db70a6",
  "status": "STARTED",
  [...]
}
```

You can check on the status of the deployment task by submitting a request as follows:

- HTTP GET method
- Authenticated with the custom authentication header X-F5-Auth-Token
- Sent to the BIG-IQ URI: GET /mgmt/cm/sslo/tasks/api/{{status_id}} HTTP/1.1
- With the Content-Type header set to application/json

A scripted sketch of this status poll follows below. Once the status of the task changes to FINISHED, the configuration is successfully completed. You can now check the F5 SSLO interface to make sure the new topology has been created. The BIG-IQ interface will show the new topology as depicted in the example below. The new topology has been deployed to the BIG-IP automatically; you can connect to the BIG-IP to verify, where the interface should look like the one depicted below.

Congratulations, you have now successfully deployed a fully functional topology that your users can start using.
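Here is the promised sketch of the status poll. It reuses ${BIGIQ} and ${TOKEN} from the authentication sketch earlier, assumes the task id from the POST response is in ${TASK_ID}, and uses jq to pull the status field shown in the sample response.

```bash
# Poll the deployment task until it leaves the STARTED state.
STATUS="STARTED"
while [[ "${STATUS}" == "STARTED" ]]; do
  sleep 10
  STATUS=$(curl -sk -H 'Content-Type: application/json' \
    -H "X-F5-Auth-Token: ${TOKEN}" \
    "https://${BIGIQ}/mgmt/cm/sslo/tasks/api/${TASK_ID}" | jq -r '.status')
  echo "task status: ${STATUS}"
done

if [[ "${STATUS}" == "FINISHED" ]]; then
  echo "deployment complete"
else
  echo "deployment ended with status: ${STATUS}" >&2
fi
```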
Note that you can also use the BIG-IQ REST API to delete the items that were just created. This is done by sending HTTP DELETE requests to the different API endpoints for the topology, service, security policy, and so on. For the example above, you would send HTTP DELETE requests to the following URIs:

- For the topology: /mgmt/cm/sslo/api/topology/sslo_NewTopology_Dec
- For the service chain: /mgmt/cm/sslo/api/service-chain/ssloSC_NewServiceChain_Dec
- For the SSL settings: /mgmt/cm/sslo/api/ssl/ssloT_NewSsl_Dec

All of the requests listed above need to be sent to the BIG-IQ system's management IP address with the following two headers:

- content-type: application/json
- X-F5-Auth-Token: [value of the authentication token obtained during authentication]
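The teardown is just as easy to script. A minimal sketch, again reusing ${BIGIQ} and ${TOKEN} from the authentication sketch, that walks the three URIs listed above:

```bash
# Delete the objects created earlier, topology first.
for uri in \
    /mgmt/cm/sslo/api/topology/sslo_NewTopology_Dec \
    /mgmt/cm/sslo/api/service-chain/ssloSC_NewServiceChain_Dec \
    /mgmt/cm/sslo/api/ssl/ssloT_NewSsl_Dec; do
  curl -sk -X DELETE \
    -H 'Content-Type: application/json' \
    -H "X-F5-Auth-Token: ${TOKEN}" \
    "https://${BIGIQ}${uri}"
done
```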
Conclusion

BIG-IQ makes it easier to manage SSLO topologies thanks to its REST API. You can now make supported, standardized API calls to the BIG-IQ to create and modify topologies and deploy the changes directly to BIG-IP.

iHealth API Part 4 - A Little More Code

In the last chapter, we wrote a quick bash script that used some tools to produce diagnostics summaries for qkviews. Now that you have some experience with the API, maybe you want to do a little bit more. In this article, we're going to explore the data in more detail to give you an idea of the kinds of things you can do with the API.

Generally speaking, the API methods (sometimes called 'calls') fall into two broad categories: "group" or "collection" methods, and methods that deal with a single qkview ("instance methods"). In this article, we're going to cover some of the collection methods and how to manage your qkviews.

In previous articles, we've restrained ourselves from going hog-wild and kept to the nice safe methods that are essentially read-only. That is, we just ask for information from the API, and then consume and manipulate it locally, without making any changes to the data on the server. This is a nice safe way to get a handle on how the API works, without accidentally deleting anything or changing an attribute of a QKView without meaning to. It's time to take the training wheels off, call in the goats, and go a little wild. Today, we'll be modifying an attribute of a QKView, or if we're feeling really crazy, we'll modify two of them, maybe even in the same HTTP request. My palms are sweating just writing this...

Before we get that crazy, let's take a minute to talk about some collection methods first and get that out of the way. Collection methods can be very handy and very powerful, as they allow you to deal with your QKViews en masse. There are three collection methods that we'll talk about. The first is a way to get a list of all your QKView IDs, which makes it possible to iterate over all the QKViews in your iHealth account while hunting for something. Remember how we got our list of goats in a barn? Similarly:

GET qkview-analyzer/api/qkviews

will return a list of qkview ids for every qkview in your iHealth account.

What if we decide we want to get rid of all our QKViews? We could GET the list of IDs and then iterate through them, DELETEing each one, but a collection method is much easier:

DELETE qkview-analyzer/api/qkviews

will wipe all the qkviews from your account. Use with caution; this is the 'nuke from orbit' option. We won't be using it in our projects for this article series, but I thought I'd mention it, as it is very handy for some situations.

Another collection method, one that we *will* be using, is this:

POST qkview-analyzer/api/qkviews

This method allows us to add to our collection of qkviews. Since we're using POST, remember that each time this is executed (even with the exact same qkview file), a new qkview object is created on the server. So you'll only need to do it once for a given qkview file, unless of course you accidentally DELETE it from your collection and want to add it again. I've never done anything like that; I'm just speaking hypothetically.
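To see the three collection methods side by side, here is a sketch using curl. ${ACCEPT_HEADER} and ${CURL_OPTS} stand in for the same variables the article's scripts use (the accept header plus authenticated-session options); set them the way upload-qkview.sh does.

```bash
BASE="https://ihealth-api.f5.com/qkview-analyzer/api/qkviews"

# List every qkview id in your account.
curl ${ACCEPT_HEADER} ${CURL_OPTS} "${BASE}"

# Add a qkview to the collection (each POST creates a brand-new object).
curl ${ACCEPT_HEADER} ${CURL_OPTS} -F 'qkview=@/path/to/case.qkview' "${BASE}"

# The 'nuke from orbit' option: wipe every qkview from the account.
curl ${ACCEPT_HEADER} ${CURL_OPTS} -X DELETE "${BASE}"
```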
So now, armed with those collection methods and with knowledge of our previous scripts that allowed us to examine QKView metadata, let's build up a little script that combines collection methods with some instance methods. Now, the sample qkview won't suffice for this exercise, as it is meant for demonstration purposes and is used system-wide (and is thus read-only). For this exercise, we'll be using a real qkview. This article is going to write two scripts.

The first will perform an upload of a qkview file to iHealth, using many of the techniques that we learned in our first scripting session: the Upload QKView script (note that all the scripts referenced in this series are included, but the one needed for this exercise is upload-qkview.sh).

Notice how the authentication parts all look the same? What a great opportunity to refactor these scripts and build up an iHealth API library that our other scripts could use. Extra credit if you take that on!

So, using the same authentication methods, we need to be able to specify the location of the qkview file that we want to upload. Here is a sample qkview file for you to download and use for these exercises, or feel free to pull a qkview off your own gear and use that. AskF5 explains how to get a QKView.

Here is where the collection method to add a qkview to our collection happens:

```bash
function upload_qkview {
    path="$1"
    CURL_CMD="${CURL} ${ACCEPT_HEADER} ${CURL_OPTS} -F 'qkview=@${path}' -D /dev/stdout https://ihealth-api.f5.com/qkview-analyzer/api/qkviews"
    [[ $DEBUG ]] && echo "${CURL_CMD}" >&2
    out="$(eval "${CURL_CMD}")"
    if [[ $? -ne 0 ]]; then
        error "Couldn't retrieve diagnostics for ${qid}"
    fi
    location=$(echo "${out}" | grep -e '^Location:' | tr -d '\r\n')
    transformed=${location/Location: /}
    echo "${transformed}"
}
```

The upload finishes with an HTTP 302 redirect that points us to the new qkview. After we upload, we cannot just immediately ask for data about the qkview, as a certain amount of processing happens before we can start grilling the server about it. While the server is busy doing all the prep work it needs in order to give you back data about the qkview, it doesn't mind if we politely ask how things are going up there in the cloud. To do this, we use a GET request, and the server responds with an HTTP status code that tells our script how things are going:

our code: "ready yet?"
server:   "not yet"
our code: "ready yet?"
server:   "not yet"

sort of like kids on a car trip asking "are we there yet?" over and over and over again. Only instead of the driver getting enraged and turning up the radio to drown them out, our server just responds politely "not yet" until it's done processing:

our code: "ready yet?"
server:   "yup, ready!"

Then our code can go about its business. In the programming trades, this is called 'polling': we poll the server until it gives us the answer we want.

our code: GET qkview-analyzer/api/qkviews/{qkview_id}
server:   HTTP 202
our code: GET qkview-analyzer/api/qkviews/{qkview_id}
server:   HTTP 202
our code: GET qkview-analyzer/api/qkviews/{qkview_id}
server:   HTTP 200

So that's how to add a qkview to your collection. Of course, you might not get an HTTP 202 or 200 back; you might get something else, in which case something went wrong in either the upload or the processing.
At that point, we should bail out and return an error to the runner of the script:

```bash
function wait_for_state {
    url="$1"
    count=0
    CURL_CMD="${CURL} ${ACCEPT_HEADER} ${CURL_OPTS} -w \"%{http_code}\" ${url}"
    [[ $DEBUG ]] && echo "${CURL_CMD}" >&2
    _status=202
    time_passed=0
    while [[ "$_status" -eq 202 ]] && [[ $count -lt ${POLL_COUNT} ]]; do
        _status="$(eval "${CURL_CMD}")"
        count=$((count + 1))
        time_passed=$((count * POLL_WAIT))
        [[ $VERBOSE ]] && echo -ne "waiting (${time_passed} seconds and counting)\r" >&2
        sleep ${POLL_WAIT}
    done
    printf "\nFinished in %s seconds\n" "${time_passed}" >&2
    if [[ "$_status" -eq 200 ]]; then
        [[ $VERBOSE ]] && echo "Success - qkview is ready"
    elif [[ ${count} -ge ${POLL_COUNT} ]]; then
        error "Timed out waiting for qkview to process"
    else
        error "Something went wrong with qkview processing, status: ${_status}"
    fi
}
```

The change-description.sh script (included in the zip linked above) will allow you to update the description field of any number of qkviews with a given chassis serial number. This script uses both a collection method (list all my qkview IDs) and an instance method on several qkviews to update some metadata associated with each qkview.

We've introduced yet another verb into our working lexicon: PUT. We use PUT here because it's a modification on a qkview that, no matter how many times we perform it, results in the same qkview state. This is called idempotency. Unlike our POST above in the upload script, which creates a new qkview every time you run it, this PUT may change the state of the qkview the first time, but subsequent identical PUT requests won't change the state further. (A hedged sketch of such a request closes this article.)

So now we can submit QKViews, get diagnostic results, and change metadata about the QKView. What else could we possibly do? The rest of this article series will explore the data in the API and how to retrieve and process it.
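To make the PUT discussion concrete, here is a hedged sketch of a single description update. The exact shape of the request is an assumption on my part (change-description.sh in the zip shows the real one); the form-field name and qkview id below are hypothetical.

```bash
# Sketch only: update the description of one qkview by id.
# The 'description' form field is an assumption; see change-description.sh
# for the request the article's script actually sends.
QKVIEW_ID="123456"   # hypothetical id from the collection listing
curl ${ACCEPT_HEADER} ${CURL_OPTS} -X PUT \
  -F 'description=chassis serial ABC123 - production pair' \
  "https://ihealth-api.f5.com/qkview-analyzer/api/qkviews/${QKVIEW_ID}"
```

Run it twice and the second call leaves the qkview exactly as the first did; that is the idempotency the article describes.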