F5 SSLO Unified Configuration API Quick Introduction
Introduction

Prior to BIG-IQ 8.0, you had to use the BIG-IQ graphical user interface (GUI) to configure F5 SSL Orchestrator (SSLO) Topologies and their dependencies. Starting with BIG-IQ 8.0, a new unified, supported, and documented REST API endpoint was created to simplify SSLO configuration workflows. The aim is to simplify the configuration of F5 SSLO using standardized API calls. You are now able to store the configuration in your versioning tool (Git, SVN, etc.) and easily integrate the configuration of F5 SSLO into your automation and pipeline tools. For more information about F5 SSLO, please refer to this introductory video. An overview of F5 SSL Orchestrator is provided in K1174564. As a reminder, the BIG-IQ API reference documentation can be found here. Documentation for the Access Simplified Workflow can be found here.

The figure below shows a possible use for the SSLO Unified API. A few shortcuts are taken in the figure as it is meant to illustrate the advantage of the simplified workflow.

Example Configuration

For the configuration, the administrator needs to:
- Create a JSON payload (a "blurb") that will be sent to the BIG-IQ API
- Authenticate to the BIG-IQ API
- Send the payload to the BIG-IQ
- Ensure that the workflow completes successfully

The following provides a step-by-step configuration of SSLO leveraging the API. In practice, the steps may be automated and included in the pipeline used to deploy the application, leveraging the enterprise tooling and processes in place.

1.- Authenticate to the API

Interactions with the BIG-IQ API require the use of a token. The initial REST call should look like the following:

REST Endpoint: /mgmt/shared/authn/login
HTTP Method: POST
Headers:
- content-type: application/json
Content:
{
  "username": "",
  "password": "",
  "loginProviderName": ""
}

Example:

POST https://10.0.0.1/mgmt/shared/authn/login HTTP/1.1
Headers:
content-type: application/json
Content:
{
  "username": "username",
  "password": "complicatedPassword!",
  "loginProviderName": "RadiusServer"
}

The call above authenticates the user specified in the payload to the API. The result of a successful authentication is a response from the BIG-IQ API containing a token.

2.- Push the configuration to BIG-IQ

The headers and HTTP request should look like the following:

URI: mgmt/cm/sslo/api/topology
HTTP Method: POST
Headers:
- content-type: application/json
- X-F5-Auth-Token: [token obtained from the authentication process above]

To send the configuration to the BIG-IQ you will need to send the following payload - it is cut up into smaller pieces for readability. The JSON is divided into multiple parts - the full concatenated text is available in the attached file.
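Before walking through the payload piece by piece, here is a minimal end-to-end sketch of the workflow using curl and jq. The BIG-IQ address, the credentials, the payload file name (topology.json, containing the JSON shown below), and the token field extracted from the login response are assumptions for illustration, not a definitive implementation:

BIGIQ="https://10.0.0.1"

# 1 - authenticate and capture the token (assumed to be returned in the token.token field)
TOKEN=$(curl -sk -H "Content-Type: application/json" \
  -d '{"username":"username","password":"complicatedPassword!","loginProviderName":"RadiusServer"}' \
  "${BIGIQ}/mgmt/shared/authn/login" | jq -r '.token.token')

# 2 - push the topology configuration (saved locally as topology.json) and capture the task id
TASK=$(curl -sk -H "Content-Type: application/json" -H "X-F5-Auth-Token: ${TOKEN}" \
  -d @topology.json "${BIGIQ}/mgmt/cm/sslo/api/topology" | jq -r '.id')

# 3 - poll the deployment task until its status is no longer STARTED
STATUS="STARTED"
while [[ "${STATUS}" == "STARTED" ]]; do
  sleep 10
  STATUS=$(curl -sk -H "Content-Type: application/json" -H "X-F5-Auth-Token: ${TOKEN}" \
    "${BIGIQ}/mgmt/cm/sslo/tasks/api/${TASK}" | jq -r '.status')
  echo "task ${TASK}: ${STATUS}"
done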
Start by defining a new topology with the following characteristics:
- Name: "sslo_NewTopology"
- Listening on the "/Common/VLAN_TAP" VLAN
- The topology is of type "topology_l3_outbound"
- The SSL settings defined below, named "ssloT_NewSsl_Dec"
- The policy is called "ssloP_NewPolicy_Dec"

The JSON payload starts with the following:

{
  "template": {
    "TOPOLOGY": {
      "name": "sslo_NewTopology",
      "ingressNetwork": {
        "vlans": [
          {
            "name": "/Common/VLAN_TAP"
          }
        ]
      },
      "type": "topology_l3_outbound",
      "sslSetting": "ssloT_NewSsl_Dec",
      "securityPolicy": "ssloP_NewPolicy_Dec"
    },

The SSL settings used above are defined in the following JSON, which creates a new profile with default values:

    "SSL_SETTINGS": {
      "name": "ssloT_NewSsl_Dec"
    },

The security policy is configured as follows:
- name: ssloP_NewPolicy_Dec
- function: introduces a pinning policy doing a policy lookup - matching requests are bypassed (no SSL decryption) with the associated service chain "ssloSC_NewServiceChain_Dec" that is defined further down below.

    "SECURITY_POLICY": {
      "name": "ssloP_NewPolicy_Dec",
      "rules": [
        {
          "mode": "edit",
          "name": "Pinners_Rule",
          "action": "allow",
          "operation": "AND",
          "conditions": [
            {
              "type": "SNI Category Lookup",
              "options": {
                "category": [
                  "Pinners"
                ]
              }
            },
            {
              "type": "SSL Check",
              "options": {
                "ssl": true
              }
            }
          ],
          "actionOptions": {
            "ssl": "bypass",
            "serviceChain": "ssloSC_NewServiceChain_Dec"
          }
        },
        {
          "mode": "edit",
          "name": "All Traffic",
          "action": "allow",
          "isDefault": true,
          "operation": "AND",
          "actionOptions": {
            "ssl": "intercept"
          }
        }
      ]
    },

The service chain configuration is defined below to forward the traffic to the "ssloS_ICAP_Dec" service. This is done with the following JSON:

    "SERVICE_CHAIN": {
      "ssloSC_NewServiceChain_Declarative": {
        "name": "ssloSC_NewServiceChain_Dec",
        "orderedServiceList": [
          {
            "name": "ssloS_ICAP_Dec"
          }
        ]
      }
    },

The "ssloS_ICAP_Dec" service is defined with the JSON below, with IP 3.3.3.3 on port 1344:

    "SERVICE": {
      "ssloS_ICAP_Declarative": {
        "name": "ssloS_ICAP_Dec",
        "customService": {
          "name": "ssloS_ICAP_Dec",
          "serviceType": "icap",
          "loadBalancing": {
            "devices": [
              {
                "ip": "3.3.3.3",
                "port": "1344"
              }
            ]
          }
        }
      }
    }
  },

The configuration will be deployed to the target defined below:

  "targetList": [
    {
      "type": "DEVICE",
      "name": "my.bigip.internal"
    }
  ]
}

After the HTTP POST, the BIG-IQ will respond with a transaction id. A sample of what it looks like is given below:

{
  [...]
  "id": "edc17b06-8d97-47e1-9a78-3d47d2db70a6",
  "status": "STARTED",
  [...]
}

You can check on the status of the deployment task by submitting a request as follows:
- HTTP GET method
- Authenticated with the use of the custom authentication header X-F5-Auth-Token
- Sent to the BIG-IQ URI: GET mgmt/cm/sslo/tasks/api/{{status_id}} HTTP/1.1
- With the Content-Type header set to application/json

Once the status of the task changes to FINISHED, the configuration is successfully completed. You can now check the F5 SSLO interface to make sure the new topology has been created. The BIG-IQ interface will show the new topology as depicted in the example below.

The new topology has been deployed to the BIG-IP automatically. You can connect to the BIG-IP to verify; the interface should look like the one depicted below.

Congratulations, you now have successfully deployed a fully functional topology that your users can start using.

Note that you can also use the BIG-IQ REST API to delete the items that were just created. This is done by sending HTTP DELETE requests to the different API endpoints for the topology, service, security policy, etc.
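Before listing the specific endpoints used for this example, here is a minimal sketch of what one of those DELETE requests looks like with curl, reusing the BIGIQ and TOKEN variables assumed in the earlier authentication sketch:

# Delete the topology created above; BIGIQ and TOKEN are the assumed values
# captured in the authentication sketch earlier in this article.
curl -sk -X DELETE \
  -H "Content-Type: application/json" \
  -H "X-F5-Auth-Token: ${TOKEN}" \
  "${BIGIQ}/mgmt/cm/sslo/api/topology/sslo_NewTopology"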
For the example above, you would send HTTP DELETE requests to the following URIs:
- For the topology: /mgmt/cm/sslo/api/topology/sslo_NewTopology
- For the service chain: /mgmt/cm/sslo/api/service-chain/ssloSC_NewServiceChain_Dec
- For the SSL settings: /mgmt/cm/sslo/api/ssl/ssloT_NewSsl_Dec

All the requests listed above need to be sent to the BIG-IQ system's management IP address with the following 2 headers:
- content-type: application/json
- X-F5-Auth-Token: [value of the authentication token obtained during authentication]

Conclusion

BIG-IQ makes it easier to manage SSLO Topologies thanks to its REST API. You can now make supported, standardized API calls to the BIG-IQ to create and modify topologies and deploy the changes directly to BIG-IP.

iHealth API Part 4 - A Little More Code
In the last chapter, we wrote a quick bash script that used some tools to produce diagnostics summaries for qkviews. Now that you have some experience with the API, maybe you want to do a little bit more. In this article, we're going to explore the data in a little more detail to give you an idea of the kinds of things you can do with the API.

Generally speaking, the API methods (sometimes called 'calls') are broken into two different broad categories: "group" or "collection" methods, and then methods that deal with a single qkview ("instance methods"). In this article, we're going to cover some of the collection methods, and how to manage your qkviews.

In previous articles, we've restrained ourselves from going hog-wild and kept to the nice safe methods that are essentially read-only methods. That is, we just ask for information from the API, and then consume it and manipulate it locally, without making any changes to the data on the server. This is a nice safe way to get a handle on how the API works, without accidentally deleting anything, or changing an attribute of a QKView without meaning to. It's time to take the training wheels off, call in the goats, and go a little wild. Today, we'll be modifying an attribute of a QKView, or if we're feeling really crazy, we'll modify two of them, maybe even in the same HTTP request. My palms are sweating just writing this...

Before we get that crazy, let's take a minute and talk about some collection methods first, and get that out of the way. Collection methods can be very handy, and very powerful, as they allow you to deal with your QKViews en masse. There are three collection methods that we'll talk about. The first one is a way to get a list of all your QKView IDs. This makes it possible to iterate over all the QKViews in your iHealth account while hunting for something. Remember how we got our list of goats in a barn? Similarly:

GET qkview-analyzer/api/qkviews

will return a list of qkview ids for every qkview in your iHealth account.

What if we decide we want to get rid of all our QKViews? We could GET the list of IDs, and then iterate through them, DELETEing each one, but a collection method is much easier:

DELETE qkview-analyzer/api/qkviews

will wipe all the qkviews from your account. Use with caution, this is the 'nuke from orbit' option. We won't be using it in our projects for this article series, but I thought I'd mention it, as it is very handy for some situations.

Another collection method that we *will* be using is this:

POST qkview-analyzer/api/qkviews

This method allows us to add to our collection of qkviews. Since we're using POST, remember that each time this is executed (even with the exact same qkview file), a new qkview object is created on the server. So you'll only need to do it once for a given qkview file, unless of course, you accidentally DELETE it from your collection, and want to add it again. I've never done anything like that, I'm just speaking hypothetically.

So now armed with those collection methods, and with knowledge of our previous scripts that allowed us to examine QKView metadata, let's build up a little script that combines collection methods with some instance methods. Now, the sample qkview won't suffice for this exercise, as it is meant for demonstration purposes, and is used system-wide (and is thus read-only). For this exercise today, we'll be using a real qkview. This article is going to build two scripts.
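Before we get to those two scripts, here is a minimal sketch of the first two collection methods with curl. The cookie jar file name is an assumption for illustration; obtaining the authentication cookie itself is covered in the previous chapter and handled by the scripts below:

# List every qkview id in your account (a read-only collection method).
# The cookie jar file (cookies.txt) is assumed to already hold a valid session.
curl -s -b cookies.txt \
  -H "Accept: application/vnd.f5.ihealth.api+json" \
  https://ihealth-api.f5.com/qkview-analyzer/api/qkviews

# The 'nuke from orbit' option: remove every qkview from your account.
# Deliberately commented out -- use with caution.
# curl -s -b cookies.txt -X DELETE https://ihealth-api.f5.com/qkview-analyzer/api/qkviews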
The first will perform an upload of a qkview file to iHealth, using many of the techniques that we learned in our first scripting session.

Upload QKView script (note that all the scripts referenced in this series are included, but the one needed for this exercise is upload-qkview.sh)

Notice how the authentication parts all look the same? What a great opportunity to refactor these scripts and build up an iHealth API library that our other scripts could use. Extra credit if you take that on! So using the same authentication methods, we need to be able to specify the location of the qkview file that we want to upload. Here is a sample qkview file for you to download and use for these exercises, or feel free to pull a qkview off your own gear, and use that. AskF5 explains how to get a QKView.

Here is where the collection method to add a qkview to our collection happens:

 96 function upload_qkview {
 97     path="$1"
 98     CURL_CMD="${CURL} ${ACCEPT_HEADER} ${CURL_OPTS} -F 'qkview=@${path}' -D /dev/stdout https://ihealth-api.f5.com/qkview-analyzer/api/qkviews"
 99     [[ $DEBUG ]] && echo "${CURL_CMD}" >&2
100     out="$(eval "${CURL_CMD}")"
101     if [[ $? -ne 0 ]]; then
102         error "Couldn't upload qkview ${path}"
103     fi
104     location=$(echo "${out}" | grep -e '^Location:' | tr -d '\r\n')
105     transformed=${location/Location: /}
106     echo "${transformed}"
107 }

The upload will finish with an HTTP 302 redirect that points us to the new qkview. After we upload, we cannot just immediately ask for data about the qkview, as there is a certain amount of processing that happens before we can start grilling the server about it. While the server is busy doing all the prep work it needs to do in order to give you back data about the qkview, the server doesn't mind if we politely ask it about how things are going up there in the cloud. To do this, we use a GET request, and the server will respond with an HTTP status code that tells our script how things are going:

our code: "ready yet?"
server : "not yet"
our code: "ready yet?"
server : "not yet"

sort of like kids on a car trip asking "are we there yet?" over and over and over again. Only instead of the driver getting enraged and turning up the radio to drown them out, our server just responds politely "not yet" until it's done processing.

our code: "ready yet?"
server : "yup, ready!"

Then our code can go about its business. In the programming trades, this is called 'polling'. We poll the server until the server gives us the answer we want.

our code: GET qkview-analyzer/api/qkviews/
server : HTTP 202
our code: GET qkview-analyzer/api/qkviews/
server : HTTP 202
our code: GET qkview-analyzer/api/qkviews/
server : HTTP 200

So that's how to add a qkview to your collection. Of course you might not get an HTTP 202 or 200 back, you might get something else, in which case something went wrong in either the upload or the processing.
At that point, we should also bail out, and return an error to the runner of the script:

109 function wait_for_state {
110     url="$1"
111     count=0
112     CURL_CMD="${CURL} ${ACCEPT_HEADER} ${CURL_OPTS} -w "%{http_code}" ${url}"
113     [[ $DEBUG ]] && echo "${CURL_CMD}" >&2
114     _status=202
115     time_passed=0
116     while [[ "$_status" -eq 202 ]] && [[ $count -lt ${POLL_COUNT} ]]; do
117         _status="$(eval "${CURL_CMD}")"
118         count=$((count + 1))
119         time_passed=$((count * POLL_WAIT))
120         [[ $VERBOSE ]] && echo -ne "waiting (${time_passed} seconds and counting)\r" >&2
121         sleep ${POLL_WAIT}
122     done
123     printf "\nFinished in %s seconds\n" "${time_passed}" >&2
124     if [[ "$_status" -eq 200 ]]; then
125         [[ $VERBOSE ]] && echo "Success - qkview is ready"
126     elif [[ ${count} -ge ${POLL_COUNT} ]]; then
127         error "Timed out waiting for qkview to process"
128     else
129         error "Something went wrong with qkview processing, status: ${_status}"
130     fi
131 }

The change-description.sh script (included in the zip linked above) will allow you to update the description field of any number of qkviews with a given chassis serial number. This script will use both a collection method (list all my qkview IDs) and an instance method on several qkviews to update some metadata associated with the qkview.

We've introduced yet another verb into our working lexicon: PUT. We use PUT here because it's a modification on a qkview that, no matter how many times we perform it, will result in the same qkview state. This is called idempotency. Unlike, say, our POST above in our upload script, which results in new qkviews every time you run it, this PUT may change the state of the qkview the first time, but subsequent identical PUT requests won't change the state further.

So now we can submit QKViews, get diagnostic results, and change metadata about the QKView. So what else could we possibly do? The rest of this article series will explore the data in the API and how to retrieve and process it.

iHealth API Part 3 - A Little Code
We finished the last article with exploring some of the data available in the API with a web browser that was helpful enough to render it in a mildly readable fashion. For most automation projects, however, we don't care if it's easy to read, as long as it's parseable and we can do something with the data that doesn't involve watching streams of xml scroll off the screen. For today, we'll be working with the data from the diagnostics section of the API. We'll use the same data from the API that we explored last time:

GET /qkview-analyzer/api/qkviews/0/diagnostics

For this script exercise, we'll be using a couple of tools working in the unix shell:
- bash 4.x
- xmlstarlet
- grep

When the diagnostics run against a QKView, they are either considered a 'hit', where the diagnostic found an issue in the QKView, or a 'miss', where the diagnostic ran but found no issue. We'll develop a little script that will provide a quick summary of our hits and misses for a given QKView. In the script we'll deal with the following steps:
- authentication
- diagnostics retrieval and asking for subsets of diagnostics
- diagnostics parsing and selective display

Let's start out by writing a couple of functions that will deal with the housekeeping that we need to do in order to connect to the API. First thing we have to do is authenticate ourselves with the credentials that we used when we built our iHealth account. By authenticating, we will obtain a cookie in the HTTP response to our authentication request (if it's successful!) that we will send with every subsequent request to the API. This way, the API knows who we are, which qkviews in the system are ours, and knows that we've proven we're who we're claiming to be. This is currently done by sending a form POST to the login server containing our credentials, and stashing the cookies we get back for later use.

curl provides exactly the sort of functionality that we need to perform API work, and bindings for the curl libraries are available for lots and lots of languages if you want to use something other than bash, or want to develop a bigger script later on.

We set up some initial values for later use:

13 readonly CURL=/usr/bin/curl
...
23 RESPONSE_FORMAT=${FORMAT:-"xml"}
...
32 ACCEPT_HEADER="-H'Accept: application/vnd.f5.ihealth.api+${RESPONSE_FORMAT}'"

Now that the housekeeping is done, we'll need an authentication function:

72 function authenticate {
73     user="$1"
74     pass="$2"
75     # Yup! Security issues here! we're eval'ing with user input. Don't put this code into a CGI script...
76     CURL_CMD="${CURL} --data-urlencode 'userid=${user}' --data-urlencode 'passwd=${pass}' ${CURL_OPTS} https://login.f5.com/resource/loginAction.jsp"
77     [[ $DEBUG ]] && echo ${CURL_CMD}
78
79     if [[ ! "$user" ]] || [[ ! "$pass" ]]; then
80         error "missing username or password"
81     fi
82     eval "$CURL_CMD"
83     rc=$?
84     if [[ $rc -ne 0 ]]; then
85         error "curl authentication request failed with exit code: ${rc}"
86     fi
87
88     if ! \grep -e "sso_completed.*1$" ${COOKIEJAR} > /dev/null 2>&1; then
89         error "Authentication failed, check username and password"
90     fi
91     [[ $VERBOSE ]] && echo "Authentication successful" >&2
92 }

The way this script is written, it uses the setting of environment variables on the commandline in order to protect the innocent. This allows us to pass sensitive information into a script without it needing to be embedded for others to discover, or show up in the output of ps on a shared machine. You can hardcode it if you wish, but I'd recommend not doing so.
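As a concrete illustration, an invocation might look something like the following. The variable names, script name, and qkview id are hypothetical here — check the actual script for the exact interface it expects:

# Hypothetical invocation: credentials are passed as environment variables on the
# command line so they never have to live inside the script itself.
USER='someone@example.com' PASS='n0tMyRealPassword' ./qkview-hits.sh 123456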
Once authentication succeeds, then we go out and grab the diagnostics for the qkview we specified as an argument to the script:

 94 function get_diagnostics_hits {
 95     qid="$1"
 96     CURL_CMD="${CURL} ${ACCEPT_HEADER} ${CURL_OPTS} https://ihealth-api.f5.com/qkview-analyzer/api/qkviews/${qid}/diagnostics?set=hit"
 97     [[ $DEBUG ]] && echo "${CURL_CMD}" >&2
 98     out="$(eval "${CURL_CMD}")"
 99     if [[ $? -ne 0 ]]; then
100         error "Couldn't retrieve diagnostics for ${qid}"
101     fi
102     echo "$out"
103 }

Notice how we have set up an Accept header in the request? This tells the API what format you want the response to be in. Diagnostics have a special role in the API, so they are available in several formats: xml, json, pdf, and csv. Everything else in the API is only available in xml or json. Use the Accept header to specify how you want your response data. If you don't send an Accept header, then the API assumes you want XML, and returns XML. In this example we're asking for XML explicitly.

Now that we have a big pile of XML, we only want a couple pieces of it, so make xmlstarlet dig through it and give us a nice display:

131 # perform some XML extraction, and print it out in a nice readable format
132 diagnostics_count=$(echo ${diagnostics} | ${XMLPROCESSOR} select -t -c 'string(/diagnostic_output/diagnostics/@hit_count)' -)
133 for ((i=1;i<=diagnostics_count;i++)); do
134     printf "%-10s : " $(echo ${diagnostics} | ${XMLPROCESSOR} select -t -c "string(//diagnostic[$i]/@name)" -)
135     printf "%s\n" "$(echo ${diagnostics} | ${XMLPROCESSOR} select -t -c "//diagnostic[$i]//h_header/node()" -)"
136 done

See how easy that is? We can do the same thing with json if we want:

140 # in json
141 diagnostics_count=$(echo ${diagnostics} | ${JSONPROCESSOR} -r .diagnostics.hit_count)
142 for ((i=0;i<diagnostics_count;i++)); do
143     printf "%-10s : " $(echo ${diagnostics} | ${JSONPROCESSOR} -r .diagnostics.diagnostic[$i].name)
144     printf "%s\n" "$(echo ${diagnostics} | ${JSONPROCESSOR} -r .diagnostics.diagnostic[$i].results.h_header)"
145 done

If you need to deal with json at the commandline, and don't already know about it, jq is an incredibly powerful tool for handling json, and well worth your time exploring (see Working with iControl REST Data on the Command Line for more details).

So now that it works, what do all the fields mean? Let's look at the json output and see what we're getting:

{
  "diagnostics": {
    "filter": "hit",
    "diagnostic": [
      {
        "name": "H371501",
        "output": [],
        "run_data": {
          "h_importance": "MEDIUM",
          "match": true
        },
        "results": {
          "h_action": "Upgrade to version 10.2.2 or later.",
          "h_name": "H371501",
          "h_header": "A known issue causes the chmand process to leak memory on this BIG-IP platform",
          "h_summary": "Due to a known issue, the chassis manager daemon, chmand, leaks memory on BIG-IP systems which contain the Always-On Management (AOM) subsystem.
The memory leak is constant regardless of the volume of traffic processed by the system.",
          "h_sols": [
            "http://support.f5.com/kb/en-us/solutions/public/12000/900/sol12941.html"
          ]
        }
      },

name - the diagnostic name
output - an array containing any qkview-specific information the diagnostic wants to tell you about
run_data - data about the run, including importance (low, medium, high, critical) and match (whether it's a hit or not)
results
  h_action - references to more materials
  h_name - the diagnostic name again
  h_header - a title
  h_summary - a more verbose explanation of the issue
  h_sols - links to solution articles that go into more depth

Customizing the data that is shown in your summary is very easy, by adding statements that dig into the diagnostics return data. One could, if one wished, build up a reporting system that showed the change in diagnostics results over time for a collection of qkviews if one were so inclined. The full script is available here.
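As one example of such a customization, a one-line addition that pulls just the solution-article links out of every hit — reusing the ${diagnostics} variable and the field names described above — might look like this:

# Print the solution links (h_sols) for each diagnostic hit in the json response.
echo "${diagnostics}" | jq -r '.diagnostics.diagnostic[].results.h_sols[]'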
iHealth API Part 2 - An Introduction to REST

In the last article, we got some ideas of the kinds of data that iHealth can provide by exploring the wealth of information through a standard web UI. This is a quick easy way to get answers about a couple of your F5 machines, or to do a one-off check of something, or if you are looking for a particular problem, or working with F5 Support to resolve an issue or answer a question. If you have a couple machines, or maybe many many more, however, and you want to do something periodically, like, say, check for new Diagnostics results (remember, we generally update them once a week), then you'll quickly find that opening your browser, logging in, uploading the QKViews, and then reading through the Diagnostics is not necessarily the most efficient method. It can become annoying and tedious for even one or two machines.

Since iHealth is a web application, it made the most sense to make the iHealth API a web API as well. There is a dog's breakfast of different types of web APIs, but one of the most prevalent conceptual frameworks in use today is called Representational State Transfer, or REST. The wikipedia entry on Roy Fielding has links to his dissertation if you want to up your street cred significantly. REST provides a nice clean way of retrieving, modifying and deleting things using HTTP in all (well, most all) of its glory. It's hidden from most users (as it should be), and unless you are a web developer, or someone who is dealing with web traffic a lot, there isn't any reason you would need to know this, but the HTTP protocol specifies a number of different ways to ask the server to do something.

Now that you will be working with an HTTP API, you're going to have to know a little more about HTTP. Generally the two verbs that a developer will be familiar with are GET and POST. But HTTP has many other verbs, and we'll use most of them in this article series (GET, POST, PUT, and DELETE for now, although there are others). REST gives us a framework for using those verbs to make changes, or view the properties of things on the web. They generally do what they sound like:

GET - get information (generally read-only)
POST - create a new object (or modify an existing object repeatedly)
PUT - modify an object idempotently so that multiple PUTs will result in the same state
DELETE - delete an object

This is all very abstract, so let's come up with an extremely contrived example: imagine a barn full of goats (yes, my first job was as a farmhand). Let's say that you had a robot that you wanted to perform various tasks relating to this barn full of goats. Maybe the first one would be a survey of all the goats in the barn. If the robot spoke REST, then you might send a text message to your robot that looked like this:

GET /barn/1/goats

Because the robot knows that in REST, that means that you want to know about all the goats in barn 1, your robot would trundle off (or fly, or crawl) to barn 1, and might come back with this data:

goat 1:
  name: Dopey
goat 2:
  name: Speedy

Maybe then you wanted to know more about Speedy, so you'd text your robot this:

GET /barn/1/goats/2

and your robot would gather up some more information about goat 2:

goat 2:
  name : Speedy
  eats : Himalayan Blackberry
  hates : Dopey

Okay, so what the hell do goats and barns have to do with iHealth?
Well, if we substitute in QKViews for goats, we can start to see some information about our data in the same way that we were able to examine our herds of goats:

GET /qkview-analyzer/api/qkviews

will show us a list of the qkviews that we have in our account. This is a sample, and there is a special QKView that isn't shown in the listing, but that you should always have in your account: a QKView with an ID of 0. It's a sample QKView that we allow everyone access to in order to see how iHealth works without requiring you to upload anything of your own. So let's pick QKView 0 and load it up. We'll see some details about that QKView:

GET /qkview-analyzer/api/qkviews/0

Feel free to poke around. At the bottom of the listing are some "discovery" URLs. Load them up in your browser; there is a ton to explore here, and you can get an idea of the kind of data you can work with without writing a single line of code. Code will come later, as you'll need it for things like adding to your QKView collections, and modifying existing QKView entries, which are tricky to do with just a browser.

HTML5 Going Like Gangbusters But Will Anyone Notice?
#v11 #HTML5 will certainly have an impact on web applications, but not nearly as much as hoped on the #mobile application market

There's a war on the horizon. But despite appearances, it's a war for interactive web application dominance, and not one that's likely to impact very heavily the war between mobile and web applications. First we have a report by ABI Research indicating a surge in the support of HTML5 on mobile devices, with substantially impressive growth projected over the next five years.

More than 2.1 billion mobile devices will have HTML5 browsers by 2016, up from just 109 million in 2010, according to a new report by ABI Research.
-- The HTML Boom is Coming. Fast. (June 22, 2011)

Impressive, no? But browser support does not mean use, and a report issued the day before by yet another analytics firm indicates that HTML5 usage on mobile applications is actually decreasing.

Mobile applications are commanding more attention on smartphones than the web, highlighting the need for strong app stores on handset platforms. For the first time since Flurry, a mobile analytics firm, has been reporting engagement time of apps and web on smartphones, software is used on average for 81 minutes per day vs 74 minutes of web use.
-- Sorry HTML 5, mobile apps are used more than the web (June 21, 2011)

What folks seem to be missing – probably because they lack a background in development – is that the war is not really between HTML5 and mobile applications. The two models are very different – from the way in which they are developed and deployed to the way they are monetized. On the one hand you have HTML5 which, like its HTMLx predecessors, can easily be developed in just about any text editor and deployed on any web server known to man. On the other hand you have operating system and often device-specific development platforms that can only be written in certain languages and deployed on specific, targeted platforms.

There's also a marked difference in the user interface paradigm, with mobile device development heavily leaning toward touch and gesture-based interfaces and all that entails. It might appear shallow on the surface, but from a design perspective there's a different mindset in the interaction when relying on gestures as opposed to mouse clicks. Consider those gestures that require more than one finger – enlarging or shrinking an image, for example. That's simply not possible with one mouse – and becomes difficult to replicate in a non-gesture-based interface. Similarly there are often very few "key" commands on mobile device applications and games. Accessibility? Not right now, apparently.

That's to say nothing of the differences in the development frameworks; the ones that require specific environments and languages. The advantage of HTML5 is that it's cross-platform, cross-environment, and highly portable. The disadvantage is that you have little or no access to and control over system-level, well, anything. If you want to write an SSL VPN client, for example, you're going to have to muck around in the network stack. That's possible in a mobile device development environment and understandably impossible in a web-only world. Applications that are impossible to realistically replicate in a web application world – think graphic-intense games and simulation systems – are possible in a mobile environment.
MOBILE BROADENING ITS USE

The one area in which HTML5 may finally gain some legs and make a race out of applications with mobile apps is in its ability to finally leverage offline storage. The assumption for web applications has been, in the past, always on. Mobile devices have connectivity issues; attenuation and loss of signal interrupts connection-oriented applications and games. And let's not forget the increasing pressure of data transfer caps on wireless networks (T-Mobile data transfer cap angers smartphone users, Jan 2011; O2 signals the end of unlimited data tariffs for iPhone customers, June 2010) that are also hitting broadband customers, much to their chagrin.

But that's exactly where mobile applications have an advantage over HTML5 and web applications, and why HTML5, with its offline storage capabilities, comes in handy. But that would require rework on the part of application developers to adapt existing applications to fit the new model. Cookies and frequent database updates via AJAX/JSON are not a reliable solution on a sometimes-on device. And if you're going to rework an application, why not target the platform specifically?

Deployment and installation has reached the point of being as simple as opening a web page – maybe more so given the diversity of browsers and add-on security that can effectively prevent a web application requiring scripting or local storage access from executing at all. Better tracking of application reach is also possible with mobile platforms – including, as we've seen from the Flurry data, how much time is spent in the application itself.

If you were thinking that mobile is a small segment of the population, think again. Tablets – definitely falling into the mobile device category based on their development model and portability – may be the straw that breaks the laptop's back.

Our exclusive first look at its latest report on how consumers buy and use tablets reveals an increasing acceptance--even reliance--on tablets for work purposes. Of the 1,000 tablet users surveyed, 57 percent said they are using tablets to replace laptop functions. Compared with a year ago, tablet owners are much less likely to buy a new laptop or Netbook, as well. Tablets are also cutting into e-reader purchase plans to an ever greater degree. What's more surprising, given the newness of the tablet market, is that 46 percent of consumers who already have a tablet are planning to buy another one.
-- Report: Multi-tablet households growing fast (June 2011)

This is an important statistic, as it may – combined with other statistics respecting the downloads of applications from various application stores and markets – indicate a growing irrelevance for web-based applications and, subsequently, HTML5. Mobile applications, not HTML5, are the new hotness. The losers to HTML5 will likely be Flash and video-based technologies, both of which can be replaced using HTML5 mechanisms that come without the headaches of plug-ins that may conflict, require upgrades and often are subject to targeted attacks by miscreants.

I argued earlier this year that the increasing focus on mobile platforms and coming-of-age of HTML5 would lead to a client-database model of application development. Recent studies and data indicate that's likely exactly where we're headed – toward a client-database model that leverages the same database-as-a-service via a RESTful API and merely mixes up the presentation and application logic tiers on the client – whether through mobile device development kits or HTML5.
As mobile devices – tablets, smartphones and whatever might come next – continue to take more and more mindshare from both the consumer and enterprise markets we'll see more and more mobile-specific support for applications. You'll note popular enterprise applications aren't simply being updated to leverage HTML5 even though there is plenty of uptake in the market of the nascent specification. Users want native mobile platform applications – and they're getting them. That doesn't mean HTML5 won't be a game-changer for web applications – it likely will – but it does likely mean it won't be a game-changer for mobile applications.

- Cloud-Tiered Architectural Models are Bad Except When They Aren't
- Report: Multi-tablet households growing fast
- The HTML Boom is Coming. Fast.
- Sorry HTML 5, mobile apps are used more than the web
- The Database Tier is Not Elastic
- 80-line JavaScript Web Application
- Does This Application Make My Browser Look Fat?
- HTTP Now Serving … Everything
- The New Distribution of The 3-Tiered Architecture Changes Everything
- The Great Client-Server Architecture Myth

The New Distribution of The 3-Tiered Architecture Changes Everything
As the majority of an application's presentation layer logic moves to the client it induces changes that impact the entire application delivery ecosystem

The increase in mobile clients, in demand for rich, interactive web applications, and the introduction of the API as one of the primary means by which information and content is shared across applications on the web is slowly but surely forcing a change back toward a traditional three-tiered architecture, if not in practice then in theory. This change will have a profound impact on the security, delivery, and scalability of the application, but it also forces changes in the underlying network and application network infrastructure to support what is essentially a very different delivery model.

What began with Web 2.0 – AJAX, primarily – is continuing to push in what seems a backward direction in architecture as a means to move web applications forward. In the old days the architecture was three-tiered, yes, but those tiers were maintained almost exclusively on the server side of the architecture, with the browser acting only as the interpreter of the presentation layer data that was assembled on the server. Early AJAX applications continued using this model, leveraging the out-of-band (asynchronous) access provided by the XMLHTTPRequest object in major browsers as a means to dynamically assemble smaller pieces of the presentation layer. The browser was still relegated primarily to providing little more than rendering support.

Enter Web 2.0 and RESTful APIs and a subtle change occurred. These APIs returned not presentation layer fragments, but data. The presentation layer logic required to display that data in a meaningful way based on the application became the responsibility of the browser. This was actually a necessary evolution in web application architecture to support the increasingly diverse set of end-user devices being used to access web applications. Very few people would vote for maintaining a separate set of presentation layer logic for mobile devices and another for richer, desktop clients like browsers.

By forcing the client to assemble and maintain the presentation layer, that complexity on the server side is removed and a single, unified set of application logic resources can be delivered to every device without concern for cross-browser, cross-device support being "built in" to the presentation layer logic. This has a significant impact on the ability to rapidly support emerging clients – mobile and otherwise – that may not support the same robust set of capabilities available on a traditional browser.

By reducing the presentation layer assembly on the server side to little more than layout – if that – the responsibility for assembling all the components and their display and routing data to the proper component is laid on the client. This means one server-side application truly can support both mobile and desktop clients with very little modification. It means an API provided by a web application can not only be used by the provider of that API to build its own presentation layer (client), but third-party developers can also leverage that API and the data it provides in whatever way it needs/chooses/desires.

This is essentially the point we have almost reached today.

Amazon Makes the Cloud Sticky
Stateless applications may be the long term answer to scalability of applications in the cloud, but until then, we need a solution like sticky sessions (persistence)

Amazon recently introduced "stickiness" to its ELB (Elastic Load Balancing) offering. I've written before about "stickiness", a.k.a. what we've called persistence for, oh, nearly ten years now, so I won't reiterate it here except to say, "it's about time." A description of why sticky sessions are necessary was offered in the AWS blog announcing the new feature:

Up until now each Load balancer had the freedom to forward each incoming HTTP or TCP request to any of the EC2 instances under its purview. This resulted in a reasonably even load on each instance, but it also meant that each instance would have to retrieve, manipulate, and store session data for each request without any possible benefit from locality of reference.
-- New Elastic Load Balancing Feature: Sticky Sessions

What the author is really trying to say is that without "sticky sessions" ELB breaks applications because it does not honor state. Remember that most web applications today rely upon state (session) to store quite a bit of application- and user-specific data that's necessary for the application to behave properly. When a load balancer distributes requests across instances without consideration for where that state (session) is stored, the application behavior can become erratic and unpredictable. Hence the need for "stickiness".
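For readers who want to try the feature today, a rough sketch using the AWS CLI against a Classic ELB follows; the load balancer name, policy name, listener port, and cookie duration are assumptions for illustration:

# Create a duration-based sticky-session (persistence) policy on a Classic ELB,
# then attach it to the port 80 listener. All names and values here are illustrative.
aws elb create-lb-cookie-stickiness-policy \
  --load-balancer-name my-load-balancer \
  --policy-name my-sticky-policy \
  --cookie-expiration-period 60

aws elb set-load-balancer-policies-of-listener \
  --load-balancer-name my-load-balancer \
  --load-balancer-port 80 \
  --policy-names my-sticky-policy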
Magic Virtualization-Fairy Dust and the New Network

The virtualization fairy won't create APIs out of thin air, but a visit from her may kick-start a necessary (re)evaluation of the role of the API in the new network.

The way some people talk about the "virtualization of the network" and how it's necessary for cloud computing and automation and creating a flexible infrastructure, you'd think that the transformation from physical form factor to virtual form factor was a magical one that conferred not only the ability to scale on-demand but the APIs, as well. There are actually two misconceptions here that need correcting, and you know me - I'm going to correct them.

First seems to be the erroneous belief that in order to fit into a dynamic data center a network infrastructure component must be virtualized. The thought here seems to revolve around a belief that only by becoming a virtual network appliance (VNA) can a hardware component suddenly be imbued with the control plane, the API, required to be automated and orchestrated in a dynamic, on-demand way. In other words, to have its capabilities delivered as a service. Not true at all. Many network infrastructure components have been control-plane enabled for years, since 2001 or so, in fact. These control planes are standards-based, almost always leveraging HTTP and SOAP or POX (Plain Old XML), and provide the means by which these components have been integrated into third-party management applications for many, many years. That management API, the control plane, is what confers flexibility, programmability, and integration with the rest of the infrastructure and management systems that ultimately enable a dynamic data center.

Second is what seems to be the belief that the transformation from physical to virtual somehow births an API that did not before exist. That's simply not true. While moving from one form factor to another inherently allows management at the container level, i.e. of the virtual machine, it does not magically confer the ability to manage what's running inside the container, i.e. the actual networking component. That requires an API of some kind, whether that's SOAP or REST or whatever. If the API didn't exist before "virtualization" there is no guarantee it will suddenly exist after the process of virtualization is complete.

Sometimes the process of virtualization seems to result in an API. That's not because of the process; it's because (a) the organization realizes that to be a part of the new infrastructure an API is going to be required and (b) as long as they're mucking around in the code base to virtualize the solution it's a really good time to add an API. It's more a matter of "well, we're in here anyway…let's just do it" than anything else. This isn't a "cause –> effect" chain but more a coincidence, albeit perhaps a logical and financially smart one.

Impact of Load Balancing on SOAPy and RESTful Applications
A load balancing algorithm can make or break your application's performance and availability

It is a (wrong) belief that "users" of cloud computing – and before that, "users" of corporate data center infrastructure – didn't need to understand any of that infrastructure. Caution: proceed with infrastructure ignorance at the (very real) risk of your application's performance and availability. Think I'm kidding? Stefan's SOA & Enterprise Architecture Blog has a detailed and very explanatory post on Load Balancing Strategies for SOA Infrastructures that may change your mind.

This post grew, apparently, out of some (perceived) bad behavior on the part of a load balancer in a SOA infrastructure. Specifically, the load balancer configuration was overwhelming the very services it was supposed to be load balancing. Before we completely blame the load balancer, Stefan goes on to explain that the root of the problem lay in the load balancing algorithm used to distribute requests across the services. Specifically, the load balancer was configured to use a static round robin algorithm and to apply source IP address-based affinity (persistence) while doing so. The result is that one instance of the service was constantly sent requests while the others remained idle and available. Stefan explains how the load balancing algorithm was changed to utilize a dynamic ratio algorithm that takes into consideration the state of each service (CPU and memory available) and removed the server affinity requirement.

The problem wasn't the load balancer, per se. The load balancer was acting exactly as it was configured to act. The problem lay deeper: in understanding the interaction between the network, the application network, and the services themselves. Services, particularly stateless services as offered by SOA and REST-based APIs today, do not generally require persistence. In cases where they do require persistence, that persistence needs to be based on application-layer information, such as an API key or user (usually available in a cookie).

But this problem isn't unique to SOA. Consider, if you will, the effect that such an unaware distribution might have on any one of the popular social networking sites offering RESTful APIs for integration. Imagine that all Twitter API requests ended up distributed to one server in Twitter's infrastructure. It would fall over quickly, no doubt about that, because the requests are distributed without any consideration for current load and almost, one could say, blindly.

Stefan points this out as he continues to examine the effect of load balancing algorithms on his SOA infrastructure:

"Secondly, the static round-robin algorithm does not take in effect, which state each cluster node has. So, for example if one cluster node is heavily under load, because it processes some complex orders, and this results in 100% cpu load, then the load balancer will not recognize this but route lots of other requests to this node causing overload and saturation."

Load balancing algorithms that do not take into account the current state of the server and application, i.e. they are not context-aware, are not appropriate for today's dynamic application architectures. Such algorithms are static, brittle, and blind when it comes to distributing load efficiently and will ultimately result in an uneven request load that is likely to drive an application to downtime.
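To make the contrast concrete, here is a deliberately simplified sketch of a context-aware decision – not how an application delivery controller actually implements a dynamic ratio algorithm – in which each backend is asked for its current CPU load over a hypothetical health endpoint and the least-loaded node wins:

# Toy illustration only: query each backend's current CPU load (the /load
# endpoint and addresses are hypothetical) and pick the least-loaded node.
# A real ADC gathers this state continuously and far more efficiently.
BACKENDS="10.0.0.11 10.0.0.12 10.0.0.13"
best_node=""
best_load=1000000
for node in ${BACKENDS}; do
  load=$(curl -s "http://${node}:8080/load")
  if [[ ${load} -lt ${best_load} ]]; then
    best_load=${load}
    best_node=${node}
  fi
done
echo "sending request to ${best_node} (cpu load ${best_load})"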
THE APPLICATION SHOULD BE A PART OF THE ALGORITHM

It is imperative in a distributed application architecture like SOA or REST that the application network infrastructure, i.e. the load balancer, be able to take into consideration the current load on any given server before distributing a request. If one node in the (pool|farm|cluster) is processing a complex order that consumes most of the CPU resources available, the load balancer should not continue to send it requests. This requires that the load balancer, the application delivery controller, be aware of the application, its environment, as well as the network and the user. It must be able to make a decision, in real-time, about where to direct any given request based on all the variables available. That includes CPU resources, what the request is, and even who the user/application is.

For example, Twitter uses a system of inbound rate limiting on API calls to help manage the load on its infrastructure. Part of that equation could be the calling application. HTTP as a transport protocol contains a somewhat surprisingly rich array of information in its headers that can be parsed and inspected and made a part of the load balancing equation in any environment. This is particularly useful to sites like Twitter where multiple "applications" (clients) are making use of the API. Twitter can easily require the use of a custom HTTP header that includes the application name and utilize that as part of its decision making processes.

Like RESTful APIs, SOAP envelopes are full of application specifics that provide data to the load balancer, if it's context-aware, that can be utilized to determine how best to distribute a request. The name of the operation being invoked, for example, can be used to not only load balance at the service level, but at the operation level. That granularity can be important when operations vary in their consumption of resources.

This application layer information, in conjunction with current load and connections on the server, provides a wealth of information as to how best, i.e. most efficiently, to distribute any given request. But if the folks in charge of configuring the load balancer aren't aware of the impact of algorithms on the application and its infrastructure, you can end up in a situation much like that described in Stefan's blog on the subject.

CLOUD WILL MAKE THIS SITUATION WORSE

Cloud computing won't solve this problem and, in fact, it will probably make it worse. The belief that the infrastructure should be "hidden" from the user (that's you) means that configuration options – like the load balancing algorithm – aren't available to you as a user/deployer of cloud-based applications. Even though load balancing is going to be used to scale your application, you have no clue or control over how that's going to occur. That's why it's important that you ask questions of your provider on this subject. You need to know what algorithm is being used and how requests are distributed so you can determine how that's going to impact your application and its performance once it's deployed. You can't – or shouldn't – assume that the load balancing provided is going to magically distribute requests perfectly across your scaled application because it wasn't configured with your application in mind.
If you deploy an application – particularly a SOA or RESTful one – you may find that with scalability comes poor performance or even unavailable applications because of the configuration of that infrastructure you "aren't supposed to worry about." Applications are not islands; they aren't deployed stand-alone even though the virtualization of applications is making it seem like that's the case. The delivery of applications requires collaboration between a growing number of components in the data center, and load balancing is one of the key components that can make or break your application's performance and availability.

- Five questions you need to ask about load balancing and the cloud
- Dr. Dobb's Journal: Coding in the Cloud
- Cloud Computing: Vertical Scalability is Still Your Problem
- Server Virtualization versus Server Virtualization
- SOA & Web 2.0: The Connection Management Challenge
- The Impact of the Network on AJAX
- Have a can of Duh! It's on me
- Intro to Load Balancing for Developers – The Algorithms
- Not All Virtual Servers are Created Equal

SOAP vs REST: The war between simplicity and standards
SOA is, at its core, a design and development methodology. It embraces reuse through decomposition of business processes and functions into core services. It enables agility by wrapping services in an accessible interface that is decoupled from its implementation. It provides a standard mechanism for application integration that can be used internally or externally. It is, as they say, what it is.

SOA is not necessarily SOAP, though until the recent rise of social networking and Web 2.0 there was little real competition against the rising standard. But of late the adoption of REST and its use on the web-facing side of applications has begun to push around the incumbent. We still aren't sure who swung first. We may never know, and at this point it's irrelevant: there's a war out there, as SOAP and REST duke it out for dominance of SOA.

At the core of the argument is this: SOAP is weighted down by the very standards designed to promote interoperability (WS-I), security (WS-Security), and reliability (WS-Reliability). REST is a lightweight compared to its competitor, with no standards at all. Simplicity is its siren call, and it's being heard even in the far corners of corporate data centers.

A February 2007 Evans Data survey found a 37% increase in those implementing or considering REST, with 25% considering REST-based Web Services as a simpler alternative to SOAP-based services. And that was last year, before social networking really exploded and the integration of Web 2.0 sites via REST-based services took over the face of the Internet. It was postulated then that WOA (Web Oriented Architecture) was the face of SOA (Service Oriented Architecture): that REST on the outside was the way to go, but SOAP on the inside was nearly sacrosanct.

Apparently that thought, while not wrong in theory, didn't take into account the fervor with which developers hold dear their beliefs regarding everything from language to operating system to architecture. The downturn in the economy hasn't helped, either, as REST certainly is easier and faster to implement, even with the plethora of development tools and environments available to carry all the complex WS-* standards that go along with SOAP like some sort of technology bellhop. Developers have turned to the standard-less option because it seems faster, cheaper, and easier. And honestly, we really don't like being told how to do things. I don't, and didn't, back in the day when the holy war was between structured and object-oriented programming.

While REST has its advantages, certainly, standard-less development can, in the long run, be much more expensive to maintain and manage than standards-focused competing architectures. The argument that standards-based protocols and architectures are difficult because there's more investment required to learn the basics as well as associated standards is essentially a red herring. Without standards there is often just as much investment in learning data formats (are you using XML? JSON? CSV? Proprietary formats? WWW-URL encoded?) as there is in learning standards. Without standards there is necessarily more documentation required, which cuts into development time. Then there's testing: functional and vulnerability testing that necessarily has to be customized because testing tools can't predict what format or protocol you might be using.
And let's not forget the horror that is integration, and how proprietary application protocols made it a booming software industry replete with toolkits and libraries and third-party packages just to get two applications to play nice together. Conversely, standards that are confusing and complex lengthen the implementation cycle, but make integration and testing as well as long-term maintenance much less painful and less costly.

Arguing simplicity versus standards is ridiculous in the war between REST and SOAP, because simplicity without standards is just as detrimental to the costs and manageability of an application as standards without simplicity.

Related articles by Zemanta
- RESTful .NET
- Has social computing changed attitudes toward reuse?
- The death of SOA has been greatly exaggerated
- Web 2.0: Integration, APIs, and scalability
- Performance Impact: Granularity of Services