5 Years Later: OpenAJAX Who?
Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I stumbled recently over a nearly five year old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like that of "cloud" today. You couldn't turn around without hearing someone promoting their solution by associating with Web 2.0 or AJAX. After reading the opening paragraph I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade I'm well aware of how impactful "standards" and "specifications" really are in the real world, but the problem – interoperability across a growing field of JavaScript libraries – seemed at the time real and imminent, so there was a need for someone to address it before it completely got out of hand.

With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard—whether through a standards body or by de facto adoption—and Dojo is one of the favored entrants in the race to become that standard.
-- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability. The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications including one "industry standard": OpenAjax Metadata.

OpenAjax Hub
The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata
OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification)

OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see the calling out of XML as the format of choice in the OpenAjax Metadata (OAM) specification given the recent rise to ascendancy of JSON as developers' preferred format for APIs. Granted, when the alliance was formed XML was all the rage and it was believed it would be the dominant format for quite some time given the popularity of similar technological models such as SOA, but still – the reliance on XML while the plurality of developers race to JSON may provide some insight on why OpenAjax has received very little notice since its inception.
Ignoring the XML factor (which undoubtedly is a fairly impactful one) there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) – a hub. A publish-subscribe hub, to be more precise, in which OAH mediates for various toolkits on the same page. Don summed it up nicely during a discussion on the topic: it's page-level integration.

This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going – and where it should go: an industry standard model and/or set of APIs to which other toolkit developers would design and write such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model – always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces with vendor implementation being decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes the Net go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration or failure – never incompatibility in form or function.

Neither specification has really taken that direction. OAM – as previously noted – standardizes on XML and is primarily used to describe APIs and components; it isn't an API or model itself. The Alliance wiki describes the specification: "The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers." Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center. Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but it does not appear to be gaining widespread support or mindshare despite IBM's efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub / integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and simplicity tend to drive developers to a single library when possible (which is most of the time).
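For context, the hub model in question is essentially a page-level publish-subscribe broker: toolkits never call each other directly, they exchange events through a shared mediator. The sketch below illustrates only the pattern; it is written in Python for brevity and is not the actual OpenAjax Hub API, which is JavaScript.

```python
class Hub:
    """A minimal publish-subscribe mediator, illustrating the hub pattern."""

    def __init__(self):
        self._subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, data):
        # every toolkit that registered interest in this topic is notified
        for callback in self._subscribers.get(topic, []):
            callback(data)


hub = Hub()
# "toolkit B" listens for selection events published by "toolkit A"
hub.subscribe("grid.rowSelected", lambda row: print("chart widget redraws for", row))
hub.publish("grid.rowSelected", {"id": 42})
```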
It appears, simply, that the OpenAjax Alliance – driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) – has chosen a target in another field; one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn't – and likely won't ever – exist. So it's no surprise to discover that references to and activity from OpenAjax have been nearly zero since 2009. Given the statistics showing the rise of jQuery – both as a percentage of site usage and developer usage – to the top of the JavaScript library heap, it appears that at least the prediction that "one toolkit will become the standard—whether through a standards body or by de facto adoption" was accurate. Of course, since that's always the way it works in technology, it was kind of a sure bet, wasn't it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAjax Alliance several infrastructure vendors – folks who produce application delivery controllers, switches and routers, and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable regarding the methods used by developers to perform such data exchanges.

In the age of hyper-scalability and über security, it behooves infrastructure vendors – and increasingly cloud computing providers that offer infrastructure services – to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than is the case for traditional form-based, HTML applications. The type of content as well as the usage patterns for applications can dramatically impact the application delivery policies necessary to achieve operational and business objectives for that application.

As developers standardize through selection and implementation of toolkits, vendors and providers can then begin to focus solutions specifically for those choices. Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery, for example, reduces the time to deploy such applications in a production environment by eliminating the test and tweak cycle that occurs when applications are tossed over the wall to operations by developers.

For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}.
If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data – attempts at SQL injection, for example – must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application layer switching based on data values or submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, such that it can be parsed, interpreted and processed.

By understanding jQuery – and other developer toolkits and standards used to exchange data – infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.
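To make that concrete: the same inspection logic has to work whether the client submitted application/x-www-form-urlencoded data (jQuery's default for a map) or raw JSON. A minimal, illustrative sketch in Python follows; the "suspicious" patterns are deliberately naive placeholders, not real WAF signatures.

```python
import json
from urllib.parse import parse_qs

SUSPECT = ("' or ", "union select", "--")  # naive illustration only, not real signatures

def submitted_values(body: str, content_type: str):
    """Extract submitted values regardless of how the client encoded them."""
    if content_type == "application/x-www-form-urlencoded":
        return [v for values in parse_qs(body).values() for v in values]
    if content_type == "application/json":
        return [str(v) for v in json.loads(body).values()]
    return []

def looks_malicious(body: str, content_type: str) -> bool:
    return any(pattern in value.lower()
               for value in submitted_values(body, content_type)
               for pattern in SUSPECT)

print(looks_malicious("user=bob&comment=1' or '1'='1", "application/x-www-form-urlencoded"))  # True
print(looks_malicious('{"user": "bob", "comment": "hello"}', "application/json"))             # False
```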
F5 SSLO Unified Configuration API Quick Introduction

Introduction

Prior to the introduction of BIG-IQ 8.0, you had to use the BIG-IQ graphical user interface (GUI) to configure F5 SSL Orchestrator (SSLO) Topologies and their dependencies. Starting with BIG-IQ 8.0, a new unified, supported and documented REST API endpoint was created to simplify SSLO configuration workflows. The aim is to simplify the configuration of F5 SSLO using standardized API calls. You are now able to store the configuration in your versioning tool (Git, SVN, etc.), and easily integrate the configuration of F5 SSLO into your automation and pipeline tools.

For more information about F5 SSLO, please refer to this introductory video. An overview of F5 SSL Orchestrator is provided in K1174564. As a reminder, the BIG-IQ API reference documentation can be found here. Documentation for the Access Simplified Workflow can be found here. The figure below shows a possible use for the SSLO Unified API. A few shortcuts are taken in the figure as it is meant to illustrate the advantage of the simplified workflow.

Example Configuration

For the configuration the administrator needs to:
- Create a JSON blurb or payload that will be sent to the BIG-IQ API
- Authenticate to the BIG-IQ API
- Send the payload to the BIG-IQ
- Ensure that the workflow completes successfully

The following aims to provide a step-by-step configuration of SSLO leveraging the API. In practice, the steps may be automated and included in the pipeline used to deploy the application, leveraging the enterprise tooling and processes in place.

1. Authenticate to the API

API interactions with the BIG-IQ API require the use of a token. The initial REST call should look like the following:

REST Endpoint: /mgmt/shared/authn/login
HTTP Method: POST
Headers:
- content-type: application/json
Content:
{ "username": "", "password": "", "loginProviderName": "" }

Example:
POST https://10.0.0.1/mgmt/shared/authn/login HTTP/1.1
Headers:
content-type: application/json
Content:
{ "username": "username", "password": "complicatedPassword!", "loginProviderName": "RadiusServer" }

The call above authenticates the specified user to the API. The result of a successful authentication is a response from the BIG-IQ API containing a token.

2. Push the configuration to BIG-IQ

The headers and HTTP request should look like the following:

URI: mgmt/cm/sslo/api/topology
HTTP Method: POST
Headers:
- content-type: application/json
- X-F5-Auth-Token: [token obtained from the authentication process above]

To send the configuration to the BIG-IQ you will need to send the following payload. The blurb is cut up in smaller pieces for readability; the JSON is divided in multiple parts and the full concatenated text is available in the attached file.
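Before walking through the payload itself, here is a minimal sketch of steps 1 and 2 using Python's requests library, assuming the full payload has been saved to a local file. The host, credentials, login provider and file name are placeholders, and the token is assumed to be returned at token.token as in other F5 REST responses.

```python
import json
import requests

BIGIQ = "https://10.0.0.1"  # BIG-IQ management address (placeholder)

# Step 1: authenticate and obtain a token
auth = requests.post(
    f"{BIGIQ}/mgmt/shared/authn/login",
    json={"username": "username",
          "password": "complicatedPassword!",
          "loginProviderName": "RadiusServer"},
    verify=False,  # lab only; validate certificates in production
)
auth.raise_for_status()
token = auth.json()["token"]["token"]  # assumption: token value lives at token.token

# Step 2: push the topology payload (the concatenated JSON described below)
with open("sslo_topology.json") as f:  # hypothetical file name
    payload = json.load(f)

resp = requests.post(
    f"{BIGIQ}/mgmt/cm/sslo/api/topology",
    headers={"X-F5-Auth-Token": token, "Content-Type": "application/json"},
    json=payload,
    verify=False,
)
resp.raise_for_status()
print(resp.json().get("id"), resp.json().get("status"))  # transaction id and STARTED status
```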
Start by defining a new topology with the following characteristics:
- Name: "sslo_NewTopology"
- Listening on the "/Common/VLAN_TAP" VLAN
- The topology is of type "topology_l3_outbound"
- The SSL settings defined below, named "ssloT_NewSsl_Dec"
- The policy is called "ssloP_NewPolicy_Dec"

The JSON payload starts with the following:

{ "template": { "TOPOLOGY": { "name": "sslo_NewTopology ", "ingressNetwork": { "vlans": [ { "name": "/Common/VLAN_TAP" } ] }, "type": "topology_l3_outbound", "sslSetting": "ssloT_NewSsl_Dec", "securityPolicy": "ssloP_NewPolicy_Dec" },

The SSL settings used above are defined in the following JSON, which creates a new profile with default values:

"SSL_SETTINGS": { "name": "ssloT_NewSsl_Dec" },

The security policy is configured as follows:
- name: ssloP_NewPolicy_Dec
- function: introduces a pinning policy doing a policy lookup – matching requests are bypassed (no SSL decryption) with the associated service chain "ssloSC_NewServiceChain_Dec" that is defined further down below.

"SECURITY_POLICY": { "name": "ssloP_NewPolicy_Dec", "rules": [ { "mode": "edit", "name": "Pinners_Rule", "action": "allow", "operation": "AND", "conditions": [ { "type": "SNI Category Lookup", "options": { "category": [ "Pinners" ] } }, { "type": "SSL Check", "options": { "ssl": true } } ], "actionOptions": { "ssl": "bypass", "serviceChain": "ssloSC_NewServiceChain_Dec" } }, { "mode": "edit", "name": "All Traffic", "action": "allow", "isDefault": true, "operation": "AND", "actionOptions": { "ssl": "intercept" } } ] },

The service chain configuration is defined below to forward the traffic to the "ssloS_ICAP_Dec" service. This is done with the following JSON:

"SERVICE_CHAIN": { "ssloSC_NewServiceChain_Declarative": { "name": "ssloSC_NewServiceChain_Dec", "orderedServiceList": [ { "name":"ssloS_ICAP_Dec" } ] } },

The "ssloS_ICAP_Dec" service is defined with the JSON below, with IP 3.3.3.3 on port 1344:

"SERVICE": { "ssloS_ICAP_Declarative": { "name": "ssloS_ICAP_Dec", "customService": { "name": "ssloS_ICAP_Dec", "serviceType": "icap", "loadBalancing": { "devices": [ { "ip": "3.3.3.3", "port": "1344" } ] } } } } },

The configuration will be deployed to the target defined below:

"targetList": [ { "type": "DEVICE", "name": "my.bigip.internal" } ] }

After the HTTP POST, the BIG-IQ will respond with a transaction id. A sample of what it looks like is given below:

{ […] "id":"edc17b06-8d97-47e1-9a78-3d47d2db70a6", "status":"STARTED", […] }

You can check on the status of the deployment task by submitting a request as follows:
- HTTP GET method
- Authenticated with the use of the custom authentication header X-F5-Auth-Token
- Sent to the BIG-IQ URI: GET mgmt/cm/sslo/tasks/api/{{status_id}} HTTP/1.1
- With the Content-Type header set to application/json

Once the status of the task changes to FINISHED, the configuration has completed successfully. You can now check the F5 SSLO interface to make sure the new topology has been created. The BIG-IQ interface will show the new topology as depicted in the example below.

The new topology has been deployed to the BIG-IP automatically. You can connect to the BIG-IP to verify; the interface should look like the one depicted below.

Congratulations, you now have successfully deployed a fully functional topology that your users can start using.

Note that you can also use the BIG-IQ REST API to delete the items that were just created. This is done by sending HTTP DELETE requests to the different API endpoints for the topology, service, security profile, etc.
For example, for the example above, you would be sending HTTP DELETE requests to the following URIs:
- For the topology: /mgmt/cm/sslo/api/topology/sslo_NewTopology_Dec
- For the service chain: /mgmt/cm/sslo/api/service-chain/ssloSC_NewServiceChain_Dec
- For the ICAP service: /mgmt/cm/sslo/api/ssl/ssloT_NewSsl_Dec

All the requests listed above need to be sent to the BIG-IQ system at its management IP address with the following two headers:
- content-type: application/json
- X-F5-Auth-Token: [value of the authentication token obtained during authentication]

Conclusion

BIG-IQ makes it easier to manage SSLO Topologies thanks to its REST API. You can now make supported, standardized API calls to the BIG-IQ to create and modify topologies and deploy the changes directly to BIG-IP.

Parsing complex BIG-IP json structures made easy with Ansible filters like json_query
JMESPath and json_query

JMESPath (JSON Matching Expression paths) is a query language for searching JSON documents. It allows you to declaratively extract elements from a JSON document. Have a look at this tutorial to learn more. The json_query filter lets you query a complex JSON structure and iterate over it using a loop structure. This filter is built upon jmespath, and you can use the same syntax as jmespath. Click here to learn more about the json_query filter and how it is used in Ansible.

In this article we are going to use the bigip_device_info module to get various facts from the BIG-IP and then use the json_query filter to parse the output and extract relevant information.

Ansible bigip_device_info module

Playbook to query the BIG-IP and gather system based information:

- name: "Get BIG-IP Facts"
  hosts: bigip
  gather_facts: false
  connection: local
  tasks:
    - name: Query BIG-IP facts
      bigip_device_info:
        provider:
          validate_certs: False
          server: "xxx.xxx.xxx.xxx"
          user: "*****"
          password: "*****"
        gather_subset:
          - system-info
      register: bigip_facts
    - set_fact:
        facts: '{{bigip_facts.system_info}}'
    - name: debug
      debug: msg="{{facts}}"

To view the output of a different subset, below are a few examples of how to change the gather_subset and set_fact values in the above playbook from gather_subset: system-info, facts: bigip_facts.system_info to any of the below:
- gather_subset: vlans, facts: bigip_facts.vlans
- gather_subset: self-ips, facts: bigip_facts.self_ips
- gather_subset: nodes, facts: bigip_facts.nodes
- gather_subset: software-volumes, facts: bigip_facts.software_volumes
- gather_subset: virtual-servers, facts: bigip_facts.virtual_servers
- gather_subset: system-info, facts: bigip_facts.system_info
- gather_subset: ltm-pools, facts: bigip_facts.ltm_pools

Click here to view all the information that can be obtained from the BIG-IP using this module.

Parse the JSON output

Once we have the output, let's take a look at how to parse it. As mentioned above, the jmespath syntax can be used by the json_query filter.

Step 1: We will get the jmespath syntax for the information we want to extract
Step 2: We will see how the jmespath syntax can then be used with json_query in an Ansible playbook

The website used in this article to try out the below syntax: https://jmespath.org/

Some BIG-IP sample outputs are attached to this article as well (check the attachments section after the References). The attachment file is a combined output of a few configuration subsets. Copy and paste the relevant information from the attachment to test the below examples if you do not have a BIG-IP.

System information

The output for this section is obtained with the above playbook using parameters gather_subset: system-info, facts: bigip_facts.system_info. Once the above playbook is run against your BIG-IP, or if you are using the sample configuration attached, copy the output and paste it in the relevant text box.
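If you would rather experiment locally than in the browser, note that Ansible's json_query filter is built on the jmespath Python library, so the same expressions can be tested with a short script. A minimal sketch follows; the data structure shown is an illustrative stand-in for the real bigip_device_info output.

```python
# pip install jmespath   (the library the json_query filter is built on)
import jmespath

# Illustrative shape only -- the real values come from the bigip_device_info system-info output
system_info = {
    "base_mac_address": "00:11:22:33:44:55",
    "chassis_serial": "chs123456",
    "platform": "Z100",
    "product_version": "15.1.0",
    "hardware_information": [{"name": "cpus", "type": "base-board"}],
}

query = "[base_mac_address,chassis_serial,platform,product_version]"
print(jmespath.search(query, system_info))
# ['00:11:22:33:44:55', 'chs123456', 'Z100', '15.1.0']
```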
Try different queries by placing them in the query box next to the magnifying glass:

# Get MAC address, serial number, version information
msg.[base_mac_address,chassis_serial,platform,product_version]

# Get MAC address, serial number, version information and hardware information
msg.[base_mac_address,chassis_serial,platform,product_version,hardware_information[*].[name,type]]

Software volumes

The output for this section is obtained with the above playbook using parameters gather_subset: software-volumes, facts: bigip_facts.software_volumes.

# Get the name, version and status of the software volumes installed
msg[*].[name,active,version]

# Get the name and version only for the software volume that is active
msg[?active=='yes'].[name,version]

VLANs and Self-IPs

The output for this section is obtained with the above playbook using parameters gather_subset: vlans and self-ips, facts: bigip_facts. Look at the following example to define more than one subset in the playbook.

# Get all the self-ip addresses and vlans assigned to the self-ip
# Also get all the vlans and the interfaces assigned to the vlan
[msg.self_ips[*].[address,vlan], msg.vlans[*].[full_path,interfaces[*]]]

Nodes

The output for this section is obtained with the above playbook using parameters gather_subset: nodes, facts: bigip_facts.nodes.

# Get the address and availability status of all the nodes
msg[*].[address,availability_status]

# Get availability status and reason for a particular node
msg[?address=='192.0.1.101'].[full_path,availability_status,status_reason]

Pools

The output for this section is obtained with the above playbook using parameters gather_subset: ltm-pools, facts: bigip_facts.ltm_pools.

# Get the name of all pools
msg[*].name

# Get the name of all pools and their associated members
msg[*].[name,members[*]]

# Get the name of all pools and only the address of their associated members
msg[*].[name,members[*].address]

# Get the name of all pools along with the address and status of their associated members
msg[*].[name,members[*].address,availability_status]

# Get status of pool members of a particular pool
msg[?name=='/Common/pool'].[members[*].address,availability_status]

# Get status of pool
# Get address, partition, state of pool members
msg[*].[name,members[*][address,partition,state],availability_status]

# Get status of a particular pool and particular member (multiple entries on a member)
msg[?full_name=='/Common/pool'].[members[?address=='192.0.1.101'].[address,partition],availability_status]

Virtual Servers

The output for this section is obtained with the above playbook using parameters gather_subset: virtual-servers, facts: bigip_facts.virtual_servers.

# Get destination IP address of all virtual servers
msg[*].destination

# Get destination IP and default pool of all virtual servers
msg[*].[destination,default_pool]

# Get all destination IPs of virtual servers that have a particular pool as their default pool
msg[?default_pool=='/Common/pool'].destination

# Get all profiles assigned to all virtual servers
msg[*].[destination,profiles[*].name]

Loop and display using Ansible

We have seen how to use the jmespath syntax to extract information; now let's see how to use it within an Ansible playbook:

- name: Parse the output
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Setup provider
      set_fact:
        provider:
          server: "xxx.xxx.xxx.xxx"
          user: "*****"
          password: "*****"
          server_port: "443"
          validate_certs: "no"
    - name: Query BIG-IP facts
      bigip_device_info:
        provider: "{{provider}}"
        gather_subset:
          - system_info
      register: bigip_facts
    - debug: msg="{{bigip_facts.system_info}}"

    # Use the json_query filter. The query_string will be the jmespath syntax.
    # From the jmespath query, remove the 'msg' expression and use it as is.
    - name: "Show relevant information"
      set_fact:
        result: "{{bigip_facts.system_info | json_query(query_string)}}"
      vars:
        query_string: "[base_mac_address,chassis_serial,platform,product_version,hardware_information[*].[name,type]]"
    - debug: "msg={{result}}"

Another example of what would change if you use a different query (only highlighting the changes that need to be made to the playbook above):

    - name: Query BIG-IP facts
      bigip_device_info:
        provider: "{{provider}}"
        gather_subset:
          - ltm-pools
      register: bigip_facts
    - debug: msg="{{bigip_facts.ltm_pools}}"

    - name: "Show relevant information"
      set_fact:
        result: "{{bigip_facts.ltm_pools | json_query(query_string)}}"
      vars:
        query_string: "[*].[name,members[*][address,partition,state],availability_status]"

The key is to get the jmespath syntax for the information you are looking for; then it is a simple step to incorporate it within your Ansible playbook.

References

Try the queries - https://jmespath.org/
Learn more jmespath syntax and examples - https://jmespath.org/tutorial.html
Ansible lab that can be used as a sandbox - https://clouddocs.f5.com/training/automation-sandbox/

Working with iControl REST Data on the Command Line
I'm still pretty entrenched in "old school" iControl with the SOAP interface, but with Colin's new series on the iControl REST interface underway, I thought I'd start taking more than a glance. While Colin's sed-fu in his latest article is impressive, I want to see the JSON data returned from iControl REST calls parsed properly (no offense, Colin!). This can be done with any number of bash, perl, python, and other tools, but I ran across one on Twitter last night that is somewhat magical: jq. jq is a single binary you can download and it just works.

So why bother? Well, as Colin pointed out, the returned JSON data (by default) is none too fabulous in format. But with jq, you get nice pretty text with field callouts (jq .). You can also narrow the selection to the fields you want (jq '{ name, apiAnonymous }'). If you want to pull all the rules together, but still only return the name and code, just index by item (jq '.items[] | { name, partition, apiAnonymous }').

The Next Evolution of Application Architecture is Upon Us
And what it means to #devops

One of the realities of application development is that there are a lot of factors that go into its underlying architecture. I came of age during the Epoch of Client-Server Architecture (also known as the 1990s and later the .com era) and we were taught, very firmly, that despite the implication of "client-server" that there were but two, distinct "tiers" comprising an application, there were actually three. The presentation layer, the GUI, resided on the client. All business logic resided on the server, and a third, data access tier completed the trifecta. This worked well because developers tailored each application to a specific business purpose. Thus, implementation of business logic occurred along with application logic. There was no real reason to separate it out.

As applications grew in complexity and use, SOA came of age. SOA introduced the principles of reuse we still adhere to today, and the idea that business logic should be consistent and shared across all applications that might need it. I know, revolutionary, wasn't it? Before SOA could complete its goal of world domination, however, the Internet and deployment models changed. Dramatically. [Interestingly enough, a post two years ago on this topic was fairly accurate on this migration.]

THE NEW APP WORLD ORDER

Today, an "application" is no longer defined necessarily by business function, but by a unique combination of client and business function. It's not just the business logic, it's the delivery and presentation mechanism that make it unique and, of course, more challenging for operations. Business logic has moved into a converged business logic-data tier more commonly known as The API. The client is still, of course, responsible for the presentation. But application logic – the domain of state and access – is in flux for some. As illustrated by the ill-advised decision to place application logic in the presentation layer, some developers haven't quite adopted the practice of deploying an application (or domain) logic tier to intermediate and maintain consistent application behavior.

But there always exists unique processing that must occur based on context. In some cases, some data might be marked cacheable as a means to achieve better performance for mobile clients when communicating over a mobile network, while in others it might not be, because the client is a web-based application running on a PC over a LAN. A native mobile application deals with the state of the user ("are they logged in?") differently than a web application relying on cookies. The business logic should not be impacted by this. Ultimately the business logic for retrieving order X for customer Y does not inherently change based on client characteristics. It cannot, in fact, be shared – reused – if it contains application-specific details regarding the validation of state, unless such an implementation uses a lot of conditional statements (that must be modified every time a new method is introduced, by the way) to determine whether a user is logged in or not. Thus, we move the application-specific, the domain, logic to its own tier, usually implemented by or on a proxy that intermediates between the API and the client.
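A minimal sketch may make the split clearer before the comparison below. The function and attribute names here are hypothetical, chosen only to illustrate which concerns belong in which tier.

```python
from dataclasses import dataclass, field

@dataclass
class Request:                       # stand-in for whatever the framework provides
    session: dict = field(default_factory=dict)
    client_type: str = "web"

# Business logic tier: identical for every client and presentation layer
def get_order(order_id):
    order = {"id": order_id, "total": 1500}
    if order["total"] > 1000:        # business requirement: "if order > 1000 apply discount X"
        order["discount"] = 0.10
    return order

# Domain (application) logic tier: state, access and client-specific concerns
def handle_get_order(request, order_id):
    if not request.session.get("logged_in"):             # access requirement
        return {"status": 401, "body": {"error": "login required"}}
    response = {"status": 200, "body": get_order(order_id)}
    if request.client_type == "mobile":                   # client-specific: mark cacheable
        response["headers"] = {"Cache-Control": "max-age=60"}
    return response

print(handle_get_order(Request(session={"logged_in": True}, client_type="mobile"), 42))
```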
Domain Logic Tier
• Maintains an internally consistent model representation on both sides of the app (client and server)
• Is ontological
• Often involves application state and access requirements such as "the user must be logged-in and an admin" or "this object is cacheable"
• Consists of elements and functions that are specific to this application

Business Logic Tier
• Concerned with coordinating valid interactions between presentation and data (client and data)
• Is teleological
• Often involves direct access to data or application of business requirements such as "if order is > 1000 apply discount X"
• Consists of elements that are common to all applications, i.e. does not rely on a given UI or interface

What's most fascinating about this change is that a "proxy" tier traditionally proxies for the server-side application, but in this new model it is proxying for the client. That's not odd for developers, because if you break down the traditional model, the "server" piece of the three-tiered architecture really was just a big application-specific proxy to the data. But it is odd for operations, because the new model takes advantage of a converged application-network proxy that is capable of performing tasks like load balancing and authentication and caching as well as mediating for an API, which may include transformative and translative functions.

IMPACT ON DEVOPS

What this ultimately means for devops is an increasing role in application architecture, from inception to production. It means devops will need to go beyond concerns of web performance or application deployment lifecycle management and into the realm of domain logic implementation and deployment. That may mean a new breed of developer; one that is still focused on development but does so in a primarily operational environment, in the network. It means enterprise architects will need to extend their view into operations, into the network, and codify for developers the lines of demarcation between domain and business logic. It means a very interesting new application model that basically adopts the premise of application delivery but adds domain logic to its catalog of services. It means devops is going to get even more interesting as more applications adopt this new, three-tiered architecture.

Programmable Cache-Control: One Size Does Not Fit All
#webperf For addressing challenges related to performance of #mobile devices and networks, caching is making a comeback.

It's interesting – and almost amusing – to watch the circle of technology run around best practices with respect to performance over time. Back in the day caching was the ultimate means by which web application performance was improved and there was no lack of solutions and techniques that manipulated caching capabilities to achieve optimal performance. Then it was suddenly in vogue to address the performance issues associated with Javascript on the client. As Web 2.0 ascended and AJAX-based architectures ruled the day, Javascript was Enemy #1 of performance (and security, for that matter). Solutions and best practices began to arise to address when Javascript loaded, from where, and whether or not it was even active. And now, once again, we're back at the beginning with caching.

In the interim years, it turns out developers have not become better about how they mark content for caching and with the proliferation of access from mobile devices over sometimes constrained networks, it's once again come to the attention of operations (who are ultimately responsible for some reason for performance of web applications) that caching can dramatically improve the performance of web applications.

[ Excuse me while I take a breather - that was one long thought to type. ]

Steve Souders, web performance engineer extraordinaire, gave a great presentation at HTML5DevCon that was picked up by High Scalability: Cache is King!. The aforementioned article notes:

Use HTTP cache control mechanisms: max-age, etag, last-modified, if-modified-since, if-none-match, no-cache, must-revalidate, no-store. Want to prevent HTTP sending conditional GET requests, especially over high latency mobile networks. Use a long max-age and change resource names any time the content changes so that it won't be cached improperly.
-- Better Browser Caching Is More Important Than No Javascript Or Fast Networks For HTTP Performance

The problem is, of course, that developers aren't putting all these nifty-neato-keen tags and meta-data in their content, and the cost to modify existing applications to do so may result in a prioritization somewhere right below having an optional, unnecessary root canal. In other cases the way in which web applications are built today – we're still using AJAX-based, real-time updates of chunks of content rather than whole pages – means simply adding tags and meta-data to the HTML isn't necessarily going to help, because it refers to the page and not the data/content being retrieved and updated for that "I'm a live, real-time application" feel that everyone has to have today.

Too, caching tags and meta-data in HTML don't address every type of data. JSON, for example, commonly returned as the response to an API call (used as the building blocks for web applications more and more frequently these days), isn't going to be impacted by the HTML caching directives. That has to be addressed in a different way, either on the server side (think Apache mod_expires) or on the client (HTML5 contains new capabilities specifically for this purpose and there are usually cache directives hidden in AJAX frameworks like jQuery).
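Before turning to the network side, here is a small server-side sketch of the idea: cache-related headers computed per response, based on what the content is and who is asking for it. The paths, thresholds and rules are illustrative assumptions, not recommendations.

```python
def cache_headers(path: str, content_type: str, network: str) -> dict:
    """Pick Cache-Control headers based on content and client context (illustrative policy)."""
    if content_type == "application/json" and path.startswith("/api/catalog"):
        # slowly-changing reference data: let constrained mobile clients cache it longer
        max_age = 600 if network == "mobile" else 60
        return {"Cache-Control": f"public, max-age={max_age}"}
    if content_type == "application/json":
        # per-user, real-time data: never cache
        return {"Cache-Control": "no-store"}
    # static assets: long max-age; versioned file names handle invalidation
    return {"Cache-Control": "public, max-age=31536000"}

print(cache_headers("/api/catalog/items", "application/json", "mobile"))  # public, max-age=600
print(cache_headers("/api/cart", "application/json", "lan"))              # no-store
```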
The Programmable Network to the Rescue

What you need is the ability to insert the appropriate tags, on the appropriate content, in such a way as to make sure whatever you're about to do (a) doesn't break the application and (b) is actually going to improve the performance of the end-user experience for that specific request. Note that (b) is pretty important, actually, because there are things you do to content being delivered to end users on mobile devices over mobile networks that might make things worse if you do them to the same content being delivered to the same end user on the same device over the wireless LAN. Network capabilities matter, so it's important to remember that.

To avoid rewriting applications (and perhaps changing the entire server-side architecture by adding on modules) you could just take advantage of programmability in the network. When enabled as part of a full-proxy, network intermediary, the ability to programmatically modify content in-flight becomes invaluable as a mechanism for improving performance, particularly with respect to adding (or modifying) cache headers, tags, and meta-data. By allowing the intermediary to cache the cacheable content while simultaneously inserting the appropriate cache control headers to manage the client-side cache, performance is improved. By leveraging programmability, you can start to apply device or network or application (or any combination thereof) logic to manipulate the cache as necessary while also using additional performance-enhancing techniques like compression (when appropriate) or image optimization (for mobile devices).

The thing is that a generic "all on" or "all off" for caching isn't always going to result in the best performance. There's logic to it that says you need the capability to say "if X and Y then ON else if Z then OFF". That's the power of a programmable network, of the ability to write the kind of logic that takes into consideration the context of a request and takes the appropriate actions in real-time. Because one size (setting) simply does not fit all.

Infrastructure Architecture: Whitelisting with JSON and API Keys
Application delivery infrastructure can be a valuable partner in architecting solutions…

AJAX and JSON have changed the way in which we architect applications, especially with respect to their ascendancy to rule the realm of integration, i.e. the API. Policies are generally focused on the URI, which has effectively become the exposed interface to any given application function. It's REST-ful, it's service-oriented, and it works well.

Because we've taken to leveraging the URI as a basic building block, as the entry-point into an application, it affords the opportunity to optimize architectures and make more efficient the use of compute power available for processing. This is an increasingly important point, as capacity has become a focal point around which cost and efficiency is measured. By offloading functions to other systems when possible, we are able to increase the useful processing capacity of any given application instance and ensure a higher ratio of valuable processing to resources is achieved. The ability of application delivery infrastructure to intercept, inspect, and manipulate the exchange of data between client and server should not be underestimated. A full-proxy based infrastructure component can provide valuable services to the application architect that can enhance the performance and reliability of applications while abstracting functionality in a way that alleviates the need to modify applications to support new initiatives.

AN EXAMPLE

Consider, for example, a business requirement specifying that only certain authorized partners (in the integration sense) are allowed to retrieve certain dynamic content via an exposed application API. There are myriad ways in which such a requirement could be implemented, including requiring authentication and subsequent tokens to authorize access – likely the most common means of providing such access management in conjunction with an API. Most of these options require several steps, however, and interaction directly with the application to examine credentials and determine authorization to requested resources. This consumes valuable compute that could otherwise be used to serve requests.

An alternative approach would be to provide authorized consumers with a more standards-based method of access that includes, in the request, the very means by which authorization can be determined. Taking a lesson from the credit card industry, for example, an algorithm can be used to determine the validity of a particular customer ID or authorization token. An API key, if you will, that is not stored in a database (and thus requires a lookup) but rather is algorithmic and therefore able to be verified as valid without needing a specific lookup at run-time. Assuming such a token or API key were embedded in the URI, the application delivery service can then extract the key, verify its authenticity using an algorithm, and subsequently allow or deny access based on the result.

This architecture is based on the premise that the application delivery service is capable of responding with the appropriate JSON in the event that the API key is determined to be invalid. Such a service must therefore be network-side scripting capable. Assuming such a platform exists, one can easily implement this architecture and enjoy the improved capacity and resulting performance boost from the offload of authorization and access management functions to the infrastructure.

1. A request is received by the application delivery service.
2. The application delivery service extracts the API key from the URI and determines validity.
3. If the API key is not legitimate, a JSON-encoded response is returned.
4. If the API key is valid, the request is passed on to the appropriate web/application server for processing.

Such an approach can also be used to enable or disable functionality within an application, including live streams. Assume a site that serves up streaming content, but only to authorized (registered) users. When requests for that content arrive, the application delivery service can dynamically determine, using an embedded key or some portion of the URI, whether to serve up the content or not. If it deems the request invalid, it can return a JSON response that effectively "turns off" the streaming content, thereby eliminating the ability of non-registered (or non-paying) customers to access live content. Such an approach could also be useful in the event of a service failure; if content is not available, the application delivery service can easily turn off and/or respond to the request, providing feedback to the user that is valuable in reducing their frustration with AJAX-enabled sites that too often simply "stop working" without any kind of feedback or message to the end user. The application delivery service could, of course, perform other actions based on the in/validity of the request, such as directing the request to be fulfilled by a service generating older or non-dynamic streaming content, using its ability to perform application level routing.

The possibilities are quite extensive and implementation depends entirely on goals and requirements to be met. Such features become more appealing when they are, through their capabilities, able to intelligently make use of resources in various locations. Cloud-hosted services may be more or less desirable for use in an application, and thus leveraging application delivery services to either enable or reduce the traffic sent to such services may be financially and operationally beneficial.

ARCHITECTURE is KEY

The core principle to remember here is that ultimately infrastructure architecture plays (or can and should play) a vital role in designing and deploying applications today. With the increasing interest and use of cloud computing and APIs, it is rapidly becoming necessary to leverage resources and services external to the application as a means to rapidly deploy new functionality and support for new features. The abstraction offered by application delivery services provides an effective, cross-site and cross-application means of enabling what were once application-only services within the infrastructure. This abstraction and service-oriented approach reduces the burden on the application as well as its developers. The application delivery service is almost always the first service in the oft-times lengthy chain of services required to respond to a client's request. Leveraging its capabilities to inspect and manipulate as well as route and respond to those requests allows architects to formulate new strategies and ways to provide their own services, as well as leveraging existing and integrated resources for maximum efficiency, with minimal effort.
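As a concrete illustration of the "algorithmic key" idea, the sketch below validates a numeric key with the Luhn algorithm, the same check-digit scheme used for credit card numbers, and answers invalid requests with JSON directly from the intermediary. The URI format, key scheme and error body are all hypothetical; a real deployment would use its own (preferably keyed) algorithm.

```python
def luhn_valid(key: str) -> bool:
    """Validate a numeric key with the Luhn check-digit algorithm (illustrative only)."""
    digits = [int(c) for c in key if c.isdigit()]
    if not digits or len(digits) != len(key):
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def handle(uri: str):
    # hypothetical URI format: /api/v1/orders?key=<numeric key>
    key = uri.rsplit("key=", 1)[-1] if "key=" in uri else ""
    if not luhn_valid(key):
        # answered directly by the delivery service; the request never reaches the servers
        return 403, '{"error": "invalid API key"}'
    return None  # pass the request through to the web/application tier

print(handle("/api/v1/orders?key=79927398713"))  # valid Luhn number -> passed through (None)
print(handle("/api/v1/orders?key=12345"))        # invalid -> (403, JSON error)
```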
Related blogs & articles:
HTML5 Going Like Gangbusters But Will Anyone Notice?
Web 2.0 Killed the Middleware Star
The Inevitable Eventual Consistency of Cloud Computing
Let's Face It: PaaS is Just SOA for Platforms Without the Baggage
Cloud-Tiered Architectural Models are Bad Except When They Aren't
The Database Tier is Not Elastic
The New Distribution of The 3-Tiered Architecture Changes Everything
Sessions, Sessions Everywhere

Et Tu, Browser?
Friends, foes, Internet denizens… lend me your browser.

Were you involved in any of the DDoS attacks that occurred over the past twelve months? Was your mom? Sister? Brother? Grandfather? Can you even answer that question with any degree of certainty?

Reality is that the reason for attack on the web is subtly shifting to theft not necessarily of data, but of resources. While the goal may still be to obtain personal credentials for monetary gain, it is far more profitable to rip hundreds or thousands of credentials from a single source than merely getting one at a time. From a miscreant's point of view, the return on investment is simply much higher targeting a site than it is targeting you, directly. But that doesn't mean you're off the hook. In fact, quite the opposite. For there are other just as nefarious purposes to which your resources can be directed, including inadvertently participating in a grand-scale DDoS attack for what is nowadays called "hacktivism." In both cases, you are still a victim, but you may not be aware of it, as the goal is to stealth-install the means by which your compute resources can be harnessed to perpetrate an attack, and it may not be caught by the security you have in place (you do have some in place, right?).

You can't necessarily count on immunity from infection because you only visit "safe sites". That's because one of the ways in which attackers leverage your compute resources is not through installation of adware or other malware, but directly through JavaScript loaded via infected sites. At issue is the possible collision between web application and browser security.

attackers are recommending to develop a system by which people are lured to some other content, such as SPAM SPAM SPAM !!!graphy, but by visiting the website would invisibly launch the DDOS JavaScript tool.
-- Researchers say: DDoS "Low Orbit Ion Cannon" attackers could be easily traced

Now consider the number of serious vulnerabilities reported by WhiteHat Security during the Fall of 2010. Consider the rate across social networking sites. Assume an attacker managed to exploit one of those vulnerabilities and plant the DDoS JavaScript tool such that unsuspecting visitors end up playing a role in a DDoS attack.

It gets worse, as far as the potential impact goes. The recent revelation of a new SSL/TLS vulnerability (BEAST) includes a pre-condition that JavaScript be injected into the browser. CSRF (Cross-Site Request Forgery) is a fairly common method of managing such a trick, and is listed by WhiteHat in the aforementioned report as having increased to 24% of all vulnerabilities. So, too, is XSS (Cross-Site Scripting), which ranks even higher in WhiteHat's list, tying "information leakage" for the number one spot at 64%.

In order to execute their attack, Rizzo and Duong use BEAST (Browser Exploit Against SSL/TLS) against a victim who is on a network on which they have a man-in-the-middle position. Once a victim visits a high-value site, such as PayPal, that uses TLS 1.0, and logs in and receives a cookie, they inject the client-side BEAST code into the victim's browser. This can be done through the use of an iframe ad or just loading the BEAST JavaScript into the victim's browser.
-- New Attack Breaks Confidentiality Model of SSL, Allows Theft of Encrypted Cookies

Such an attack is designed to steal high-value data such as might be stored in an encrypted cookie used to conduct transactions with PayPal or an online banking service.
Depending on the level of protection at the web application layer, the delivery of such JavaScript may go completely unnoticed. Most web application security focuses on verifying that user input, not application responses, is free from infection. And too many consumers believe running anti-virus scanning solutions is enough to detect and prevent infection in general, not realizing that a dynamically injected JavaScript (something many sites do all the time for monitoring performance and to enable real-time interaction) may, in fact, be "malicious" or at the very least an attempt at resource theft.

How do you stop a browser that essentially stabs you in the back by accepting, without question, questionable content? Without layering additional security on the browser that parses through each and every piece of content delivered, there isn't a whole lot you can do – other than turn off the ability to execute JavaScript, which, today, essentially renders the Internet useless.

GO to the SOURCE

If we look at the source of browser infections we invariably find the only viable, reasonable, effective answer is to eliminate the source. When you have a pandemic you figure out what's causing it and you go to the source. Yes, you treat the symptoms of the victims if possible, but what you want to really do is locate and whack the source so it stops spreading. In "Perceptions about Network Security" (Ponemon Institute, June 2011), surveys show that the top three sources of a breach were insider abuse (52%), malicious software download (48%), and malware from a website (43%). Interestingly, 29% indicated the breach resulted from malicious content coming from a "social networking site", which – when added to the malware from a website (of which social networking sites are certainly a type) – tops the chart with 72% of all causes of breaches being a direct result of the failure of a website to secure itself, essentially allowing itself to become a carrier of an outbreak.

Certainly if you have control over the desktops, laptops, and mobile devices from which a client will interact with your web site or network, and you have the capability to deploy policies on those clients that can aid in securing and protecting that client, you should. But that capability is rapidly dwindling with the introduction of a vast host of clients with wildly different OS footprints and the incompatibility of client-side, OS-specific agents and apps capable of supporting a holistic client-side security strategy. Enforcing policies regarding interaction with corporate resources is really your best and most complete option.

Like a DDoS attack, you are unlikely to be able to stop the infection of a client. You can, however, stop the spread and possible infection of your corporate resources as a carrier. The more organizations that attend to their own house's security and protection, the better off end-users will likely be. And if we can all reduce the potential sources down to sites relying on users specifically visiting an infected site, the client-side mechanisms in place to protect users against known malware distribution sites will get us further to a safer and more enjoyable Internet.
When the Data Center is Under Siege Don't Forget to Watch Under the Floor
The Many Faces of DDoS: Variations on a Theme or Two
Spanish police website hit by Anonymous hackers (June 2011)
What We Learned from Anonymous: DDoS is now 3DoS
Custom Code for Targeted Attacks
Defense in Depth in Context
The Big Attacks are Back…Not That They Ever Stopped
(IP) Identity Theft in Cloud Computing Environments
If Security in the Cloud Were Handled Like Car Accidents

The Real News is Not that Facebook Serves Up 1 Trillion Pages a Month…
It's how much load that really generates and how it scales to meet the challenge.

There's some amount of debate whether Facebook really crossed over the one trillion page view per month threshold. While one report says it did, another respected firm says it did not; that its monthly page views are a mere 467 billion per month. In the big scheme of things, the discrepancy is somewhat irrelevant, as neither shows the true load on Facebook's infrastructure – which is far more impressive a set of numbers than its externally measured "page view" metric.

Mashable reported in "Facebook Surpasses 1 Trillion Pageviews per Month" that the social networking giant saw "approximately 870 million unique visitors in June and 860 million in July" and followed up with some per-visitor statistics, indicating "each visitor averaged approximately 1,160 page views in July and 40 per visit — enormous by any standard. Time spent on the site was around 25 minutes per user."

From an architectural standpoint it's not just about the page views. It's about requests and responses, many of which occur under the radar from metrics and measurements typically gathered by external services like Google. Much of Facebook's interactive features are powered by AJAX, which is hidden "in" the page and thus obscured from external view, and a "page view" doesn't necessarily include a count of all the external objects (scripts, images, etc…) that comprise a "page". So while 1 trillion (or 467 billion, whichever you prefer) is impressive, consider that this is likely only a fraction of the actual requests and responses handled by Facebook's massive infrastructure on any given day.

Let's examine what the actual requests and responses might mean in terms of load on Facebook's infrastructure, shall we?

SOME QUICK MATH

Loading up Facebook yields 125 requests to load various scripts, images, and content. That's a "page view". Sitting on the page for a few minutes and watching Firebug's console, you'll note a request to update content occurs approximately every minute you are on a page. If we do the math – based on approximate page views per visitor, each of which incurs 125 GET requests – that works out to an approximation of 19,468 requests per second (RPS). That's only an approximation, mind you, and doesn't take into consideration the time factor, which also incurs AJAX-based requests to update content on a fairly regular basis. These also add to the overall load on Facebook's massive infrastructure. And that's before we start considering the impact from "unseen" integrated traffic via Facebook's API which, according to the most recently available data (2009), was adding 5 billion requests a day to that load. If you're wondering, that's an additional 57,870 requests per second, which gives us a more complete number of 77,338 requests per second.

SOURCE: 2009 Interop F5 Keynote

Let's take a moment to digest that, because that's a lot of load on a site – and I'm sure it still isn't taking into consideration everything. We also have to remember that the load at any given time could be higher – or lower – based on usage patterns. Averaging totals over a month and distilling down to a per-second average is just that – a mathematical average. It doesn't take into consideration that peaks and valleys occur in usage throughout the day and that Facebook may be averaging only a fraction of that load with spikes two and three times as high throughout the day.
That realization should be a bit sobering: we’ve seen recent DDoS attacks cripple and even topple sites handling less traffic than Facebook sees in any given minute of the day. The question is, how do they do it? How do they keep the service up and available despite the overwhelming load and the certainty of traffic spikes?

IT’S the ARCHITECTURE

Facebook itself does a great job of explaining exactly how it sustains such load over time while simultaneously managing growth, and its secret generally revolves around architectural choices – not just the “Facebook” application architecture, but its use of infrastructure architecture as well. That may not always be apparent from Facebook’s engineering blog, which tends to focus on application and software architecture topics, but it is inherent in those architectural decisions. Take, for example, an engineer’s discussion of Facebook’s secrets to scaling to 500 million users and beyond. The very first point made is to “scale horizontally”.

This isn't at all novel but it's really important. If something is increasing exponentially, the only sensible way to deal with it is to get it spread across arbitrarily many machines. Remember, there are only three numbers in computer science: 0, 1, and n. (Scaling Facebook to 500 Million Users and Beyond (Facebook Engineering Blog))

Horizontal scalability is, of course, enabled by load balancing, which generally (but not always) implies infrastructure components that are critical to an overall growth and scalability strategy. The abstraction afforded by load balancing services has the added benefit of enabling agile operations: it becomes cost- and time-effective to add and remove (provision and decommission) compute resources on demand to meet scaling challenges, which is a key component of cloud computing models. In other words, in addition to its attention to application architecture as a means to enable scalability, Facebook takes advantage of infrastructure components providing load balancing services to ensure that its massive load is distributed not just geographically but efficiently across its various clusters of application functionality. It’s a collaborative architecture spanning infrastructure and application tiers, taking advantage of the speed and scalability benefits afforded by both approaches simultaneously.

Facebook is not shy about revealing its use of infrastructure as a means to scale and implement its architecture; you just have to dig around to find it. Consider, as an example of a collaborative architecture, the solution to some of the challenges Facebook faced trying to scale out its database, particularly in the area of synchronization across data centers. This is a typical enterprise challenge made even more difficult by Facebook’s decision to separate “write” databases from “read” databases to enhance the scalability of its application architecture. The solution is found in something Facebook engineers call “Page Routing” but most of us in the industry call “Layer 7 Switching” or “Application Switching”:

The problem thus boiled down to, when a user makes a request for a page, how do we decide if it is "safe" to send to Virginia or if it must be routed to California? This question turned out to have a relatively straightforward answer.
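Before getting to their answer, it helps to picture what a Layer 7 switching decision looks like in the abstract. The sketch below is purely illustrative – hypothetical page names, data center labels, and lookup logic, not Facebook’s actual configuration – but it captures the idea of routing a request based on the URI and the user’s location:

```typescript
// Illustrative Layer 7 (application) switching: route by URI and user locality.
type DataCenter = "virginia" | "california";

// Hypothetical set of read-mostly pages that can tolerate slightly stale replica data
const SAFE_PAGES = new Set(["/home", "/photos/view", "/profile/view"]);

function routeRequest(uriPath: string, userIsEastCoast: boolean): DataCenter {
  // "Safe" pages requested by nearby users can be served by the read replicas in Virginia;
  // anything that writes (or isn't known to be safe) goes to the master in California.
  if (SAFE_PAGES.has(uriPath) && userIsEastCoast) {
    return "virginia";
  }
  return "california";
}

console.log(routeRequest("/profile/view", true)); // "virginia" – safe, read-only page
console.log(routeRequest("/profile/edit", true)); // "california" – must hit the master
```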
One of the first servers a user request to Facebook hits is called a Load balancer; this machine's primary responsibility is picking a web server to handle the request but it also serves a number of other purposes: protecting against denial of service attacks and multiplexing user connections to name a few. This load balancer has the capability to run in Layer 7 mode where it can examine the URI a user is requesting and make routing decisions based on that information. This feature meant it was easy to tell the load balancer about our "safe" pages and it could decide whether to send the request to Virginia or California based on the page name and the user's location. (Scaling Out (Facebook Engineering Blog))

That’s the hallmark of the modern, agile data center and the core of cloud computing models: collaborative, dynamic infrastructure and applications leveraging technology to enable cost-efficient, scalable architectures able to maintain growth along with the business.

SCALABILITY TODAY REQUIRES a COMPREHENSIVE ARCHITECTURAL STRATEGY

Today’s architectures – both application and infrastructure – are necessarily growing more complex to meet the explosive growth of a variety of media and consumers. Applications cannot scale themselves out alone – there simply aren’t physical machines large enough to support the massive number of users and the load created by consumers’ nearly insatiable demand for online games, shopping, interaction, and news. Modern applications must be deployed and delivered collaboratively with infrastructure if they are to scale and support growth in an operationally and financially efficient manner.

Facebook’s ability to grow and scale along with demand is enabled by a holistic architectural approach that leverages both modern application scalability patterns and infrastructure scalability patterns. Together, infrastructure and applications are enabling the social networking giant to continue to grow steadily with very few hiccups along the way. Its approach is well-suited to any organization wishing to scale efficiently over time with the least amount of disruption and with the speed of deployment required of today’s demanding business environments.

Facebook Hits One Trillion Page Views? Nope.
Facebook Surpasses 1 Trillion Pageviews per Month
Scaling Out (Facebook Engineering Blog)
Scaling Facebook to 500 Million Users and Beyond (Facebook Engineering Blog)
WILS: Content (Application) Switching is like VLANs for HTTP
Layer 7 Switching + Load Balancing = Layer 7 Load Balancing
Infrastructure Scalability Pattern: Partition by Function or Type
Infrastructure Scalability Pattern: Sharding Sessions
Architecturally, Is There Such A Thing As Too Scalable?
Forget Hyper-Scale. Think Hyper-Local Scale.

F5 Friday: If Only the Odds of a Security Breach were the Same as Being Hit by Lightning
#v11 AJAX, JSON, and an ever-increasing spread of web applications increase the odds of succumbing to a breach. BIG-IP ASM v11 reduces those odds, making it more likely you’ll win at the security table.

When we use an analogy often enough it becomes pervasive, to the point of becoming an idiom. One such idiom expresses the unlikelihood of an event by comparing it to being hit by lightning. The irony is that the odds of being hit by lightning are actually fairly significant – about 1:576,000. Too many organizations view their risk of a breach as being akin to being hit by lightning because they’re small, or don’t have a global presence, or what have you. The emergence years ago of “mass” web attacks rendered – or should have rendered – such arguments ineffective. Given the increasing number of web transactions on the Internet and the success of web-based attacks in enacting breaches, even comparing the risk to the odds of being hit by lightning does little but prove that eventually, you’re going to get hit.

Research by Zscaler earlier this year put the median number of web transactions per user at 1,912 per day. Analysts put the number of Internet users at about two billion. That translates into more than three trillion web transactions per day. Every day, three trillion transactions are flying around the web. If each of those transactions carried the same 1-in-576,000 odds as a lightning strike, more than 6 million of them would result in a breach – every day. The odds suddenly aren’t looking as good as they might seem, are they? If you think that’s bad, you haven’t seen the most recent Ponemon results, which concluded that the odds of being breached in the next year are a “statistical certainty.” No, it’s not paranoia if they really are out to get you – and guess what? Apparently they are out to get you. Truth be told, I’m not entirely convinced of the certainty of a breach, because it assumes precautionary measures and behavior are not modified in the face of such a dire prediction. If organizations were to, say, change their strategy as a means to improve their odds, then the only statistical certainty would likely be that a breach would be attempted – not necessarily that it would succeed.

The bad news is that even if you have protections in place, the bad guys’ methods are evolving. If your primary means of protection are internal to your applications, the possibility remains that a new attack will require a rewrite – and a redeployment. And even if you are taking advantage of external protection such as a web application firewall like BIG-IP ASM (Application Security Manager), it’s possible that it hasn’t provided complete coverage or accounted for misconfiguration errors – typographical, case-sensitivity errors that can effectively erode protections. The good news is that even as the bad guys evolve, so too do those external protective mechanisms. BIG-IP ASM v11 introduced significant enhancements that provide better protection for emerging development format standards as well as address those operational oops that can leave an application vulnerable to a breach.

BIG-IP v11 Enhancements

AJAX and JSON Support

AJAX growth over the past few years has established it as the status quo for building interactive web applications. Increasingly, these interactions via AJAX use JSON as their data format of choice. Previous versions of BIG-IP ASM were unable to properly parse, and therefore secure, JSON payloads.
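To see why that matters, consider what applying a parameter-level policy to a JSON payload actually involves. The sketch below is a deliberately naive illustration of the general technique – toy attack signatures and hypothetical function names, not BIG-IP ASM’s enforcement engine – but it shows why a firewall that treats the request body as opaque text can’t inspect AJAX traffic effectively:

```typescript
// Naive illustration: walk a parsed JSON payload and apply per-value checks.
const SUSPICIOUS = [/('|%27)\s*or\s*1=1/i, /<script\b/i]; // toy signatures only

function violations(value: unknown, path = "$"): string[] {
  if (typeof value === "string") {
    return SUSPICIOUS.some((re) => re.test(value)) ? [path] : [];
  }
  if (Array.isArray(value)) {
    return value.flatMap((v, i) => violations(v, `${path}[${i}]`));
  }
  if (value !== null && typeof value === "object") {
    return Object.entries(value).flatMap(([k, v]) => violations(v, `${path}.${k}`));
  }
  return [];
}

// An AJAX request body must be parsed as JSON before it can be inspected at all:
const body = JSON.parse('{"comment": "nice photo", "name": "x\' OR 1=1 --"}');
console.log(violations(body)); // ["$.name"]
```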
A secondary issue with AJAX relates to the blocking pages generally returned by web application firewalls. A BIG-IP ASM blocking page, for example, is HTML-based. When an embedded AJAX control triggers a policy violation, the client can’t present the blocking page because it isn’t expecting HTML in the response – it expects JSON. This leaves operators in the dark and makes troubleshooting AJAX issues very difficult.

To address both these issues, BIG-IP ASM v11 can now parse JSON payloads and enforce the appropriate security policies. This is advantageous not only for protecting AJAX-exchanged payloads but also for managing integration via JSON-based APIs from external sources. Being able to secure what is essentially third-party content is paramount to maintaining a positive security posture regardless of the external providers’ level of security. BIG-IP ASM v11 can now also display a blocking page by injecting JavaScript into the response that pops up a window with a support ID, traceable by operators for easier troubleshooting. The ability to display a blocking page and support ID in this way is unique to BIG-IP ASM v11.

Case Insensitivity

Case sensitivity is generally derived from the underlying web server OS. While a case-sensitive policy is an advantage on Unix/Linux platforms, it can be painful to manage on other platforms, because developers often write code without considering case. For example, a policy configured to allow a single file type, “html”, may also need entries for Html, hTml, HTml, etc…, because a developer may have fat-fingered links in the code with these typographical variations. On Windows platforms this is not a problem for the application, but it becomes an issue for the web application firewall, which is necessarily sensitive to case. BIG-IP ASM v11 now includes a simple checkbox-style flag indicating that it should ignore case, making it more adaptable to Windows-based platforms where case may vary. This is important in reducing false positives – situations where the security device thinks a request is malicious when in reality it is not. Because web application firewalls generally contain very granular, URI-level policies to better protect against injection-style attacks, they often flag case differences as “errors” or “possible attacks.” If configured to block such requests, the web application firewall would incorrectly reject requests for pages or URIs whose case differences are merely typographical. This enhancement allows operators to ignore case and focus on securing the payload.

BIG-IP ASM VE

BIG-IP ASM is now available in a virtual form factor, ASM VE. A virtual form factor makes it easier to evaluate and test in lab environments, and it enables developers to assist in troubleshooting when vulnerabilities or issues arise that involve the application directly. Virtual patching is also better enabled by a virtual form factor, as is the ability to deploy remotely in cloud computing environments.

There is no solution short of a pair of scissors that can reduce your risk of breach to zero. But there are solutions that can reduce that risk to a more acceptable level, and one of those solutions is BIG-IP ASM. Getting hit by lightning on the Internet is a whole lot more likely than the idiom makes it sound, and anything that can reduce the odds is worth investigating sooner rather than later.
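As a footnote to the case-sensitivity point above, the effect of such an “ignore case” flag is conceptually as simple as normalizing values before the policy lookup. This is a generic sketch with hypothetical names, not ASM’s configuration model:

```typescript
// Generic illustration of case-insensitive file-type matching in a URI policy.
const allowedFileTypes = new Set(["html", "css", "js"]);

function isAllowedFileType(uri: string, ignoreCase: boolean): boolean {
  const extension = uri.split(".").pop() ?? "";
  const candidate = ignoreCase ? extension.toLowerCase() : extension;
  return allowedFileTypes.has(candidate);
}

// A fat-fingered link like "/about.Html" is served happily by a Windows web server,
// but a case-sensitive policy would flag it – a false positive:
console.log(isAllowedFileType("/about.Html", false)); // false -> flagged/blocked
console.log(isAllowedFileType("/about.Html", true));  // true  -> request passes
```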
More BIG-IP ASM v11 Resources:

Application Security in the Cloud with BIG-IP ASM
Securing JSON and AJAX Messages with F5 BIG-IP ASM
BIG-IP Application Security Manager Page
Audio White Paper - Application Security in the Cloud with BIG-IP ASM
F5 Friday: You Will Appsolutely Love v11
SQL injection – past, present and future
Introducing v11: The Next Generation of Infrastructure
BIG-IP v11 Information Page
F5 Monday? The Evolution To IT as a Service Continues … in the Network
F5 Friday: The Gap That become a Chasm
All F5 Friday Posts on DevCentral
ABLE Infrastructure: The Next Generation – Introducing v11