Working with JSON data in iRules - Part 1
When TMOS version 21 dropped a few months ago, I released a three-part article series focused on managing MCP in iRules. MCP is JSON-RPC 2.0 based, so it was a great use case for the new JSON commands. But it's not the only use case. JSON has been the default data format for web APIs for well over a decade, and until v21, doing anything with JSON in iRules was not for the faint of heart, as the Tcl version iRules uses has no native JSON parsing capability. In this article, I'll do a JSON overview, introduce the test scripts to pass simple JSON payloads back and forth, and get the BIG-IP configured to manage this traffic. In part two, we'll dig into the iRules.

JSON Structure & Terminology

Let's start with some example JSON, then we'll break it down.

```json
{
  "my_string": "Hello World",
  "my_number": 42,
  "my_boolean": true,
  "my_null": null,
  "my_array": [1, 2, 3],
  "my_object": {
    "nested_string": "I'm nested",
    "nested_array": ["a", "b", "c"]
  }
}
```

JSON is pretty simple. The example shown there is a JSON object. Object delimiters are the curly brackets you see on lines 1 and 11, and also in the nested object on lines 7 and 10. Every key in JSON must be a string enclosed in double quotes; the keys are the left side of the colon on lines 2-9.

- The colon is the separator between a key and its value.
- The comma is the separator between key/value pairs.

There are six data types in JSON:

- String - must be enclosed in double quotes, like keys
- Number - can be integer, floating point, or exponential format
- Boolean - can only be true or false, without quotes, no capitals
- Null - an intentional omission of a value
- Array - called a list in Python and Tcl
- Object - called a dictionary in Python and Tcl

Objects can be nested. (If you've ever pulled stats from iControl REST, you know this to be true!)
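To make that type mapping concrete, here's a quick standard-library Python check (mine, not part of the article's scripts) showing how each JSON type in the sample document lands in a parser. The same array/list and object/dictionary correspondence applies in Tcl on the BIG-IP side.

```python
import json

# The sample document from above
sample = """
{
  "my_string": "Hello World",
  "my_number": 42,
  "my_boolean": true,
  "my_null": null,
  "my_array": [1, 2, 3],
  "my_object": {
    "nested_string": "I'm nested",
    "nested_array": ["a", "b", "c"]
  }
}
"""

data = json.loads(sample)

# Each JSON type maps onto a native type: object -> dict, array -> list,
# string -> str, number -> int/float, boolean -> bool, null -> None
print(type(data["my_string"]).__name__)   # str
print(type(data["my_number"]).__name__)   # int
print(type(data["my_boolean"]).__name__)  # bool
print(data["my_null"] is None)            # True
print(type(data["my_array"]).__name__)    # list
print(type(data["my_object"]).__name__)   # dict
print(data["my_object"]["nested_array"])  # ['a', 'b', 'c']
```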
Creating a JSON test harness

Since iControl REST is JSON based, I could easily pass payloads from my desktop through a virtual server and onward to an internal host for the iControl REST endpoints, but I wanted something simpler with a pre-defined client and server payload. So I vibe coded a Python script to do just that, if you want to use it. I have an Ubuntu desktop connected to both the client and server networks of the v21 BIG-IP in my lab. First I tested on localhost, then got my BIG-IP set up to handle the traffic as well.

Local test

Clientside:

```
jrahm@udesktop:~/scripts$ ./cspayload.py client --host 10.0.3.95 --port 8088
[Client] Connecting to http://10.0.3.95:8088/
[Client] Sending JSON payload (POST):
{
  "my_string": "Hello World",
  "my_number": 42,
  "my_boolean": true,
  "my_null": null,
  "my_array": [
    1,
    2,
    3
  ],
  "my_object": {
    "nested_string": "I'm nested",
    "nested_array": [
      "a",
      "b",
      "c"
    ]
  }
}
[Client] Received response (Status: 200):
{
  "message": "Hello from server",
  "type": "response",
  "status": "success",
  "data": {
    "processed": true,
    "timestamp": "2026-01-29"
  }
}
```

Serverside:

```
jrahm@udesktop:~/scripts$ ./cspayload.py server --host 0.0.0.0 --port 8088
[Server] Starting HTTP server on 0.0.0.0:8088
[Server] Press Ctrl+C to stop
[Server] Received JSON payload:
{
  "my_string": "Hello World",
  "my_number": 42,
  "my_boolean": true,
  "my_null": null,
  "my_array": [
    1,
    2,
    3
  ],
  "my_object": {
    "nested_string": "I'm nested",
    "nested_array": [
      "a",
      "b",
      "c"
    ]
  }
}
[Server] Sent JSON response:
{
  "message": "Hello from server",
  "type": "response",
  "status": "success",
  "data": {
    "processed": true,
    "timestamp": "2026-01-29"
  }
}
```

Great, my JSON payload is properly flowing from client to server on localhost. Now let's get the BIG-IP set up to manage this traffic.

BIG-IP config

This is a pretty basic setup: just a JSON profile on top of the standard HTTP virtual server setup.
My server is listening on 10.0.3.95:8088, so I'll add that as a pool member and then create the virtual server in my clientside network at 10.0.2.50:80. The config is below.

```
ltm virtual virtual.jsontest {
    creation-time 2026-01-29:15:10:10
    destination 10.0.2.50:http
    ip-protocol tcp
    last-modified-time 2026-01-29:16:21:58
    mask 255.255.255.255
    pool pool.jsontest
    profiles {
        http { }
        profile.jsontest { }
        tcp { }
    }
    serverssl-use-sni disabled
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
    vlans {
        ext
    }
    vlans-enabled
    vs-index 2
}
ltm pool pool.jsontest {
    members {
        10.0.3.95:radan-http {
            address 10.0.3.95
            session monitor-enabled
            state up
        }
    }
    monitor http
}
ltm profile json profile.jsontest {
    app-service none
    maximum-bytes 3000
    maximum-entries 1000
    maximum-non-json-bytes 2000
}
```

BIG-IP test: just traffic, no iRules yet

OK, let's repeat the same client/server test to make sure traffic flows properly through the BIG-IP. I'll just show the clientside this time, as the serverside would be the same as before. Note that the IP and port in the client request should match the virtual server you create.

```
jrahm@udesktop:~/scripts$ ./cspayload.py client --host 10.0.2.50 --port 80
[Client] Connecting to http://10.0.2.50:80/
[Client] Sending JSON payload (POST):
{
  "my_string": "Hello World",
  "my_number": 42,
  "my_boolean": true,
  "my_null": null,
  "my_array": [
    1,
    2,
    3
  ],
  "my_object": {
    "nested_string": "I'm nested",
    "nested_array": [
      "a",
      "b",
      "c"
    ]
  }
}
[Client] Received response (Status: 200):
{
  "message": "Hello from server",
  "type": "response",
  "status": "success",
  "data": {
    "processed": true,
    "timestamp": "2026-01-29"
  }
}
```

Now we're cooking, and the BIG-IP is managing the traffic. Part two will drop as soon as I can share some crazy good news about a little thing happening at AppWorld you don't want to miss!

Working with JSON data in iRules - Part 2
In part one, we covered JSON at a high level, got scripts working to pass JSON payloads back and forth between client and server, and got the BIG-IP configured to manage this traffic. In this article, we'll start with an overview of the new JSON events, walk through an existing Tcl procedure that prints out the payload in log statements (explaining the JSON:: iRules commands in play), and then we'll create a proc or two of our own to find keys in a JSON payload and log their values.

But before that, we're going to have a little iRules contest at this year's AppWorld 2026 in Vegas. Are you coming? REGISTER in the AppWorld mobile app for the contest (to be released soon)...seats are limited!

```tcl
when CONTEST_SUBMISSION {
    set name [string toupper [string replace Jason 1 1 ""]]
    log local0. "Hey there...$name here."
    log local0. "You might want to speak my language: structured, nested, and curly-braced."
}
```

Some details are being withheld until we gather at AppWorld for the contest, but there just might be a hint in that pseudo-iRule code above.

Crawl, Walk, Run!

Crawling

Let's start by crawling. With the new JSON profile, there are several new events:

- JSON_REQUEST
- JSON_REQUEST_MISSING
- JSON_REQUEST_ERROR
- JSON_RESPONSE
- JSON_RESPONSE_MISSING
- JSON_RESPONSE_ERROR

From there, let's craft a basic iRule with simple log statements in each event to see what triggers them.

```tcl
when HTTP_REQUEST {
    log local0. "HTTP request received: URI [HTTP::uri] from [IP::client_addr]"
}
when JSON_REQUEST {
    log local0. "JSON Request detected successfully."
}
when JSON_REQUEST_MISSING {
    log local0. "JSON Request missing."
}
when JSON_REQUEST_ERROR {
    log local0. "Error processing JSON request. Rejecting request."
}
when JSON_RESPONSE {
    log local0. "JSON response detected successfully."
}
when JSON_RESPONSE_MISSING {
    log local0. "JSON Response missing."
}
when JSON_RESPONSE_ERROR {
    log local0. "Error processing JSON response."
}
```

Now we need some client and server payload.
Thankfully, we have that covered with the script I shared in part one. We just need to unleash it! I have my Visual Studio Code IDE fired up with the F5 Extension and iRules editor marketplace extensions connected to my v21 BIG-IP. I have the iRule above loaded in the center pane, and the terminal in the right pane split three ways so I can a) generate traffic in the top terminal, b) view the server request/response in the middle terminal, and c) watch the logs from BIG-IP in the bottom terminal. Handy to have all that in one view in the IDE while working.

For the first pass, I'll send a request expected to work through the BIG-IP and get a response back from my test server. That command is:

```
./cspayload.py client --host 10.0.2.50 --port 80
```

The result can be seen in the picture below (shown here to show the VS Code setup; I'll just show text going forward). You can see that the request triggered HTTP_REQUEST, JSON_REQUEST, and JSON_RESPONSE as expected.

Now I'll send an empty payload to verify that JSON_REQUEST_MISSING will fire. The command for that is:

```
./cspayload1.py client --host 10.0.2.50 --port 80 --no-json
```

We get the event triggered as expected, but interestingly, the request is still processed and sent to the backend, and the response is sent back just fine (timestamps removed):

```
Rule /Common/irule.jsontest <HTTP_REQUEST>: HTTP request received: URI / from 10.0.2.95
Rule /Common/irule.jsontest <JSON_REQUEST_MISSING>: JSON Request missing.
Rule /Common/irule.jsontest <JSON_RESPONSE>: JSON response detected successfully.
```

My test script's serverside code doesn't balk at an empty payload, but most services likely will, so you'll want to manage a reject or response as appropriate in this event. Now let's trigger an error by sending some invalid JSON.
The command I sent is:

```
./cspayload1.py client --host 10.0.2.50 --port 80 --malformed-custom '{invalid: "no quotes on key"}'
```

That resulted in a successfully triggered JSON_REQUEST_ERROR, and no payload was forwarded to the backend server.

```
Rule /Common/irule.jsontest <HTTP_REQUEST>: HTTP request received: URI / from 10.0.2.95
Rule /Common/irule.jsontest <JSON_REQUEST_ERROR>: Error processing JSON request. Rejecting request.
```

Walking

After validating that our events are triggering, let's take a look at the example iRule below, which uses a procedure to print out the JSON payload.

```tcl
when JSON_REQUEST {
    set json_data [JSON::root]
    call print $json_data
}
proc print { e } {
    set t [JSON::type $e]
    set v [JSON::get $e]
    set p0 [string repeat " " [expr {2 * ([info level] - 1)}]]
    set p [string repeat " " [expr {2 * [info level]}]]
    switch $t {
        array {
            log local0. "$p0\["
            set size [JSON::array size $v]
            for {set i 0} {$i < $size} {incr i} {
                set e2 [JSON::array get $v $i]
                call print $e2
            }
            log local0. "$p0\]"
        }
        object {
            log local0. "$p0{"
            set keys [JSON::object keys $v]
            foreach k $keys {
                set e2 [JSON::object get $v $k]
                log local0. "$p${k}:"
                call print $e2
            }
            log local0. "$p0}"
        }
        string - literal {
            set v2 [JSON::get $e $t]
            log local0. "$p\"$v2\""
        }
        default {
            set v2 [JSON::get $e $t]
            if { $v2 eq "" && $t eq "null" } {
                log local0. "${p}null"
            } elseif { $v2 == 1 && $t eq "boolean" } {
                log local0. "${p}true"
            } elseif { $v2 == 0 && $t eq "boolean" } {
                log local0. "${p}false"
            } else {
                log local0. "$p$v2"
            }
        }
    }
}
```

If you build a lot of JSON utilities, I'd recommend creating an iRule that is just a library of procedures you can call from the iRule where your application-specific logic lives. In this case, it's instructional, so I'll keep the proc local to the iRule. Let's take this line by line. Lines 1-4 are the logic of the iRule.
Upon the JSON_REQUEST event trigger, we use the JSON::root command to load the JSON payload into the json_data variable, then pass that data to the print proc to, well, print it (via log statements).

Lines 5-47 detail the print procedure. It takes the variable e (for element) and acts on it throughout the proc.

- Lines 6-7 set the type and value of the element to the t and v variables, respectively.
- Lines 8-9 calculate the whitespace indentation for each element's value that will be printed.
- Lines 10-46 are conditional logic controlled by the switch statement, based on the element's type from the JSON::type command: lines 11-19 handle an array, lines 20-29 an object, lines 30-33 a string or literal, and lines 34-45 the default catchall.

Lines 11-19 cover the JSON array, which in Tcl is a list. The JSON::array size command gets the list size, and the for loop iterates through each list item. The JSON::array get command then sets the value at that index to a second element variable (e2) and recursively calls the proc to start afresh on the e2 element.

Lines 20-29 cover the JSON object, which in Tcl is a key/value dictionary. The JSON::object keys command gets the keys of the element, and we iterate through each key. The rest of this action is identical to the JSON array, with the exception of using the JSON::object get command.

Lines 30-33 cover the string and literal types. Simple action here: use the JSON::get command with the element and type, then log it.

Lines 34-45 are the catchall for other types. Tcl represents a null type as an empty string, and the boolean values true and false as 1 and 0, respectively. But since we're printing out the JSON values sent, it's nice to make sure they match, so I modified the function to print a literal null string for that type, and literal true/false strings for their 1/0 Tcl counterparts. Otherwise, it prints as is.

OK, let's run the test and see what we see.
Clientside view:

```
./cspayload2.py client --host 10.0.2.50 --port 80
[Client] Connecting to http://10.0.2.50:80/
[Client] Sending JSON payload (POST):
{
  "my_string": "Hello World",
  "my_number": 42,
  "my_boolean": true,
  "my_null": null,
  "my_array": [
    1,
    2,
    3
  ],
  "my_object": {
    "nested_string": "I'm nested",
    "nested_array": [
      "a",
      "b",
      "c"
    ]
  }
}
[Client] Received response (Status: 200):
{
  "message": "Hello from server",
  "type": "response",
  "status": "success",
  "data": {
    "processed": true,
    "timestamp": "2026-01-29"
  }
}
```

Serverside view:

```
jrahm@udesktop:~/scripts$ ./cspayload2.py server --host 0.0.0.0 --port 8088
[Server] Starting HTTP server on 0.0.0.0:8088
[Server] Mode: Normal JSON responses
[Server] Press Ctrl+C to stop
[Server] Received JSON payload:
{
  "my_string": "Hello World",
  "my_number": 42,
  "my_boolean": true,
  "my_null": null,
  "my_array": [
    1,
    2,
    3
  ],
  "my_object": {
    "nested_string": "I'm nested",
    "nested_array": [
      "a",
      "b",
      "c"
    ]
  }
}
[Server] Sent JSON response:
{
  "message": "Hello from server",
  "type": "response",
  "status": "success",
  "data": {
    "processed": true,
    "timestamp": "2026-01-29"
  }
}
```

Resulting log statements on BIG-IP (with timestamp through rule name removed for visibility):

```
<JSON_REQUEST>: {
<JSON_REQUEST>:   my_string:
<JSON_REQUEST>:     "Hello World"
<JSON_REQUEST>:   my_number:
<JSON_REQUEST>:     42
<JSON_REQUEST>:   my_boolean:
<JSON_REQUEST>:     true
<JSON_REQUEST>:   my_null:
<JSON_REQUEST>:     null
<JSON_REQUEST>:   my_array:
<JSON_REQUEST>:   [
<JSON_REQUEST>:     1
<JSON_REQUEST>:     2
<JSON_REQUEST>:     3
<JSON_REQUEST>:   ]
<JSON_REQUEST>:   my_object:
<JSON_REQUEST>:   {
<JSON_REQUEST>:     nested_string:
<JSON_REQUEST>:       "I'm nested"
<JSON_REQUEST>:     nested_array:
<JSON_REQUEST>:     [
<JSON_REQUEST>:       "a"
<JSON_REQUEST>:       "b"
<JSON_REQUEST>:       "c"
<JSON_REQUEST>:     ]
<JSON_REQUEST>:   }
<JSON_REQUEST>: }
```

The print procedure includes the whitespace necessary to prettify the output. Neat!

Running

Now that we've worked our way through the print function, let's do something useful!
You might have a need to evaluate the value of a key somewhere in the JSON object and act on it. For this example, we're going to look for the nested_array key, retrieve its value, and if an item value of b is found, reject the request by building a new JSON object to return status to the client. First, we need to build a proc we'll name find_key, similar to the print proc above, to recursively search the JSON payload. While learning my way through this, I also discovered I needed an additional proc we'll name stringify to, well, "stringify" the values of objects, because they are still encoded.

stringify proc

```tcl
proc stringify { json_element } {
    set element_type [JSON::type $json_element]
    set element_value [JSON::get $json_element]
    set output ""
    switch -- $element_type {
        array {
            append output "\["
            set array_size [JSON::array size $element_value]
            for {set index 0} {$index < $array_size} {incr index} {
                set array_item [JSON::array get $element_value $index]
                append output [call stringify $array_item]
                if {$index < $array_size - 1} {
                    append output ","
                }
            }
            append output "\]"
        }
        object {
            append output "{"
            set object_keys [JSON::object keys $element_value]
            set key_count [llength $object_keys]
            set current_index 0
            foreach current_key $object_keys {
                set nested_element [JSON::object get $element_value $current_key]
                append output "\"${current_key}\":"
                append output [call stringify $nested_element]
                if {$current_index < $key_count - 1} {
                    append output ","
                }
                incr current_index
            }
            append output "}"
        }
        string - literal {
            set actual_value [JSON::get $json_element $element_type]
            append output "\"$actual_value\""
        }
        default {
            set actual_value [JSON::get $json_element $element_type]
            append output "$actual_value"
        }
    }
    return $output
}
```

There really isn't any new magic in this proc, though I did expand the variable names to make it a little clearer than our original example.
It's basically a redo of the print proc, but instead of printing, it creates the string version of objects so I can execute a conditional against that string. Nothing new to learn, but necessary to make the find_key proc work.

find_key proc

```tcl
proc find_key { json_element search_key } {
    set element_type [JSON::type $json_element]
    set element_value [JSON::get $json_element]
    switch -- $element_type {
        array {
            set array_size [JSON::array size $element_value]
            for {set index 0} {$index < $array_size} {incr index} {
                set array_item [JSON::array get $element_value $index]
                set result [call find_key $array_item $search_key]
                if {$result ne ""} {
                    return $result
                }
            }
        }
        object {
            set object_keys [JSON::object keys $element_value]
            foreach current_key $object_keys {
                if {$current_key eq $search_key} {
                    set found_element [JSON::object get $element_value $current_key]
                    set found_type [JSON::type $found_element]
                    if {$found_type eq "object" || $found_type eq "array"} {
                        set found_value [call stringify $found_element]
                    } else {
                        set found_value [JSON::get $found_element $found_type]
                    }
                    return $found_value
                }
                set nested_element [JSON::object get $element_value $current_key]
                set result [call find_key $nested_element $search_key]
                if {$result ne ""} {
                    return $result
                }
            }
        }
    }
    return ""
}
```

In the find_key proc, the magic happens in lines 9-12 for a JSON array (a Tcl list) and in lines 18-32 for a JSON object (a Tcl dictionary). There's nothing new in the use of the JSON commands, but rather than printing all the keys and values found, we're looking for a specific key so we can return its value. For the array, we iterate through list items that each have a single value, but that value might be an object that needs to be stringified. For the object, we iterate through all the keys and their values, which might also be objects or nested objects to be stringified. Recursion for the win!
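If it helps to see the recursion outside of Tcl, here's a rough Python analogue of find_key (the names and structure are mine, not part of the iRule API). Since Python's json module decodes straight to native dicts and lists, the stringify step collapses to a json.dumps call.

```python
import json

def find_key(element, search_key):
    """Recursively search a decoded JSON structure for search_key.

    Returns the key's value; objects and arrays are rendered back to a
    JSON string, mirroring the stringify proc. Returns None if absent.
    """
    if isinstance(element, list):
        # JSON array: recurse into each item
        for item in element:
            result = find_key(item, search_key)
            if result is not None:
                return result
    elif isinstance(element, dict):
        # JSON object: check each key, then recurse into its value
        for key, value in element.items():
            if key == search_key:
                if isinstance(value, (dict, list)):
                    # "stringify": render nested structures as JSON text
                    return json.dumps(value, separators=(",", ":"))
                return value
            result = find_key(value, search_key)
            if result is not None:
                return result
    return None

payload = json.loads('{"my_object": {"nested_array": ["a", "b", "c"]}}')
print(find_key(payload, "nested_array"))         # ["a","b","c"]
print("b" in find_key(payload, "nested_array"))  # True
```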
Hopefully you're starting to get the hang of using all the interrogating JSON commands we've covered, because now we're going to create something with some new commands!

iRule logic

Once we have the procs defined to handle their specific jobs, the iRule to find the key and then return the rejected status message becomes much cleaner:

```tcl
when JSON_REQUEST priority 500 {
    set json_data [JSON::root]
    if {[call find_key $json_data "nested_array"] contains "b" } {
        set cache [JSON::create]
        set rootval [JSON::root $cache]
        JSON::set $rootval object
        set obj [JSON::get $rootval object]
        JSON::object add $obj "[IP::client_addr] status" string "rejected"
        set rendered [JSON::render $cache]
        log local0. "$rendered"
        HTTP::respond 200 content $rendered "Content-Type" "application/json"
    }
}
```

Let's walk through this one line by line.

- Lines 1 and 13 wrap the JSON_REQUEST event logic.
- Line 2 retrieves the current JSON::root, which is our payload, and stores it in the json_data variable.
- Lines 3 and 12 wrap the if conditional, which uses our find_key proc to look for the nested_array key; if that stringified value includes b, we reject the request. (In real life, "b" would be a terrible contains pattern to look for, but go with me here.)
- Line 4 creates a JSON context for the system. Think of this as a container we're going to do JSON stuff in.
- Line 5 gets the root element of our JSON container. At this point it's empty; we're just getting a handle to whatever will be at the top level.
- Line 6 actually adds an object to the JSON container. At this point, it's just "{ }".
- Line 7 gets a handle to the object we just created so we can do something with it.
- Line 8 adds the key/value pair: the client address plus "status" as the key, and our reject message as the value.
- Line 9 takes the entire JSON context we just created and renders it to a JSON string we can log and respond with.
- Line 10 logs to /var/log/ltm.
- Line 11 responds with the reject message in JSON format. Note I'm using a 200 status code instead of a 403.
That's just because the client test script won't show the status message with a 403, and I wanted to see it. Normally you'd use the appropriate status code.

Now, I offer you a couple of challenges:

1. Lines 4-9 in the JSON_REQUEST example above should really be split off into another proc, so that the logic of JSON_REQUEST is laser-focused. How would YOU write that proc, and how would you call it from the JSON_REQUEST event?
2. The find_key proc works, but there's a Tcl-native way to get at that information with just the JSON::object subcommands that is far less complex and more performant. Come at me!

Conclusion

When I started this JSON article series, I knew A LOT less about the underlying basics of JSON than I thought I did. It's funny how working with things on the wire requires a little more insight into protocols and data formats than you think you need. Happy iRuling out there, and I hope to see you at AppWorld next month!
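As a footnote: if the create/set/add/render sequence in the iRule above feels foreign, here's the same rejection payload built in Python. This is a loose analogue of mine, not iRule code; the client address is a made-up placeholder standing in for [IP::client_addr].

```python
import json

client_addr = "10.0.2.95"  # placeholder for [IP::client_addr]

# JSON::create + JSON::set root object: start with an empty object, "{}"
cache = {}

# JSON::object add: the key is the client address plus "status",
# and the value is the string "rejected"
cache[f"{client_addr} status"] = "rejected"

# JSON::render: serialize the container to a JSON string we can
# log and hand to HTTP::respond
rendered = json.dumps(cache)
print(rendered)  # {"10.0.2.95 status": "rejected"}
```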
Declaration for loading Cert/PrivKey in Common

Dear F5 enthusiasts,

I want to add a certificate and a private key to my F5 through an AS3 declaration, under System > Certificate Management. The certificate must be placed under the /Common partition only; no path is necessary. The declaration I created looks as follows:

```json
{
  "$schema": "https://raw.githubusercontent.com/F5Networks/f5-appsvcs-extension/master/schema/latest/as3-schema.json",
  "class": "AS3",
  "action": "deploy",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.45.0",
    "id": "import-cert",
    "label": "Certificate Import",
    "Common": {
      "class": "Tenant",
      "myCertName": {
        "class": "Certificate",
        "certificate": {
          "base64": "<base64 encoded certificate>"
        },
        "privateKey": {
          "base64": "<base64 encoded private key>"
        }
      }
    }
  }
}
```

But when I POST this declaration to my F5 server, I get the following message back:

```json
{
  "code": 422,
  "errors": [
    "/Common: should NOT have additional properties"
  ],
  "message": "declaration is invalid",
  "host": "localhost",
  "tenant": [
    "Common:"
  ],
  "declarationId": "import-cert"
}
```

I tried to find answers but couldn't find anything, and I would appreciate help.

Thanks in advance,
Kr Xavier
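For what it's worth (hedging, since the accepted answer isn't shown here): in AS3, the Common tenant only accepts a restricted set of properties, and objects destined for /Common generally have to live inside the special Shared application (class Application, template shared), which places them in the /Common/Shared folder. A sketch of the declaration body in that shape, under that assumption:

```json
{
  "class": "ADC",
  "schemaVersion": "3.45.0",
  "id": "import-cert",
  "Common": {
    "class": "Tenant",
    "Shared": {
      "class": "Application",
      "template": "shared",
      "myCertName": {
        "class": "Certificate",
        "certificate": { "base64": "<base64 encoded certificate>" },
        "privateKey": { "base64": "<base64 encoded private key>" }
      }
    }
  }
}
```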
Bypass "Bad unescape" in Body POST (ASM, POST, JSON)

Here is the block. As you can see, the "%" is detected without any encoding meaning. This is expected, since the "%" is in the body of the POST as JSON data (see below). Of course, if I disable the "Bad unescape" violation in Learning and Blocking Settings it works, but my goal is to bypass it using a rule on the parameter or similar, so far without success. Does anyone have a solution?

======= JSON on POST Body Request =======================
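As a quick illustration of the premise (illustrative only; this is not ASM's actual implementation): a bare "%" in a JSON string is perfectly valid JSON, but a check that treats the bytes as URL-encoded data flags any "%" not followed by two hex digits as a bad escape.

```python
import json
import re

body = '{"discount": "15%", "note": "50% off"}'

# Valid JSON: the parser is perfectly happy with bare percent signs
data = json.loads(body)
print(data["discount"])  # 15%

# But a naive "bad unescape" style check, which expects every '%' to
# begin a %XX escape sequence, flags the same bytes as malformed
def has_bad_escape(s):
    return any(not re.match(r"%[0-9A-Fa-f]{2}", s[i:i + 3])
               for i, ch in enumerate(s) if ch == "%")

print(has_bad_escape(body))            # True: '%' not followed by hex digits
print(has_bad_escape("price%20list"))  # False: %20 is a valid escape
```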
5 Years Later: OpenAJAX Who?

Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I stumbled recently over a nearly five-year-old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like those of "cloud" today. You couldn't turn around without hearing someone promoting their solution by associating it with Web 2.0 or AJAX. After reading the opening paragraph, I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade, I'm well aware of how impactful "standards" and "specifications" really are in the real world, but the problem (interoperability across a growing field of JavaScript libraries) seemed at the time real and imminent, so there was a need for someone to address it before it completely got out of hand.

"With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard—whether through a standards body or by de facto adoption—and Dojo is one of the favored entrants in the race to become that standard."
-- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability.
The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several complete and well-defined specifications, including one "industry standard": OpenAjax Metadata.

OpenAjax Hub

"The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page." (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata

"OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products." (OpenAjax Metadata 1.0 Specification)

"OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits." (OpenAjax Alliance Recent News)

It is interesting to see XML called out as the format of choice in the OpenAjax Metadata (OAM) specification, given the recent rise of JSON as developers' preferred format for APIs. Granted, when the alliance was formed, XML was all the rage and was believed to be the dominant format for quite some time, given the popularity of similar technological models such as SOA. Still, the reliance on XML while the plurality of developers race to JSON may provide some insight into why OpenAjax has received so little notice since its inception. Ignoring the XML factor (which is undoubtedly a fairly impactful one), there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH): a hub. A publish-subscribe hub, to be more precise, in which OAH mediates between various toolkits on the same page.
Don summed it up nicely during a discussion on the topic: it's page-level integration. This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going, and where it should go: an industry-standard model and/or set of APIs to which other toolkit developers would design and write, such that the interface (the method calls) would be unified across all toolkits while the implementation remained whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model; always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces with vendor implementations decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes the Net go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration, or by failure; never by incompatibility in form or function.

Neither specification has really taken that direction. OAM, as previously noted, standardizes on XML and is primarily used to describe APIs and components; it isn't an API or model itself. The Alliance wiki describes the specification: "The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers." Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center.
Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by the producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments simpler and more consistent, but it does not appear to be gaining widespread support or mindshare despite IBM's efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub/integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems like overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and simplicity tend to drive developers to a single library when possible (which is most of the time). It appears, simply, that the OpenAjax Alliance, driven perhaps by active members for whom integration and hub-based interoperability solutions are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights), has chosen a target in another field: one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn't, and likely won't ever, exist. So it's no surprise to discover that references to and activity from OpenAjax have been nearly zero since 2009.
Given the statistics showing the rise of jQuery to the top of the JavaScript library heap, both as a percentage of site usage and of developer usage, it appears that at least the prediction that "one toolkit will become the standard—whether through a standards body or by de facto adoption" was accurate. Of course, since that's always the way it works in technology, it was kind of a sure bet, wasn't it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAjax Alliance several infrastructure vendors: folks who produce application delivery controllers, switches and routers, and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable about the methods used by developers to perform such data exchanges. In the age of hyper-scalability and über security, it behooves infrastructure vendors, and increasingly cloud computing providers that offer infrastructure services, to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than traditional form-based HTML applications do. The type of content, as well as the usage patterns of an application, can dramatically impact the application delivery policies necessary to achieve operational and business objectives for that application. As developers standardize through selection and implementation of toolkits, vendors and providers can then begin to focus solutions specifically on those choices.
Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery reduces the time to deploy such applications in a production environment by eliminating the test-and-tweak cycle that occurs when applications are tossed over the wall to operations by developers. For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}. If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data – attempts at SQL injection, for example – must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application layer switching based on data values or submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, such that it can be parsed, interpreted and processed.
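As an illustration of what that parsing involves, here is a small Python sketch that inspects a jQuery-style application/x-www-form-urlencoded body the way a web application firewall might. The signature pattern is invented and deliberately naive; real WAF signatures are far more sophisticated:

```python
import re
from urllib.parse import parse_qs

# Toy SQL injection signature, for illustration only.
SQLI_PATTERN = re.compile(r"('|--|\bunion\b|\bselect\b)", re.IGNORECASE)

def inspect_form_body(body):
    """Return the names of parameters whose values look suspicious.

    Assumes jQuery's default contentType
    (application/x-www-form-urlencoded), i.e. a body of the form
    key1=value1&key2=value2 as produced by jQuery.param().
    """
    # parse_qs handles percent-decoding and repeated keys for us.
    params = parse_qs(body, keep_blank_values=True)
    return [name for name, values in params.items()
            if any(SQLI_PATTERN.search(v) for v in values)]

# The comment value decodes to: 1' OR '1'='1
print(inspect_form_body("user=alice&comment=1%27%20OR%20%271%27%3D%271"))
# ['comment']
```

The key observation is that the inspection device must percent-decode and split the body exactly as the server framework will, or the check can be bypassed.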
By understanding jQuery – and other developer toolkits and standards used to exchange data – infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.

Disable buffer overflow in json parameters
Hi,

In a file upload using the API, we allow the file name in base64 encoding. However, this triggers the "Generic buffer overflow attempt 1" signature. As of this post it seems a signature at the JSON parameter level cannot be disabled: https://devcentral.f5.com/questions/disable-attack-signature-on-particular-json-parametercomment77010

Has there been any change in this? I am using v12. In my JSON content profile I do not see the buffer overflow signature at all when filtering by the name.

HTTPS Monitor: JSON File behind login page
Hello,

We already have an HTTPS monitor that reads the contents of a JSON file; it looks like this:

send string: GET /intern/api/v1/health/portal-status HTTP/1.1\r\nHost: host.company.com\r\n\r\n
receive string: \"portalStatus\":\"AVAILABLE\"

This works fine. But now I need a similar monitor where the JSON file is password protected. When I add a username/password to the monitor, it doesn't work. When I do a curl:

curl -sku monitor:<pw> -H "Content-Type: application/json" -X GET https://host.company.int/monitor/wga/widgets/health.json

I get the logon page for the username/password. I tried some other parameters for curl, but I don't get the content of the JSON file. Any ideas? Thanks in advance.

ASM JSON/XML payload detection & Automatically detect advanced protocols
Hello team,

I have a question regarding the learning suggestions. I want to know if it is possible for the ASM to suggest the association of an XML profile with a specific URL. In other words, is there a way to configure the ASM so that when XML traffic passes through it, a learning suggestion rises saying "you have to associate an XML profile with this URL"?

In this article: https://support.f5.com/kb/en-us/products/big-ip_asm/manuals/product/asm-getting-started-12-1-0/3.html

"The Policy Builder builds the security policy as follows: Examines application content and creates XML or JSON profiles as needed (if the policy includes JSON/XML payload detection)" ...etc.

We can read explicitly that it is possible: IF we enable "JSON/XML payload detection", then the answer to my question is "yes". The problem is that I can't find this "JSON/XML payload detection" option in the GUI. Could you please help on this?

Many thanks,
Karim

Are there resources to mock iControl REST?
Hello all,

Is there a huge zip file somewhere with a complete dump of a "classic" F5 installation, with virtual servers, policies, etc.? I want to mock iControl REST for unit testing and integration testing of one of our services that uses iControl REST, but not with production data that can be sensitive and doesn't belong in a CI/CD chain.

Thank you!

Authorization Header Defined as Unparsable by F5_ASM
One of our development teams is adding a new OAuth token feature to an application. Sending the JSON call with the Authorization header creates an error within F5 ASM (ver 15.13):

HTTP Validation: Unparsable request content
Details: Unparsable authorization header value

I also see that the Authorization header's data is masked, though there are no settings for the Authorization header in the policy. What are my best options for troubleshooting this issue?
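One way to reason about errors like this: "unparsable authorization header value" generally means the value does not match the "scheme credentials" shape the parser expects for that scheme. A small Python sketch of a basic-auth check (an illustrative stand-in, not ASM's actual parser) shows how a well-formed Bearer value can still fail a parser that assumes Basic:

```python
import base64
import binascii

def parse_basic_auth(value):
    """Return (user, password) if `value` is a well-formed HTTP
    basic Authorization header value, else None.

    Loosely mirrors the kind of check a proxy performs before
    reporting an 'unparsable authorization header' style error.
    """
    scheme, _, credentials = value.partition(" ")
    if scheme.lower() != "basic" or not credentials:
        return None
    try:
        # Basic credentials must be valid base64 of "user:password".
        decoded = base64.b64decode(credentials, validate=True).decode("utf-8")
    except (binascii.Error, UnicodeDecodeError):
        return None
    user, sep, password = decoded.partition(":")
    return (user, password) if sep else None

print(parse_basic_auth("Basic " + base64.b64encode(b"monitor:secret").decode()))
# ('monitor', 'secret')
print(parse_basic_auth("Bearer abc.def.ghi"))
# None
```

If the application is sending a Bearer token, comparing what the client actually puts in the header against what the inspecting device expects for that scheme is a reasonable first troubleshooting step.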