LineRate HTTP to HTTPS redirect
Here's a quick LineRate proxy code snippet to convert an HTTP request to an HTTPS request using the embedded Node.js engine. The relevant parts of the LineRate proxy config are below as well. By modifying the redirect_domain variable, you can redirect HTTP to HTTPS as well as doing a non-www to a www redirect. For example, you can redirect a request for http://example.com to https://www.example.com. The original URI is simply appended to the redirected request, so a request for http://example.com/page1.html will be redirected to https://www.example.com/page1.html. This example uses the self-signed SSL certificate that is included in the LineRate distribution. This is fine for testing, but make sure to create a new SSL profile with your site certificate and key when going to production. As always, the scripting docs can be found here.

redirect.js: Put this script in the default scripts directory - /home/linerate/data/scripting/proxy/ - and update the redirect_domain and redirect_type variables for your environment.

"use strict";
var vsm = require('lrs/virtualServerModule');

// domain name to which to redirect
var redirect_domain = 'www.example.com';

// type of redirect. 301 = permanent, 302 = temporary
var redirect_type = 302;

vsm.on('exist', 'vs_example.com', function(vs) {
    console.log('Redirect script installed on Virtual Server: ' + vs.id);
    vs.on('request', function(servReq, servResp, cliReq) {
        servResp.writeHead(redirect_type, {
            'Location': 'https://' + redirect_domain + servReq.url
        });
        servResp.end();
    });
});

LineRate config:

real-server rs1
 ip address 10.1.2.100 80
 admin-status online
!
virtual-ip vip_example.com
 ip address 192.0.2.1 80
 admin-status online
!
virtual-ip vip_example.com_https
 ip address 192.0.2.1 443
 attach ssl profile self-signed
 admin-status online
!
virtual-server vs_example.com
 attach virtual-ip vip_example.com default
 attach real-server rs1
!
virtual-server vs_example.com_https
 attach virtual-ip vip_example.com_https default
 attach real-server rs1
!
script redirect
 source file "proxy/redirect.js"
 admin-status online

Example:

user@m1:~/ > curl -L -k -D - http://example.com/test
HTTP/1.1 302 Found
Location: https://www.example.com/test
Date: Wed, 03-Sep-2014 16:39:53 GMT
Transfer-Encoding: chunked

HTTP/1.1 200 OK
Content-Type: text/plain
Date: Wed, 03-Sep-2014 16:39:53 GMT
Transfer-Encoding: chunked

hello world
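If you want to poke at the same redirect logic without a LineRate box handy, here is a minimal plain-Node.js stand-in (my own sketch, not part of the original post; the port is arbitrary). It builds the Location header exactly the way redirect.js does, so the original path and query survive the redirect:

"use strict";
var http = require('http');

var redirect_domain = 'www.example.com';
var redirect_type = 302; // temporary; switch to 301 only once you are sure the move is permanent

http.createServer(function (req, res) {
    // Same rewrite as redirect.js: new scheme and host, original path and query preserved
    res.writeHead(redirect_type, {
        'Location': 'https://' + redirect_domain + req.url
    });
    res.end();
}).listen(8080);

// Try it: curl -D - http://localhost:8080/page1.html?x=1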
LineRate Performance Tip: Logging in Node.js

LineRate's Node.js engine lets you program in the datapath with JavaScript. Once you start embedding business logic in your proxy, you'll need to debug this logic and report on the actions your proxy is taking. This can take several forms, from the simple to the complex. When you're just getting started, nothing beats the venerable console.log() and util.inspect() for simplicity. Whether your service scales to millions of requests per second, or just a trickle, there are some simple performance considerations that you should keep in mind. Knowing these up front prevents you from learning expensive habits that are harder to fix later.

1. Use log levels

If your logging is impacting performance, one easy way to fix it is to channel your inner Bob Newhart: "Stop it!" But you don't want to be left without any logs when you're debugging a problem. A very common approach regardless of language is a logging helper that respects log levels like "error", "warning", "info" or "debug". Your code contains all the logging statements you could need, but by tuning the logging level you adjust which ones fire.

function handleRequest(servReq, servRes, cliReq) {
    log.debug('Caught new request for %s at %s', servReq.url, new Date());

    if (servReq.url.toLowerCase() != servReq.url) {
        log.warning('Request URL %s is not lower case', servReq.url);
        try {
            fixLowercaseUrl(servReq);
        } catch (err) {
            log.error('Error fixing URL: %s, sending 502', err);
            send502(servReq, servRes);
            return;
        }
    }
    // ...
}

It's easy to find a node.js logging library that supports this, or you can make your own. Here's a simple gist that I use for logging (download from my gist):

var log = (function () {
    "use strict";
    var format = require('util').format;

    // Customize levels, but do not name any level: "log" "getLevel" "setLevel"
    var levels = { ERROR: 10, WARN: 20, INFO: 30, DEBUG: 40, TRACE: 50 };
    var currentLevel = 'INFO';

    function doGetLevel() {
        return currentLevel;
    }

    function doSetLevel(level) {
        if (!levels[level]) {
            throw new Error('No such level ' + level);
        }
        currentLevel = level;
    }

    function doLog(level, otherArgs) {
        if (!levels[level] || levels[level] > levels[currentLevel]) {
            return;
        }
        var args = [].slice.call(arguments);
        args.shift();
        console.log(level + ': ' + format.apply(this, args));
    }

    var toRet = {
        log: doLog,
        getLevel: doGetLevel,
        setLevel: doSetLevel
    };

    Object.getOwnPropertyNames(levels).forEach(function (level) {
        this[level.toLowerCase()] = function () {
            var args = [].slice.call(arguments);
            args.unshift(level);
            return doLog.apply(this, args);
        };
    }, toRet);

    return toRet;
})();
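To show how the helper is meant to be driven, here is a short usage sketch of my own (not part of the gist); the level names come straight from its levels table, and the lowercase method names are the ones the forEach loop generates:

log.info('Proxy script starting at %s', new Date());   // printed: INFO is the default level
log.debug('Startup config is ', { retries: 3 });        // skipped: DEBUG is noisier than INFO

log.setLevel('DEBUG');
log.debug('Now this one fires, and the object argument is inspected for you');

log.setLevel('ERROR');
log.warn('Suppressed: only ERROR messages get through now');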
2. Defer string concatenation

What's the difference between this call

log.debug('The request is ' + util.inspect(servReq) + ', and the response is ' + util.inspect(servRes));

and this one?

log.debug('The request is ', servReq, ', and the response is ', servRes);

These statements both produce the same output. Let's consider what happens if you're not running at the debug log level. Suppose we're running at logLevel INFO (the default in my simpleLog library). Let's see what happens in the first call. Our code says to call log.debug() with one argument. The argument is a long string formed by concatenating 4 smaller strings:

'The request is '
util.inspect(servReq)
', and the response is '
util.inspect(servRes)

You can check out the implementation of util.inspect() and probably notice that there's a lot going on, including recursive calls. This is all building up lots of little strings, which are concatenated into bigger strings and finally returned from util.inspect(). The JavaScript language is left-associative for string concatenation, but different engines may optimize as long as there aren't observable side effects; conceptually you can think about the costs by thinking about the tree of strings that the engine is building. The 2 hard-coded strings and the 2 strings returned from util.inspect() are concatenated one last time into the argument to log.debug().

Then we call log.debug(). If you're using my simpleLog, you can see that the first thing it does is bail out without doing anything:

if (!levels[level] || levels[level] > levels[currentLevel]) {
    return;
}

So we went through all the effort of building up a big string detailing all the aspects of the server request and server response, just to unceremoniously discard it.

What about the other style of call (spoiler alert: it's better :-)? Remember the second form passed four arguments; 2 arguments are just plain strings, and the other 2 are objects that we want to dump:

log.debug('The request is ', servReq, ', and the response is ', servRes);

This style is taking advantage of a console.log() behavior: "If formatting elements are not found in the first string, then util.inspect is used on each argument." I'm breaking up the arguments and asking console.log() to call util.inspect() on my behalf. Why? What's the advantage here? This structure avoids calling util.inspect() until it is needed. In this case, if we're at loglevel INFO, it's not called at all. All those recursive calls and concatenating tiny strings into bigger and bigger ones? Skipped entirely. Here's what happens: the four separate arguments (2 strings, 2 objects) are passed to the log.debug() function. The log.debug() function checks the loglevel and sees that it is INFO, which means log.debug() calls should not actually result in a log message. The log.debug() function returns immediately, without even bothering to check any of the arguments. The call to log.debug() is pretty cheap, too. The 2 strings are compile-time static, so it's easy for a smart optimizing engine like V8 to avoid making brand new strings each time. The 2 objects (servReq and servRes) were already created; V8 is also performant when it comes to passing references to objects.

The log.debug() helper, Node's built-in console.log(), and logging libraries on NPM let you defer by using either the comma-separated arguments approach or the formatted string (%s) style. A couple of final notes on this topic: First, the optimization in console.log() is awesomeness straight from plain node.js; LineRate's node.js scripting engine just inherited it. Second, remember that JavaScript allows overriding the Object.prototype.toString() method to customize how objects are stringified. util.inspect() has code to avoid invoking toString() on an object it is inspecting, but the simple '+' operator does not. A simple statement like this:

console.log('x is ' + x);

could be invoking x.toString() without you knowing it. If you didn't write x.toString(), you may have no idea how expensive it is.
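To make that last caveat concrete, here is a small illustration of my own (the object and its imagined cost are invented) showing which of the two call styles ends up invoking toString():

var x = {
    toString: function () {
        // pretend this walks a large data structure to build a summary
        return 'expensive summary';
    }
};

console.log('x is ' + x);   // '+' coerces x to a string, so x.toString() runs before console.log does
console.log('x is', x);     // console.log uses util.inspect() on x; toString() is never called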
3. Stdout/stderr is synchronous

In node.js, process.stdout and process.stderr are two special streams for process output, in the UNIX style. console.log() writes to process.stdout, and console.error() writes to process.stderr. It's important to know that writes to these streams may be synchronous: depending on whether the stream is connected to a TTY, file or pipe, node.js may ensure that the write succeeds before moving on. Successful writes can include storing in an intermediate buffer, especially if you're using a TTY, so in normal operation this synchronous behavior is no problem. The performance pain comes if you're producing writes to stdout or stderr faster than the consumer can consume them: eventually the buffer will fill up. This causes node.js to block waiting on the consumer.

Fundamentally, no amount of buffer can save you from a producer going faster than a consumer forever. There are only two choices: limit the producer to the rate of the consumer ("blocking"), or discard messages between the producer and consumer ("dropping"). But if your program's log production is bursty, then a buffer can help by smoothing out the producer. The risk is if the consumer fails while data is left in the buffer (this may be lost forever).

LineRate's node.js environment provides a little bit of both: LineRate hooks your node.js script up to our system logging infrastructure; you can use our CLI or REST API to configure where the logs end up. We route logs to those configured destinations as fast as we can, with buffering along the way. If the buffers are exceeded, we do drop messages (adding a message to the logs indicating that we dropped). We ensure the logging subsystem (our consumer) stays up even if your script fails with an error. All this ensures you get the best logs you can, and protects you from catastrophically blocking in node.js.

4. Get counters on demand

One last approach we've used to reduce logging is to provide a mechanism to query stats from our node.js scripts. LineRate includes an excellent key-value store called redis. LineRate scripts can use redis to store and retrieve arbitrary data, like script-specific counters. If all you're trying to do is count page requests for a specific URL, you could use a simple script like:

function serveCounter(servRes, counter) {
    redisClient.get(counter, function (error, buffer) {
        if (error) {
            servRes.writeHead(502);
            servRes.end();
        } else {
            servRes.writeHead(200, {
                'Content-Type': 'application/json',
                'Content-Length': buffer.length
            });
            servRes.end(buffer);
        }
    });
}

vsm.on('exist', 'vsStats', function (vs) {
    vs.on('request', function (servReq, servRes, cliReq) {
        if (servReq.url === '/path/to/counterA') {
            serveCounter(servRes, 'counterA');
        } else {
            cliReq(); // Or maybe an error?
        }
    });
});

or as fancy as a mini-webserver that serves back an HTML webpage comprising a dashboard and client-side AJAX to periodically pull down new stats and plot them. If you're in plain node.js, you'd use http.createServer() instead of vs.on('request', ...) and you'd have to generate an error page yourself instead of calling cliReq(), but the idea is the same.
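The script above only reads the counter; something still has to bump it. Here is a hedged sketch of that half, of my own making: it assumes redisClient was created with the usual node redis client (the original snippet references redisClient without showing its setup), and the virtual server name and URL are placeholders for whatever production traffic you want counted:

var redis = require('redis');
var vsm = require('lrs/virtualServerModule');

var redisClient = redis.createClient(); // connection details depend on your environment

vsm.on('exist', 'vs_example.com', function (vs) {
    vs.on('request', function (servReq, servRes, cliReq) {
        // Count the page view, then let the proxy serve the request as usual
        if (servReq.url === '/page/we/care/about.html') {
            redisClient.incr('counterA');
        }
        cliReq();
    });
});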
Conclusion

I've presented a summary of good and bad logging behaviors in node.js. The combination of JavaScript's syntactic simplicity and node.js' awesome non-blocking I/O capabilities means that you can spend more time on your business logic and less time shuffling bytes around. But there are a few sharp corners. Logging libraries can smooth out those sharp corners, as long as you're aware of how to use them and of the problems, like output buffering, that they can't solve. All of my advice applies to node.js and LineRate's node.js scripting engine; I hope it serves you well for both. If you're looking for a proxy you can program in node.js, or if you'd just like to try out the examples, remember you can deploy LineRate today for free.

Devops Needs Application Affinity

Infrastructure must balance between applications and the network because otherwise werewolves would cease to exist.

In science we're taught that gravity is the law. As it relates to us living here on earth (I can't speak for all you displaced aliens, sorry) there are two gravitational forces at work: the earth and the moon. The earth's gravity, of course, keeps us grounded. It's foundational. Without it, we're kind of up a creek (or an atmosphere, as it were) without a paddle. The moon's gravitational pull is a bit different in that it's pulling in the opposite direction. It's pulling upwards whereas the earth's gravity pulls us downward. Both of these forces are equally important. The loss of either would be devastating. I'm sure there's a made-for-SyFy movie about that happening. There's a made-for-SyFy movie about every disastrous scenario we can think of, after all. Just think of what would happen to the werewolves - especially the sparkly ones in teenage novels - if the moon disappeared. Yeah, they might disappear too.

The data center, too, has two equally important gravitational forces and they, like the moon and the earth, are pulling in opposite directions. The data center needs both too, because we don't want werewolves to disappear. Why yes, mixing metaphors and creating non sequiturs amuses me, why do you ask?

In the data center, Devops is being acted on by two gravitational forces: the network and the application. The network naturally pulls devops downward. All the myriad infrastructure necessary to support and deliver an application ultimately requires networking. But devops is, at its heart and soul, concerned with enabling continuous delivery of applications. And it is the application that pulls devops upward, toward the higher layers of the stack. That means the growing number of "devops"-focused infrastructure components - think application and API proxies, for example - must recognize their role as infrastructure (network) components while simultaneously exhibiting affinity for the applications they will be servicing. Not only does the network need to accommodate the application, but it also needs to accommodate the folks who deploy and manage the application. To do that, infrastructure components must become more attuned to the needs of applications and their owners; the network needs application affinity.

Application Affinity

As Andi Mann is wont to say, "Devops is people." Ultimately any "network" tool that's designed for devops has to balance between the gravitational forces at work from both the network and the application, and offer devops the power of the network without requiring certification in twenty or so TLA protocols. It needs to focus on the people, not the product. It needs to fall somewhere between a "network thing" and an "application." That doesn't mean less networking or fewer network-related capabilities, but it does mean moving toward a more application-like model in which the networking complexity is not exposed. What is exposed to devops - and developers, for that matter - is a more application-like approach to deploying and managing infrastructure services. Services like SSL termination, URI rewriting, virtual hosting, caching, etc. are exposed via APIs or a method that's much less complex than existing command line "fire up vi or emacs and edit this file" mechanisms. These platforms need to be more agile, more scriptable, more programmatic. They need to be more like an application and less like a networking device trying to be software.
These platforms need to enable devops to ultimately build devops services that imbue the "network" with the flexibility that comes only from being programmatic. Need to extend the "network" so it routes application requests based on user identity or referring application? A proper devops platform enables that "network application" to be developed, deployed and managed as a service. That means not only a less complex configuration model, but a programmatic interface that enables devops to extend services into the network to continuously deliver applications. A platform that can programmatically modify its behavior based on context - real-time data about the client, the network, the application, and the request itself - must have application affinity; it must enable direct access to delivery services so that "configuration" can be modified in real time. If your platform of choice includes the directive "now save and restart so the new configuration is loaded" then it's not programmable, it's not dynamic, and it's not enabling continuous delivery the way devops is going to need continuous delivery enabled.

* There are differing opinions on how the loss of the moon would affect human beings. That it would have an effect on the oceans and the rotation of the earth and its orbit is fairly well established, but the actual impact on human beings directly is highly disputed. Unless you're a werewolf. If you're a werewolf, it's bad, period.
Snippet #7: OWASP Useful HTTP Headers

If you develop and deploy web applications then security is on your mind. When I want to understand a web security topic I go to OWASP.org, a community dedicated to enabling the world to create trustworthy web applications. One of my favorite OWASP wiki pages is the list of useful HTTP headers. This page lists a few HTTP headers which, when added to the HTTP responses of an app, enhance its security practically for free. Let's examine the list...

These headers can be added without concern that they affect application behavior:

X-XSS-Protection - Forces the enabling of cross-site scripting protection in the browser (useful when the protection may have been disabled)
X-Content-Type-Options - Prevents browsers from treating a response differently than the Content-Type header indicates

These headers may need some consideration before implementing:

Public-Key-Pins - Helps avoid man-in-the-middle attacks using forged certificates
Strict-Transport-Security - Enforces the use of HTTPS in your application, covered in some depth by Andrew Jenkins
X-Frame-Options / Frame-Options - Used to avoid "clickjacking", but can break an application; usually you want this
Content-Security-Policy / X-Content-Security-Policy / X-Webkit-CSP - Provides a policy for how the browser renders an app, aimed at avoiding XSS
Content-Security-Policy-Report-Only - Similar to CSP above, but only reports, no enforcement

Here is a script that incorporates three of the above headers, which are generally safe to add to any application (a sketch of the idea follows below). And that's it: about 20 lines of code to add 100 more bytes to the total HTTP response, and enhanced application security! Go get your own FREE license and try it today!
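As a stand-in for the script the post refers to, here is a generic plain-Node.js sketch of the same idea (mine, not the original; picking X-Frame-Options as the third header is a guess, so swap in whichever three fit your app). The header values follow the OWASP recommendations:

"use strict";
var http = require('http');

var securityHeaders = {
    'X-XSS-Protection': '1; mode=block',   // turn the browser's XSS filter on (and keep it on)
    'X-Content-Type-Options': 'nosniff',   // don't let browsers second-guess the Content-Type
    'X-Frame-Options': 'SAMEORIGIN'        // only allow framing by pages from the same origin
};

http.createServer(function (req, res) {
    Object.keys(securityHeaders).forEach(function (name) {
        res.setHeader(name, securityHeaders[name]);
    });
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('hello, with safer headers\n');
}).listen(8080);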
Programmability in the Network: Traffic Replication

#nodejs #linerate #devops Why waste time generating fake data to test when you can use the real deal?

There are a plethora of tactics used to test applications as they move to production. For example, as a developer I've created data and captured data for testing and - my favorite - ignored testing altogether. After all, it worked on my machine. The reason there are so many tactics is that there are so many ways in which real users - and thus real data - can expose all sorts of errors and defects in software. From logic flow interruption to corrupting databases to unintentionally writing over the end of a length-limited string to wipe out some other piece of data in an adjacent memory location*, the possibilities of how real users will interact with and invariably mess up an application are virtually limitless. Optimally (for the business, for the developer and even for the user) the best scenario is one in which real users and the data they input (sounds like that would be a good book, wouldn't it?) would be used to test new versions of applications. In most cases, this is ultimately what happens when the application finally makes it into production, because no matter how hard you try you're going to miss a corner case (or three). Even if you tested for weeks, once real data and users hit the application you're going to find problems. The solution is to replicate real user data and interactions on pre-production versions of applications transparently. Turns out this is fairly easily accomplished - if you've got the right tools.

Application Proxy to the Rescue

Production application traffic replication can be easily accomplished using an application proxy and a bit of node.js. By deploying the proxy in front of the applications, it can be directed (via node.js) to send incoming user requests to both the production and staging (or pre-production or test or QA or whatever you call the environment right before production. You do have an environment right before production, right?) applications. The trick is, of course, that you only want one of the responses (the one from the production version) to return to the user. Which means you've got to catch the response from the staging version and discard it (or log it or whatever you want to do with it - except send it back to the user). A proxy by design manages this scenario quite well, with no disruption to the network. Network-minded folks will instantly recognize this pattern as one implemented by spanning ports or mirroring traffic (or sessions) at higher layers of the network stack. It's a well-proven and tested pattern that enables developers and devops to really test out an application - from logic to data handling - using real user data and interactions. F5 LineRate Proxy is able to easily implement this scenario. You can learn more about (and download a free version of) LineRate Proxy here.

* Hey, we used to play with pointers in my day, all the time. If you weren't careful you could do some really, uh, interesting things to an application.
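The post stays at the architecture level, but a rough sketch of the replication pattern in the LineRate-style Node.js API used elsewhere in these posts might look like the following. Several things here are assumptions on my part: that servReq exposes method and headers the way Node's http.IncomingMessage does, that the script may open its own client connection with the stock http module, and the staging address (10.1.2.200:80) is invented. Requests with bodies need extra care (you would have to tee the body stream), so this sketch only mirrors GET and HEAD:

"use strict";
var http = require('http');
var vsm = require('lrs/virtualServerModule');

var STAGING_HOST = '10.1.2.200'; // invented address for the pre-production copy
var STAGING_PORT = 80;

vsm.on('exist', 'vs_example.com', function (vs) {
    vs.on('request', function (servReq, servRes, cliReq) {
        // Shadow copy: replay bodyless requests against staging and throw the answer away
        if (servReq.method === 'GET' || servReq.method === 'HEAD') {
            var shadow = http.request({
                host: STAGING_HOST,
                port: STAGING_PORT,
                method: servReq.method,
                path: servReq.url,
                headers: servReq.headers
            }, function (res) {
                res.resume(); // discard the staging response; the user never sees it
            });
            shadow.on('error', function (err) {
                console.log('staging replay failed: ' + err.message);
            });
            shadow.end();
        }

        // Production copy: business as usual; only this response goes back to the user
        cliReq();
    });
});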
On Building a Commercial Networking Product for the DevOps Community. A Founder's Perspective.

I'm excited for tomorrow. Tomorrow is the day that F5 responds to the needs of the DevOps community by re-launching the LineRate brand and product line. You might be asking yourself what is so important about a DevOps networking product? You might also be asking yourself what is so important about a commercial DevOps networking product? You might be thinking, why would I buy anything when I can roll my own using open source and a bit of ingenuity?!

A confession. I was a roll-your-own kinda guy when I started LineRate, but I learned a valuable lesson in the process of standing up the company. The first thing I was going to do was set up email with a postfix server, when my co-founder Manish slapped some sense into me and convinced me to use Gmail as it just worked. A couple of hours later we had a fully functional email system that "just worked". Sure it didn't have a few features that I'd have liked, but we were up and running in no time, which was the point! Did we go commercial for everything? Of course not. We bought a pile of disks and built our own filer using FreeBSD and ZFS - still running today - because we couldn't afford a commercial solution. We focused on what mattered: time-to-market and money.

Not long after the great LineRate infrastructure debates, I had a series of company-defining discussions with a variety of companies. One was a large infrastructure SaaS company that would ultimately become our first customer - all we had to do was deliver in a timely fashion. Another was a smaller company that had rolled their own solution using Linux and Nginx - maintaining the solution became a serious problem for them as patches kept rolling in, requiring repeated retuning and other reintegration efforts. The important common threads were that they all wanted an advanced Layer 7 traffic management platform with the following three key "features" that they couldn't get any other way:

· Support
· Capacity Pricing
· Full Programmability

Beginning with support, they wanted someone to call to get complex fixes implemented and delivered quickly, without having to hire talented kernel and networking engineers. When pressed, there were three key drivers behind the decision to get support from a commercial entity. First, the opportunity cost of deploying precious headcount to non-revenue generating activities, such as networking, was too high. Second, they realized that once those projects were staffed, they'd be staffed for the life of the core product(s) for incremental features and on-going product maintenance. Finally, even if they wanted to, hiring the requisite networking and kernel engineering talent to build a high-performance, flexible, and manageable networking platform is a difficult task to accomplish in a non-core product capacity.

However, the support calculus above only makes sense if the pricing model can be aligned with the business' needs. So after some discussion with the SaaS company we derived (borrowed) a capacity-based subscription model that aligned the costs of infrastructure capacity with the traffic, and thus revenue, of the overall business. This had the added benefit of eliminating the need for complex capacity planning exercises, as one would buy off-the-shelf servers and load software directly onto them. We also added the concept of bursting to the model so that in the event of an unplanned increase in traffic one merely had to add an extra server to the pool and true-up after the fact at a pre-negotiated overage rate.
We also recognized that programmable customization is another key driver for rolling one's own solution, and customized data path elements can play a crucial role in revenue-generating activities. The plan we embarked on was to initially build a rock-solid, high-performance, vertically scalable Layer 7 networking platform with all the core features, and then add extensibility to that data path. In terms of performance and scale, we targeted and delivered, with our 1.6 release in April 2012, millions of simultaneous active connections and approximately 200,000 connections per second on an Intel Westmere class platform (a $5,000 server at the time). Why did we limit ourselves to implementing the core features only? Because we realized that there was no way that we as a startup could deliver the breadth of features needed to satisfy the DevOps community and their insatiable quest for differentiated advantages. One can't call a feature differentiated if everyone gets the feature at the same time... The plan for our 2.x release train was to expose those core features/primitives via a programmable data path and allow our users to develop the rich features they wanted/needed. Want the ability to easily implement a custom load balancing algorithm? Want the ability to query an application server for authorization decisions from the network? Want the ability to translate API queries from one protocol to another? Check, check, and check. We even talked about the ability to bridge HTTP to NFS - possible if someone writes a Node.js NFS module!

Why Node.js and not Python, Perl, Lua, or your other favorite language of choice? That's a good question; our initial plan was to use Unladen Swallow (Python), but we switched to Node.js for a few important reasons. The most important reason is that we wanted a language that would make it easy for developers to write high-performance data path code without having to be data path programming experts. Python and most other languages have blocking I/O primitives that make it difficult to not block the data path during many operations, while Node.js uses an event-driven model and one has to work to block the data path. Second, we wanted to expose a system with a large and active community; as of this writing the NPM repository has almost 50,000 packages and over 30 million package downloads per week. Finally, Node.js is built on top of Google's V8 JavaScript engine, which delivers amazing performance, allowing complex tasks to be accomplished using a full-featured language rather than us creating a networking-optimized language.

With these key features in place, we believe that the incentive is now properly aligned on revenue-generating rather than cost-minimization activities. So what is so important about a commercial DevOps networking product? Freedom. Freedom to focus on what is important. Freedom to focus day in and day out on new, different, cool, and exciting problems! With that said, I hope everyone will join me in congratulating the team by downloading the full-featured free tier from linerate.f5.com and exploring what the product can do for you.

Thanks and happy coding!

John Giacomoni
Founder, LineRate Systems
Senior Architect, Product Development, F5 Networks
Programmability in the Network: Because #BigData is Often Unstructured

#devops #node #proxy #sdn Adaptability of network-hosted services requires programmability because data doesn't always follow the rules.

Evans Data recently released its Data & Advanced Analytics Survey 2013, which focuses on "tools, methodologies, and concerns related to efficiently storing, handling, and analyzing large datasets and databases from a wide range of sources." Glancing through the sample pages reveals this nugget on why developers move from traditional databases to more modern systems, like Hadoop: "The initial motivating factors to move away from traditional database solutions were the total size of the data being processed – it being big data – and the data's complexity or unstructured nature."

It's that second reason (and the data from Evans says it's only second by a partial percentage point) that caught my eye. Because yes, we all know big data is BIG. REALLY BIG. HUGE. Otherwise we'd call it something else. It's the nature of the data, the composition, that's just as important - not only to developers and how that data is represented in a data store, but to the network and how it interacts with that data.

See, the "network" and network-hosted services (firewalls, load balancers, caches, etc.) are generally used to seeing very clearly defined, RFC-specified, structured data. Switches and routers are fast because the data they derive decisions from is always in the same, fixed schema. In an increasingly application-driven data center, however, it is the application - and often its data - that drives network decisions. This is particularly true for higher-order network services (L4-7), which specifically act on application data to improve performance and security and, increasingly, to support devops-oriented architectural topologies such as A/B testing, Canary Deployments and Blue/Green Architectures. That data is more often than not unstructured. It's the mechanism by which the unstructured big data referenced by Evans is transferred to the application that ultimately deposits it in a database somewhere. Any data, structured or not, traverses a network of services before it reaches its ultimate destination.

Programmability is required for devops and networking teams to implement the architectures and services necessary to support those applications and systems which exchange unstructured data. Certainly there is value in programmability when applied to structured data, particularly in cases where more complex logic is required to make decisions, but it is not necessarily required to enable that value. That's because capabilities that act on structured (fixed) data can be integrated into a network-hosted service and exposed as a configurable but well-understood feature. But when data is truly unstructured, and where there is no standard - de facto or otherwise - then programmability in the network is necessary to unlock architectural capabilities. The reason intermediaries can be configured to "extract and act" on data that appears unstructured, like HTTP headers, is because there are well-defined key-value pairs. Consider "Cookie" and "Cache-Control" and "X-Forwarded-For" (not officially part of the standard, hence the "X", but accepted as an industry de facto standard) as good examples. While not fixed, there is a structure to HTTP headers that lends itself well to both programmability and "extract and act" systems.
To interact with non-standard headers, however, or to get at unstructured data in a payload, requires programmability at the level of executable logic rather than simple, configurable options. A variety of devops-related architectures and API proxy capabilities require programmability due to the extreme variability in implementation. There's simply no way for an intermediary or proxy to support such a wide-open set of possibilities "out of the box" because the very definition of the data is not common. Even though it may be structured in the eyes of the developer, it's still unstructured because there is no schema to describe it (think JSON as opposed to XML) and it follows no accepted, published standard. The more unstructured data we see traversing the network, the more we're going to need programmability in the network to enable the modern architectures required to support it.
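As a small illustration of the kind of executable logic that configuration checkboxes can't express, here is a plain-Node.js sketch of my own (the "tier" field and the x-app-variant header are invented): it buffers a request body, probes the JSON it may contain for a field no RFC defines, and makes a decision from it, falling back to a non-standard header when the payload isn't JSON at all. In a programmable proxy you'd use the same decision to pick a backend; here it is just echoed back:

"use strict";
var http = require('http');

http.createServer(function (req, res) {
    var chunks = [];
    req.on('data', function (c) { chunks.push(c); });
    req.on('end', function () {
        var tier = 'standard';
        try {
            // The payload has no published schema; we just probe for a field we care about
            var body = JSON.parse(Buffer.concat(chunks).toString('utf8'));
            if (body && body.account && body.account.tier) {
                tier = body.account.tier;
            }
        } catch (e) {
            // Not JSON (or malformed): fall back to a non-standard header, if present
            tier = req.headers['x-app-variant'] || tier;
        }
        // A real proxy would select a backend pool here; we just report the decision
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('routed to: ' + tier + '\n');
    });
}).listen(8080);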