network-side scripting
Clickjacking Protection Using X-FRAME-OPTIONS Available for Firefox
But browser support is only half the solution; don't forget to implement the server side, too.

Clickjacking, unlike more well-known (and understood) web application vulnerabilities, has been given scant attention despite its risks and its use in the wild. Earlier this year, for example, it was used in an attack on Twitter, but was never really discussed as being a clickjacking attack. Maybe that's because, aside from rewriting applications to prevent CSRF (adding nonces and validation of the same to every page) or adding framekillers, there just haven't been many other options to prevent the technique from being used against users. Too, it is one of the more convoluted attack methods out there, so it would be silly to expect the non-technical media to understand it, let alone explain how it works to their readers.

There is, however, a solution on the horizon. IE8 has introduced an opt-in measure that allows developers – or whoever might be in charge of network-side scripting implementations – to protect vulnerable pages from being "framed" inappropriately using a custom HTTP header: X-FRAME-OPTIONS. The behavior is described in the aforementioned article as:

If the X-FRAME-OPTIONS value contains the token DENY, IE8 will prevent the page from rendering if it will be contained within a frame. If the value contains the token SAMEORIGIN, IE will block rendering only if the origin of the top-level browsing context is different than the origin of the content containing the X-FRAME-OPTIONS directive. For instance, if http://shop.example.com/confirm.asp contains a DENY directive, that page will not render in a subframe, no matter where the parent frame is located. In contrast, if the X-FRAME-OPTIONS directive contains the SAMEORIGIN token, the page may be framed by any page from the exact http://shop.example.com origin.

But that's only IE8, right? Well, natively, yes. But a development version of NoScript has been released that supports the X-FRAME-OPTIONS header and will provide the same protections as are natively achieved in IE8. The problem is that this is only half the equation: the X-FRAME-OPTIONS header needs to exist before the browser can act on it and the preventive measure against clickjacking is complete. As noted in The Register, "some critics have contended the protection will be ineffective because it will require millions of websites to update their pages with proprietary code."

That's not entirely true, as there is another option that will provide support for X-FRAME-OPTIONS without updating pages/applications/sites with proprietary code: network-side scripting. The "proprietary" nature of custom HTTP headers is also debatable, as support for Firefox was provided quickly via NoScript, and if the technique is successful it will likely be adopted by other browser makers.

HOW-TO ADD X-FRAME-OPTIONS TO YOUR APPLICATION – WITH OR WITHOUT CODE CHANGES

Step 1: Add the custom HTTP header "X-FRAME-OPTIONS" with a value of "DENY" or "SAMEORIGIN" before returning a response to the client.

Really, that's it. The browser takes care of the rest for you. OWASP has a great article on how to implement a ClickjackFilter for JavaEE, and there are sure to be many more blogs and articles popping up describing how one can implement such functionality in their language of choice. Even without such direct "how-to" articles and code samples, it is merely a matter of adding a new custom HTTP header – examples of which ought to be easy enough to find.
Similarly, a solution can be implemented using network-side scripting that requires no modification to applications. In fact, this can be accomplished via iRules in just one line of code:

when HTTP_RESPONSE { HTTP::header insert "X-FRAME-OPTIONS" "SAMEORIGIN" }

(Substitute "DENY" for "SAMEORIGIN" if the page should never be framed at all.) I believe the equivalent Apache configuration (mod_headers, rather than mod_rewrite) would be just as simple, but as I am not an expert there I will hope someone who is will leave an appropriate example as a comment or write up a blog/article and leave a pointer to it.

A good reason to utilize the agility of network-side scripting solutions in this case is that it is not necessary to modify each application requiring protection, which takes time to implement, test, and deploy. An even better reason is that a single network-side script can protect all applications, regardless of language and deployment platform, without a lengthy development and deployment cycle.

Regardless of how you add the header, it would be wise to add it as a standard part of your secure-code deployment requirements (you do have those, don't you?). The custom HTTP header does no harm when it is ignored, and visitors using X-FRAME-OPTIONS-enabled browsers/solutions will be a lot safer with it than without it. A slightly fuller version of that iRule appears after the related links below.

Related blogs & articles:
Stop brute force listing of HTTP OPTIONS with network-side scripting
Jedi Mind Tricks: HTTP Request Smuggling
I am in your HTTP headers, attacking your application
Understanding network-side scripting
9 ways to use network-side scripting to architect faster, scalable, more secure applications
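For those who want a bit more robustness than the one-liner, here is a minimal sketch that only inserts the header when the application has not already set one. The SAMEORIGIN value is an assumption; use DENY where framing should never be allowed:

    when HTTP_RESPONSE {
        # respect a header the application may have already set
        if { not [HTTP::header exists "X-FRAME-OPTIONS"] } {
            # use "DENY" instead if the page should never be framed
            HTTP::header insert "X-FRAME-OPTIONS" "SAMEORIGIN"
        }
    }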
Mobile versus Mobile: An Identity Crisis

#mobile

The expansive options consumers revel in create an identity crisis for IT that is best resolved via context-aware mobile mediation.

Back in the days of the browser wars, when standards were still largely ignored and the battle for the desktop was highly competitive, developers had to make choices and compromises. They could either write extensive client-side scripts to detect the user's browser and address the peculiarities of that environment, or they could simply ignore them with a disclaimer that "this site (works best when viewed in | was written for) browser X." As time went by, developers were able to discontinue this annoying practice as browser features converged and a common, standardized platform emerged upon which all applications could be delivered to any popular browser without concern.

Then mobile phones appeared, and the user experience degraded again, this time driven by the relatively feeble processing power of the platform. Small screens aside, the memory and processing power available on a mobile phone were such that – when combined with a constrained networking environment – the delivery of increasingly chunky, graphics-heavy, interactive applications to mobile phones was simply a bad idea for both organizations and their visitors. Developers and web ops returned to the inspection of HTTP headers to determine whether or not a visitor was using a mobile platform, and began writing leaner, more compact interfaces specifically for those platforms.

Enter tablets. Neither desktop nor phone, these admittedly mobile platforms are compact but nearly as powerful as their overweight tethered cousins, without sacrificing the mobility of their anorexic form-factor brethren.

COMMON GROUND: MOBILE OPERATING SYSTEM

Tablets are certainly able to take advantage of emerging web application standards such as HTML5 to deliver a full, rich, interactive desktop-style experience on a mobile platform, but are rarely offered it. They are by default offered up a mobile experience because of a common element: the operating system. Even if the operating system were masked, developers would still find out by checking a second, lesser-known HTTP header: HTTP_X_WAP_PROFILE. And if it's not there, then potentially in HTTP_PROFILE.

Unfortunately, this is not necessarily the fault of the developer. There are no standards prescribing what should or should not identify a mobile device, and thus manufacturers are left to their own devices (pun only somewhat intended). There is no reliable way, today, to identify a mobile device accurately. Developers are left writing complex scripts that strip apart user agents, profile strings and whatever other contextual data they can extract from HTTP headers to determine how best to serve up content. That often means visitors are served content that is not wholly appropriate for their device. Even if developers could identify devices accurately today, the market is so volatile at this point that it's a sure bet a new device will soon arrive that requires modification to the application yet again. More code, more string manipulation, more latency in processing.

CONTEXT-AWARE MOBILE MEDIATION

It would be nice if developers could simply receive accurate inbound HTTP headers; headers that clearly identify the device not only from an operating system perspective, but from a form-factor and network perspective, in a standard way. But there are no standards yet, and there may never be.
Thus a solution may be found in imposing standards upon inbound requests specifically for developers, to better address the disparities in resolution, functionality, and performance between the various mobile device types. This requires some amount of pre-planning. Design, if you will, or architecture up front. It requires that developers and devops sit down together and determine a standard means of communicating information between infrastructure and the applications it supports. Consider the possibility of two custom HTTP headers, one identifying the network type and one specifying form factor and device:

HTTP_X_NETWORK = "WIFI | MOBILE | LAN"
HTTP_X_DEVICE = "TABLET | PHONE | DESKTOP"

If developers could rely on these two custom HTTP headers to exist for every inbound request, they could then develop applications based upon these characteristics that were more appropriate for the given device and the network over which the device connected. Implementation requires only minimal inspection and insertion on a context-aware mediating device such as a network-side scripting capable application delivery controller; a minimal sketch of that insertion follows below.

Because the application delivery controller is topologically positioned in a strategic point of control, it has visibility into the network, client, and server-side environments. This gives it the ability to better interpret and execute policies that govern the delivery of applications to optimize performance and assure availability, but it also provides the ability to extract and share pertinent data with applications and other infrastructure. This data can be shared in a number of ways, including modification of the payload or the headers, insertion of new headers, removal of old headers, and so on. The flexibility inherent in network-side scripting solutions, particularly those capable of side-band connectivity, allows devops and developers to design and develop a solution that works for them – in their environment.

The advantage of such a solution lies not only in more accurate, actionable data to share with applications, but in its ability to be easily modified without negatively impacting the application. A second advantage is the ability of developers to also take into consideration the network characteristics of the mobile device, data that is generally not available or, if available, generally inaccurate. A mobile device today may be accessing an application via WiFi or a mobile network, and that piece of information is quite pertinent, as the performance and capabilities of each network are quite different and have a significant impact on the end-user experience from a delivery perspective. Yet this data is not available by default to developers, and it cannot reliably be inferred from device type. By leveraging a context-aware mediating solution, however, it becomes possible to share this data with developers such that they are able to take that information into consideration when putting together a response to a given request.

While not a panacea, such a solution certainly provides a more consistent and overall more accurate environment in which to deliver applications to the increasingly broad and diverse spectrum of mobile devices.
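What such mediation might look like as an iRule, assuming the inserted headers are named X-Device and X-Network (they would surface to the application as the HTTP_X_* server variables above) and that classification is done with crude, illustrative User-Agent checks rather than a real device database:

    when HTTP_REQUEST {
        # crude, illustrative classification; a production deployment would use a device database
        set ua [string tolower [HTTP::header "User-Agent"]]
        if { ($ua contains "ipad") || ($ua contains "tablet") } {
            set device "TABLET"
        } elseif { ($ua contains "mobile") || ($ua contains "phone") } {
            set device "PHONE"
        } else {
            set device "DESKTOP"
        }
        HTTP::header insert "X-Device" $device
        # the network type cannot be reliably inferred from the request alone;
        # this static value is a placeholder for whatever signal the environment provides
        HTTP::header insert "X-Network" "UNKNOWN"
    }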
Related blogs & articles:
Stack Overflow: How do detect Android Tablets in general. Useragent?
Mobile Browsing Reaches All Time High
The Magic of Mobile Cloud
Understanding network-side scripting
At the Intersection of Cloud and Control…
Cloud-Tiered Architectural Models are Bad Except When They Aren't
WILS: WPO versus FEO
Fire and Ice, Silk and Chrome, SPDY and HTTP
Grokking the Goodness of MapReduce and SPDY

Infrastructure Architecture: Whitelisting with JSON and API Keys
Application delivery infrastructure can be a valuable partner in architecting solutions ….

AJAX and JSON have changed the way in which we architect applications, especially with respect to their ascendancy to rule the realm of integration, i.e. the API. Policies are generally focused on the URI, which has effectively become the exposed interface to any given application function. It's REST-ful, it's service-oriented, and it works well.

Because we've taken to leveraging the URI as a basic building block, as the entry point into an application, it affords the opportunity to optimize architectures and make more efficient use of the compute power available for processing. This is an increasingly important point, as capacity has become a focal point around which cost and efficiency are measured. By offloading functions to other systems when possible, we are able to increase the useful processing capacity of a given application instance and ensure a higher ratio of valuable processing to resources is achieved. The ability of application delivery infrastructure to intercept, inspect, and manipulate the exchange of data between client and server should not be underestimated. A full-proxy based infrastructure component can provide valuable services to the application architect that can enhance the performance and reliability of applications while abstracting functionality in a way that alleviates the need to modify applications to support new initiatives.

AN EXAMPLE

Consider, for example, a business requirement specifying that only certain authorized partners (in the integration sense) are allowed to retrieve certain dynamic content via an exposed application API. There are myriad ways in which such a requirement could be implemented, including requiring authentication and subsequent tokens to authorize access – likely the most common means of providing such access management in conjunction with an API. Most of these options require several steps, however, and direct interaction with the application to examine credentials and determine authorization to requested resources. This consumes valuable compute that could otherwise be used to serve requests.

An alternative approach would be to provide authorized consumers with a more standards-based method of access that includes, in the request, the very means by which authorization can be determined. Taking a lesson from the credit card industry, for example, an algorithm can be used to determine the validity of a particular customer ID or authorization token. An API key, if you will, that is not stored in a database (which would require a lookup) but rather is algorithmic, and therefore able to be verified as valid without needing a specific lookup at run-time. Assuming such a token or API key were embedded in the URI, the application delivery service can then extract the key, verify its authenticity using an algorithm, and subsequently allow or deny access based on the result.

This architecture is based on the premise that the application delivery service is capable of responding with the appropriate JSON in the event that the API key is determined to be invalid. Such a service must therefore be network-side scripting capable. Assuming such a platform exists, one can easily implement this architecture and enjoy the improved capacity and resulting performance boost from the offload of authorization and access management functions to the infrastructure.

1. A request is received by the application delivery service.
2. The application delivery service extracts the API key from the URI and determines validity.
3. If the API key is not legitimate, a JSON-encoded response is returned.
4. If the API key is valid, the request is passed on to the appropriate web/application server for processing.

Such an approach can also be used to enable or disable functionality within an application, including live streams. Assume a site that serves up streaming content, but only to authorized (registered) users. When requests for that content arrive, the application delivery service can dynamically determine, using an embedded key or some portion of the URI, whether to serve up the content or not. If it deems the request invalid, it can return a JSON response that effectively "turns off" the streaming content, thereby eliminating the ability of non-registered (or non-paying) customers to access live content. Such an approach could also be useful in the event of a service failure; if content is not available, the application delivery service can easily turn off and/or respond to the request, providing feedback to the user that is valuable in reducing their frustration with AJAX-enabled sites that too often simply "stop working" without any kind of feedback or message to the end user.

The application delivery service could, of course, perform other actions based on the in/validity of the request, such as directing that the request be fulfilled by a service generating older or non-dynamic streaming content, using its ability to perform application-level routing. The possibilities are quite extensive, and implementation depends entirely on the goals and requirements to be met. Such features become more appealing when they are, through their capabilities, able to intelligently make use of resources in various locations. Cloud-hosted services may be more or less desirable for use in an application, and thus leveraging application delivery services to either enable or reduce the traffic sent to such services may be financially and operationally beneficial.

ARCHITECTURE is KEY

The core principle to remember here is that ultimately infrastructure architecture plays (or can and should play) a vital role in designing and deploying applications today. With the increasing interest in and use of cloud computing and APIs, it is rapidly becoming necessary to leverage resources and services external to the application as a means to rapidly deploy new functionality and support for new features. The abstraction offered by application delivery services provides an effective, cross-site and cross-application means of enabling what were once application-only services within the infrastructure. This abstraction and service-oriented approach reduces the burden on the application as well as its developers.

The application delivery service is almost always the first service in the oft-times lengthy chain of services required to respond to a client's request. Leveraging its capabilities to inspect and manipulate as well as route and respond to those requests allows architects to formulate new strategies and ways to provide their own services, as well as leveraging existing and integrated resources for maximum efficiency, with minimal effort.
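As a rough illustration of steps 1 through 4, here is a minimal iRule sketch. The parameter name (apikey), the validation routine (a simple length-and-prefix check standing in for a real algorithmic verification such as an HMAC), and the pool name are illustrative assumptions rather than a prescribed implementation:

    when HTTP_REQUEST {
        # step 2: extract the API key from the query string
        set apikey [URI::query [HTTP::uri] "apikey"]

        # stand-in validation; a real deployment would verify the key algorithmically
        if { ($apikey ne "") && ([string length $apikey] == 32) && ($apikey starts_with "cust") } {
            # step 4: valid key, pass the request on to the application
            pool api_pool
        } else {
            # step 3: invalid key, return a JSON-encoded error without touching the servers
            HTTP::respond 403 content "{\"error\":\"invalid API key\"}" "Content-Type" "application/json"
        }
    }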
Related blogs & articles:
HTML5 Going Like Gangbusters But Will Anyone Notice?
Web 2.0 Killed the Middleware Star
The Inevitable Eventual Consistency of Cloud Computing
Let's Face It: PaaS is Just SOA for Platforms Without the Baggage
Cloud-Tiered Architectural Models are Bad Except When They Aren't
The Database Tier is Not Elastic
The New Distribution of The 3-Tiered Architecture Changes Everything
Sessions, Sessions Everywhere

Who Took the Cookie from the Cookie Jar … and Did They Have Proper Consent?
Cookies as a service, enabled via infrastructure services, provide an opportunity to improve your operational posture.

Fellow DevCentral blogger Robert Haynes posted a great look at a UK law regarding cookies. Back in May a new law went into effect regarding "how cookies and other 'cookie-like' objects are stored on users' devices." If you haven't heard about it, don't panic – there's a one-year grace period before enforcement begins and those £500,000 fines are handed out. The clock is ticking, however. What do the new regulations say?

Well essentially whereas cookies could be stored with what I, as a non-lawyer, would term implied consent, i.e. the cookies you set are listed along with their purpose and how to opt out in some interminable privacy policy on your site, you are now going to have to obtain a more active and informed consent to store cookies on a user's device.
-- The UK Cookie Law

Robert goes on to explain that the solution to this problem requires (1) capturing cookies and (2) implementing some mechanism to allow users to grant consent. Mind you, this is not a trivial task. There are logic considerations – not all cookies are set at the same time – as well as logistical issues – how do you present a request for consent? Once consent is granted, where do you store that? In a cookie? One that you must gain consent to store? Infinite loops are never good. And of course, what do you do if consent is not granted, but the application depends on that cookie existing? To top it all off, the process of gathering consent requires modification to application behavior, which means new code, testing and eventually deployment. Infrastructure services may present an alternative approach that is less disruptive technologically, but does not necessarily address the business or logic ramifications resulting from such a change.

COOKIES as a SERVICE

Cookies as a service, a.k.a. cookie gateways, wherein cookie authorization and management is handled by an intermediate proxy, is likely best able to mitigate the expense and time associated with modifying applications to meet the new UK regulation. As Robert describes, he's currently working on a network-side scripting solution to meet the UK requirements that leverages a full-proxy architecture's ability to mediate for applications and communicate directly with clients before passing requests on to the application.

Not only is this a valid approach to managing privacy regulations, it's also a good means of providing additional security against attacks that leverage cookies either directly or indirectly. Cross-site scripting, browser vulnerabilities and other attacks that bypass the same-origin policy of modern web browsers – sometimes by design, to circumvent restrictions on integration methods – as well as piggy-backing on existing cookies as a means to gain unauthorized access to applications are all potential dangers of cookies. By leveraging encryption of cookies in conjunction with transport layer security, i.e. SSL, organizations can better protect both users and applications from unintended security consequences.

Implementing a cookie gateway should make complying with regulations like the new UK policy a less odious task. By centralizing cookie management on an intermediate device, cookies can be easily collected and displayed along with the appropriate opt-out / consent policies without consuming application resources or requiring every application to be modified to include the functionality to do so.
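What the simplest form of such a gateway might look like as an iRule sketch: the consent cookie name, the consent page path, and the pool are illustrative assumptions, and a production implementation (like the one Robert describes) would need far more logic around cookie inventory, display, and opt-out handling:

    when HTTP_REQUEST {
        # let the consent page itself through, or this becomes an infinite loop
        if { [HTTP::path] starts_with "/cookie-consent" } {
            pool app_pool
            return
        }
        if { [HTTP::cookie exists "consent_granted"] } {
            # consent already recorded; pass the request along to the application
            pool app_pool
        } else {
            # no consent yet; send the client to a consent page before any cookies are set
            HTTP::respond 302 Location "/cookie-consent?return=[URI::encode [HTTP::uri]]"
        }
    }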
AN INFRASTRUCTURE SERVICE

This is one of the (many) ways in which an infrastructure service hosted in "the network" can provide value to both application developers and business stakeholders. Such a reusable infrastructure-hosted service can be leveraged to provide services to all applications and users simultaneously, dramatically reducing the time and effort required to support such a new initiative. Reusing an infrastructure service also reduces the possibility of human error during the application modification, which can drag out the deployment lifecycle and delay time to market. In the case of meeting the requirements of the new UK regulations, that delay could become costly.

According to a poll of CIOs regarding their budgets for 2010, the Society for Information Management found that "Approximately 69% of IT spending in 2010 will be allocated to existing systems, while about 31% will be spent on building and buying new systems. This ratio remains largely the same compared to this year's numbers."

If we are to be more responsive to new business initiatives and flip that ratio such that we are spending less on maintaining existing systems and more on developing new systems and methods of management (i.e. cloud computing), we need to leverage strategic points of control within the network to provide services that minimize the resources, time and money spent on existing systems. Infrastructure services such as cookie gateways provide the opportunity to enhance security, comply with regulations and eliminate costs in application development that can be reallocated to new initiatives and projects. We need to start treating the network and its unique capabilities as assets to be leveraged and services to be enabled instead of a fat, dumb pipe.

Related blogs & articles:
Understanding network-side scripting
This is Why We Can't Have Nice Things
When the Data Center is Under Siege
Don't Forget to Watch Under the Floor
IT as a Service: A Stateless Infrastructure Architecture Model
You Can't Have IT as a Service Until IT Has Infrastructure as a Service
F5 Friday: Eliminating the Blind Spot in Your Data Center Security Strategy
The UK Cookie Law

F5 Friday: The Art of Efficient Defense
It's not enough to have a strategic point of control; you've got to use it, too.

One of the primary threats to the positive operational posture of an organization is extremely heavy load. Whether it's from a concerted effort to take down the site (DDoS) or simply an unanticipated flood of legitimate users is really not as important to today's discussion as understanding the impact both can have, not just on your applications, but on their supporting infrastructure. You know, the network "stuff" that sits between the client and your applications, defending against all manner of vile miscreant-created traffic.

Most often these network devices comprise components like IDS and IPS; components that traditionally are configured in a transparent topology requiring the use of mirroring at the network layer. This is most often accomplished by mirror/span port configurations on the physical device. In the face of overwhelming traffic, these components can become, well, overwhelmed. Like firewalls, they can become a bottleneck. Unlike firewalls – which can induce performance degradation in the face of high traffic volumes – the bottleneck for these transparent security devices shows up as a slowdown in operational processes and a degradation in reaction time to the discovery of malicious content.

The typical reaction to such a potential problem is simple: acquire more resources and/or hardware to beef up capacity. But that's not always the most efficient means of addressing the problem, and in fact a more efficient solution would be to apply an architectural strategy that maintains an agile operational posture capable of reacting quickly to the presence of malicious content.

CONTEXTUAL TRAFFIC MIRRORING

Consider that in a traditional architecture comprising security devices such as IPS and IDS, all application traffic is mirrored to the device. All traffic, whether it needs to be inspected or not. Now that may at first sound like a silly statement. Of course all traffic needs to be inspected. But does it? Does a request for an image need to go through an IPS or IDS? Steganography has come a long way, sure, but the technique is generally used to hide data for human consumption; it rarely if ever contains malicious code, because there's no commoditized means by which such data can be exploited to gain access to internal systems or carry out an attack against a data source. But in a traditional architecture, requests for images are going through the security infrastructure no matter what. You've got no control over that.

Except you do, if you've got the right tools in your strategic arsenal. A tool like network-side scripting that can leverage the innate capabilities of a full-proxy based application delivery controller. Yeah, I'm talking about iRules on an F5 BIG-IP LTM (Local Traffic Manager). Using the ability of BIG-IP LTM to intercept requests, inspect them, and then enforce routing, security and other application-centric policies upon those requests, you can architect a solution that makes more efficient use of the resources you have, even in the face of higher traffic volumes. It's a force multiplier, making your limited security infrastructure resources more powerful by focusing them in on the attempts to outflank your information security strategy. Using the iRules clone command, you can instruct BIG-IP to mirror traffic to a designated pool (a collection of resources, most often servers) or pool member (a single resource, such as an application instance – virtual or physical).
By taking advantage of iRules' ability to inspect incoming requests, you can then leverage this cloning capability to mirror only specified traffic to a pool of security infrastructure components, such as a group of IDS. The decision regarding which requests are mirrored can be based on just about any variable you can think of – URI, host name, client agent data, request type, HTTP headers, cookies, etc. Such decisions could also be applied conditionally; that is, if traffic volume suddenly increases, a sampling of traffic could be mirrored instead of all of it as a means to avoid overwhelming the infrastructure, or perhaps such a policy could be enforced only when it becomes clear that the security infrastructure is about to be overwhelmed. It's about context – and the ability to leverage that context to make decisions that are smart, efficient, and consistent with business goals and requirements.

For those of you who'd like to try it out yourself, there's a sample in our Solution Architects' collection of "Top 50" iRules. That sample uses the URI as the means of determining whether to clone the request to a pool of IDS, but you can use just about any information in the network stack – whatever makes sense to enforce the topological control over the flow of traffic you need to meet business and operational goals.
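The original Top 50 sample isn't reproduced here, but a minimal sketch of the idea described above might look like this, with the pool name and the static-content extensions as illustrative assumptions:

    when HTTP_REQUEST {
        # skip mirroring for static content; send everything else to the IDS pool as well
        switch -glob [string tolower [HTTP::path]] {
            "*.jpg" -
            "*.png" -
            "*.gif" -
            "*.css" -
            "*.js" {
                # static content: no clone, just normal delivery
            }
            default {
                # dynamic content: copy the request to the IDS pool for inspection
                clone pool ids_pool
            }
        }
    }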
How To Limit URI Length Without Recompiling Apache

Use network-side scripting, of course!

While just about every developer and information security professional knows that a buffer-overflow exploit can result in the execution of malicious code, not many truly grok the "why". Fortunately, it's not really necessary for either one to be able to walk through the execution stack and trace the byte-code as it overwrites registers and then jumps to execute it. They know it's A Very Bad Thing™ and, perhaps more importantly, they know how to stop it.

SECONDARY and TERTIARY DEFENSE REQUIRED

The best place to prevent a buffer-overflow vulnerability is in the application code. Never trust input, whether from machine or man. Period. A simple check on the length of a string-based parameter can prevent vulnerabilities that may exist at the platform or operating system layer from being exploited. That's true of almost all vulnerabilities, not just buffer overruns, by the way. An overly long input parameter could be an attempt at XSS or SQLi as well. Both tend to extend the "normal" data, and while often obfuscated, the sheer length of such strings can indicate the presence of something malicious and almost certainly something that should be investigated.

Assuming for whatever reason that this isn't possible (and we know from research and surveys and live data from organizations like WhiteHat Security that it isn't, for a number of very valid reasons), it is likely the case that information security and operational administrators will go looking for a secondary defense. As the majority of applications today are deployed as web applications, that generally means looking to the web or application server for limitations on URL and/or parameter lengths, as those are the most likely avenues of attack.

One defense can be easily found if you're deploying on Apache in the "LimitRequestLine" compilation directive. Yes, I said compilation directive. You'll have to recompile Apache, but that's what open source is all about, right? Customization. Rapid solutions to security vulnerabilities. Agile infrastructure. While you're in there, you might want to consider evaluating the "LimitRequestFields" and "LimitRequestFieldSize" variables, too. These two variables control the number of HTTP headers allowed as well as the length of a header field, and could potentially prevent an exploit of the platform and underlying operating system coming in through the HTTP headers. Yes, such exploits do exist, and as always, better safe than sorry.

While those are all true and valid statements regarding open source software, the reality is that changing the core platform code results in a long-term commitment to re-applying those changes every time the core platform is upgraded or patched. Ask an enterprise PeopleSoft developer how that has worked for them over the past decade or so – but fair warning, you'll want to be out of spitting range when you do. Compilation has a secondary negative – it's not agile, even though open source is. If you run into a situation in which you need to change these values, you're going to have to recompile, retest, and redeploy. And you're going to have to regression test every application deployed on that particular modified platform. Starting to see why the benefit of customization in open source software is rarely truly taken advantage of? There's a better way, a more agile way, a flexible way.
NETWORK-SIDE SCRIPTING

Whenever a solution involves the inspection and potential rejection or modification of HTTP-borne data based on, well, attributes like length, encoding, schema, etc., it should ring a bell and give pause for thought. These are variables in every sense of the word. If you decide to restrict input based on these values, you necessarily open yourself up to maintaining and, in particular, updating those limitations across the life of the application in question. Thus it seems logical that the actual implementation of such limitations would leverage a location and solution that has as little impact on the rest of the infrastructure as possible. You want to maximize the coverage of the implementation while minimizing the disruption and impact on the infrastructure.

There happens to be a strategic point of control that very much fits this description: centralization of functionality at a point of aggregation that maximizes coverage while minimizing disruption. As an added bonus, the coverage is platform-agnostic, which means Apache and IIS can be automagically covered without modifying the platforms themselves. That strategic point in the architecture is a network-side scripting enabled application delivery controller (the load balancer, for you old skool operations folks).

See, when you take advantage of a full proxy and really leverage its capabilities, you can implement solutions that are by definition application-aware whilst maintaining platform-agnosticism. That's important because the exploits you're looking to stop are generally specific to an application; even when they target the platform, they take advantage of application data manipulation and associated loopholes in the processing of that data. While the target may be the platform, the miscreant takes aim and transports the payload via, well, the payload. The data.

That data is, of course, often carried in the query portion of the URI as a parameter. If it's not in the URI then it's in the payload, often as an x-www-form-urlencoded field submitted via the HTTP POST method. The script can extract the URI and validate its total length, and/or it can extract each individual name-value pair (in the URI or in the body) and evaluate each one of them for length, doing whatever it is you want done with invalid length values: reject the request, chop the value to a specific size and pass it on, log it, or even route the request to an application honey-pot.

Example iRule snippet to check the length of the URI on submission: if { [string length [HTTP::uri]] > 1024 } { ... }. A fuller sketch follows below.

If you're thinking that it's going to be time consuming to map all the possible variables to maximum lengths, well, you're right. It will be. You can of course write a generic, across-the-board length-limiting script, or you could investigate a web application firewall instead. A WAF will "learn" the mapping in real time and allow you to fine-tune or relax limitations on a parameter-by-parameter basis if you wish, with a lot less time investment. Both options, however, will do the job nicely, and both provide a secondary line of defense in the face of a potential exploit that is completely avoidable if you've got the right tools in your security toolbox.
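A fuller sketch of that idea, with illustrative limits and an illustrative choice to reject rather than truncate or log (both thresholds would be tuned per application):

    when HTTP_REQUEST {
        # 2048 is an illustrative URI limit; tune it to your application
        if { [string length [HTTP::uri]] > 2048 } {
            HTTP::respond 414 content "Request-URI Too Long"
            return
        }
        # optionally check each query-string value as well (256 is likewise illustrative)
        foreach pair [split [HTTP::query] "&"] {
            if { [string length [lindex [split $pair "="] 1]] > 256 } {
                HTTP::respond 403 content "Parameter value exceeds allowed length"
                return
            }
        }
    }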
Extend Cross-Domain Request Security using Access-Control-Allow-Origin with Network-Side Scripting

The W3C specification now offers the means by which cross-origin AJAX requests can be achieved. Leveraging network and application network services in conjunction with application-specific logic improves the security of allowing cross-domain requests, and has some hidden efficiency benefits, too.

The latest version of the W3C working draft on "Cross-Origin Resource Sharing" lays out the means by which a developer can use XMLHttpRequest (in Firefox) or XDomainRequest (in IE8) to make cross-site requests. As is often the case, the solution is implemented by extending HTTP headers, which makes the specification completely backwards- and cross-platform-compatible even if the client-side implementation is not. While this sounds like a good thing, forcing changes to HTTP headers is often thought to require changes to the application. In many cases, that's absolutely true. But there is another option: network-side scripting. There are several benefits to using network-side scripting to implement this capability, but more importantly the use of a mediating system (proxy) enables the ability to include more granular security than is currently offered by the cross-origin request specification.
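A minimal sketch of what that network-side mediation might look like as an iRule: the allowed origin and the simple equality check are illustrative assumptions, and a real deployment might consult a data group of authorized origins and also handle preflight OPTIONS requests:

    when HTTP_REQUEST {
        set cors_origin ""
        if { [HTTP::header exists "Origin"] } {
            set origin [HTTP::header "Origin"]
            # illustrative whitelist; a data group lookup could replace this single comparison
            if { $origin equals "https://partner.example.com" } {
                set cors_origin $origin
            }
        }
    }
    when HTTP_RESPONSE {
        if { [info exists cors_origin] && ($cors_origin ne "") } {
            # only echo back origins that passed the whitelist check above
            HTTP::header insert "Access-Control-Allow-Origin" $cors_origin
        }
    }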
Six Lines of Code

The fallacy of security is that the simplicity or availability of a solution has anything to do with time to resolution.

The announcement of the discovery of a way in which an old vulnerability might be exploited gained a lot of attention because of the potential impact on Web 2.0 and social networking sites that rely upon OAuth and OpenID, both of which use affected libraries. What was more interesting to me, however, was the admission by developers that the "fix" for this vulnerability would take only "six lines of code", essentially implying a "quick fix."

For most of the libraries affected, the fix is simple: Program the system to take the same amount of time to return both correct and incorrect passwords. This can be done in about six lines of code, Lawson said.

It sounds simple enough. Six lines of code. If you're wondering (and I know you are) why it is that I'm stuck on "six lines of code", it's because I think this perfectly sums up the problem with application security today. After all, it's just six lines of code, right? Shouldn't take too long to implement, and even with testing and deployment it couldn't possibly take more than a few hours, right? Try thirty-eight days, on average. That'd be 6.3 days per line of code, in case you were wondering.

SIMPLICITY OF THE SOLUTION DOES NOT IMPLY RAPID RESOLUTION

Turns out that the responsiveness of third-party vendors isn't all that important, either.

But a new policy announced on Wednesday by TippingPoint, which runs the Zero Day Initiative, is expected to change this situation and push software vendors to move more quickly in fixing the flaws. Vendors will now have six months to fix vulnerabilities, after which time the Zero Day Initiative will release limited details on the vulnerability, along with mitigation information so organizations and consumers who are at risk from the hole can protect themselves.
-- Forcing vendors to fix bugs under deadline, C|Net News, August 2010

To which I say, six lines of code, six months. Six of one, half a dozen of the other. Neither is necessarily all that pertinent to whether or not the fix actually gets implemented. Really. Let's examine reality for a moment, shall we?

The least amount of time taken by enterprises to address a vulnerability is 38 days, according to WhiteHat Security's 9th Website Security Statistics Report. Only one of the eight reasons cited by organizations in the report for not resolving a vulnerability is external to the organization: affected code is owned by an unresponsive third-party vendor. The others are all internal, revolving around budget, skills, prioritization, or simply that the risk of exploitation is acceptable. Particularly of note is that in some cases, the "fix" for the vulnerability conflicts with a business use case. Guess who wins in that argument?

What WhiteHat's research shows, and what most people who've been inside the enterprise know, is that there are a lot of reasons why vulnerabilities aren't patched or fixed. We can say it's only six lines of code and yes, it's pretty easy, but that doesn't take into consideration all the other factors that go into deciding when, if ever, to resolve the vulnerability. Consider that one of the reasons cited for security features of underlying frameworks being disabled in WhiteHat's report is that they break functionality. That means that securing one application necessarily breaks all others. Sometimes it's not that the development folks don't care; it's just that their hands are essentially tied, too.
They can't fix it, because that breaks critical business functions that directly impact the bottom line, and not in a good way. For information security professionals this must certainly appear to be little more than gambling; a game which the security team is almost certain to lose if/when the vulnerability is actually exploited. But the truth is that information security doesn't get to set business and development priorities, unfortunately, and yet they're the ones held responsible. All the accountability and none of the authority. It's no wonder these folks are high-strung.

INFOSEC NEEDS THEIR OWN INFRASTRUCTURE TOOLBOX

This is one of the places where a well-rounded security toolbox can provide security teams some control over their own destiny, and some of the peace of mind needed for them to get some well-deserved sleep. If the development team can't/won't address a vulnerability, then perhaps it's time to explore other options. IPS solutions with an automatically updated signature database can block those vulnerabilities that can be identified by a signature. For exploits that may be too variable or subtle, a web application firewall or an application delivery controller enabled with network-side scripting can provide the means by which the infosec professional can write their own "six lines of code" and at least stop-gap the risk of actively being exploited (a minimal sketch of such a stop-gap appears at the end of this post).

Infosec also needs visibility into the effectiveness of their mitigating solutions. If a solution involves a web application firewall, then that web application firewall ought to provide an accurate report on the number of exploits, attacks, or even probing attempts it stopped. There's no way for an application to report that data easily – infosec and operators end up combing through log files or investing in event correlation solutions to try and figure out what should be a standard option. Infosec needs to make sure it can report on the amount of risk mitigated in the course of a month, or a quarter, or a year. Being able to quantify that in terms of hard dollars provides management and the rest of the organization (particularly the business) with what they consider real "proof" of the value of not just infosec but also the solutions in which it invests to protect data, applications, and business concerns.

Every other group in IT has, in some way, embraced the notion of "agile" as an overarching theme. Infosec needs to embrace agile not only as an overarching theme but as a methodology for addressing vulnerabilities. Because the "fix" may be a simple "six lines of code", but who implements that code and where is less important than when. An iterative approach that initially focuses on treating the symptoms (stop the bleeding now) and then more carefully considers long-term treatment of the disease (let's fix the cause) may result in a better overall security posture for the organization.

Related Posts:
When Is More Important Than Where in Web Application Security
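As an illustration of what an infosec-authored, network-side "six lines of code" stop-gap might look like (the URI pattern and the response are hypothetical placeholders for whatever exploit signature applies; this is not a fix for the timing vulnerability discussed above):

    when HTTP_REQUEST {
        # hypothetical virtual patch: block requests matching a known exploit pattern
        # until the application itself can be fixed
        if { [string tolower [HTTP::uri]] contains "vulnerable_param=" } {
            log local0. "Blocked potential exploit attempt from [IP::client_addr]"
            HTTP::respond 403 content "Forbidden"
        }
    }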
Out, Damn'd Bot! Out, I Say!

Exorcising your digital demons.

Most people are familiar with Shakespeare's The Tragedy of Macbeth. Of particularly common usage is the famous line uttered repeatedly by Lady Macbeth, "Out, damn'd spot! Out, I say!" as she tries to wash imaginary bloodstains from her hands, wracked with guilt over the many murders of innocent men, women, and children she and her husband have committed.

It might be no surprise to find a similar situation in the datacenter, late at night. With the background of humming servers and cozily blinking lights shedding a soft glow upon the floor, you might hear some of your infosecurity staff roaming the racks and crying out "Out, damn'd bot! Out, I say!" as they try to exorcise digital demons from their applications and infrastructure. Because once those bots get in, they tend to take up permanent residence. Getting rid of them is harder than you'd think because, like Lady Macbeth's imaginary bloodstains, they just keep coming back – until you address the source.
F5 Friday: A Network Heatwave That's Good For Operations

The grab bag of awesome that is network-side scripting is, in general, often overlooked. Generally speaking, "network gear" isn't flexible, nor is it adaptable, and it certainly isn't extensible. But when you put network-side scripting into the mix, suddenly what was inflexible and static becomes extensible and dynamic. In many cases, if you've ever said "I wish that thing could do X", well, in the case of application delivery it probably can – you just have to learn how.

The how, in the case of F5, is iRules. iRules is network-side scripting, so it's executing "in the network", as it were, as data traverses from client to server and server to client. iRules lets you intercept data (it's event-driven) and then, well, what do you want to do to it? You can transform it, apply conditional policies, log it, search it, reject it. Being network-side means you have context, and with context you can do a lot of application-, location-, and client-specific "things" to data and requests and sessions. Being "in the network" means you have access to the full network stack. If you need IP header information, you can get that. If you need application-specific information from within the request or response, you can get that. The entire stack is available to inspect and can ultimately be used to instruct BIG-IP.

iRules figures prominently in F5's vision of cloud computing as one component of the dynamic control plane necessary to realize the benefits of cloud computing and virtualization. It figures prominently in agile operations and the ability to respond rapidly to datacenter events, and it's integral to emerging switching and routing architectures based on content and context, such as message-based load balancing and content-based routing.

I don't often get a chance to cook up iRules myself any more, so those who do – and are masters at it – make me just a bit jealous. And it'd be hard to find someone in our kitchen who makes me greener than DevCentral's own Colin Walker. He's done it again, and I can't say enough good things about this solution. Not only is it hot (pun intended), it's highly applicable to a variety of uses that go beyond just generating eye candy for operations. What makes it really interesting is the use of an external service to generate a dynamic view of real-time operations. Colin is leveraging Google's charting API and relying on the confidence of geolocation data based on our integration with Quova to enable a real-time visual display of the locations from which HTTP requests are being received by a BIG-IP. It's dynamic integration, it's Infrastructure 2.0, it's devops, it's just … cool.
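Colin's actual solution isn't reproduced here, but the geolocation piece it builds on can be sketched in a few lines. This assumes the whereis command and the geolocation database are available on your BIG-IP version; logging is an illustrative stand-in for feeding the charting integration itself:

    when HTTP_REQUEST {
        # look up the client's country from the geolocation database
        set country [whereis [IP::client_addr] country]
        # a charting integration would aggregate these; logging is the simplest stand-in
        log local0. "Request for [HTTP::host][HTTP::uri] from country: $country"
    }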