Infrastructure Architecture: Whitelisting with JSON and API Keys
Application delivery infrastructure can be a valuable partner in architecting solutions. AJAX and JSON have changed the way in which we architect applications, especially given their ascendancy in the realm of integration, i.e. the API. Policies are generally focused on the URI, which has effectively become the exposed interface to any given application function. It’s REST-ful, it’s service-oriented, and it works well. Because we’ve taken to leveraging the URI as a basic building block, as the entry point into an application, it affords the opportunity to optimize architectures and make more efficient use of the compute power available for processing. This is an increasingly important point, as capacity has become a focal point around which cost and efficiency are measured. By offloading functions to other systems when possible, we are able to increase the useful processing capacity of a given application instance and ensure a higher ratio of valuable processing to resources. The ability of application delivery infrastructure to intercept, inspect, and manipulate the exchange of data between client and server should not be underestimated. A full-proxy-based infrastructure component can provide valuable services to the application architect that enhance the performance and reliability of applications while abstracting functionality in a way that alleviates the need to modify applications to support new initiatives.

AN EXAMPLE

Consider, for example, a business requirement specifying that only certain authorized partners (in the integration sense) are allowed to retrieve certain dynamic content via an exposed application API. There are myriad ways in which such a requirement could be implemented, including requiring authentication and subsequent tokens to authorize access – likely the most common means of providing such access management in conjunction with an API.
Most of these options require several steps, however, and direct interaction with the application to examine credentials and determine authorization to requested resources. This consumes valuable compute that could otherwise be used to serve requests. An alternative approach would be to provide authorized consumers with a more standards-based method of access that includes, in the request, the very means by which authorization can be determined. Taking a lesson from the credit card industry, for example, an algorithm can be used to determine the validity of a particular customer ID or authorization token. An API key, if you will, that is not stored in a database (and thus does not require a lookup) but rather is algorithmic, and therefore able to be verified as valid without a specific lookup at run-time. Assuming such a token or API key were embedded in the URI, the application delivery service can then extract the key, verify its authenticity using an algorithm, and subsequently allow or deny access based on the result. This architecture is based on the premise that the application delivery service is capable of responding with the appropriate JSON in the event that the API key is determined to be invalid. Such a service must therefore be network-side scripting capable. Assuming such a platform exists, one can easily implement this architecture and enjoy the improved capacity and resulting performance boost from the offload of authorization and access management functions to the infrastructure.

1. A request is received by the application delivery service.
2. The application delivery service extracts the API key from the URI and determines validity.
3. If the API key is not legitimate, a JSON-encoded response is returned.
4. If the API key is valid, the request is passed on to the appropriate web/application server for processing.

Such an approach can also be used to enable or disable functionality within an application, including live-streams.
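The algorithmic validation described above can be sketched with an HMAC-signed key: the key carries a partner identifier plus a signature computed from a shared secret, so validity is verifiable by recomputation alone, with no database lookup at run-time. To be clear, the key format, secret, and function names below are illustrative assumptions for this sketch, not part of any F5 product.

```python
import hashlib
import hmac

# Assumption: a secret provisioned out of band to the delivery infrastructure.
SECRET = b"shared-infrastructure-secret"

def issue_key(partner_id: str) -> str:
    """Issue an API key of the hypothetical form '<partner_id>.<signature>'."""
    sig = hmac.new(SECRET, partner_id.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{partner_id}.{sig}"

def is_valid_key(key: str) -> bool:
    """Verify the key algorithmically -- no lookup, just recomputation."""
    partner_id, _, sig = key.partition(".")
    expected = hmac.new(SECRET, partner_id.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(sig, expected)

def respond(key: str) -> dict:
    """JSON-style verdict the delivery service could return for a bad key."""
    if is_valid_key(key):
        return {"status": "ok"}
    return {"status": "error", "message": "invalid API key"}
```

A network-side script on the delivery tier would perform the equivalent of is_valid_key() on the key extracted from the URI, and return the error JSON itself without ever touching the application servers.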
Assume a site that serves up streaming content, but only to authorized (registered) users. When requests for that content arrive, the application delivery service can dynamically determine, using an embedded key or some portion of the URI, whether to serve up the content or not. If it deems the request invalid, it can return a JSON response that effectively “turns off” the streaming content, thereby eliminating the ability of non-registered (or non-paying) customers to access live content. Such an approach could also be useful in the event of a service failure; if content is not available, the application delivery service can easily turn off the stream and/or respond to the request, providing feedback to the user that is valuable in reducing their frustration with AJAX-enabled sites that too often simply “stop working” without any kind of feedback or message to the end user. The application delivery service could, of course, perform other actions based on the validity (or invalidity) of the request, such as directing that the request be fulfilled by a service generating older or non-dynamic streaming content, using its ability to perform application-level routing. The possibilities are quite extensive, and implementation depends entirely on the goals and requirements to be met. Such features become more appealing when they are, through their capabilities, able to intelligently make use of resources in various locations. Cloud-hosted services may be more or less desirable for use in an application, and thus leveraging application delivery services to either enable or reduce the traffic sent to such services may be financially and operationally beneficial.

ARCHITECTURE IS KEY

The core principle to remember here is that infrastructure architecture plays (or can and should play) a vital role in designing and deploying applications today.
With the increasing interest in and use of cloud computing and APIs, it is rapidly becoming necessary to leverage resources and services external to the application as a means to rapidly deploy new functionality and support for new features. The abstraction offered by application delivery services provides an effective, cross-site and cross-application means of enabling within the infrastructure what were once application-only services. This abstraction and service-oriented approach reduces the burden on the application as well as its developers. The application delivery service is almost always the first service in the oft-times lengthy chain of services required to respond to a client’s request. Leveraging its capabilities to inspect, manipulate, route, and respond to those requests allows architects to formulate new strategies and ways to provide their own services, as well as to leverage existing and integrated resources for maximum efficiency, with minimal effort.

Related blogs & articles:

HTML5 Going Like Gangbusters But Will Anyone Notice?
Web 2.0 Killed the Middleware Star
The Inevitable Eventual Consistency of Cloud Computing
Let’s Face It: PaaS is Just SOA for Platforms Without the Baggage
Cloud-Tiered Architectural Models are Bad Except When They Aren’t
The Database Tier is Not Elastic
The New Distribution of The 3-Tiered Architecture Changes Everything
Sessions, Sessions Everywhere

F5 and Promon Have Partnered to Protect Native Mobile Applications from Automated Bots - Easily
This DevCentral article provides details on how F5 Bot Defense is used today to protect against mobile app automated bot traffic wreaking havoc with origin server infrastructure and producing excessive nuisance load volumes, and in turn how the Promon Mobile SDK Integrator is leveraged to speed the solution's deployment. The Integrator tool contributes to the solution by allowing the F5 anti-bot solution to be inserted into customer native mobile apps, for both Apple and Android devices, with a simple and quick “No Code” approach that can be completed in a couple of minutes. No source code of the native mobile app need be touched; only the final compiled binaries are required to add the anti-bot security provisions as a quick, final step when publishing apps.

Retrieving Rich, Actionable Telemetry in a Browser World

In the realm of Internet browser traffic, whether the source be Chrome, Edge, Safari or any other standards-based web browser, the key to populating the F5 anti-bot analytics platform with actionable data is providing browsers clear instructions of when, specifically, to add telemetry to transactions, as well as what telemetry is required. This leads to the determination, in real time, of whether this is truly human activity or instead automated, hostile traffic. The “when” normally revolves around the high-value server-side endpoint URLs: things like “login” pages, “create account” links, “reset password” or “forgot username” pages, all acting like honeypot pages where attackers will gravitate and direct brute-force bot-based attacks to penetrate deep into the service, all the while cloaked, to the best of their abilities, as seemingly legitimate human users. Other high-value transaction endpoints would likely be URLs corresponding to adding items to a virtual shopping cart and checkout pages for commercial sites.
Telemetry, the “what” in the above discussion, is to be provided for F5 analytics and can range from dozens to over one hundred elements. One point of concern is inconsistencies in the User-Agent field produced in the HTTP traffic, where a bot may indicate a Windows 10 machine using Chrome via the User-Agent header, but various elements of retrieved telemetry indicate a Linux client using Firefox. This is eyebrow-raising: if the purported user is misleading the web site with regard to the User-Agent, what is the traffic creator’s endgame? Beyond analysis of configuration inconsistencies, behavioral items are noted, such as unrealistic mouse motions at improbable speeds with uncanny precision suggesting automation, or perhaps the frequent use of paste functions to fill in form fields such as username, where values are more likely to be typed or auto-filled by browser cache features. Consider something as simple as the interval between a key being depressed and released: if signals indicate a typing pace that defies the physics of a keyboard, just milliseconds per keystroke, this is another strong warning sign of automation.

Telemetry in the Browser World

Telemetry can be thought of as dozens and dozens of intelligence signals building a picture of the legitimacy of the traffic source. The F5 Bot Defense request for telemetry, and the subsequently provided values, are fully obfuscated from bad actors and thus impossible to manipulate in any way. The data points lead to a real-time determination of the key aspect of this user: is this really a human, or is this automation?
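As a hedged illustration of the keystroke-timing signal described above (not F5's actual detection logic, which is proprietary): a detector can flag input where key press-to-release durations are implausibly short for a human typist. The threshold and ratio below are assumed values chosen only for the sketch.

```python
# Illustrative bot-detection heuristic: flag keystroke timings that are
# physically implausible for a human. Threshold values are assumptions.
HUMAN_MIN_DWELL_MS = 20  # a real key press lasts longer than a few milliseconds

def looks_automated(dwell_times_ms: list) -> bool:
    """Return True if most press->release durations defy keyboard physics."""
    if not dwell_times_ms:
        return False  # no signal, no verdict
    too_fast = sum(1 for d in dwell_times_ms if d < HUMAN_MIN_DWELL_MS)
    return too_fast / len(dwell_times_ms) > 0.5
```

In practice a single signal like this is never decisive on its own; it is one of the many data points combined into the overall human-versus-automation determination.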
This set of instructions, directing browsers as to which transactions to decorate with requested signals, is achieved by having browsers execute specific JavaScript inserted into the pages they load when interacting with application servers. The JavaScript is easily introduced by adding JavaScript tags into HTML pages, specifically key returned transactions with Content-Type text/html that subsequently lead to high-value user-side actions like submission of user credentials (login forms). The JavaScript tags are frequently inserted in-line into returned traffic through an F5 BIG-IP application delivery controller, the F5 Shape proxy itself, or a third-party tag manager, such as Google Tag Manager. The net result is that a browser will act upon the JavaScript tags to immediately download the F5 JavaScript itself. The script will result in browsers providing the prescribed rich and detailed set of telemetry in those important HTTP transactions that involve high-value website pages conducting sensitive actions, such as password resetting through form submissions.

Pivoting to Identify Automation in the Surging Mobile Application Space

With native mobile apps, a key aspect to note is that the client is not utilizing a browser but rather what one can think of as a “thick” app, to borrow a term from the computer world. Without a browser in play, it is no longer possible to simply rely upon actionable JavaScript tags that could otherwise be inserted in flight, from servers toward clients, through application delivery controllers or tag managers. Rather, with mobile apps, one must adjust the app itself, prior to posting it to an app store or download site, to provide instructions on where to retrieve a dynamic configuration file, analogous to the instructions provided by JavaScript tag insertion in the world of browsers.
The retrieved configuration instruction file will guide the mobile app in terms of which transactions require “decorating” with telemetry, and of course what telemetry is needed. The telemetry will often vary from one platform, such as Android, to a differing platform, such as Apple iOS. Should more endpoints within the server-side application need decorating, meaning target URLs, one simply adjusts the network-hosted configuration instruction file. This adjustment in the behavior of a vendor’s native mobile application is achieved through the F5 Bot Defense Mobile SDK (Software Development Kit), and representative provided telemetry signals might include items like remaining device battery life, device screen brightness and resolution, and indicators of device setup, such as a rooted device or an emulator posing as a device. Incorrectly emulated devices, such as one displaying mutually exclusive iOS and Android telemetry concurrently, allow F5 to isolate troublesome, unwanted automated traffic from the human traffic required for ecommerce to succeed efficiently and unfettered. The following four-step diagram depicts the process of a mobile app being protected by F5 Bot Defense, from step one, with the retrieval of instructions that include which transactions to decorate with telemetry data, through to step four, where the host application (the mobile app) is made aware if the transaction was mitigated (blocked or flagged) by means of headers introduced by the F5 anti-bot solution. The net result of equipping mobile apps with the F5 Bot Defense Mobile SDK, whether iOS or Android, is the ability to automatically act upon observed automated bot traffic, without a heavy administrative burden.
A step which is noteworthy and unique to the F5 mobile solution is the final, fourth step, whereby a feedback mechanism is implemented in the form of the Parse Response header value. This notifies the mobile app that the transaction was mitigated (blocked) en route. One possible reason this can happen is that the app was using a dated configuration file, and a sensitive endpoint URL had recently been adjusted to require telemetry. The result of the response in step 4 is that the latest version of the config file, with up-to-date telemetry requirements, will automatically be re-read by the mobile app, and the transaction can then take place successfully with the proper decorated telemetry included.

Promon SDK Integrator and F5 Bot Defense: Effortlessly Secure Your Mobile Apps Today

One approach to infusing a native mobile app with the F5 Bot Defense SDK would be a programmatic strategy, whereby the source code of the mobile app would require modifications to incorporate the additional F5 code. Although possible, this may not be aligned with the skillsets of all technical resources requiring frequent application builds, for instance for quality assurance (QA) testing. Another issue might be a preference to have only obfuscated code at the point in the publishing workflow where the security offering of the SDK is introduced. In simple terms, the core mandate of native mobile app developers is the intellectual property contained within that app; adding valuable security features to combat automation is more congruent with a final checkmark obtained by protecting the completed application binary with a comprehensive protective layer of security.
To simplify and speed up ingestion of the SDK, F5 has partnered with Promon, of Oslo, Norway, to make use of an integrator application from Promon which can achieve a “No Code” integration in just a few commands. The integrator, technically a .jar executable known at the file level as the Shielder tool, is utilized as per the following logic. The workflow to create the enhanced mobile app, using the Promon integration tool, with the resultant modified mobile app containing the F5 (Shape) SDK functions within it, consists of only two steps:

1. Create a Promon SDK Integrator configuration file
2. Perform the SDK injection

An iOS example would take the following form:

Step 1:
python3 create_config.py --target-os iOS --apiguard-config ./base_ios_config.json --url-filter *.domain.com --enable-logs --outfile sdk_integrator_ios_plugin_config.dat

Step 2:
java -jar Shielder.jar --plugin F5ShapeSDK-iOS-Shielder-plugin-1.0.4.dat --plugin sdk_integrator_ios_plugin_config.dat ./input_app.ipa --out ./output_app.ipa --no-sign

Similarly, an Android example remains a simple two-step process:

Step 1:
python3 create_config.py --target-os Android --apiguard-config ./base_android_config.json --url-filter *.domain.com --enable-logs --outfile sdk_integrator_android_plugin_config.dat

Step 2:
java -jar Shielder.jar --plugin F5ShapeSDK-Android-Shielder-plugin-1.0.3.dat --plugin sdk_integrator_android_plugin_config.dat ./input_app.apk --output ./output_app.apk

In each respective “Step 1”, the following comments can be made about the create_config.py arguments:

target-os specifies the platform (Apple iOS or Android).
apiguard-config provides a base configuration .json file, which serves to provide an initial list of protected endpoints (corresponding to key mobile app exposed points such as “create account” or “reset password”), along with default telemetry required per endpoint. Once running, the mobile app equipped with the mobile SDK will immediately refresh itself with a current hosted config file.

url-filter is a simple means of directing the SDK functions to operate only within a specific domain name space (e.g. *.sampledomain.com). URL filtering is also available within the perpetually refreshed .json config file itself.

enable-logs allows debugging logs to be optionally turned on.

outfile specifies the file name of the resultant configuration .dat file, which is ingested in step 2, where the updated iOS or Android binary, with F5 anti-bot protections, will be created.

For Step 2, where the updated binaries are created for either Apple iOS or Android platforms, these notes regard each argument called upon by the java command:

Shielder.jar is the portion of the solution from Promon which adjusts the original mobile application binary to include F5 anti-bot mobile security, all without opening the application code.

F5ShapeSDK-[Android or iOS]-Shielder-plugin-1.0.3.dat is the F5 anti-bot mobile SDK, provided in a format consumable by the Promon Shielder.

The remaining three arguments are simply the configuration output file of step 1, the original native mobile app binary, and finally the new native mobile app binary which will now have anti-bot functions. The only additional step, optionally run, would be re-signing the new version of the mobile application, as the hash value will change with the addition of the security provisions to the new output file.
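For CI/CD pipelines, the two commands above can be assembled programmatically. The sketch below only builds the argument lists exactly as shown in the examples (the paths, domain filter, and plugin file names are the same placeholder values used above); it does not invoke the real tools, and any wrapper like this is an illustration rather than part of the Promon or F5 deliverables.

```python
def build_commands(platform: str, app_in: str, app_out: str,
                   domain: str = "*.domain.com"):
    """Return (step1, step2) argument lists for the Promon SDK Integrator flow."""
    os_name = "iOS" if platform == "ios" else "Android"
    cfg = f"sdk_integrator_{platform}_plugin_config.dat"
    plugin = {"ios": "F5ShapeSDK-iOS-Shielder-plugin-1.0.4.dat",
              "android": "F5ShapeSDK-Android-Shielder-plugin-1.0.3.dat"}[platform]
    # Step 1: generate the SDK Integrator configuration file.
    step1 = ["python3", "create_config.py", "--target-os", os_name,
             "--apiguard-config", f"./base_{platform}_config.json",
             "--url-filter", domain, "--enable-logs", "--outfile", cfg]
    # Step 2: inject the F5 SDK into the compiled binary via Shielder.
    step2 = ["java", "-jar", "Shielder.jar", "--plugin", plugin,
             "--plugin", cfg, app_in]
    if platform == "ios":
        step2 += ["--out", app_out, "--no-sign"]  # iOS flags per the example
    else:
        step2 += ["--output", app_out]
    return step1, step2
```

A build server would pass each list to its process runner (e.g. subprocess.run) and then perform the optional re-signing step noted above.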
Contrasting the Promon SDK Integrator with Manual Integration and Mobile Application Requirements

The advantage of the Promon Integrator approach is the speed of integration and the lack of any coding requirements to adjust the native mobile application. The manual approach to integrating the F5 Bot Defense Mobile SDK is documented by F5, with separate detailed guides available for Android and for iOS. A representative summary of the steps involved includes the following checkpoints along the path to successful manual SDK integration:

- Importing the provided APIGuard library into the Android Studio (Android) and Xcode (iOS) environments
- The steps for iOS will differ depending on whether the original mobile app is using a dynamic or static framework; commands for both Swift and Objective-C are provided in the documentation
- The F5 SDK code is already obfuscated and compressed; care should be taken not to include this portion of the revised application in existing obfuscation procedures
- Within Android Studio, expand and adjust the Gradle scripts as per the documentation
- Initialize the Mobile SDK; specific to Android and the Application class, the detailed functions utilized are available within the documentation, including finished initialization examples in Java and Kotlin
- Specific to iOS, initialize the mobile SDK in AppDelegate; this includes adding APIGuardDelegate to the AppDelegate class as an additional protocol, and full examples are provided for Swift and Objective-C
- Both Android and iOS will require that a GetHeaders function be invoked through code additions for all traffic potentially to be decorated with telemetry, in accordance with the instructions of the base and downloaded configuration files

As demonstrated by the length of this list of high-level manual steps, which involve touching application code, the ease and simplicity of the alternative two-command Promon Integrator offering may be significant in many cases.
The platform requirements for the paired F5 Bot Defense and Promon solution are not arduous and frequently reflect libraries used in many native mobile applications developed today. The supported libraries include:

- Android: HttpURLConnection, OkHttp, Retrofit
- iOS: NSURLSession, URLSession, Alamofire

Finally, mobile applications that utilize WebViews, which is to say applications using browser-type technologies such as Cascading Style Sheets or JavaScript to implement the app, are not applicable to the F5 Bot Defense SDK approach. In some cases, entirely WebView-implemented applications may be candidates for support through the browser-style, JavaScript-oriented F5 Bot Defense telemetry gathering.

Summary

With the simplicity of the F5 and Promon workflow, this streamlined approach to integrating the anti-bot technology into a mobile app ecosystem allows for rapid, iterative usage. In development environments following modern CI/CD (continuous integration/continuous deployment) paradigms, with build servers creating frequently updated variants of native mobile apps, one could invoke the two steps of the Promon SDK Integrator daily; there are no volume-based consumption constraints.

F5 Friday: Gracefully Scaling Down
What goes up must come down. The question is how much it hurts (the user).

An oft-ignored side of elasticity is scaling down. Everyone associates scaling out/up with the elasticity of cloud computing, but the other side of the coin is just as important, maybe more so. After all, what goes up must come down. The trick is to scale down gracefully, i.e. to do it in such a way as to prevent the disruption of service to existing users while scaling back down after a spike in demand. The ramifications of not scaling down are real in terms of utilization and therefore cost. Scaling up without the means to scale back down means higher costs, and simply shutting down an instance that is currently in use can result in angry users as service is disrupted. What’s necessary is the ability to gracefully scale down: to indicate somehow to the load balancing solution that a particular instance is no longer necessary and to begin preparation for eventually shutting it down. Doing so gracefully requires that you are somehow able to quiesce, or bleed off, the connections. You want to continue to service those users who are currently connected to the instance while not accepting any new connections. This is one of the benefits of leveraging an application-aware application delivery controller versus a simple load balancer: the ability to receive instruction in-process to begin preparation for shutdown without interrupting existing connections.

SERVING UP ACTIONABLE DATA

BIG-IP users have always had the ability to specify whether disabling a particular “node” or “member” results in the rejection of all connections (including existing ones) or in refusing new connections while allowing old ones to continue to completion. The latter technique is often used in preparation for maintenance on a particular server, for applications (and businesses) that are sensitive to downtime. This method maintains availability while accommodating necessary maintenance.
In version 10.2 of the core BIG-IP platform a new option was introduced that more easily enables the process of draining a server/application’s connections in preparation for taking it offline. Whether the purpose is maintenance or simply the scaling-down side of elastic scalability is really irrelevant; the process is much the same. Being able to direct a load balancing service, from the application, in the way in which connections are handled is an increasingly important capability, especially in a public cloud computing environment, because you are unlikely to have the direct access to the load balancing system necessary to manually engage this process. By providing the means by which an application can not only report to but direct the load balancing service, some measure of customer control over the deployment environment is re-established without introducing the complexity of requiring the provider to manage the thousands (or more) of credentials that would otherwise be required to allow this level of control over the load balancer’s behavior.

HOW IT WORKS

For specific types of monitors in LTM (Local Traffic Manager) – HTTP, HTTPS, TCP, and UDP – there is a new option called “Receive Disable String.” This “string” is just that: a string that is found within the content returned from the application as a result of the health check. In phase one we have three instances of an application (physical or virtual, it doesn’t matter) that are all active. They all have active connections and are all receiving new connections. In phase two a health check on one server returns a response that includes the string “DISABLE ME.” BIG-IP sees this and, because of its configuration, knows that this means the instance of the application needs to gracefully go offline.
LTM therefore continues to direct existing connections (sessions) with that instance to the right application (phase 3), but subsequently directs all new connection requests to the other instances in the pool (farm, cluster). When there are no more existing connections the instance can be taken offline or shut down with zero impact to users. The combination of “receive string” and “receive disable string” determines the way in which BIG-IP interprets the instruction. A “receive string” typically describes the content received that indicates an available and properly executing application. This can be as simple as “HTTP 200 OK” or as complex as looking for a specific string in the response. Similarly, the “receive disable” string indicates a particular string of text that signals a desire to disable the node and begin the process of bleeding off connections. This could be as simple as “DISABLE” as indicated in the above diagram, or it could just as easily be based solely on HTTP status codes. If an application instance starts returning 5xx errors because it’s at capacity, the load balancing policy might include a live disable of the instance to allow it time to cool down – maintaining existing connections while not allowing new ones. Because action is based on matching a specific string, the possibilities are pretty much wide open. The following table describes the possible interactions between the two receive string types:

LEVERAGING AS A PROVIDER

One of the ways in which a provider could leverage this functionality to provide differentiated, value-added cloud services (as Randy Bias calls them) would be to define an application health monitoring API of sorts that allows customers to add to their application a specific set of URIs used solely for monitoring, which can thus control the behavior of the load balancer without requiring per-customer access to the infrastructure itself. That’s a win-win, by the way. The customer gets control, but so does the provider.
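The application's side of this receive-string contract can be sketched as a small piece of state behind the health-check URI: the response body carries a normal marker while the instance should stay in rotation, and flips to the configured disable marker once draining should begin. The marker values ("OK" and "DISABLE") and class name below are assumptions for the sketch, matched to whatever strings the monitor is configured with.

```python
class DrainableInstance:
    """App-side state behind a health-check URI, driving LTM's receive strings."""

    def __init__(self) -> None:
        self.draining = False

    def begin_drain(self) -> None:
        """Called when scale-down or maintenance starts for this instance."""
        self.draining = True

    def health_body(self) -> str:
        """Body the monitor matches: "OK" hits the receive string (stay in
        rotation); "DISABLE" hits the receive disable string (keep existing
        connections, refuse new ones). Marker values are assumed."""
        return "DISABLE" if self.draining else "OK"
```

Once existing connections have bled off, the instance can be shut down with no user-visible disruption, which is exactly the graceful scale-down described above.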
Consider a health monitoring API that is a single URI: http://$APPLICATION_INSTANCE_HOSTNAME/health/check. Now provide a set of three options for customers to return (these are likely oversimplified for illustration purposes, but not by much):

ENABLE
QUIESCE
DISABLE

For all application instances the BIG-IP will automatically use an HTTP-derived monitor that calls $APPLICATION_INSTANCE_HOSTNAME/health/check and examines the result. The monitor would use “ENABLE” as the “receive string” and “QUIESCE” as the “receive disable” string. Based on the string returned by the application, the BIG-IP takes the appropriate action (as defined by the table above). Of course this can also easily be accomplished by providing a button on the cloud management interface to do the same via iControl, but this option is more easily defined programmatically by customers and thus is more dynamic and allows for automation. And of course such an implementation isn’t relegated only to service providers; IT organizations in any environment can take advantage of such an implementation, especially if they’re working toward an automated data center and/or self-service provisioning/management of IT services. That is Infrastructure as a Service. Yes, this means modification to the application being deployed. No, I don’t think that’s a problem – cloud and Infrastructure as a Service (IaaS), at least real IaaS, is going to necessarily require modifications to existing applications, and new applications will need to include this type of integration in the future if we are to take advantage of the benefits afforded by a more application-aware infrastructure and, conversely, a more infrastructure-aware application architecture.

Related Posts

WILS: The Data Center API Compass Rose
#SDN #cloud North, South, East, West. Defining directional APIs.

There's an unwritten rule that says when describing a network architecture the perimeter of the data center is at the top. Similarly, application data flow begins at the UI (presentation) layer and extends downward, toward the data tier. This directional flow has led to the use of the terms "northbound" and "southbound" to describe API responsibility within SDN (Software Defined Network) architectures, and is likely to continue to expand to encompass the increasingly API-driven data center models in general. But while network aficionados may use these terms with alacrity, they are not always well described, or described in a way that a broad spectrum of IT professionals will immediately understand. Too, these terms are increasingly used by systems other than those directly related to SDN to describe APIs and how they integrate with other systems within the data center. So let's set about rectifying that, shall we?

NORTHBOUND

The northbound API in an SDN architecture describes the APIs used to communicate with the controller. In a general sense, the northbound API is the interconnect with the management ecosystem. That is, with systems external to the device responsible for instructing, monitoring, or otherwise managing the device in some way. Examples in the enterprise data center would be integration with HP, VMware, and Microsoft management solutions for purposes of automation and orchestration and the sharing of actionable data between systems.

SOUTHBOUND

The southbound API interconnects with the network ecosystem. In an SDN this would be the switching fabric. In other systems this would be those network devices with which the device integrates for the purposes of routing, switching and otherwise directing traffic.
Examples in the enterprise data center would be the use of OpenFlow to communicate with the switch fabric, network virtualization protocols, or the integration of a distributed delivery network.

EASTBOUND

Eastbound describes APIs used to integrate the device with external systems, such as cloud providers and cloud-hosted services. Examples in the enterprise data center would be a cloud gateway taking advantage of a cloud provider's API to enable a normalized network bridge that extends the data center eastward, into the cloud.

WESTBOUND

Westbound APIs are used to enable integration with the device, a la plug-ins to a platform. These APIs are internally focused and enable a platform upon which third-party functionality can be developed and deployed. Examples in the enterprise data center would be proprietary APIs for network operating systems that enable a plug-in architecture for extending device capabilities beyond what is available "out of the box."

Certainly others will have a slightly different take on directional API definitions, though north- and southbound API descriptions are generally similar throughout the industry at this time. However, you can assume these definitions are applicable if and when I use them in future blogs.

Architecting Scalable Infrastructures: CPS versus DPS
SDN is Network Control. ADN is Application Control.
The Cloud Integration Stack
Hybrid Architectures Do Not Require Private Cloud
Identity Gone Wild! Cloud Edition
Cloud Bursting: Gateway Drug for Hybrid Cloud
The Conspecific Hybrid Cloud

F5 Friday: It is now safe to enable File Upload
Web 2.0 is about sharing content – user generated content. How do you enable that kind of collaboration without opening yourself up to the risk of infection? Turns out developers and administrators have a couple options… The goal of many a miscreant is to get files onto your boxen. The second step after that is often remote execution or merely the hopes that someone else will look at/execute the file and spread chaos (and viruses) across your internal network. It’s a malicious intent, to be sure, and makes developing/deploying Web 2.0 applications a risky proposition. After all, Web 2.0 is about collaboration and sharing of content, and if you aren’t allowing the latter it’s hard to enable the former. Most developers know about and have used the ability to upload files of just about any type through a web form. Photos, documents, presentations – these types of content are almost always shared through an application that takes advantage of the ability to upload data via a simple web form. But if you allow users to share legitimate content, it’s a sure bet (more sure even than answering “yes” to the question “Will it rain in Seattle today?”) that miscreants will find and exploit the ability to share content. Needless to say information security professionals are therefore not particularly fond of this particular “feature” and in some organizations it is strictly verboten (that’s forbidden for you non-German speakers). So wouldn’t it be nice if developers could continue to leverage this nifty capability to enable collaboration? Well, all you really need to do is integrate with an anti-virus scanning solution and only accept that content which is deemed safe, right? After all, that’s good enough for e-mail systems and developers should be able to argue that the same should be good enough for web content, too. The bigger problem is in the integration. Luckily, ICAP (Internet Content Adaptation Protocol) is a fairly ready answer to that problem. 
SOLUTION: INTEGRATE ANTI-VIRUS SCANNING via ICAP

The Internet Content Adaptation Protocol (ICAP) is a lightweight HTTP based protocol specified in RFC 3507 designed to off-load specific content to dedicated servers, thereby freeing up resources and standardizing the way in which features are implemented. ICAP is generally used in proxy servers to integrate with third party products like antivirus software, malicious content scanners and URL filters. ICAP in its most basic form is a "lightweight" HTTP based remote procedure call protocol. In other words, ICAP allows its clients to pass HTTP based (HTML) messages (Content) to ICAP servers for adaptation. Adaptation refers to performing the particular value added service (content manipulation) for the associated client request/response.
-- Wikipedia, ICAP

Now obviously developers can take advantage of ICAP and integrate with an anti-virus scanning solution directly. All that’s required is to extract every file in a multi-part request, send each of them to an AV-scanning service, and determine based on the result whether to continue processing or toss those bits into /dev/null. This is assuming, of course, that it can be integrated: packaged applications may not offer the ability, and even open source which ostensibly does may be in a language or use frameworks that require skills the organization simply does not have. Or perhaps the cost over time of constantly modifying the application after every upgrade/patch is just not worth the effort. For applications to which you can add this integration, it should be fairly simple, as developers are generally familiar with HTTP and RPC and understand how to use “services” in their applications. Of course this being an F5 Friday post, you can probably guess that I have an alternative (and of course more efficient) solution than integration into the code.
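For reference, the do-it-yourself exchange described above – wrapping an uploaded file in an ICAP REQMOD request – can be sketched as follows. The framing follows RFC 3507, but the host name and helper function are illustrative only, and a real client would also have to read and parse the ICAP response (a 204 means the content passed unmodified).

```python
def build_icap_reqmod(icap_host: str, http_req_hdr: bytes, file_bytes: bytes) -> bytes:
    """Frame an HTTP request (headers + uploaded body) as an ICAP REQMOD message."""
    # The Encapsulated header lists the byte offset of each embedded
    # section within the ICAP body (RFC 3507, section 4.4.1).
    icap_hdr = (
        f"REQMOD icap://{icap_host}/reqmod ICAP/1.0\r\n"
        f"Host: {icap_host}\r\n"
        f"Encapsulated: req-hdr=0, req-body={len(http_req_hdr)}\r\n"
        "\r\n"
    ).encode("ascii")
    # Encapsulated bodies use HTTP chunked transfer encoding.
    chunked_body = f"{len(file_bytes):x}\r\n".encode("ascii") + file_bytes + b"\r\n0\r\n\r\n"
    return icap_hdr + http_req_hdr + chunked_body
```

In practice this message would be sent over a plain TCP socket to the AV server's ICAP port (conventionally 1344), one request per uploaded file.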
An external solution that works for custom as well as packaged applications and requires a lot less long-term maintenance – a WAF (Web Application Firewall).

BETTER SOLUTION: WEB APPLICATION FIREWALL INTEGRATION

The latest, greatest version (v10.2) of F5 BIG-IP Application Security Manager (ASM) included a little-touted feature that makes integration with an ICAP-enabled anti-virus scanning solution take approximately 15.7 seconds to configure (YMMV). Most of that time is likely logging in and navigating to the right place. The rest is typing the information required (server host name, IP address, and port number) and hitting “save”.

[Figure: F5 Application Security Manager (ASM) v10 includes easy integration with a/v solutions]

It really is that simple. The configuration is actually an HTTP “class”, which can be thought of as a classification of sorts. In most BIG-IP products a “class” defines a type of traffic closely based on a specific application protocol, like HTTP. It’s quite polymorphic in that a custom HTTP class inherits the behavior and attributes of the “parent” HTTP class; your configuration extends that behavior and attributes, and in some cases allows you to override default (parent) behavior. The ICAP integration is derived from an HTTP class, so it can be “assigned” to a virtual server, a URI, a cookie, etc… In most ASM configurations an HTTP class is assigned to a virtual server and therefore it sees all requests sent to that server. In such a configuration ASM sees all traffic and thus every file uploaded in a multipart payload, and will automatically extract it and send it via ICAP to the designated anti-virus server where it is scanned. The action taken upon a positive result, i.e. the file contains bad juju, is configurable. ASM can block the request and present an informational page to the user while logging the discovery internally, externally or both.
It can forward the request to the web/application server with the virus and log it as well, allowing the developer to determine how best to proceed. Using the “Guarantee Enforcement” option, ASM can be configured to never allow requests that have not been scanned for viruses to reach the web/application server. When configured, if the anti-virus server is unavailable or doesn’t respond, requests will be blocked. This allows administrators to configure a “fail closed” option that absolutely requires AV scanning before a request can be processed.

A STRATEGIC POINT of CONTROL

Leveraging a strategic point of control to provide AV scanning integration and apply security policies regarding the quality of content has several benefits over its application-modifying, code-based integration cousin:

- Allows integration of AV scanning in applications for which it is not feasible to modify the application, for whatever reason (third-party, lack of skills, lack of time, long-term maintenance after upgrades/patches)
- Reduces the resource requirements of web/application servers by offloading the integration process and only forwarding valid uploads to the application. In a cloud-based or other pay-per-use model this reduces costs by eliminating the processing of invalid requests by the application.
- Aggregates logging/auditing and provides consistency of logs for compliance and reporting, especially to prove “due diligence” in preventing infection.

Related Posts:
All F5 Friday Entries on DevCentral
All About ASM

Sessions, Sessions Everywhere
If you’re replicating session state across application servers you probably need to rethink your strategy. There are other options – more efficient options – than wasting RAM and, ultimately, money. Although the discussion of Oracle’s “cloud in a box” announcement at OpenWorld dominated much of the tweet-stream this week, there were other discussions going on that proved to be not only interesting but a good reminder of how cloud computing has brought to the fore the importance of architecture. Foremost in my mind was what started as a lamentation on the fact that Amazon EC2 does not support multicasting, which evolved into a discussion on why that would cause grief for those deploying applications in the environment. Remember that multicast is essentially spraying the same data to a group of endpoints and is usually leveraged for streaming media topologies:

In computer networking, multicast is the delivery of a message or information to a group of destination computers simultaneously in a single transmission from the source creating copies automatically in other network elements, such as routers, only when the topology of the network requires it.
-- Wikipedia, multicast

As it turns out, a primary reason behind the need for multicasting in the application architecture revolves around the mirroring of session state across a pool of application servers. Yeah, you heard that right – mirroring session state across a pool of application servers. The first question has to be: why? What is it about an application that requires this level of duplication?

MULTICASTING for SESSIONS

There are three reasons why someone would want to use multicasting to mirror session state across a pool of application servers. There may be additional reasons that aren’t as common and if so, feel free to share.

The application relies on session state and, when deployed in a load balanced environment, broke because the tight coupling between user and session state was not respected by the load balancer.
This is a common problem when moving from dev/qa to production and is generally caused by using a load balancing algorithm without enabling persistence, a.k.a. sticky sessions.

The application requires high availability that necessitates architecting a stateful-failover architecture. By mirroring sessions to all application servers, if one fails (or is decommissioned in an elastic environment) another can easily re-establish the coupling between the user and their session. This is not peculiar to application architecture – load balancers and application delivery controllers mirror their own “session” state across redundant pairs to achieve a stateful failover architecture as well.

Some applications, particularly those that are collaborative in nature (think white-boarding and online conferences), “spray” data across a number of sessions in order to enable the real-time sharing aspect of the application. There are other architectural choices that can achieve this functionality, but there are tradeoffs to all of them and in this case it is simply one of several options.

THE COST of REPLICATING SESSIONS

With the exception of addressing the needs of collaborative applications (and even then there are better options from an architectural point of view) there are much more efficient ways to handle the tight coupling of user and session state in an elastic or scaled-out environment. The arguments against multicasting session state are primarily around resource consumption, which is particularly important in a cloud computing environment. Consider that the typical session state is 3-200 KB in size (Session State: Beyond Soft State). Remember that if you’re mirroring every session across an entire cluster (pool) of application servers, each server must use memory to store that session. Each mirrored session, then, is going to consume resources on every application server. Every application server has, of course, a limited amount of memory it can utilize.
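A bit of rough arithmetic makes the cost concrete, using the 3-200 KB session sizes cited above. All numbers and function names here are illustrative:

```python
def mirrored_kb(sessions: int, session_kb: int, servers: int) -> int:
    # full mirroring: every server holds a copy of every session
    return sessions * session_kb * servers

def shared_store_kb(sessions: int, session_kb: int) -> int:
    # shared store: each session is stored exactly once
    return sessions * session_kb

# 10,000 active sessions of 50 KB across a pool of 8 servers:
# mirroring burns ~3.8 GB of aggregate RAM on copies alone,
# where a shared store needs roughly 0.5 GB total.
```

And the mirrored figure grows multiplicatively: add a ninth server and every existing server must find room for nothing new, while the new server must absorb all 10,000 sessions before it serves its first request.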
It needs that memory for more than just storing session state – it must also store connection tables, its own configuration data, and of course it needs memory in which to execute application logic. If you consume a lot of the available memory storing the session state from every other application server, you are necessarily reducing the amount of memory available to perform other important tasks. This reduces the capacity of the server in terms of users and connections, it reduces the speed with which it can execute application logic (which translates into reduced response times for users), and it operates on a diminishing returns principle. The more application servers you need to scale – and you’ll need more, more frequently, using this technique – the less efficient each added application server becomes because a good portion of its memory is required simply to maintain session state of all the other servers in the pool. It is exceedingly inefficient and, when leveraging a public cloud computing environment, more expensive. It’s a very good example of the diseconomy of scale associated with traditional architectures – it results in a “throw more ‘hardware’ at the problem, faster” approach to scalability.

BETTER ARCHITECTURAL SOLUTIONS

There are better architectural solutions to maintaining session state for every user.

SHARED DATABASE

Storing session state in a shared database is a much more efficient means of mirroring session state and allows for the same guarantees of consistency when experiencing a failure. If session state is stored in a database then regardless of which application server instance a user is directed to, that application server has access to its session state.
The interaction between the user and application becomes:

1. User sends request
2. Clustering/load balancing solution routes to application server
3. Application server receives request, looks up session in database
4. Application server processes request, creates response
5. Application server stores updated session in database
6. Application server returns response

If a single database is problematic (because it is a single point of failure) then multicasting or other replication techniques can be used to implement a dual-database architecture. This is somewhat inefficient, but far less so than doing the same at the application server layer.

PERSISTENCE-BASED LOAD BALANCING

It is often the case that the replication of session state is implemented in response to wonky application behavior occurring only when the application is deployed in a scalable environment, i.e. when a load balancing solution is introduced into the architecture. This is almost always because the application requires tight coupling between user and session and the load balancer is incorrectly configured to support this requirement. Almost every load balancing solution – hardware, software, virtual network appliance, infrastructure service – is capable of supporting persistence, a.k.a. sticky sessions. This solution requires, however, that the load balancing solution of choice be configured to support the persistence. Persistence (also sometimes referred to as “server affinity” when implemented by a clustering solution) can be configured in a number of ways. The most common configuration is to leverage the automated session IDs generated by application servers, e.g. PHPSESSID, ASPSESSIONID. These IDs are contained in the HTTP headers and are, as a matter of fact, how the application server “finds” the appropriate session for any given user’s request.
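A minimal sketch of such a session-ID-keyed persistence table follows; the class name and round-robin server choice are illustrative stand-ins, not an F5 API:

```python
import itertools

class PersistenceTable:
    """Sticky-session routing: first request picks a server, later requests reuse it."""

    def __init__(self, servers):
        self._pick = itertools.cycle(servers).__next__  # stand-in for the LB algorithm
        self._table = {}                                # session id -> application server

    def route(self, session_id):
        # an existing mapping always wins; otherwise choose and remember a server
        if session_id not in self._table:
            self._table[session_id] = self._pick()
        return self._table[session_id]
```

Every request carrying session ID "sess-a" lands on the same server, no matter how many other sessions arrive in between.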
The load balancer intercepts every request (it does anyway) and performs the same type of lookup on its own session table (which is much, much higher capacity than an application server's and leverages the same high-performance lookups used to store connection and network session tables) and routes the user to the appropriate application server based on the session ID. The interaction between the user and application becomes:

1. User sends request
2. Clustering/load balancing solution finds, if existing, the session-app server mapping. If one does not exist, it chooses the application server based on the load balancing algorithm and configured parameters
3. Application server receives request
4. Application server processes request, creates response
5. Application server returns response
6. Clustering/load balancing solution creates the session-app server mapping if it did not already exist

Persistence can generally be based on any data in the HTTP header or payload, but using the automatically generated session IDs tends to be the most common implementation.

YOUR INFRASTRUCTURE, GIVE IT TO ME

Now, it may be the case that the multicasting architecture is the right one. It is impossible to say it’s never the right solution because there are always applications and specific scenarios in which an architecture that may not be a good idea in general is, in fact, the right solution. It is likely the case, however, that in most situations it is not the right solution and has more than likely been implemented as a workaround in response to problems with application behavior when moving through a staged development environment. This is one of the best reasons why the use of a virtual edition of your production load balancing solution should be encouraged in development environments. The earlier a holistic strategy to application design and architecture can be employed, the fewer complications will be experienced when the application moves into the production environment.
Leveraging a virtual version of your load balancing solution during the early stages of the development lifecycle can also enable developers to become familiar with production-level infrastructure services such that they can employ a holistic, architectural approach to solving application issues. See, it’s not always because developers don’t have the know-how, it’s because they don’t have access to the tools during development and therefore can’t architect a complete solution. I recall a developer’s plaintive query after a keynote at [the now defunct] SD West conference a few years ago that clearly indicated a reluctance to even ask the network team for access to their load balancing solution to learn how to leverage its services in application development, because he knew he would likely be denied. Network and application delivery network pros should encourage the use of and tinkering with virtual versions of application delivery controllers/load balancers in the application development environment as much as possible if they want to keep infrastructure- and application-architecture-related issues from cropping up during production deployment. A greater understanding of application-infrastructure interaction will enable more efficient, higher performing applications in general and reduce the operational expenses associated with deploying applications that use inefficient methods such as replication of session state to address application architectural constraints.

Related blogs & articles:
Applying Scalability Patterns to Infrastructure Architecture
Scalability Only One Half the Reliability Equation
Service Virtualization Helps Localize Impact of Elastic Scalability
Web 2.0: Integration, APIs, and Scalability
Automating scalability and high availability services
To Take Advantage of Cloud Computing You Must Unlearn, Luke.
Scalability with multiple networks for Virtual Servers ...
Cloud Lets You Throw More Hardware at the Problem Faster
And That, Young Cloudwalker, Is Why You Fail

SDN is Network Control. ADN is Application Control.
#SDN is disruptive but it's not destructive, especially not at layer 4-7 In the wake of VMware's acquisition of SDN notable Nicira there was a whole lot of behind-the-scenes scrambling to understand the impact to the entire networking demesne – especially given speculation that after SDN focuses on L2/3 network virtualization it will move to encompass L4/7 services including ADCs and network security. That sounds more than disruptive, it sounds downright destructive. But the reality is that SDN is not designed for layers 4-7 and its focus on layer 2-3 – and specifically its packet-processing focus – has long been shown to be inadequate for managing application traffic at layer 4 and above. SDN is about moving packets seamlessly and dynamically. It solves issues that have long been problematic for very large networks and service providers but in the wake of virtualization and cloud computing have begun to emerge as a challenge for unlarge networks and, in particular, cloud providers. These challenges revolve around limitations in Ethernet routing and switching standards as well as managing the high volumes of change inherent in cloud computing environments. To resolve this, SDN proposes a centrally controlled, programmable network model that overlays traditional physical networks. This overlay addresses limitations (such as those imposed by the VLAN standard) and the problem of physical wiring required to properly route traffic across networks in the face of failures and route changes. SDN solves this by decoupling the control and data planes, leveraging programmable switching infrastructure and creating an overlay network without the limitations that hold back traditional networks. But at the end of the day, SDN is about routing and switching and forwarding of packets. ADN, on the other hand, is concerned with the routing and switching of applications and the forwarding of sessions. 
Its model is a mirror of that of SDN, but at the application layers (4-7 of the OSI model). ADN is not – and cannot be – primarily concerned with the forwarding of individual packets, as the application data upon which ADN works is an aggregation of multiple packets, i.e. application data which has necessarily been fragmented over multiple L3 packets by limitations imposed on it by Ethernet standards. Simply establishing a TCP connection, for example, requires the exchange of no less than 3 packets of data: SYN, SYN-ACK, and ACK. An ADC will not even begin to make a routing decision until all three packets have been exchanged and a session is established. This is the mechanism through which application security and routing are achieved – on a per-session, per-application-request basis. ADCs, therefore, are at least half (and the best are full) proxies. Packets are not forwarded until a decision has been made, and thus these proxies are required to maintain large stores of session data in their systems. SDN does not provide at this time for such functionality (though it certainly could in the future). Furthermore, SDN would need to solve a significant processing issue around the variability in making decisions at layer 4 and above. Simple load balancing based on source IP or other primitive methods would be possible for SDN and unlikely to overtax the system too much, but such basic functionality is decades old and has been proven ineffective for adequately scaling applications while maintaining acceptable performance. Improving beyond such rudimentary capabilities would be difficult for SDN, because it requires processing of each and every request. SDN is able to perform (right now) because only the first time a pattern is seen is the controller required to inform the switching fabric how to route matching packets. Each time thereafter the switching fabric uses its local FIB (Forwarding Information Base) and merrily forwards the packet.
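That control/data-plane split can be sketched in a few lines: the switch forwards from its local FIB and only consults the controller on a table miss. The class names and the toy destination-prefix policy are illustrative only:

```python
class Switch:
    """Data plane: cache forwarding decisions, punt to the controller only on a miss."""

    def __init__(self, controller):
        self.fib = {}                 # flow key -> forwarding action, filled on demand
        self.controller = controller
        self.controller_queries = 0   # how often the control plane was consulted

    def forward(self, dst_ip):
        if dst_ip not in self.fib:
            self.controller_queries += 1              # table miss: ask once
            self.fib[dst_ip] = self.controller(dst_ip)
        return self.fib[dst_ip]                       # every later packet: local hit

def toy_controller(dst_ip):
    # trivial stand-in policy: split traffic by destination prefix
    return "port1" if dst_ip.startswith("10.") else "port2"
```

The point of the sketch is the query counter: a thousand packets of the same flow cost one controller decision, which is exactly the property that breaks down when every layer 7 request needs its own decision.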
When attempting to add security to the mix, each and every request must be evaluated – often at a rate of thousands of requests per second. SDN is simply not designed to handle the application layer processing needs of layers 4-7 and is unlikely to be able to scale to the levels needed to detect and prevent a large-scale application-layer DDoS, let alone a massive and sudden rise in layer 7 requests (such as against a web application), because the number of DPS (Decisions per Second) that can be made by the controller is limiting in an SDN model.

The Intersection of SDN and ADN

ADN and SDN are not competing technologies. ADN and SDN are highly complementary and serve to solve the same problems; they merely do so at different layers of the networking stack. As Mike Fratto recently pointed out, SDN will augment networking, not replace it:

In a perfect world--one invented 15 minutes ago--software-defined networking startups could build virtual network products that aren't encumbered by 30 years of tinkering with Ethernet. Ethernet has served us well, but as IT moves toward the software-defined data center, the network "is the barrier to cloud computing," according to Nicira (which was recently acquired by VMware). But, of course, the technology from Nicira will augment, not replace, traditional networking.
-- VMware's SDN Strategy Is No Threat to Cisco, Juniper or Anyone Else

Mike was focusing on traditional networking (layer 2 and 3) but the reality is that his words apply equally to layer 4-7. VMware's strategy is sound, but it is not a threat to its partners or others in the layer 4-7 space. SDN is disruptive to the space, but it's not destructive. SDN will force change as it is adopted, no doubt there. There will need to be adaptation by layer 4-7 ADN vendors to adopt and support SDN programmability in their switching fabrics, as a means to integrate with a broader SDN initiative.
ADN and SDN intersect where core layer 2 and 3 routing and switching functionality is required as a means to forward data to the proper systems after the ADC has determined the target. ADN will necessarily need to support SDN in its underlying switch fabrics, most likely by tying into the SDN management plane through API exchange, interpretation, and action, while maintaining its existing capabilities at layer 4-7 as a means to address volatility throughout the upper realms of the network stack. As noted in other posts, ADN is architecturally SDN at layer 4-7, decoupling the data and control planes and enabling programmability through open APIs and scripting languages. The ADC provides the basis for expanding processing functionality through plug-ins, much in the same way the SDN controller is designed to allow "applications" to extend processing functionality on a per-packet basis. This intersection provides a model framework for future network architecture in which SDN is used to manage intra-networks in which a high volume of change or a large number of nodes is present that requires the flexibility and dynamism inherent in SDN. ADN will still provide the core perimeter application traffic routing it always has – including security – but its interface to the applications and systems for which it manages traffic will be an SDN-enabled interface, controlled by a packet delivery controller (or by whatever name the vendor in question decides to give it).

Referenced blogs & articles:
F5 Friday: Performance, Throughput and DPS
VMware's SDN Strategy Is No Threat to Cisco, Juniper or Anyone Else
We iz in ur networkz, deep inspecting ur XML packetz. Wait, what?
F5 Friday: ADN = SDN at Layer 4-7
Integration Topologies and SDN
Applying ‘Centralized Control, Decentralized Execution’ to Network Architecture
SDN, OpenFlow, and Infrastructure 2.0
F5 Friday: Programmability and Infrastructure as Code

Incident Remediation with Cisco Firepower and F5 SSL Orchestrator
SSL Orchestrator Configuration steps

This guide assumes you have a working SSL Orchestrator Topology, either Incoming or Outgoing, and you want to add a Cisco Firepower TAP Service. Both Topology types are supported, and configuration of the Cisco Remediation is the same. If you do not have a working SSL Orchestrator Topology you can refer to the BIG-IP SSL Orchestrator Dev Central article series for full configuration steps. In this guide we will outline the necessary steps to deploy Cisco FTD with SSL Orchestrator. FTD can be deployed as a Layer 2/3 or TAP solution. SSL Orchestrator can be deployed as a Layer 2 or 3 solution. SSL Orchestrator gives you the flexibility to deploy in the manner that works best for you. As an example, SSL Orchestrator can be deployed in Layer 2 mode while FTD is deployed in Layer 3 mode, and vice versa. A familiarity with BIG-IP deployment concepts and technology as well as basic networking is essential for configuring and deploying the SSL Orchestrator components of the BIG-IP product portfolio. For further details on the configuration and networking setup of the BIG-IP, please visit the F5 support site at https://support.f5.com. The SSL Orchestrator Guided Configuration will walk you through configuration of the Services (Firepower nodes), Security Policy and more. Lastly, iRules will be applied.

Guided Configuration: Create Services

We will use the Guided Configuration wizard to configure most of this solution, though there are a few things that must be done outside of the Guided Configuration. In this example we will be working with an existing L2 Outbound Topology. From the BIG-IP Configuration Utility click SSL Orchestrator > Configuration > Services > Add. For Service Properties, select Cisco Firepower Threat Defense TAP then click Add. Give it a name. Enter the Firepower MAC Address (or 12:12:12:12:12:12 if it is directly connected to the SSL Orchestrator).
For the VLAN choose Create New, give it a Name (Firepower in this example) and select the correct interface (2.2 in this example). If you configured the VLAN previously then choose Use Existing and select it from the drop-down menu. Note: A VLAN Tag can be specified here if needed. Enabling the Port Remap is optional. Click Save & Next. Click the Service Chain name you wish to configure, sslo_SC_ServiceChain in this example. Note: If you don’t have a Service Chain you can add one now. Highlight the Firepower Service and click the arrow in the middle to move it to the Selected side. Click Save. Click Save & Next. Then click Deploy.

Configuration of iRules and Virtual Servers

We will create two iRules and two Virtual Servers. The first iRule listens for HTTP requests from the Firepower device. Firepower then responds via its Remediation API and sends an HTTP request containing an IP address and a timeout value. The address will be the source IP that is to be blocked by the SSL Orchestrator; the SSL Orchestrator will continue to block it for the duration of the timeout period. For details and tutorials on iRules, please consult the F5 DevCentral site at https://devcentral.f5.com. Create the first iRule on the SSL Orchestrator. Within the GUI, select Local Traffic > iRules then choose Create. Give it a name (FTD-Control in this example) then copy/paste the iRule text into the Definition field. Click Finished. This iRule will be associated with the Control Virtual Server.

iRule text:

```tcl
when HTTP_REQUEST {
    if { [URI::query [HTTP::uri] "action"] equals "blocklist" } {
        set blockingIP [URI::query [HTTP::uri] "sip"]
        set IPtimeout [URI::query [HTTP::uri] "timeout"]
        table add -subtable "blocklist" $blockingIP 1 $IPtimeout
        HTTP::respond 200 content "$blockingIP added to blocklist for $IPtimeout seconds"
        return
    }
    HTTP::respond 200 content "You need to include an ? action query"
}
```

Create the second iRule by clicking Create again.
Give it a name (FTD-Protect in this example), then copy/paste the iRule text into the Definition field. Click Finished. This iRule will be associated with the Protect Virtual Server.

iRule text:

when CLIENT_ACCEPTED {
    set srcip [IP::remote_addr]
    if { [table lookup -subtable "blocklist" $srcip] != "" } {
        drop
        log local0. "Source IP on block list"
        return
    }
}

Create the Virtual Servers. From Local Traffic, select Virtual Servers > Create. Give it a name, FTD-Control in this example. The Type should be Standard. Enter “0.0.0.0/0” for the Source Address; this indicates any source address will match. The Destination Address/Mask is the IP address the SSL Orchestrator will listen on to accept API requests from Firepower. In this example it’s “10.5.9.77/32”, which indicates that the SSL Orchestrator will only respond to connections to that single IP address.

Note: The Destination Address/Mask must be in the same subnet as the 2nd Management Interface on the Firepower Management Center. We’ll go over this later.

For VLANs and Tunnels Traffic, it is preferred to enable this on the specific VLAN that the Firepower 2nd Management Interface will be using, rather than All VLANs and Tunnels. Choose Enabled on… and select that VLAN, vlan509 in this example. Click the double << to move the VLAN to Selected.

In the Resources section, click the FTD-Control iRule created previously, then click the double << to move it to Enabled. Click Finished when done.

Click Create again. Give it a name, FTD-Protect in this example. Set the Type to Forwarding (IP). The Source Address in this example is set to 10.4.11.152/32, so this Virtual Server will only accept connections with a source IP of 10.4.11.152. It is configured this way for testing purposes, to make sure everything works with a single test client. With an Incoming Topology the Source Address might be set to 0.0.0.0/0, which would allow connections from anywhere.
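The protect-side decision can be sketched the same way: on each new connection, look up the source IP in the blocklist and drop the connection if an unexpired entry exists. Again this is an illustrative Python rendering of the FTD-Protect iRule's logic, not BIG-IP code; the dictionary and function names are hypothetical stand-ins.

```python
import time

# Illustrative stand-in for the BIG-IP "blocklist" subtable consulted by
# the FTD-Protect iRule: source IP -> expiry timestamp.
blocklist = {"10.4.11.152": time.time() + 300}

def allow_connection(src_ip):
    """Mimic the CLIENT_ACCEPTED logic: deny (return False) if the source
    IP has an unexpired blocklist entry, otherwise let traffic through.
    On BIG-IP the table entry simply ages out after its timeout; here we
    check the expiry timestamp explicitly."""
    expiry = blocklist.get(src_ip)
    if expiry is not None and expiry > time.time():
        return False  # equivalent of the iRule's drop
    return True

blocked = allow_connection("10.4.11.152")   # on the blocklist
allowed = allow_connection("10.4.11.200")   # not listed
```

Because the Control and Protect iRules share one table, a single Remediation API call immediately changes the fate of every subsequent connection from that source IP.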
The 10.5.11.0 network is the Destination that the 10.4.11.0 network must take to pass through SSL Orchestrator. Under Available, select the ingress VLAN the SSL Orchestrator is receiving traffic on, Direct_all_vlan_511_2 in this example. Click the double << in the middle to move it from Available to Selected.

In the Resources section, click the FTD-Protect iRule created previously, then click the double << to move it to Enabled. Click Finished when done.

Steps performed:
1. Firepower TAP Service created
2. iRules created
3. Virtual Servers created
4. iRules attached to Virtual Servers

Cisco Firepower (FTD) Setup and Configuration

This guide assumes you have Cisco Firepower and Firepower Management Center (FMC) deployed, licensed, and working properly. After logging into the Firepower Management Center you will see the Summary Dashboard. Click System > Configuration to configure the Management settings, then click Management Interfaces on the left.

A Management Interface on FMC must be configured for Event Traffic. This interface MUST be on the same subnet as the Control Virtual Server on the SSL Orchestrator (10.5.9.77). If using a Virtual Machine for FMC, you need to add a 2nd NIC within the Hypervisor console; refer to your Hypervisor admin guide for more information on how to do this.

To configure the 2nd Management Interface, click the pencil icon. Click Save when done.

Firepower Access Policy

This guide assumes that Intrusion and Malware policies are enabled for the Firepower device. The Policy should look something like the image below.

Firepower Remediation Policies

Next, we need to create a Firepower Remediation Policy. A Remediation policy can take a variety of different actions based on an almost infinite set of criteria. For example, if an Intrusion Event is detected, Firepower can tell SSL Orchestrator to block the Source IP for a certain amount of time.

From FMC click Policies > Responses > Modules.
The F5 Remediation Module is installed here. Click Browse to install the Module: locate the Module on your computer, select it, click Open, then Install. Click the magnifying glass on the right after it’s installed.

Note: The F5 Remediation Module can be downloaded from a link at the bottom of this article.

Click Add to configure an Instance. Give it a name, Block_Bad_Actors in this example. Specify the IP address of the SSL Orchestrator Control Virtual Server, 10.5.9.77 in this example. Optionally change the Timeout and click Create.

Next, configure a Remediation by clicking Add. Give it a name, RemediateBlockIP in this example, and click Create.

Select Policies > Correlation > Create Policy to create a Correlation Policy that defines when and how to initiate the Remediation. Give it a name, Remediation in this example, and click Save. From the Rule Management tab click Create Rule. Give it a name, RemediateRule in this example. For the type of event, select ‘an intrusion event occurs’ from the drop-down menu. For the Condition, select Source Country > is > North Korea, then Save.

Note: FMC can trigger a Remediation for a variety of different events, not just for Intrusion. In fact, while configuring Remediation you might want to use a different Event Type to make it easier to trigger an event and verify it was successfully remediated. For example, you could choose ‘a connection event occurs’ and set the Condition to URL > contains the string > “foo”. That way, the Remediation rule should trigger if you attempt to go to the URL foo.com.

Go back to Policy Management and click the Policy created previously, Remediation in this example. Click Add Rules, select the RemediateRule, click Add, then click Save.

Correlation policies can be enabled or disabled using the toggle on the right. Make sure the correct policy is enabled.

Remediated Policy Reporting

The status of Remediation Events can be viewed from Analysis > Correlation > Status.
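Before relying on a live correlation event, it can be useful to confirm the Control Virtual Server answers the same style of request the Remediation module issues. A minimal Python sketch follows; the host 10.5.9.77 and the addresses match this article's example, the helper name is hypothetical, and the actual send is left commented out since it requires network reachability to the virtual server.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical helper: build a Remediation-style request URL for the
# SSL Orchestrator Control Virtual Server (10.5.9.77 in this example).
def build_blocklist_url(host, sip, timeout):
    query = urlencode({"action": "blocklist", "sip": sip, "timeout": timeout})
    return f"http://{host}/?{query}"

url = build_blocklist_url("10.5.9.77", "10.4.11.152", 300)

# Uncomment to actually send the test request from a management host:
# print(urlopen(url).read().decode())
```

If the iRule is working, the response body should echo the blocked IP and timeout, and subsequent connections from that source through the Protect Virtual Server should be dropped for the timeout period.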
Here we can see the “Successful completion of remediation” message.

Conclusion

This concludes the recommended practices for configuring F5 BIG-IP SSL Orchestrator with Cisco FTD. The architecture has been demonstrated to address both the SSL visibility and control scenario and the IPS policy-based traffic steering and blocking scenario. With SSL terminated on the SSL Orchestrator, FTD sensors gain visibility into both ingress and egress traffic so they can adapt and protect an organization’s applications, servers, and other resources. By leveraging Security Policy Based Traffic Steering, an organization can continue to scale this configuration through the addition of more FTD managed devices, providing more traffic capacity for the protected networks and applications. The policy-based flexibility provided by the SSL Orchestrator can also be leveraged to selectively direct traffic to different pools of resources based on business, security, or compliance requirements.

5 Years Later: OpenAJAX Who?
Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I stumbled recently over a nearly five year old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like those of “cloud” today. You couldn’t turn around without hearing someone promoting their solution by associating it with Web 2.0 or AJAX. After reading the opening paragraph I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade I’m well aware of how impactful “standards” and “specifications” really are in the real world, but the problem – interoperability across a growing field of JavaScript libraries – seemed at the time real and imminent, so there was a need for someone to address it before it completely got out of hand.

With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard—whether through a standards body or by de facto adoption—and Dojo is one of the favored entrants in the race to become that standard.

-- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability.
The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications, including one “industry standard”: OpenAjax Metadata.

OpenAjax Hub

The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata

OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification)

OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see the calling out of XML as the format of choice in the OpenAjax Metadata (OAM) specification, given the recent rise to ascendancy of JSON as developers’ preferred format for APIs. Granted, when the alliance was formed XML was all the rage, and it was believed it would remain the dominant format for quite some time given the popularity of similar technological models such as SOA. Still, the reliance on XML while the plurality of developers race to JSON may provide some insight into why OpenAjax has received very little notice since its inception.

Ignoring the XML factor (which undoubtedly is a fairly impactful one), there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) – a hub. A publish-subscribe hub, to be more precise, in which OAH mediates for various toolkits on the same page.
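As a rough sketch of what such hub-mediated interoperability looks like, consider a minimal publish/subscribe broker. This is an illustration of the pattern only, rendered in Python; the real OpenAjax Hub is a JavaScript API with managed security contexts, and the class and method names here are hypothetical.

```python
# Minimal illustration of the publish/subscribe mediation pattern behind
# the OpenAjax Hub. The real OAH is JavaScript; names here are hypothetical.
class Hub:
    def __init__(self):
        self._subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, data):
        # Each toolkit on the page sees events from the others without
        # any direct toolkit-to-toolkit coupling.
        for callback in self._subscribers.get(topic, []):
            callback(data)

hub = Hub()
received = []
# Two "toolkits" listening on the same topic:
hub.subscribe("org.example.selection", lambda d: received.append(("widgetA", d)))
hub.subscribe("org.example.selection", lambda d: received.append(("widgetB", d)))
hub.publish("org.example.selection", {"row": 7})
```

The point of the pattern is that components never call each other directly; the hub is the only shared surface, which is precisely what makes it a page-level integration mechanism rather than a unified API.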
Don summed it up nicely during a discussion on the topic: it’s page-level integration. This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going – and where it should go: an industry standard model and/or set of APIs to which other toolkit developers would design and write, such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired.

I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model – always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces with vendor implementation being decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes the Net go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration or failure – never incompatibility in form or function.

Neither specification has really taken that direction. OAM – as previously noted – standardizes on XML and is primarily used to describe APIs and components; it isn’t an API or model itself. The Alliance wiki describes the specification: “The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers.” Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center.
Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but it does not appear to be gaining widespread support or mindshare despite IBM’s efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub / integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and simplicity tend to drive developers to a single library when possible (which is most of the time).

It appears, simply, that the OpenAJAX Alliance – driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) – has chosen a target in another field; one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn’t – and likely won’t ever – exist. So it’s no surprise to discover that references to and activity from OpenAjax are nearly zero since 2009.
Given the statistics showing the rise of jQuery – both as a percentage of site usage and developer usage – to the top of the JavaScript library heap, it appears that at least the prediction that “one toolkit will become the standard—whether through a standards body or by de facto adoption” was accurate. Of course, since that’s always the way it works in technology, it was kind of a sure bet, wasn’t it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAJAX alliance several infrastructure vendors: folks who produce application delivery controllers, switches and routers, and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable regarding the methods used by developers to perform such data exchanges.

In the age of hyper-scalability and über security, it behooves infrastructure vendors – and increasingly cloud computing providers that offer infrastructure services – to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than is the case for traditional form-based HTML applications. The type of content as well as the usage patterns for applications can dramatically impact the application delivery policies necessary to achieve operational and business objectives for that application. As developers standardize through selection and implementation of toolkits, vendors and providers can then begin to focus solutions specifically for those choices.
Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery reduces the time to deploy such applications in a production environment by eliminating the test-and-tweak cycle that occurs when applications are tossed over the wall to operations by developers. For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}. If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data – attempts at SQL injection, for example – must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application layer switching based on data values or submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, such that it can be parsed, interpreted and processed.
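As a rough illustration of the serialization an intermediary must understand, jQuery's default map-to-query-string conversion behaves much like the following Python sketch. This is a simplified, non-authoritative rendering of jQuery.param() for flat maps only; real jQuery also handles nested objects and arrays.

```python
from urllib.parse import quote_plus

def param(data):
    """Simplified analogue of jQuery.param() for a flat map: produce the
    application/x-www-form-urlencoded body an intermediary would parse."""
    return "&".join(f"{quote_plus(str(k))}={quote_plus(str(v))}"
                    for k, v in data.items())

body = param({"key1": "value1", "key2": "a b&c"})
# A WAF inspecting this POST body must URL-decode each value before
# running, e.g., SQL injection pattern checks against it.
```

Note that values are percent-encoded in transit, which is exactly why naive substring matching on the raw body misses attacks: the inspecting device has to decode before it evaluates.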
By understanding jQuery – and other developer toolkits and standards used to exchange data – infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.

F5 Friday: The Dynamic Control Plane
It’s not just cloud computing and virtualization that introduce volatility into the data center.

The natural state of cloud computing is one of constant change: applications, services, and users interacting in ways that constantly change the landscape of the data center. But it isn’t just the volatility of cloud computing and virtualization that makes traditional data center architectures brittle and more apt to fail. It’s the constant barrage of users, devices, and locations against a static data center configuration that makes a traditional architecture fragile and inefficient.

Pressures are mounting both from within and without on data center infrastructure to assure availability, security, and high performance of applications to every user, regardless of location or device type. With the rapidly changing landscape of devices, smartphones, and user locations, this means an ever-changing array of policies governing access and assuring availability. The processes by which these are enforced are not sustainable in the face of such growth; the burden on operations, network, and security teams can only grow under such a static model.

What’s needed to address the dynamism of the user and application environment is a dynamic infrastructure: a dynamic services model that adapts to the increasingly complex set of variables that can impact the way in which infrastructure should treat each and every individual application request – and response. It is this characteristic, this agility in infrastructure, that is critical to the implementation of a dynamic data center and an agile operational posture. Without the ability to adapt in an intelligent and programmable fashion, operations and infrastructure cannot hope to scale along with the growing demands on the network, application, and storage infrastructure.
THE DYNAMIC SERVICES MODEL

The long-term answer to the challenge of efficient scalability in infrastructure operations lies in an architectural approach that anticipates change and enables rapid adaptation to any situation. This ideal state replaces point-to-point connections with a flexible dynamic services model that serves as an intelligent proxy between users and resources.

It is important to stress that the dynamic services model is an ecosystem approach; it is not a single vendor solution or a point product. The programmatic and procedural resources provided to integrate, coordinate, and collaborate with other ecosystem elements are defining characteristics of F5’s dynamic control plane. This is evident in the illustrated solutions with VMware and Gomez, but these are only a few examples of the F5 dynamic control plane solving real-world problems today. F5 maintains formal relationships with leading technology providers including Dell, HP, IBM, ExtraHop, Infoblox, NetApp, CA, Symantec, webMethods, Secure Computing, RSA, WhiteHat Security, Splunk, TrendMicro, ByteMobile, Microsoft, and many others. These relationships include tested and documented integration with F5’s solutions and, as such, can be thought of as an extension of the dynamic control plane architecture.

-- Ken Salchow, “Unleashing the True Potential of On-Demand IT”

Within F5 we refer to this strategic point of control as the dynamic control plane: a platform that is adaptable, programmable, and intelligent with regard to both its run-time and configuration-time operations. The full-proxy nature of F5’s underlying application delivery platform, TMOS, provides an interconnected and contextually-aware environment in which requests and responses can be collaboratively intercepted, inspected and, if necessary, modified to assure the highest levels of availability, security and performance.
By providing a common high-speed interconnect that shares context, F5 BIG-IP solutions are all capable of understanding not only the context of each individual request and response, but also the business and operational requirements placed upon the data, in a way that allows the platform to make real-time decisions regarding policy enforcement.

From a deployment perspective, the dynamic control plane enables an agile operational posture: one that integrates via a standards-based, service-enabled API to provide the means by which BIG-IP can be integrated with, and collaborate with, other data center management platforms to provide automated provisioning, context-aware monitoring, and infrastructure as a service. By enabling the platform with a common set of remote management interfaces, BIG-IP can be managed, monitored, and informed through collaborative technologies that increase its ability to make informed decisions regarding ingress and egress traffic, such that the appropriate policies are enforced at the appropriate time on the appropriate end-user, device, and location.

Combining end-user, network, and application awareness means BIG-IP has the data necessary to adapt in real time to conditions that exist now rather than conditions as they were five, ten or thirty minutes in the past. While historical trending is helpful in setting appropriate policies, the ability to react quickly means unanticipated variables can be accounted for more rapidly, which means less time in which possible outages or breaches may occur.
The results of such collaboration can be seen in joint solutions such as:

Building an Enterprise Cloud with F5 and IBM – F5 Tech Brief
F5 and Infoblox Integrated DNS Architecture – F5 Tech Brief
F5 and Microsoft Delivering IT as a Service
Achieving Enterprise Agility in the Cloud (VMware, F5 and BlueLock)

All of these solutions (and others) take advantage of F5’s dynamic control plane, both for automating the processes necessary to achieve the level of dynamism required as part of the solution, and for implementing the run-time decision-making that addresses the dynamism driving the need for those processes in the first place.

IT’S AN AGILE THING

What we’re really trying to say when we talk about addressing dynamism – whether internal to the data center, e.g. auto-scaling applications, or external to the data center, e.g. new clients, locations and devices – is that a more agile operational posture is needed. We’re trying to get to the point where IT has the ability to react to conditions as they change in a way that enhances the performance, availability, and security of the data center as a whole. It’s about programmability and processability, about being able to specify policies to address “what-if” scenarios and then trusting that those policies will be enforced in the event they come to fruition. It’s about making infrastructure as agile as the conditions under which it must constantly deliver applications, and doing so as efficiently as possible.

A dynamic services model enables operations to assume a more agile posture regarding deployment and delivery of applications. F5’s dynamic control plane makes it possible to do so efficiently, intelligently and collaboratively.

What CIOs Can Learn from the Spartans
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
The F5 Dynamic Services Model
Unleashing the True Potential of On-Demand IT
What is a Strategic Point of Control Anyway?
Cloud is the How not the What
Cloud Control Does Not Always Mean ‘Do it yourself’
The Strategy Not Taken: Broken Doesn’t Mean What You Think It Means
Data Center Feng Shui: Process Equally Important as Preparation
Some Services are More Equal than Others
The Battle of Economy of Scale versus Control and Flexibility