Load Balancing Omnissa Unified Access Gateway with BIG-IP LTM
This article highlights the renewed partnership between Omnissa and F5, focusing on updated guides for integrating F5 BIG-IP LTM with the Omnissa Unified Access Gateway (UAG) for Horizon. UAG, formerly known as VMware Unified Access Gateway, provides secure access to desktop and application resources for remote users. F5's products enhance the reliability, scalability, and security of UAG deployments, particularly for large Horizon environments requiring multiple pods or data centers. The guide offers two deployment options: leveraging the existing iApp or using a manual method. Both approaches ensure efficient, seamless implementation tailored to users' needs. The collaboration between F5 and Omnissa continues to drive faster deployments while preparing for future growth and requirements. Readers are also encouraged to explore the integration guide for F5 APM with Omnissa Workspace ONE.

5 Years Later: OpenAJAX Who?
Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I stumbled recently over a nearly five year old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like that of "cloud" today. You couldn't turn around without hearing someone promoting their solution by associating with Web 2.0 or AJAX. After reading the opening paragraph I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade I'm well aware of how impactful "standards" and "specifications" really are in the real world, but the problem – interoperability across a growing field of JavaScript libraries – seemed at the time real and imminent, so there was a need for someone to address it before it completely got out of hand.

With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard—whether through a standards body or by de facto adoption—and Dojo is one of the favored entrants in the race to become that standard.
-- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability. The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications including one "industry standard": OpenAjax Metadata.

OpenAjax Hub
The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata
OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification)
OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see the calling out of XML as the format of choice in the OpenAjax Metadata (OAM) specification given the recent rise to ascendancy of JSON as the preferred format for developers for APIs. Granted, when the alliance was formed XML was all the rage and it was believed it would be the dominant format for quite some time, given the popularity of similar technological models such as SOA. But still – the reliance on XML while the plurality of developers race to JSON may provide some insight on why OpenAjax has received very little notice since its inception.
Ignoring the XML factor (which undoubtedly is a fairly impactful one) there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) – a hub. A publish-subscribe hub, to be more precise, in which OAH mediates for various toolkits on the same page. Don summed it up nicely during a discussion on the topic: it's page-level integration.

This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going – and where it should go: an industry standard model and/or set of APIs to which other toolkit developers would design and write such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model – always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces with vendor implementation being decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes the Net go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration or failure – never incompatibility in form or function.

Neither specification has really taken that direction. OAM – as previously noted – standardizes on XML and is primarily used to describe APIs and components; it isn't an API or model itself. The Alliance wiki describes the specification: "The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers." Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center. Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but it does not appear to be gaining widespread support or mindshare despite IBM's efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub / integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and simplicity tend to drive developers to a single library when possible (which is most of the time).
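For readers who have not worked with it, the mediation model being critiqued here is easier to see in code. The sketch below is purely conceptual and written in Python for brevity, even though the actual OpenAjax Hub is a JavaScript API: two components that know nothing about each other exchange events only through a shared publish-subscribe broker, which is exactly the page-level integration described above.

    # Conceptual sketch only: the real OpenAjax Hub is a JavaScript API, but the
    # mediation model it implements boils down to a shared publish/subscribe broker
    # through which otherwise unrelated components exchange events on the same page.
    from collections import defaultdict
    from typing import Any, Callable

    class Hub:
        """A minimal publish/subscribe broker mediating between components."""

        def __init__(self) -> None:
            self._subscribers: dict[str, list[Callable[[str, Any], None]]] = defaultdict(list)

        def subscribe(self, topic: str, callback: Callable[[str, Any], None]) -> None:
            self._subscribers[topic].append(callback)

        def publish(self, topic: str, data: Any) -> None:
            for callback in self._subscribers[topic]:
                callback(topic, data)

    # Two "toolkits" that know nothing about each other, only about the hub.
    hub = Hub()
    hub.subscribe("cart.updated", lambda topic, data: print(f"[widget A] {topic}: {data}"))
    hub.subscribe("cart.updated", lambda topic, data: print(f"[widget B] recalculating total for {data}"))
    hub.publish("cart.updated", {"item": "sku-123", "qty": 2})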
It appears, simply, that the OpenAJAX Alliance – driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) – has chosen a target in another field; one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn't – and likely won't ever – exist. So it's no surprise to discover that references to and activity from OpenAjax are nearly zero since 2009. Given the statistics showing the rise of jQuery – both as a percentage of site usage and developer usage – to the top of the JavaScript library heap, it appears that at least the prediction that "one toolkit will become the standard—whether through a standards body or by de facto adoption" was accurate. Of course, since that's always the way it works in technology, it was kind of a sure bet, wasn't it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAJAX alliance several infrastructure vendors – folks who produce application delivery controllers, switches and routers and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable regarding the methods used by developers to perform such data exchanges.

In the age of hyper-scalability and über security, it behooves infrastructure vendors – and increasingly cloud computing providers that offer infrastructure services – to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than is the case for traditional form-based, HTML applications. The type of content as well as the usage patterns for applications can dramatically impact the application delivery policies necessary to achieve operational and business objectives for that application.

As developers standardize through selection and implementation of toolkits, vendors and providers can then begin to focus solutions specifically for those choices. Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery reduces the time to deploy such applications in a production environment by eliminating the test-and-tweak cycle that occurs when applications are tossed over the wall to operations by developers.

For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}.
If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data – attempts at SQL injection, for example – must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application layer switching based on data values or submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, such that it can be parsed, interpreted and processed.

By understanding jQuery – and other developer toolkits and standards used to exchange data – infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.
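To make the parsing requirement concrete, the sketch below (illustrative only, not F5 product code) shows the normalization step an intermediary has to perform before it can apply any policy: the same logical fields may arrive as the query-string serialization jQuery produces by default or as a raw JSON body, and inspection logic has to handle both before even a trivial check can run.

    # Simplified sketch (not F5 product code) of the normalization an intermediary
    # performs before applying policy to a payload that jQuery may have serialized
    # as a query string, or that an application may instead send as raw JSON.
    import json
    from urllib.parse import parse_qs

    def extract_fields(content_type: str, body: bytes) -> dict:
        """Return a flat dict of field names to values, regardless of wire format."""
        if content_type.startswith("application/x-www-form-urlencoded"):
            # jQuery's default: key1=value1&key2=value2, built via jQuery.param()
            return {k: v[0] for k, v in parse_qs(body.decode("utf-8")).items()}
        if content_type.startswith("application/json"):
            return json.loads(body)
        raise ValueError(f"unsupported content type: {content_type}")

    def looks_like_sql_injection(value: str) -> bool:
        """Trivial illustrative check; real WAF signatures are far more sophisticated."""
        suspicious = ("' or 1=1", "union select", "--")
        return any(token in value.lower() for token in suspicious)

    for ct, payload in [
        ("application/x-www-form-urlencoded", b"user=alice&comment=hello"),
        ("application/json", b'{"user": "bob", "comment": "\' OR 1=1 --"}'),
    ]:
        fields = extract_fields(ct, payload)
        flagged = [k for k, v in fields.items()
                   if isinstance(v, str) and looks_like_sql_injection(v)]
        print(ct, "->", "blocked" if flagged else "allowed", flagged)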
F5 and Promon Have Partnered to Protect Native Mobile Applications from Automated Bots - Easily

This DevCentral article provides details on how F5 Bot Defense is used today to keep automated mobile app bot traffic from wreaking havoc with origin server infrastructure and producing excessive nuisance load volumes, and in turn how the Promon Mobile SDK Integrator is leveraged to speed the solution's deployment. The integrator tool contributes to the solution by allowing the F5 anti-bot solution to be inserted into customer native mobile apps, for both Apple and Android devices, with a simple and quick "No Code" approach that can be completed in a couple of minutes. No source code of the native mobile app need be touched; only the final compiled binaries are required to add the anti-bot security provisions as a quick, final step when publishing apps.

Retrieving Rich, Actionable Telemetry in a Browser World

In the realm of Internet browser traffic, whether the source be Chrome, Edge, Safari or any other standards-based web browser, the key to populating the F5 anti-bot analytics platform with actionable data is providing browsers clear instructions of when, specifically, to add telemetry to transactions as well as what telemetry is required. This leads to the determination, in real time, of whether this is truly human activity or instead automated, negative hostile traffic.

The "when" normally revolves around the high value server-side endpoint URLs, things like "login" pages, "create account" links, "reset password" or "forgot username" pages, all acting like honeypot pages where attackers will gravitate to and direct brute force bot-based attacks to penetrate deep into the service. All the while, each bot will be cloaked to the best of its abilities as a seemingly legitimate human user. Other high-value transaction endpoints would likely be URLs corresponding to adding items to a virtual shopping cart and checkout pages for commercial sites.

Telemetry, the "what" in the above discussion, is to be provided for F5 analytics and can range from dozens to over one hundred elements. One point of concern is inconsistencies in the User Agent field produced in the HTTP traffic, where a bot may indicate a Windows 10 machine using Chrome via the User Agent header, but various elements of telemetry retrieved indicate a Linux client using Firefox. This is eyebrow raising: if the purported user is misleading the web site with regards to the User Agent, what is the traffic creator's endgame?

Beyond analysis of configuration inconsistencies, behavioral items are noted, such as unrealistic mouse motions at improbable speeds and uncanny precision suggesting automation, or perhaps the frequent use of paste functions to fill in form fields such as username, where values are more likely to be typed or auto-filled by browser cache features. Consider something as simple as the speed of a key being depressed and then released: if signals indicate an input typing pace that defies the physics of a keyboard, just milliseconds for a keystroke, this is another strong warning sign of automation.

Telemetry in the Browser World

Telemetry can be thought of as dozens and dozens of intelligence signals building a picture of the legitimacy of the traffic source. The F5 Bot Defense request for telemetry and the subsequently provided values are fully obfuscated from bad actors and thus impossible to manipulate in any way. The data points lead to a real time determination of the key aspect of this user: is this really a human or is this automation?
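As a purely illustrative example of that last point (this is not F5's detection logic, and the thresholds are invented for the sketch), the reasoning about keystroke cadence can be expressed in a few lines: given key-down timestamps, flag sequences that are faster or more uniform than a human hand can plausibly produce.

    # Illustrative only: this is not F5's detection logic, and the thresholds are
    # invented for the sketch. Given key-down timestamps in milliseconds, flag a
    # cadence that is implausibly fast or implausibly uniform for a human typist.
    from statistics import mean, pstdev

    def keystrokes_look_automated(keydown_ms: list,
                                  min_human_interval_ms: float = 30.0,
                                  min_jitter_ms: float = 5.0) -> bool:
        intervals = [b - a for a, b in zip(keydown_ms, keydown_ms[1:])]
        if not intervals:
            return False
        too_fast = mean(intervals) < min_human_interval_ms
        too_uniform = pstdev(intervals) < min_jitter_ms  # humans are never metronomes
        return too_fast or too_uniform

    print(keystrokes_look_automated([0, 4, 8, 12, 16, 20]))         # True: ~4 ms per key
    print(keystrokes_look_automated([0, 110, 260, 390, 540, 700]))  # False: human-like jitter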
This set of instructions, directing browsers with regards to what transactions to decorate with requested signals, is achieved by forcing browsers to execute specific JavaScript inserted into the pages that browsers load when interacting with application servers. The JavaScript is easily introduced by adding JavaScript tags into HTML pages, specifically key returned transactions with "Content Type=text/HTML" that subsequently lead to high value user-side actions like submission of user credentials (login forms).

The addition of the JavaScript tags is frequently performed in-line on returned traffic through an F5 BIG-IP application delivery controller, the F5 Shape proxy itself, or a third-party tag manager, such as Google Tag Manager. The net result is that a browser will act upon the JavaScript tags to immediately download the F5 JavaScript itself. The script will result in browsers providing the prescribed rich and detailed set of telemetry included in those important HTTP transactions that involve high value website pages conducting sensitive actions, such as password resetting through form submissions.

Pivoting to Identify Automation in the Surging Mobile Application Space

With native mobile apps, a key aspect to note is that the client is not utilizing a browser but rather what one can think of as a "thick" app, to borrow a term from the computer world. Without a browser in play, it is no longer possible to simply rely upon actionable JavaScript tags that could otherwise be inserted in flight, from servers towards clients, through application delivery controllers or tag managers. Rather, with mobile apps, one must adjust the app itself, prior to posting to an app store or download site, to offer instructions on where to retrieve a dynamic configuration file, analogous to the instructions provided by JavaScript tag insertion in the world of browsers.

The retrieved configuration instruction file will guide the mobile app in terms of which transactions will require "decorating" with telemetry, and of course what telemetry is needed. The telemetry will often vary from one platform such as Android to a differing platform such as Apple iOS. Should more endpoints within the server-side application need decorating, meaning target URLs, one simply adjusts the network hosted configuration instruction file.

This adjustment in the behavior of a vendor's native mobile application is achieved through the F5 Bot Defense Mobile SDK (Software Development Kit), and representative provided telemetry signals might include items like device battery life remaining, device screen brightness and resolution, and indicators of device setup, such as a rooted device or an emulator posing as a device. Incorrectly emulated devices, such as one displaying mutually exclusive iOS and Android telemetry concurrently, allow F5 to isolate troublesome, unwanted automated traffic from the human traffic required for ecommerce to succeed efficiently and unfettered.

The following four-step diagram depicts the process of a mobile app being protected by F5 Bot Defense, from step one with the retrieval of instructions that include what transactions to decorate with telemetry data, through to step four where the host application (the mobile app) is made aware if the transaction was mitigated (blocked or flagged) by means of headers introduced by the F5 anti-bot solution.
The net result of equipping mobile apps with the F5 Bot Defense Mobile SDK, whether iOS or Android, is the ability to automatically act upon automated bot traffic observed, without a heavy administrative burden.

A step which is noteworthy and unique to the F5 mobile solution is the final, fourth step, whereby a feedback mechanism is implemented in the form of the Parse Response header value. This notifies the mobile app that the transaction was mitigated (blocked) en route. One possible reason this can happen is the app was using a dated configuration file, and a sensitive endpoint URL had been recently adjusted to require telemetry. The result of the response in step 4 is that the latest version of the config file, with up-to-date telemetry requirements, will automatically be re-read by the mobile app and the transaction can now take place successfully with proper decorated telemetry included.

Promon SDK Integrator and F5 Bot Defense: Effortlessly Secure Your Mobile Apps Today

One approach to infusing a native mobile app with the F5 Bot Defense SDK would be a programmatic strategy, whereby the source code of the mobile app would require modifications to incorporate the additional F5 code. Although possible, this may not be aligned with the skillsets of all technical resources requiring frequent application builds, for instance for quality assurance (QA) testing. Another issue might be a preference to only have obfuscated code at the point in the publishing workflow where the security offering of the SDK is introduced. In simple terms, the core mandate of native mobile app developers is the intellectual property contained within that app; adding valuable security features to combat automation is more congruent with a final checkmark obtained by protecting the completed application binary with a comprehensive protective layer of security.

To simplify and speed up the time for ingestion of the SDK, F5 has partnered with Promon, of Oslo, Norway, to make use of an integrator application from Promon which can achieve a "No Code" integration in just a few commands. The integrator, technically a .jar executable known as the Shielder tool at the file level, is utilized as per the following logic.

The workflow to create the enhanced mobile app, using the Promon integration tool, with the resultant modified mobile app containing the F5 (Shape) SDK functions within it, consists of only two steps:

1. Create a Promon SDK Integrator configuration file
2. Perform the SDK injection

An iOS example would take the following form:

Step 1:
    python3 create_config.py --target-os iOS --apiguard-config ./base_ios_config.json --url-filter *.domain.com --enable-logs --outfile sdk_integrator_ios_plugin_config.dat

Step 2:
    java -jar Shielder.jar --plugin F5ShapeSDK-iOS-Shielder-plugin-1.0.4.dat --plugin sdk_integrator_ios_plugin_config.dat ./input_app.ipa --out ./output_app.ipa --no-sign

Similarly, an Android example remains a simple two-step process looking much like this:

Step 1:
    python3 create_config.py --target-os Android --apiguard-config ./base_android_config.json --url-filter *.domain.com --enable-logs --outfile sdk_integrator_android_plugin_config.dat

Step 2:
    java -jar Shielder.jar --plugin F5ShapeSDK-Android-Shielder-plugin-1.0.3.dat --plugin sdk_integrator_android_plugin_config.dat ./input_app.apk --output ./output_app.apk

In each respective "Step 1", the following comments can be made about the create_config.py arguments:

target-os specifies the platform (Apple or Android).
apiguard-config provides a base configuration .json file, which serves to provide an initial list of protected endpoints (corresponding to key mobile app exposed points such as "create account" or "reset password"), along with the default telemetry required per endpoint. Once running, the mobile app equipped with the mobile SDK will immediately refresh itself with a current hosted config file.

url-filter is a simple means of directing the SDK functions to only operate within specific domain name space (e.g. *.sampledomain.com). URL filtering is also available within the perpetually refreshed .json config file itself.

enable-logs allows for debugging logs to be optionally turned on.

outfile specifies the file name of the resultant configuration .dat file, which is ingested in step 2, where the updated iOS or Android binary, with F5 anti-bot protections, will be created.

For Step 2, where the updated binaries are created for either Apple iOS or Android platforms, these notes regard each argument called upon by the java command:

Shielder.jar is the portion of the solution from Promon which adjusts the original mobile application binary to include F5 anti-bot mobile security, all without opening the application code.

F5ShapeSDK-[Android or iOS]-Shielder-plugin-1.0.x.dat is the F5 anti-bot mobile SDK, provided in a format consumable by the Promon Shielder.

The remaining three arguments are simply the configuration output file of step 1, the original native mobile app binary and finally the new native mobile app binary which will now have anti-bot functions.

The only additional step that is optionally run would be re-signing the new version of the mobile application, as the hash value will change with the addition of the security provisions to the new output file.
Contrasting the Promon SDK Integrator with Manual Integration and Mobile Application Requirements

The advantage of the Promon Integrator approach is the speed of integration and the lack of any coding requirements to adjust the native mobile application. The manual approach to integrating the F5 Bot Defense Mobile SDK is documented by F5, with separate detailed guides available for Android and for iOS. A representative summary of the checkpoints along the path to successful manual SDK integration includes the following:

o Importing the provided APIGuard library into Android Studio (Android) and Xcode (iOS) environments
o The steps for iOS will differ depending on whether the original mobile app is using a dynamic or static framework; commands for both Swift and Objective-C are provided in the documentation
o The F5 SDK code is already obfuscated and compressed; care should be taken not to include this portion of the revised application in existing obfuscation procedures
o Within Android Studio, expand and adjust the Gradle scripts as per the documentation
o Initialize the Mobile SDK; specific to Android and the Application class, the detailed functions utilized are available in the documentation, including finished initialization examples in Java and Kotlin
o Specific to iOS, initialize the mobile SDK in AppDelegate; this includes adding APIGuardDelegate to the AppDelegate class as an additional protocol, and full examples are provided for Swift and Objective-C
o Both Android and iOS will require that GetHeaders functions be invoked through code additions for all traffic potentially to be decorated with telemetry, in accordance with the instructions of the base and downloaded configuration files

As demonstrated by the length of this list of high-level manual steps, which involve touching application code, the alternative ease and simplicity of the Promon Integrator two-command offering may be significant in many cases.

The platform requirements for the F5 Bot Defense and Promon paired solution are not arduous and frequently reflect libraries used in many native mobile applications developed today. These supported libraries include:

o Android: HttpURLConnection, OkHttp, Retrofit
o iOS: NSURLSession, URLSession, Alamofire

Finally, mobile applications that utilize WebViews, which is to say applications using browser-type technologies such as Cascading Style Sheets or JavaScript to implement the app, are not applicable to the F5 Bot Defense SDK approach. In some cases, entirely WebView-implemented applications may be candidates for support through the browser-style, JavaScript-oriented F5 Bot Defense telemetry gathering.

Summary

With the simplicity of the F5 and Promon workflow, this streamlined approach to integrating the anti-bot technology into a mobile app ecosystem allows for rapid, iterative usage. In development environments following modern CI/CD (continuous integration/continuous deployment) paradigms, with build servers creating frequently updated variants of native mobile apps, one could invoke the two steps of the Promon SDK Integrator daily; there are no volume-based consumption constraints.
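In such a pipeline the two documented commands are easy to script. The sketch below is one way a build job might chain them for an Android artifact, with an optional re-signing step afterward; the wrapper itself, the keystore name and the apksigner invocation are illustrative assumptions rather than part of the F5 or Promon tooling.

    # Minimal CI sketch (not an official F5 or Promon tool) chaining the two
    # documented integration commands for an Android build, then optionally
    # re-signing the result. Paths, file names and the keystore are illustrative.
    import subprocess

    def run(cmd: list) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)  # fail the pipeline if any step fails

    def integrate_android(input_apk: str, output_apk: str, keystore: str = "") -> None:
        # Step 1: generate the SDK Integrator configuration for this build.
        run([
            "python3", "create_config.py",
            "--target-os", "Android",
            "--apiguard-config", "./base_android_config.json",
            "--url-filter", "*.domain.com",
            "--enable-logs",
            "--outfile", "sdk_integrator_android_plugin_config.dat",
        ])
        # Step 2: inject the F5 Bot Defense SDK into the compiled binary.
        run([
            "java", "-jar", "Shielder.jar",
            "--plugin", "F5ShapeSDK-Android-Shielder-plugin-1.0.3.dat",
            "--plugin", "sdk_integrator_android_plugin_config.dat",
            input_apk,
            "--output", output_apk,
        ])
        # Optional: re-sign, since the binary hash changes after injection.
        if keystore:
            run(["apksigner", "sign", "--ks", keystore, output_apk])

    if __name__ == "__main__":
        integrate_android("./input_app.apk", "./output_app.apk", keystore="release.jks")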
Incident Remediation with Cisco Firepower and F5 SSL Orchestrator

SSL Orchestrator Configuration Steps

This guide assumes you have a working SSL Orchestrator Topology, either Incoming or Outgoing, and you want to add a Cisco Firepower TAP Service. Both Topology types are supported, and configuration of the Cisco Remediation is the same. If you do not have a working SSL Orchestrator Topology you can refer to the BIG-IP SSL Orchestrator Dev Central article series for full configuration steps.

In this guide we will outline the necessary steps to deploy the Cisco FTD with SSL Orchestrator. FTD can be deployed as a Layer 2/3 or TAP solution. SSL Orchestrator can be deployed as a Layer 2 or 3 solution. SSL Orchestrator gives you the flexibility to deploy in the manner that works best for you. As an example, SSL Orchestrator can be deployed in Layer 2 mode while FTD is deployed in Layer 3 mode, and vice versa.

A familiarity with BIG-IP deployment concepts and technology as well as basic networking is essential for configuring and deploying the SSL Orchestrator components of the BIG-IP product portfolio. For further details on the configuration and networking setup of the BIG-IP, please visit the F5 support site at https://support.f5.com.

The SSL Orchestrator Guided Configuration will walk you through configuration of the Services (Firepower nodes), Security Policy and more. Lastly, iRules will be applied.

Guided Configuration: Create Services

We will use the Guided Configuration wizard to configure most of this solution, though there are a few things that must be done outside of the Guided Configuration. In this example we will be working with an existing L2 Outbound Topology.

From the BIG-IP Configuration Utility click SSL Orchestrator > Configuration > Services > Add. For Service Properties, select Cisco Firepower Threat Defense TAP then click Add. Give it a name. Enter the Firepower MAC Address (or 12:12:12:12:12:12 if it is directly connected to the SSL Orchestrator). For the VLAN choose Create New, give it a Name (Firepower in this example) and select the correct interface (2.2 in this example). If you configured the VLAN previously then choose Use Existing and select it from the drop-down menu.

Note: A VLAN Tag can be specified here if needed.

Enabling the Port Remap is optional. Click Save & Next.

Click the Service Chain name you wish to configure, sslo_SC_ServiceChain in this example.

Note: If you don't have a Service Chain you can add one now.

Highlight the Firepower Service and click the arrow in the middle to move it to the Selected side. Click Save. Click Save & Next. Then click Deploy.

Configuration of iRules and Virtual Servers

We will create two iRules and two Virtual Servers. The first iRule listens for HTTP requests from the Firepower device. Firepower then responds via its Remediation API and sends an HTTP request containing an IP address and a timeout value. The address will be the source IP that is to be blocked by the SSL Orchestrator; the SSL Orchestrator will continue to block it for the duration of the timeout period. For details and tutorials on iRules, please consult the F5 DevCentral site at https://devcentral.f5.com.

Create the first iRule on the SSL Orchestrator. Within the GUI, select Local Traffic > iRules then choose Create. Give it a name (FTD-Control in this example) then copy/paste the iRule text into the Definition field. Click Finished. This iRule will be associated with the Control Virtual Server.
iRule text:

    when HTTP_REQUEST {
        if { [URI::query [HTTP::uri] "action"] equals "blocklist" } {
            set blockingIP [URI::query [HTTP::uri] "sip"]
            set IPtimeout [URI::query [HTTP::uri] "timeout"]
            table add -subtable "blocklist" $blockingIP 1 $IPtimeout
            HTTP::respond 200 content "$blockingIP added to blocklist for $IPtimeout seconds"
            return
        }
        HTTP::respond 200 content "You need to include an ? action query"
    }

Create the second iRule by clicking Create again. Give it a name (FTD-Protect in this example) then copy/paste the iRule text into the Definition field. Click Finished. This iRule will be associated with the Protect Virtual Server.

iRule text:

    when CLIENT_ACCEPTED {
        set srcip [IP::remote_addr]
        if { [table lookup -subtable "blocklist" $srcip] != "" } {
            drop
            log local0. "Source IP on block list "
            return
        }
    }

Create the Virtual Servers: from Local Traffic select Virtual Servers > Create. Give it a name, FTD-Control in this example. The Type should be Standard. Enter "0.0.0.0/0" for the Source Address Host. This indicates any Source Address will match. The Destination Address/Mask is the IP address the SSL Orchestrator will listen on to accept API requests from Firepower. In this example it's "10.5.9.77/32", which indicates that the SSL Orchestrator will only respond to connections TO that single IP address.

Note: The Destination Address/Mask must be in the same subnet as the 2nd Management Interface on the Firepower Management Center. We'll go over this later.

For VLANs and Tunnels Traffic it is preferred for this to be enabled on the specific VLAN that the Firepower 2nd Management Interface will be using, rather than All VLANs and Tunnels. Choose Enabled on… and select the same VLAN that the Firepower 2nd Management Interface will be using, in this example vlan509. Click the double << to move the VLAN to Selected.

In the Resources section click the FTD-Control iRule created previously. Click the double << to move it to Enabled. Click Finished when done.

Click Create again. Give it a name, FTD-Protect in this example. Set the Type to Forwarding (IP). The Source Address in this example is set to 10.4.11.152/32. This Virtual Server will only accept connections with a Source IP of 10.4.11.152. It is being done this way for testing purposes to make sure everything works with a single test client. With an Incoming Topology the Source Address might be set to 0.0.0.0/0, which would allow connections from anywhere. The 10.5.11.0 network is the Destination the 10.4.11.0 network must take to pass through SSL Orchestrator.

Under Available, select the ingress VLAN the SSL Orchestrator is receiving traffic on, Direct_all_vlan_511_2 in this example. Click the double << in the middle to move it from Available to Selected.

In the Resources section click the FTD-Protect iRule created previously. Click the double << to move it to Enabled. Click Finished when done.

Steps Performed:
1. Firepower TAP Service created
2. iRules created
3. Virtual Servers created
4. iRules attached to Virtual Servers

Cisco Firepower (FTD) Setup and Configuration

This guide assumes you have Cisco Firepower and Firepower Management Center (FMC) deployed, licensed and working properly. After logging into the Firepower Management Center you will see the Summary Dashboard. Click System > Configuration to configure the Management settings. Click Management Interfaces on the left.
A Management Interface on FMC must be configured for Event Traffic. This interface MUST be on the same subnet as the Control Virtual Server on SSL Orchestrator (10.5.9.77). If using a Virtual Machine for FMC you need to add a 2nd NIC within the Hypervisor console. Refer to your Hypervisor admin guide for more information on how to do this. To configure the 2nd Management Interface click the pencil icon. Click Save when done.

Firepower Access Policy

This guide assumes that Intrusion and Malware policies are enabled for the Firepower device.

Firepower Remediation Policies

Next, we need to create a Firepower Remediation Policy. A Remediation policy can take a variety of different actions based on an almost infinite set of criteria. For example, if an Intrusion Event is detected, Firepower can tell SSL Orchestrator to block the Source IP for a certain amount of time.

From FMC click Policies > Responses > Modules. The F5 Remediation Module is installed here. Click Browse to install the Module. Locate the Module on your computer and select it, click Open then Install. Click the magnifying glass on the right after it's installed.

Note: The F5 Remediation Module can be downloaded from a link at the bottom of this article.

Click Add to Configure an Instance. Give it a name, Block_Bad_Actors in this example. Specify the IP address of the SSL Orchestrator Control Virtual Server, 10.5.9.77 in this example. Optionally change the Timeout and click Create. Next, configure a Remediation by clicking Add. Give it a name, RemediateBlockIP in this example, and click Create.

Select Policies > Correlation > Create Policy to create a Correlation Policy to define when/how to initiate the Remediation. Give it a name, Remediation in this example, and click Save. From the Rule Management tab click Create Rule. Give it a name, RemediateRule in this example. For the type of event select 'an intrusion event occurs' from the drop-down menu. For the Condition select Source Country > is > North Korea > Save.

Note: FMC can trigger a Remediation for a variety of different events, not just for Intrusion. In fact, while configuring Remediation you might want to use a different Event Type to make it easier to trigger an event and verify it was successfully remediated. For example, you could choose 'a connection event occurs' then set the Condition to URL > contains the string > "foo". In this way the Remediation rule should trigger if you attempt to go to the URL foo.com.

Go back to Policy Management and click the Policy created previously, Remediation in this example. Click Add Rules. Select the RemediateRule and click Add. Click Save. Correlation policies can be enabled or disabled using the toggle on the right. Make sure the correct policy is enabled.

Remediated Policy Reporting

The status of Remediation Events can be viewed from Analysis > Correlation > Status. Here we can see the "Successful completion of remediation" message.

Conclusion

This concludes the recommended practices for configuring F5 BIG-IP SSL Orchestrator with the Cisco FTD. The architecture has been demonstrated to address both the SSL visibility and control and the IPS Policy Based Traffic Steering and Blocking user scenarios. With the termination of SSL on the SSL Orchestrator, FTD sensors are provided visibility into both ingress and egress traffic to adapt and protect an organization's applications, servers and other resources.
By leveraging Security Policy Based Traffic Steering, an organization can continue to scale this configuration through the addition of more FTD managed devices in order to provide more traffic capacity for the protected networks and applications. This policy-based flexibility provided by the SSL Orchestrator can also be leveraged to selectively direct traffic to different pools of resources based on business, security or compliance requirements.
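As a final verification, the Remediation API path described earlier can be exercised without waiting for a correlation event to fire. The sketch below issues the same style of request the F5 Remediation Module sends, which the FTD-Control iRule answers by adding the address to its blocklist subtable for the requested number of seconds; the control virtual server address comes from the example configuration and the blocked source IP is illustrative.

    # Quick manual test of the FTD-Control virtual server, sending the same style
    # of request the F5 Remediation Module issues. The control VS address comes
    # from the example configuration; the blocked source IP is illustrative.
    import requests

    CONTROL_VS = "http://10.5.9.77"

    resp = requests.get(
        CONTROL_VS,
        params={
            "action": "blocklist",   # matched by the FTD-Control iRule
            "sip": "203.0.113.45",   # source IP to add to the blocklist subtable
            "timeout": "300",        # seconds the entry remains in the table
        },
        timeout=5,
    )
    print(resp.status_code, resp.text)
    # Expected: 200 and "203.0.113.45 added to blocklist for 300 seconds"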
F5 Friday: It is now safe to enable File Upload

Web 2.0 is about sharing content – user generated content. How do you enable that kind of collaboration without opening yourself up to the risk of infection? Turns out developers and administrators have a couple of options…

The goal of many a miscreant is to get files onto your boxen. The second step after that is often remote execution or merely the hope that someone else will look at/execute the file and spread chaos (and viruses) across your internal network. It's a malicious intent, to be sure, and makes developing/deploying Web 2.0 applications a risky proposition. After all, Web 2.0 is about collaboration and sharing of content, and if you aren't allowing the latter it's hard to enable the former.

Most developers know about and have used the ability to upload files of just about any type through a web form. Photos, documents, presentations – these types of content are almost always shared through an application that takes advantage of the ability to upload data via a simple web form. But if you allow users to share legitimate content, it's a sure bet (more sure even than answering "yes" to the question "Will it rain in Seattle today?") that miscreants will find and exploit the ability to share content. Needless to say, information security professionals are therefore not particularly fond of this particular "feature" and in some organizations it is strictly verboten (that's forbidden for you non-German speakers).

So wouldn't it be nice if developers could continue to leverage this nifty capability to enable collaboration? Well, all you really need to do is integrate with an anti-virus scanning solution and only accept that content which is deemed safe, right? After all, that's good enough for e-mail systems, and developers should be able to argue that the same should be good enough for web content, too. The bigger problem is in the integration. Luckily, ICAP (Internet Content Adaptation Protocol) is a fairly ready answer to that problem.

SOLUTION: INTEGRATE ANTI-VIRUS SCANNING via ICAP

The Internet Content Adaptation Protocol (ICAP) is a lightweight HTTP-based protocol specified in RFC 3507 designed to off-load specific content to dedicated servers, thereby freeing up resources and standardizing the way in which features are implemented. ICAP is generally used in proxy servers to integrate with third party products like antivirus software, malicious content scanners and URL filters. ICAP in its most basic form is a "lightweight" HTTP based remote procedure call protocol. In other words, ICAP allows its clients to pass HTTP based (HTML) messages (Content) to ICAP servers for adaptation. Adaptation refers to performing the particular value added service (content manipulation) for the associated client request/response.
-- Wikipedia, ICAP

Now obviously developers can take advantage of ICAP and integrate with an anti-virus scanning solution directly. All that's required is to extract every file in a multi-part request and then send each of them to an AV-scanning service and determine, based on the result, whether to continue processing or toss those bits into /dev/null. This is assuming, of course, that it can be integrated: packaged applications may not offer the ability, and even open-source software which ostensibly does may be in a language or use frameworks that require skills the organization simply does not have. Or perhaps the cost over time of constantly modifying the application after every upgrade/patch is just not worth the effort.
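For teams that do go the direct-integration route, the shape of that code is straightforward. The sketch below uses Flask purely for illustration; the scan call is a stub because the details depend on the AV product's ICAP endpoint or client library, but the flow (extract each uploaded file, scan it, reject the request if anything is flagged) is the part described above.

    # Sketch of the in-application approach described above (not the ASM option):
    # pull each file out of the multipart request and refuse the upload unless an
    # AV scan clears it. The scan itself is a stub; a real deployment would call
    # the AV engine over ICAP (RFC 3507) or via whatever client library it offers.
    from flask import Flask, abort, request

    app = Flask(__name__)

    def av_scan_is_clean(filename: str, content: bytes) -> bool:
        """Stub: hand `content` to the ICAP/AV service and return True only if clean."""
        raise NotImplementedError("wire this to your AV scanner's ICAP endpoint")

    @app.route("/upload", methods=["POST"])
    def upload():
        for field_name, stored_file in request.files.items():
            data = stored_file.read()
            if not av_scan_is_clean(stored_file.filename, data):
                # Toss the bits into /dev/null and tell the user why.
                abort(403, description=f"{stored_file.filename} failed the anti-virus scan")
            # ...persist the clean file here...
        return "upload accepted", 200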
For applications for which you can add this integration, it should be fairly simple, as developers are generally familiar with HTTP and RPC and understand how to use "services" in their applications. Of course, this being an F5 Friday post, you can probably guess that I have an alternative (and of course more efficient) solution than integration into the code: an external solution that works for custom as well as packaged applications and requires a lot less long term maintenance – a WAF (Web Application Firewall).

BETTER SOLUTION: web application firewall INTEGRATION

The latest greatest version (v10.2) of F5 BIG-IP Application Security Manager (ASM) included a little-touted feature that makes integration with an ICAP-enabled anti-virus scanning solution take approximately 15.7 seconds to configure (YMMV). Most of that time is likely logging in and navigating to the right place. The rest is typing the information required (server host name, IP address, and port number) and hitting "save".

F5 Application Security Manager (ASM) v10 includes easy integration with a/v solutions

It really is that simple. The configuration is actually an HTTP "class", which can be thought of as a classification of sorts. In most BIG-IP products a "class" defines a type of traffic closely based on a specific application protocol, like HTTP. It's quite polymorphic in that defining a custom HTTP class inherits the behavior and attributes of the "parent" HTTP class and your configuration extends that behavior and attributes, and in some cases allows you to override default (parent) behavior. The ICAP integration is derived from an HTTP class, so it can be "assigned" to a virtual server, a URI, a cookie, etc…

In most ASM configurations an HTTP class is assigned to a virtual server and therefore it sees all requests sent to that server. In such a configuration ASM sees all traffic and thus every file uploaded in a multipart payload, and will automatically extract it and send it via ICAP to the designated anti-virus server where it is scanned. The action taken upon a positive result, i.e. the file contains bad juju, is configurable. ASM can block the request and present an informational page to the user while logging the discovery internally, externally or both. It can forward the request to the web/application server with the virus and log it as well, allowing the developer to determine how best to proceed. ASM can be configured to never allow requests to reach the web/application server that have not been scanned for viruses using the "Guarantee Enforcement" option. When configured, if the anti-virus server is unavailable or doesn't respond, requests will be blocked. This allows administrators to configure a "fail closed" option that absolutely requires AV scanning before a request can be processed.

A STRATEGIC POINT of CONTROL

Leveraging a strategic point of control to provide AV scanning integration and apply security policies regarding the quality of content has several benefits over its application-modifying, code-based integration cousin:

Allows integration of AV scanning in applications for which it is not feasible to modify the application, for whatever reason (third-party, lack of skills, lack of time, long term maintenance after upgrades/patches)

Reduces the resource requirements of web/application servers by offloading the integration process and only forwarding valid uploads to the application.
In a cloud-based or other pay-per-use model this reduces costs by eliminating the processing of invalid requests by the application.

Aggregates logging/auditing and provides consistency of logs for compliance and reporting, especially to prove "due diligence" in preventing infection.

Related Posts
All F5 Friday Entries on DevCentral
All About ASM

F5 Friday: HP Cloud Maps Help Navigate Server Flexing with BIG-IP
The economy of scale realized in enterprise cloud computing deployments is as much (if not more) about process as it is products. HP Cloud Maps simplify the former by automating the latter.

When the notion of "private" or "enterprise" cloud computing first appeared, it was dismissed as being a non-viable model due to the fact that the economy of scale necessary to realize the true benefits was simply not present in the data center. What was ignored in those arguments was that the economy of scale desired by enterprises large and small was not necessarily that of technical resources, but of people. The widening gap between people and budgets and data center components was a primary cause of data center inefficiency. Enterprise cloud computing promised to relieve the increasing burden on people by moving it back to technology through automation and orchestration.

Achieving such a feat – and it is a non-trivial feat – required an ecosystem. No single vendor could hope to achieve the automation necessary to relieve the administrative and operational burden on enterprise IT staff because no data center is ever comprised of components provided by a single vendor. Partnerships – technological and practical partnerships – were necessary to enable the automation of processes spanning multiple data center components and achieve the economy of scale promised by enterprise cloud computing models.

HP, while providing a wide variety of data center components itself, has nurtured such an ecosystem of partners. Combined with its HP Operations Orchestration, such technologically-focused partnerships have built out an ecosystem enabling the automation of common operational processes, effectively shifting the burden from people to technology and resulting in a more responsive IT organization.

HP CLOUD MAPS

One of the ways in which HP enables customers to take advantage of such automation capabilities is through Cloud Maps. Cloud Maps are similar in nature to F5's Application Ready Solutions: a package of configuration templates, guides and scripts that enable repeatable architectures and deployments. Cloud Maps, according to HP's description:

HP Cloud Maps are an easy-to-use navigation system which can save you days or weeks of time architecting infrastructure for applications and services. HP Cloud Maps accelerate automation of business applications on the BladeSystem Matrix so you can reliably and consistently fast-track the implementation of service catalogs.

HP Cloud Maps enable practitioners to navigate the complex operational tasks that must be accomplished to achieve even what seems like the simplest of tasks: server provisioning. They enable automation of incident resolution, change orchestration and routine maintenance tasks in the data center, providing the consistency necessary to enable more predictable and repeatable deployments and responses to data center incidents.

Key components of HP Cloud Maps include:

Templates for hardware and software configuration that can be imported directly into BladeSystem Matrix

Tools to help guide planning

Workflows and scripts designed to automate installation more quickly and in a repeatable fashion

Reference whitepapers to help customize Cloud Maps for specific implementations

HP CLOUD MAPS for F5 NETWORKS

The partnership between F5 and HP has resulted in many data center solutions and architectures.
HP's Cloud Maps for F5 Networks today focuses on what HP calls server flexing – the automation of server provisioning and de-provisioning on-demand in the data center. It is designed specifically to work with F5 BIG-IP Local Traffic Manager (LTM) and provides the configuration and deployment templates, scripts and guides necessary to implement server flexing in the data center.

The Cloud Map for F5 Networks can be downloaded free of charge from HP and comprises:

The F5 Networks BIG-IP reference template to be imported into HP Matrix infrastructure orchestration

Workflow to be imported into HP Operations Orchestration (OO)

XSL file to be installed on the Matrix CMS (Central Management Server)

Perl configuration script for BIG-IP

White papers with specific instructions on importing reference templates, workflows and configuring BIG-IP LTM are also available from the same site. The result is an automation providing server flexing capabilities that greatly reduces the manual intervention necessary to auto-scale and respond to capacity-induced events within the data center. A brief sketch of the underlying pool operation such a workflow performs appears after the reference list below.

Happy Flexing!

Server Flexing with F5 BIG-IP and HP BladeSystem Matrix
HP Cloud Maps for F5 Networks
F5 Friday: The Dynamic Control Plane
F5 Friday: The Evolution of Reference Architectures to Repeatable Architectures
All F5 Friday Posts on DevCentral
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
What is a Strategic Point of Control Anyway?
The F5 Dynamic Services Model
Unleashing the True Potential of On-Demand IT
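To make "server flexing" concrete, the step such a workflow ultimately performs against BIG-IP is joining or removing a pool member as capacity is provisioned or reclaimed. The sketch below expresses that step against the iControl REST API rather than the packaged Perl script; the host name, credentials, pool and member values are illustrative assumptions, not part of the Cloud Map.

    # Sketch of the step a "server flexing" workflow automates against BIG-IP,
    # expressed via the iControl REST API rather than the packaged Perl script.
    # Host, credentials, pool and member names are illustrative assumptions.
    import requests

    BIGIP = "https://bigip.example.com"
    AUTH = ("admin", "admin-password")
    POOL = "~Common~app_pool"

    def add_pool_member(member: str) -> None:
        """Join a newly provisioned server (e.g. "10.1.1.21:80") to the pool."""
        r = requests.post(
            f"{BIGIP}/mgmt/tm/ltm/pool/{POOL}/members",
            json={"name": member, "address": member.split(":")[0]},
            auth=AUTH,
            verify=False,  # lab sketch only; validate certificates in production
        )
        r.raise_for_status()

    def remove_pool_member(member: str) -> None:
        """Remove a de-provisioned server from the pool."""
        r = requests.delete(
            f"{BIGIP}/mgmt/tm/ltm/pool/{POOL}/members/~Common~{member}",
            auth=AUTH,
            verify=False,
        )
        r.raise_for_status()

    add_pool_member("10.1.1.21:80")
    # ...later, when demand subsides and the instance has been drained...
    remove_pool_member("10.1.1.21:80")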
SaaS Creating Eventually Consistent Business Model

Our reliance on #cloud and external systems has finally trickled down (or is it up?) to the business.

The success of SOA, which grew out of the popular Object Oriented development paradigm, was greatly hampered by the inability of architects to enforce its central premise of reuse. But it wasn't just the lack of reusing services that caused it to fail to achieve the greatness predicted; it was the lack of adopting the idea of an authoritative source for business critical objects, i.e. data. A customer, an order, a lead, a prospect, a service call. These "business objects" within SOA were intended to be represented by a single, authoritative source as a means to ultimately provide a more holistic view of a customer that could then be used by various business applications to ensure more quality service.

It didn't turn out that way, more's the pity, and while organizations adopted the protocols and programmatic methods associated with SOA, they never really got down to the business of implementing authoritative sources for business critical "objects".

As organizations increasingly turn to SaaS solutions, particularly for CRM and SFA solutions (Gartner's Market Trends: SaaS's Varied Levels of Cannibalization to On-Premises Applications, published 29 October 2012), the ability to enforce a single, authoritative source becomes even more unpossible. What's perhaps even more disturbing is the potential inability to generate that holistic view of a customer that's so important to managing customer relationships and business processes.

The New Normal

Organizations have had to return to an integration-focused strategy in order to provide applications with the most current view of a customer. Unfortunately, that strategy often relies upon APIs from SaaS vendors who necessarily put limits on APIs that can interfere with that integration. As noted in "The Quest for a Cloud Integration Strategy", these limitations can strangle integration efforts to reassemble a holistic view of business objects as an organization grows:

"...many SaaS applications have very particular usage restrictions about how much data can be sent through their API in a given time window. It is critical that as data volumes increase that the solution adequately is aware of and handles those restrictions."

Note that the integration solution must be "aware of" and "handle" the restrictions. It is nearly a foregone conclusion that these limitations will eventually be met, and there is no real solution around them save paying for more, if that's even an option. While certainly that approach works for the provider - it keeps the service available - the definition of availability with respect to data is that it's, well, available. That means accessible. The existence of limitations means that at times and under certain conditions, your data will not be accessible, ergo by most folks' definition it's not available. If it's not available, the ability to put together a view of the customer is pretty much out of the question. But eventually, it'll get there, right? Eventually, you'll have the data. Eventually, the data you're basing decisions on, managing customers with, and basing manufacturing processes on, will be consistent with reality.

Kicking Costs Down the Road - and Over the Wall

Many point to exorbitant IT costs to set up, scale, and maintain on-premise systems such as CRM. It is true that a SaaS solution is faster and likely less expensive to maintain and scale.
But it is also true that if the SaaS is unable to scale along with your business in terms of your ability to access, integrate, and analyze your own data, then you're merely kicking those capital and operating expenses down the road - and over the wall to the business.

The problem of limitations on cloud integration (specifically SaaS integration) methods is not trivial. A perusal of support forums shows a variety of discussion on how to circumvent, avoid, and work around these limitations to enable timely integration of data with other critical systems upon which business stakeholders rely to carry out their daily responsibilities to each other, to their investors, and to their customers.

Fulfillment, for example, may rely on data it receives as a result of integration with a SaaS. It is difficult to estimate fulfillment on data that may or may not be up to date and thus may not be consistent with the customer's view. Accounting may be relying on data it assumes is accurate, but actually is not. Most SaaS systems impose a 24 hour interval in which they enforce API access limits, which may set the books off by as much as a day - or more, depending on how much of a backlog may occur. Customers may be interfacing with systems that integrate with back-office SaaS that shows incomplete order histories, payments and deliveries, which in turn can result in increasing call center costs to deal with the inaccuracies. The inability to access critical business data has a domino effect on every other system in place. The more distributed the sources of authoritative data, the more disruptive an effect the inability to access that data due to provider-imposed limitations has on the entire business.

Eventually consistent business models are not optimal, yet the massive adoption of SaaS solutions makes such a model inevitable for organizations of all sizes as they encounter artificial limitations imposed to ensure system-wide availability but not necessarily individual data accessibility. Being aware of such limitations can enable the development and implementation of strategies designed to keep data - especially authoritative data - as consistent as possible. But ultimately, any strategy is going to be highly dependent upon the provider and its ability to scale to meet demand - and loosen limitations on accessibility.
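What being "aware of and handling those restrictions" looks like in practice is mundane but important. The sketch below is one illustrative strategy, not tied to any particular SaaS: honor the provider's rate-limit signals and sync incrementally from a stored high-water mark, so authoritative records arrive as soon as the API window allows rather than being silently dropped. The endpoint, parameters and response shape are assumptions for the example.

    # One illustrative "limitation-aware" strategy: honor the provider's rate-limit
    # signals (HTTP 429 plus Retry-After) and sync incrementally from a stored
    # high-water mark, so authoritative records are pulled as soon as the API
    # window allows. Endpoint, parameters and response shape are assumptions.
    import time
    import requests

    API = "https://saas.example.com/api/customers"
    TOKEN = "replace-with-a-real-token"

    def fetch_changed_since(watermark: str) -> list:
        while True:
            resp = requests.get(
                API,
                headers={"Authorization": f"Bearer {TOKEN}"},
                params={"modified_since": watermark},
                timeout=30,
            )
            if resp.status_code == 429:  # the provider says the window is exhausted
                wait = int(resp.headers.get("Retry-After", "60"))
                time.sleep(wait)         # the data is late, but not silently lost
                continue
            resp.raise_for_status()
            return resp.json()

    records = fetch_changed_since("2024-01-01T00:00:00Z")
    # Upsert into the local authoritative store, then advance the stored watermark.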
F5 Friday: Gracefully Scaling Down

What goes up, must come down. The question is how much it hurts (the user).

An oft-ignored side of elasticity is scaling down. Everyone associates scaling out/up with elasticity of cloud computing, but the other side of the coin is just as important, maybe more so. After all, what goes up must come down. The trick is to scale down gracefully, i.e. to do it in such a way as to prevent the disruption of service to existing users while simultaneously trying to scale back down after a spike in demand.

The ramifications of not scaling down are real in terms of utilization and therefore cost. Scaling up without the means to scale back down means higher costs, and simply shutting down an instance that is currently in use can result in angry users as service is disrupted. What's necessary is the ability to gracefully scale down: to indicate somehow to the load balancing solution that a particular instance is no longer necessary and to begin preparation for eventually shutting it down. Doing so gracefully requires that you are somehow able to quiesce or bleed off the connections. You want to continue to service those users who are currently connected to the instance while not accepting any new connections. This is one of the benefits of leveraging an application-aware application delivery controller versus a simple load balancer: the ability to receive instruction in-process to begin preparation for shutdown without interrupting existing connections.

SERVING UP ACTIONABLE DATA

BIG-IP users have always had the ability to specify whether disabling a particular "node" or "member" results in the rejection of all connections (including existing ones) or in refusing new connections while allowing old ones to continue to completion. The latter technique is often used in preparation for maintenance on a particular server for applications (and businesses) that are sensitive to downtime; it maintains availability while accommodating necessary maintenance. In version 10.2 of the core BIG-IP platform a new option was introduced that more easily enables the process of draining a server/application's connections in preparation for being taken offline. Whether the purpose is maintenance or simply the scaling-down side of elastic scalability is really irrelevant; the process is much the same.

Being able to direct a load balancing service in the way in which connections are handled from the application is an increasingly important capability, especially in a public cloud computing environment, because you are unlikely to have the direct access to the load balancing system necessary to manually engage this process. By providing the means by which an application can not only report to but also direct the load balancing service, some measure of customer control over the deployment environment is re-established without introducing the complexity of requiring the provider to manage the thousands (or more) credentials that would otherwise be required to allow this level of control over the load balancer's behavior.

HOW IT WORKS

For specific types of monitors in LTM (Local Traffic Manager) – HTTP, HTTPS, TCP, and UDP – there is a new option called "Receive Disable String." This "string" is just that: a string that is found within the content returned from the application as a result of the health check.
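For the configuration-minded, here is a rough sketch of defining such a monitor programmatically through the iControl REST API found on later BIG-IP versions (this post predates it); the management address, credentials, and the camelCase field name recvDisable - the REST rendering of tmsh's recv-disable attribute - are assumptions on my part, so treat this as a sketch rather than copy-paste configuration:

```python
# A hedged sketch: create an HTTP monitor that pairs a "receive string" with a
# "receive disable string" via iControl REST. Hostname, credentials, and exact
# field names are assumptions; verify against your BIG-IP version's REST schema.
import requests

BIGIP = "https://bigip.example.com"   # hypothetical management address
AUTH = ("admin", "admin")             # hypothetical credentials

monitor = {
    "name": "app_health_check",
    "partition": "Common",
    # Probe the application's health URI...
    "send": "GET /health/check HTTP/1.1\r\nHost: app.example.com\r\nConnection: Close\r\n\r\n",
    # ...and interpret the response: this string marks the member fully available,
    "recv": "ENABLE",
    # while this one stops new connections but lets existing ones finish.
    "recvDisable": "QUIESCE",
    "interval": 5,
    "timeout": 16,
}

resp = requests.post(
    f"{BIGIP}/mgmt/tm/ltm/monitor/http",
    json=monitor,
    auth=AUTH,
    verify=False,   # many lab systems use self-signed certificates
)
resp.raise_for_status()
```

Attaching the monitor to a pool is a separate step; the relevant piece here is simply that the two strings described above travel with the monitor definition.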
In phase one we have three instances of an application (physical or virtual, it doesn't matter) that are all active. They all have active connections and are all receiving new connections. In phase two a health check on one server returns a response that includes the string "DISABLE ME." BIG-IP sees this and, because of its configuration, knows that this means the instance of the application needs to gracefully go offline. LTM therefore continues to direct existing connections (sessions) with that instance to the right application (phase 3), but subsequently directs all new connection requests to the other instances in the pool (farm, cluster). When there are no more existing connections the instance can be taken offline or shut down with zero impact to users.

The combination of "receive string" and "receive disable string" affects the way in which BIG-IP interprets the instruction. A "receive string" typically describes the content received that indicates an available and properly executing application. This can be as simple as "HTTP 200 OK" or as complex as looking for a specific string in the response. Similarly, the "receive disable" string is a particular string of text that signals a desire to disable the node and begin the process of bleeding off connections. This could be as simple as "DISABLE", as indicated in the above diagram, or it could just as easily be based solely on HTTP status codes. If an application instance starts returning 50x errors because it's at capacity, the load balancing policy might include a live disable of the instance to allow it time to cool down – maintaining existing connections while not allowing new ones. Because action is based on matching a specific string, the possibilities are pretty much wide open. (A table in the original post describes the possible interactions between the two receive string types.)

LEVERAGING as a PROVIDER

One of the ways in which a provider could leverage this functionality to provide differentiated, value-added cloud services (as Randy Bias calls them) would be to define an application health monitoring API of sorts that allows customers to add to their application a specific set of URIs used solely for monitoring, which can thus control the behavior of the load balancer without requiring per-customer access to the infrastructure itself. That's a win-win, by the way. The customer gets control, but so does the provider.

Consider a health monitoring API that is a single URI: http://$APPLICATION_INSTANCE_HOSTNAME/health/check. Now provide a set of three options for customers to return (these are likely oversimplified for illustration purposes, but not by much):

ENABLE
QUIESCE
DISABLE

For all application instances the BIG-IP will automatically use an HTTP-derived monitor that calls $APP_INSTANCE/health/check and examines the result. The monitor would use "ENABLE" as the "receive string" and "QUIESCE" as the "receive disable" string. Based on the string returned by the application, the BIG-IP takes the appropriate action (as defined by the interactions described above). Of course this can also easily be accomplished by providing a button on the cloud management interface to do the same via iControl, but this option is more easily defined programmatically by customers and thus is more dynamic and allows for automation. And of course such an implementation isn't relegated only to service providers; IT organizations in any environment can take advantage of it, especially if they're working toward an automated data center and/or self-service provisioning and management of IT services. That is infrastructure as a service.
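On the application side, the health URI can be almost trivially small. A minimal sketch, assuming Flask and an in-process state flag - both my choices for illustration, not part of any provider's API:

```python
# A minimal three-state health URI: the monitor looks for "ENABLE" (receive string)
# or "QUIESCE" (receive disable string) in the body and acts accordingly.
from flask import Flask

app = Flask(__name__)

# In a real deployment this would be driven by deployment tooling or capacity
# signals (queue depth, error rates); a module-level flag keeps the sketch short.
STATE = {"mode": "ENABLE"}   # one of ENABLE, QUIESCE, DISABLE


@app.route("/health/check")
def health_check():
    # The monitor only needs to find the configured string somewhere in the
    # response body, so returning the bare keyword is enough.
    return STATE["mode"]


@app.route("/admin/drain", methods=["POST"])
def drain():
    # Flip this instance into QUIESCE: existing connections continue,
    # new ones are sent to the other members of the pool.
    STATE["mode"] = "QUIESCE"
    return "draining", 202


if __name__ == "__main__":
    app.run(port=8080)
```

The design point worth noting is that the application never talks to the load balancer directly; it only changes what it says about itself, and the monitor does the rest.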
Yes, this means modification to the application being deployed. No, I don't think that's a problem – cloud and Infrastructure as a Service (IaaS), at least real IaaS, is necessarily going to require modifications to existing applications, and new applications will need to include this type of integration in the future if we are to take advantage of the benefits afforded by a more application-aware infrastructure and, conversely, a more infrastructure-aware application architecture.

Tech Fractals: Technology Trends and Integration
#IDAM #Cloud #SSO Patterns repeat. Anything else is irrational.

First, the paragraph that spawned this post:

The increasing use of cloud-based services is driving the need for better and more interactive single sign-on (SSO) and federated identity management (FIM) services. It is building relationship dependencies between businesses, their partners and suppliers, and customers.
-- Ovum Research, "Cloud: Transforming the IAM Industry"

I beg to differ on the conclusion that cloud is "transforming" the IAM industry. It's pretty much the same as it's ever been. Single sign-on (SSO) is still about protocol transitioning; it's just the case that protocols have been abstracted into APIs. Federated identity management (FIM) is SAML wrapped up in a nice name. This is not transformational. Organizations have been integrating authentication and authorization across the Internet since after the dot-com bust. XML gateways, anyone? WS-SEC? Seriously, this is not transformational. At best it's evolutionary.

Now, if you know anything about fractals, you know that they're fascinating mathematical constructs because they are patterns built from micro-versions of the same pattern. If you look closely at one of my favorites, you can see the small "dragon" repeated to form the larger "dragon" in increasingly sized replicas of the same pattern. Fractals are fairly easily created using well-understood algorithms (okay, they're easy if you're a student of computer science and aren't afraid of math) and they are also found (and given cool names) in nature. Turns out they're also found in technology trend cycles.

Every new technology trend seems to pass through the same set of technologies as it matures. It's kind of like the Hype Cycle, only it's not focused on the maturity and value of the technology, but rather on the realization that a certain technology is suddenly applicable or necessary to take the next step toward maturation of the trending technology. Single sign-on and identity federation are two such technologies that appear in every technology trend cycle. Once adoption reaches about the halfway point (often considered mainstream), attention turns to enterprise-focused concerns: integration with corporate identity stores, and how to include the distribution and supply-chain channels in the buy- and sell-side process.

It's a pattern. It happened with Web-based applications. Remember Passport? The Liberty Alliance? It happened when SOA was the trend du jour: there were literally hundreds of WS-* standards created by OASIS, most of them emerging at about the same point in the technology trend cycle as they did with the Web. And today the technology du jour is cloud, so it should be no surprise that SSO and IDAM are rising to the fore. It's about time, after all. Adoption of cloud is well established and organizations are beginning to turn to more corporatey business concerns, like how do I control who is using my services and how do I integrate my channel into the process.

As SDN rises in ascendancy, we're going to see the same concerns raised in likely the same order. We've already started to see, peppered here and there, the inevitable "security" concerns that initially plagued and inhibited cloud adoption rise with respect to SDN. And soon after that we'll see interoperability with legacy networks rise to the fore as folks realize a hybrid approach (either transitory or by design) is necessary.

Patterns. They happen on an almost predictable timetable when it comes to technology trends.
Cloud and SDN are no different in that respect. The emergence of these concerns is not because of cloud; it's a natural progression that stems from the greater implementation and adoption process. If you want to know what the next big thing is going to be for any given technology trend, just examine the last trend we left lying on the side of the information superhighway.

Social Loginwall Failure
#devops #infosec #HTML5 It is not uncommon today to click an interesting link you see on Facebook only to be confronted by a "social loginwall". If you aren't familiar with that term, it's probably because I just made it up to describe the use of CSS overlays to "hide" the content you want behind a second overlay, usually containing a plaintive "login or register to see this content" dialog. It's annoying, particularly if it's a random site you're not sure you want to visit again and aren't comfortable openly sharing the gory details of your Facebook life with some third-party site. So what do you do? Close the tab? Swear? Sigh and move on? Not me, because, well, I can read a DOM, I'm a developer by trade, and Chrome has generously made sure I have access to a debugger that can modify, in real time, just about any piece of a page. That "delete node" option neatly eliminates the "social loginwall" with only minimal irritation on my part. A couple of clicks and voila! I'm reading what you thought you were gating.

The lesson here is that if your business model (and logic) requires that a visitor be logged in to see certain content, you'd better make sure that it's enforced somewhere other than on the client. C'mon. I've got marketing in my title, for crying out loud. If I can circumvent your attempts to enforce application logic flows then, well, lots of other people can, and honestly, there's probably a plug-in that will do it automatically for folks who aren't trained as developers.

DOMAIN (APPLICATION) LOGIC

It seems increasingly there's a disconnect as application architecture transitions from its traditional client-server model to a modern, API-based model. That disconnect is caused by the reality that the API is focused on data and business logic - not domain (or what we might call application) logic. So the logic that controls state, that controls access to data, ends up where it doesn't belong: on the client, in the presentation layer. And because the technologies used on the client, in the presentation layer, are almost exclusively* markup language that must be parsed and rendered, well... it's fairly easy to circumvent client-side application logic as well as the oft-times rudimentary security mechanisms. Evidence of that is seen in the OWASP Top Ten, where XSS and CSRF remain two of the top vulnerabilities developers (and devops) should be addressing. And yet the exigencies of the mobile explosion complexify (yes, I made that one up, too) addressing such issues. On the one hand, we could go back to a more traditional three-tier architecture, but that reduces the benefits of the emerging, API-centric model in which the server-side components are focused on data while the client worries about presentation (GUI). On the other hand is a new, emerging model that more concretely implements application best practices.

There's That Strategic Point of Control Again

That's the CLIENT → INTERMEDIARY → SERVER pattern, and it's important; it provides a lightweight, intermediate tier on which to provide security and application (domain) logic enforcement without disrupting the basic model. The proxy, like the application delivery controller model, provides a strategic point of control at which a variety of client- and server-side operational risks can be addressed.
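To make the loginwall lesson concrete, here is a minimal sketch of enforcing the "must be logged in" rule at the server (or at that intermediary) instead of behind a CSS overlay; Flask, the session mechanics, and the route names are illustrative assumptions:

```python
# Gated content stays on the server until the session proves it may be released;
# there is no overlay node to delete because the markup never contains the content.
from flask import Flask, redirect, session, url_for

app = Flask(__name__)
app.secret_key = "change-me"   # placeholder; a real deployment needs a real secret

ARTICLES = {"loginwall": "The gated article body lives only on the server."}


@app.route("/article/<slug>")
def article(slug):
    # The decision is made before any gated content leaves the server.
    if not session.get("user_id"):
        return redirect(url_for("login", next=slug))
    return ARTICLES.get(slug, "not found")


@app.route("/login")
def login():
    # Stand-in for a real login flow (form, OAuth, SAML, ...); it simply marks
    # the session as authenticated so the gate above opens.
    session["user_id"] = "demo"
    return redirect(url_for("article", slug="loginwall"))
```

Delete all the overlay nodes you like in the DOM inspector; the gated content never left the server in the first place.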
This point of control is also the appropriate place to provide metering governance. The technical point of metering is, after all, to reduce the load on services to ensure availability. If the service itself has to make the determination whether a request puts a user/application/partner over quota, it defeats the purpose, because the resources are being consumed anyway. Metering through an intermediary, however, insulates the service and provides a better assurance of availability. It also enables a programmatic point in the data path** where new authentication and authorization can be provided without modifying the service itself. Most important, however, is the elimination of as much application (domain) logic from the client as possible, to avoid the consequences of exploitation of both application and security-related logic.

*Plug-ins, while theoretically safer, are not without their own risks. See "Adobe Sandbox: When the Broker is Broken" for a good example of this.

**Starting to sound like Application Layer SDN? It should...
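Circling back to the metering point: below is a minimal sketch of quota and authentication enforcement at an intermediary, so that anonymous or over-limit requests are rejected before they consume the service's resources. The backend address, header name, quota, and in-memory counter are all illustrative assumptions:

```python
# A toy intermediary: authenticate, meter, then forward. The service behind it
# never spends cycles on requests that should have been refused.
from collections import defaultdict
from datetime import date

import requests
from flask import Flask, Response, abort, request

app = Flask(__name__)

BACKEND = "http://backend.internal:8080"   # hypothetical service behind the proxy
DAILY_QUOTA = 10_000                       # illustrative limit, not a real provider's number
usage = defaultdict(int)                   # (api_key, day) -> request count; in-memory for brevity


@app.route("/api/<path:path>", methods=["GET", "POST"])
def proxy(path):
    key = request.headers.get("X-Api-Key")
    if not key:
        abort(401)                         # authentication enforced here, not on the client
    bucket = (key, date.today().isoformat())
    if usage[bucket] >= DAILY_QUOTA:
        abort(429)                         # quota enforced here; the backend never sees the call
    usage[bucket] += 1
    upstream = requests.request(
        request.method,
        f"{BACKEND}/{path}",
        params=request.args,
        data=request.get_data(),
        timeout=30,
    )
    return Response(upstream.content, status=upstream.status_code)
```

A production intermediary would persist its counters and handle far more than GET and POST, but the shape is the same: the client never gets to negotiate directly with the service about what it is allowed to do.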