Load Balancing Omnissa Unified Access Gateway with BIG-IP LTM
This article highlights the renewed partnership between Omnissa and F5, focusing on updated guides for integrating F5 BIG-IP LTM with the Omnissa Unified Access Gateway (UAG) for Horizon. UAG, formerly known as VMware Unified Access Gateway, provides secure access to desktop and application resources for remote users. F5’s products enhance the reliability, scalability, and security of UAG deployments, particularly for large Horizon environments requiring multiple pods or data centers. The guide offers two deployment options: leveraging the existing iApp or using a manual method. Both approaches ensure efficient, seamless implementation tailored to users’ needs. The collaboration between F5 and Omnissa continues to drive faster deployments while preparing for future growth and requirements. Readers are also encouraged to explore the integration guide for F5 APM with Omnissa Workspace ONE.

5 Years Later: OpenAJAX Who?
Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence? I stumbled recently over a nearly five year old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like that of “cloud” today. You couldn’t turn around without hearing someone promoting their solution by associating with Web 2.0 or AJAX. After reading the opening paragraph I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade I’m well aware of how impactful “standards” and “specifications” really are in the real world, but the problem – interoperability across a growing field of JavaScript libraries – seemed at the time real and imminent, so there was a need for someone to address it before it completely got out of hand.

With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard—whether through a standards body or by de facto adoption—and Dojo is one of the favored entrants in the race to become that standard. -- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability.
The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications including one “industry standard”: OpenAjax Metadata.

OpenAjax Hub

The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata

OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification)

OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see the calling out of XML as the format of choice in the OpenAjax Metadata (OAM) specification given the recent rise to ascendancy of JSON as the preferred format for developers for APIs. Granted, when the alliance was formed XML was all the rage and it was believed it would be the dominant format for quite some time given the popularity of similar technological models such as SOA, but still – the reliance on XML while the plurality of developers race to JSON may provide some insight on why OpenAjax has received very little notice since its inception. Ignoring the XML factor (which undoubtedly is a fairly impactful one) there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) – a hub. A publish-subscribe hub, to be more precise, in which OAH mediates for various toolkits on the same page.
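The hub approach can be illustrated with a minimal publish-subscribe mediator. This is a conceptual sketch in Python only (the actual OpenAjax Hub is a JavaScript API with security managers and managed hubs; the class and topic names here are illustrative):

```python
# Minimal publish/subscribe hub sketch illustrating the OpenAjax Hub concept:
# toolkits on the same page register callbacks for named topics, and the hub
# mediates message delivery between them so they never call each other directly.
from collections import defaultdict

class Hub:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, data):
        # Deliver the message to every subscriber of this topic.
        for callback in self._subscribers[topic]:
            callback(topic, data)

# Two "toolkits" interoperate only through the hub, never directly.
received = []
hub = Hub()
hub.subscribe("ui.widget.selected", lambda t, d: received.append((t, d)))
hub.publish("ui.widget.selected", {"id": "chart1"})
```

This is exactly page-level integration: the hub standardizes the message bus, not the toolkit APIs themselves.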
Don summed it up nicely during a discussion on the topic: it’s page-level integration. This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going – and where it should go: an industry standard model and/or set of APIs to which other toolkit developers would design and write such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model – always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces with vendor implementation being decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes the Net go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration or failure – never incompatibility in form or function. Neither specification has really taken that direction. OAM – as previously noted – standardizes on XML and is primarily used to describe APIs and components – it isn’t an API or model itself. The Alliance wiki describes the specification: “The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers.” Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center.
Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but it does not appear to be gaining widespread support or mindshare despite IBM’s efforts. The focus of the OpenAjax interoperability efforts appears to be on a hub / integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems overkill. Conflicts between libraries, performance issues with load-times dragged down by the inclusion of multiple files, and simplicity tend to drive developers to a single library when possible (which is most of the time). It appears, simply, that the OpenAJAX Alliance – driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) – has chosen a target in another field; one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn’t – and likely won’t ever – exist. So it’s no surprise to discover that references to and activity from OpenAjax are nearly zero since 2009.
Given the statistics showing the rise of jQuery – both as a percentage of site usage and developer usage – to the top of the JavaScript library heap, it appears that at least the prediction that “one toolkit will become the standard—whether through a standards body or by de facto adoption” was accurate. Of course, since that’s always the way it works in technology, it was kind of a sure bet, wasn’t it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAJAX alliance several infrastructure vendors. Folks who produce application delivery controllers, switches and routers and security-focused solutions. This is not uncommon nor should it seem odd to the casual observer. All data flows, ultimately, through the network and thus, every component that might need to act in some way upon that data needs to be aware of and knowledgeable regarding the methods used by developers to perform such data exchanges. In the age of hyper-scalability and über security, it behooves infrastructure vendors – and increasingly cloud computing providers that offer infrastructure services – to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than is the case for traditional form-based, HTML applications. The type of content as well as the usage patterns for applications can dramatically impact the application delivery policies necessary to achieve operational and business objectives for that application. As developers standardize through selection and implementation of toolkits, vendors and providers can then begin to focus solutions specifically for those choices.
Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery, for example, reduces the time to deploy such applications in a production environment by eliminating the test and tweak cycle that occurs when applications are tossed over the wall to operations by developers. For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}. If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data – attempts at SQL injection, for example – must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application layer switching based on data values or submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, such that it can be parsed, interpreted and processed.
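The serialization behavior described in the quoted documentation can be sketched as follows. This is a simplified illustration (not jQuery itself) of how a data map becomes a urlencoded query string, and how a security device must parse it back before it can inspect the submitted values; the injection check shown is a toy pattern match, not a real WAF signature:

```python
# Simplified sketch of how jQuery.param() serializes a data map into an
# application/x-www-form-urlencoded query string -- the format a WAF or
# application delivery controller must parse to inspect submitted values.
from urllib.parse import urlencode, parse_qs

data = {"key1": "value1", "key2": "value2"}
query = urlencode(data)            # key1=value1&key2=value2

# A security device does the inverse: parse the body back into fields,
# then inspect each value, e.g. for a crude SQL injection pattern.
parsed = parse_qs(query)
suspicious = any("' OR " in v.upper()
                 for values in parsed.values() for v in values)
```

The point for infrastructure is simply that the parser must match the format the toolkit emits; a device expecting XML would see nothing useful in this body.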
By understanding jQuery – and other developer toolkits and standards used to exchange data – infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.

userid to ip mapping - F5 APM
I have been wrestling with how I can share user-to-IP mappings for VPN connections with internal security devices (namely Palo Alto firewalls). I found a few great suggestions on here regarding leveraging an iRule to accomplish this, and while they appeared to work, adding DTLS broke most of the examples provided. Reference: https://devcentral.f5.com/questions/userid-to-leasepool-ip-mapping

So I spent some time attempting to figure out how I could accomplish this with DTLS enabled, and this is what I came up with:

when CLIENT_ACCEPTED {
    ACCESS::restrict_irule_events disable
    set hsl [HSL::open -proto UDP -pool hsl_pa-uid_pool]
}
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/vdesk/timeoutagent-i.php" } {
        set vpnip [ACCESS::session data get "session.assigned.clientip"]
        log local0. "timeout beacon received"
        if { $vpnip != "" }{
            set user [ACCESS::session data get "session.logon.last.username"]
            # If the pa-vpn table entry for the IP does not equal the current user we need to update the firewall
            if { [table lookup -notouch "pa-vpn:$vpnip"] != $user } {
                HSL::send $hsl "<190>F5_PA_UID_Event uid:$user vpnip:$vpnip\n"
                log local0. "periodic: F5_PA_UID_Event uid:$user vpnip:$vpnip"
                table set "pa-vpn:$vpnip" "$user" "indef" 600
            }
        }
    }
}
when ACCESS_SESSION_CLOSED {
    set hsl [HSL::open -proto UDP -pool hsl_pa-uid_pool]
    set vpnip [ACCESS::session data get "session.assigned.clientip"]
    if { $vpnip != "" }{
        set user [ACCESS::session data get "session.logon.last.username"]
        HSL::send $hsl "<190>F5_PA_LOGOUT_Event uid:$user vpnip:$vpnip\n"
        log local0. "periodic: F5_PA_LOGOUT_Event uid:$user vpnip:$vpnip"
    }
}

My only concern with this implementation is performance impact. The /vdesk/timeoutagent-i.php request happens every 10 seconds or so, which means the set vpnip [ACCESS::session data get "session.assigned.clientip"] and [table lookup -notouch "pa-vpn:$vpnip"] will also occur. Is my concern warranted? Is there possibly a better implementation out there?
Any possible alleys that I might have missed?

F5 and Promon Have Partnered to Protect Native Mobile Applications from Automated Bots - Easily
This DevCentral article provides details on how F5 Bot Defense is used today to prevent mobile app automated bot traffic from wreaking havoc with origin server infrastructure and producing excessive nuisance load volumes, and in turn how the Promon Mobile SDK Integrator is leveraged to speed the solution's deployment. The Integrator tool contributes to the solution by allowing the F5 anti-bot solution to be inserted into customer native mobile apps, for both Apple and Android devices, with a simple and quick “No Code” approach that can be completed in a couple of minutes. No source code of the native mobile app need be touched; only the final compiled binaries are required to add the anti-bot security provisions as a quick, final step when publishing apps.

Retrieving Rich, Actionable Telemetry in a Browser World

In the realm of Internet browser traffic, whether the source be Chrome, Edge, Safari or any other standards-based web browser, the key to populating the F5 anti-bot analytics platform with actionable data is providing browsers clear instructions on when, specifically, to add telemetry to transactions as well as what telemetry is required. This leads to the determination, in real time, of whether this is truly human activity or instead automated, hostile traffic. The “when” normally revolves around the high-value server-side endpoint URLs, things like “login” pages, “create account” links, “reset password” or “forgot username” pages, all acting like honeypot pages where attackers will gravitate and direct brute-force bot-based attacks to penetrate deep into the service. All the while, each bot will be cloaked to the best of its abilities as a seemingly legitimate human user. Other high-value transaction endpoints would likely be URLs corresponding to adding items to a virtual shopping cart and checkout pages for commercial sites.
Telemetry, the “what” in the above discussion, is to be provided for F5 analytics and can range from dozens to over one hundred elements. One point of concern is inconsistencies in the User Agent field produced in the HTTP traffic, where a bot may indicate a Windows 10 machine using Chrome via the User Agent header, but various elements of telemetry retrieved indicate a Linux client using Firefox. This is eyebrow-raising: if the purported user is misleading the web site with regards to the User Agent, what is the traffic creator’s endgame? Beyond analysis of configuration inconsistencies, behavioral items are noted, such as unrealistic mouse motions at improbable speeds and uncanny precision suggesting automation, or perhaps the frequent use of paste functions to fill in form fields such as username, where values are more likely to be typed or auto-filled by browser cache features. Consider something as simple as the speed of a key being depressed and released: if signals indicate an input typing pace that defies the physics of a keyboard, just milliseconds for a keystroke, this is another strong warning sign of automation.

Telemetry in the Browser World

Telemetry can be thought of as dozens and dozens of intelligence signals building a picture of the legitimacy of the traffic source. The F5 Bot Defense request for telemetry, and the subsequently provided values, are fully obfuscated from bad actors and thus impossible to manipulate in any way. The data points lead to a real-time determination of the key aspect of this user: is this really a human or is this automation?
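The keystroke-timing signal described above can be sketched as a simple heuristic. The 30 ms floor and the 50% ratio here are illustrative assumptions for the sketch, not documented F5 thresholds:

```python
# Sketch of one behavioral signal described above: flag keystroke timings
# that defy the physics of a keyboard. Inputs are per-keystroke dwell times
# (milliseconds between key-down and key-up). Thresholds are illustrative.
def improbable_typing(dwell_times_ms, floor_ms=30):
    """Return True if a majority of keystrokes are impossibly fast."""
    if not dwell_times_ms:
        return False
    fast = sum(1 for t in dwell_times_ms if t < floor_ms)
    return fast / len(dwell_times_ms) > 0.5

human = [85, 92, 110, 74, 103]   # plausible human dwell times
bot   = [2, 1, 2, 1, 2]          # a few milliseconds per keystroke: automation
```

In practice a single signal like this is never decisive; it is one of the many data points feeding the real-time human-versus-automation determination.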
This set of instructions, directing browsers with regards to what transactions to decorate with requested signals, is achieved by forcing browsers to execute specific JavaScript inserted into the pages that browsers load when interacting with application servers. The JavaScript is easily introduced by adding JavaScript tags into HTML pages, specifically key returned transactions with “Content Type=text/HTML” that subsequently lead to high-value user-side actions like submission of user credentials (login forms). The addition of the JavaScript tags is frequently performed in-line on returned traffic through an F5 BIG-IP application delivery controller, the F5 Shape proxy itself, or a third-party tag manager, such as Google Tag Manager. The net result is a browser will act upon JavaScript tags to immediately download the F5 JavaScript itself. The script will result in browsers providing the prescribed rich and detailed set of telemetry included in those important HTTP transactions that involve high-value website pages conducting sensitive actions, such as password resetting through form submissions.

Pivoting to Identify Automation in the Surging Mobile Application Space

With native mobile apps, a key aspect to note is the client is not utilizing a browser but rather what one can think of as a “thick” app, to borrow a term from the computer world. Without a browser in play, it is no longer possible to simply rely upon actionable JavaScript tags that could otherwise be inserted in flight, from servers towards clients, through application delivery controllers or tag managers. Rather, with mobile apps, one must adjust the app itself, prior to posting it to an app store or download site, to offer instructions on where to retrieve a dynamic configuration file, analogous to the instructions provided by JavaScript tag insertion in the world of browsers.
The retrieved configuration instruction file will guide the mobile app in terms of which transactions will require “decorating” with telemetry, and of course what telemetry is needed. The telemetry will often vary from one platform such as Android to a differing platform such as Apple iOS. Should more endpoints within the server-side application need decorating, meaning target URLs, one simply adjusts the network-hosted configuration instruction file. This adjustment in the behavior of a vendor’s native mobile application is achieved through the F5 Bot Defense Mobile SDK (Software Development Kit), and representative telemetry signals provided might include items like device battery life remaining, device screen brightness and resolution, and indicators of device setup, such as a rooted device or an emulator posing as a device. Incorrectly emulated devices, such as one displaying mutually exclusive iOS and Android telemetry concurrently, allow F5 to isolate troublesome, unwanted automated traffic from the human traffic required for ecommerce to succeed efficiently and unfettered. The following four-step diagram depicts the process of a mobile app being protected by F5 Bot Defense, from step one with the retrieval of instructions that include what transactions to decorate with telemetry data, through to step four where the host application (the mobile app) is made aware if the transaction was mitigated (blocked or flagged) by means of headers introduced by the F5 Antibot solution. The net result of equipping mobile apps with the F5 Bot Defense Mobile SDK, whether iOS or Android, is the ability to automatically act upon automated bot traffic observed, without a heavy administrative burden.
A step which is noteworthy and unique to the F5 mobile solution is the final, fourth step, whereby a feedback mechanism is implemented in the form of the Parse Response header value. This notifies the mobile app that the transaction was mitigated (blocked) en route. One possible reason this can happen is the app was using a dated configuration file, and a sensitive endpoint URL had been recently adjusted to require telemetry. The result of the response in step 4 is that the latest version of the config file, with up-to-date telemetry requirements, will automatically be re-read by the mobile app, and the transaction can now take place successfully with the proper decorated telemetry included.

Promon SDK Integrator and F5 Bot Defense: Effortlessly Secure Your Mobile Apps Today

One approach to infusing a native mobile app with the F5 Bot Defense SDK would be a programmatic strategy, whereby the source code of the mobile app would require modifications to incorporate the additional F5 code. Although possible, this may not be aligned with the skillsets of all technical resources requiring frequent application builds, for instance for quality assurance (QA) testing. Another issue might be a preference to only have obfuscated code at the point in the publishing workflow where the security offering of the SDK is introduced. In simple terms, the core mandate of native mobile app developers is the intellectual property contained within that app; adding valuable security features to combat automation is more congruent with a final checkmark obtained by protecting the completed application binary with a comprehensive protective layer of security.
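Returning to the fourth step noted earlier, the mitigation-feedback and configuration-refresh loop can be sketched as below. The header name, return values and version counter are all illustrative assumptions for the sketch; the actual SDK wire format and refresh behavior are internal to the F5 solution:

```python
# Sketch of the fourth-step feedback loop: after each request the SDK checks
# whether the transaction was mitigated and, if so, re-reads its hosted
# configuration so newly protected endpoints get telemetry on the retry.
# "X-Mitigated" is an assumed header name, not the real wire format.
class MobileAppClient:
    def __init__(self, config_version):
        self.config_version = config_version

    def fetch_config(self):
        # In the real solution this downloads the hosted JSON config file.
        self.config_version += 1

    def handle_response(self, headers):
        if headers.get("X-Mitigated") == "true":   # assumed header name
            self.fetch_config()                    # refresh config, then retry
            return "retry-with-telemetry"
        return "ok"

client = MobileAppClient(config_version=1)
outcome = client.handle_response({"X-Mitigated": "true"})
```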
To simplify and speed up the time for ingestion of the SDK, F5 has partnered with Promon, of Oslo, Norway, to make use of an integrator application from Promon which can achieve a “No Code” integration in just a few commands. The integrator, technically a .jar executable known as the Shielder tool at the file level, is utilized as per the following logic. The workflow to create the enhanced mobile app, using the Promon integration tool, with the resultant modified mobile app containing the F5 (Shape) SDK functions within it, consists of only two steps:

1. Create a Promon SDK Integrator configuration file
2. Perform the SDK injection

An iOS example would take the following form:

Step 1: python3 create_config.py --target-os iOS --apiguard-config ./base_ios_config.json --url-filter *.domain.com --enable-logs --outfile sdk_integrator_ios_plugin_config.dat

Step 2: java -jar Shielder.jar --plugin F5ShapeSDK-iOS-Shielder-plugin-1.0.4.dat --plugin sdk_integrator_ios_plugin_config.dat ./input_app.ipa --out ./output_app.ipa --no-sign

Similarly, an Android example would remain also a simple two-step process looking much like this:

Step 1: python3 create_config.py --target-os Android --apiguard-config ./base_android_config.json --url-filter *.domain.com --enable-logs --outfile sdk_integrator_android_plugin_config.dat

Step 2: java -jar Shielder.jar --plugin F5ShapeSDK-Android-Shielder-plugin-1.0.3.dat --plugin sdk_integrator_android_plugin_config.dat ./input_app.apk --output ./output_app.apk

In each respective “Step 1”, the following comments can be made about the create_config.py arguments:

target-os specifies the platform (Apple or Android).
apiguard-config will provide a base configuration .json file, which will serve to provide an initial list of protected endpoints (corresponding to key mobile app exposed points such as “create account” or “reset password”), along with the default telemetry required per endpoint. Once running, the mobile app equipped with the mobile SDK will immediately refresh itself with a current hosted config file.

url-filter is a simple means of directing the SDK functions to only operate within a specific domain name space (e.g. *.sampledomain.com). URL filtering is also available within the perpetually refreshed .json config file itself.

enable-logs allows for debugging logs to be optionally turned on.

outfile specifies the file naming of the resultant configuration .dat file, which is ingested in step 2, where the updated iOS or Android binary, with F5 anti-bot protections, will be created.

For Step 2, where the updated binaries are created for either Apple iOS or Android platforms, these notes regard each argument called upon by the java command:

Shielder.jar is the portion of the solution from Promon which will adjust the original mobile application binary to include F5 anti-bot mobile security, all without opening the application code.

F5ShapeSDK-[Android or iOS]-Shielder-plugin-1.0.3.dat is the F5 anti-bot mobile SDK, provided in a format consumable by the Promon Shielder.

The remaining three arguments are simply the configuration output file of step 1, the original native mobile app binary and finally the new native mobile app binary which will now have anti-bot functions. The only additional step that is optionally run would be re-signing the new version of the mobile application, as the hash value will change with the addition of the security additions to the new output file.
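In a CI/CD pipeline the two steps above can be scripted. The following sketch only builds the argument lists from the iOS example; the file names and plugin version are taken from the examples above and may differ per platform and release:

```python
# Sketch of scripting the two-step Promon integration in a CI pipeline:
# build the argument lists for the config step and the injection step.
# File names mirror the iOS example above; paths are illustrative only.
def build_integration_commands(platform, app_in, app_out):
    config = f"sdk_integrator_{platform.lower()}_plugin_config.dat"
    step1 = ["python3", "create_config.py",
             "--target-os", platform,
             "--apiguard-config", f"./base_{platform.lower()}_config.json",
             "--url-filter", "*.domain.com",
             "--enable-logs",
             "--outfile", config]
    step2 = ["java", "-jar", "Shielder.jar",
             "--plugin", f"F5ShapeSDK-{platform}-Shielder-plugin-1.0.4.dat",
             "--plugin", config,
             app_in, "--out", app_out, "--no-sign"]
    return step1, step2

s1, s2 = build_integration_commands("iOS", "./input_app.ipa", "./output_app.ipa")
# Each list could then be passed to subprocess.run() on the build server.
```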
Contrasting the Promon SDK Integrator with Manual Integration and Mobile Application Requirements

The advantage of the Promon Integrator approach is the speed of integration and the lack of any coding requirements to adjust the native mobile application. The manual approach to integrating the F5 Bot Defense Mobile SDK is documented by F5, with separate detailed guides available for Android and for iOS. A representative summary of the steps involved includes the following checkpoints along the path to successful manual SDK integration:

- Importing the provided APIGuard library into Android Studio (Android) and Xcode (iOS) environments
- The steps for iOS will differ depending on whether the original mobile app is using a dynamic or static framework; commands for both Swift and Objective-C are provided in the documentation
- The F5 SDK code is already obfuscated and compressed; care should be followed not to include this portion of the revised application in existing obfuscation procedures
- Within Android Studio, expand and adjust the Gradle scripts as per the documentation
- Initialize the Mobile SDK; specific to Android and the Application class, the detailed functions utilized are available within the documentation, including finished initialization examples in Java and Kotlin
- Specific to iOS, initialize the mobile SDK in AppDelegate; this includes adding APIGuardDelegate to the AppDelegate class as an additional protocol, and full examples are provided for Swift and Objective-C
- Both Android and iOS will require that GetHeaders functions be invoked through code additions for all traffic potentially to be decorated with telemetry, in accordance with the instructions of the base and downloaded configuration files

As demonstrated by the length of this list of high-level manual steps, which involve touching application code, the alternative ease and simplicity of the Promon Integrator two-command offering may be significant in many cases.
The platform requirements for the F5 Bot Defense and Promon paired solution are not arduous and frequently reflect libraries used in many native mobile applications developed today. These supported libraries include:

- Android: HttpURLConnection, OkHttp, Retrofit
- iOS: NSURLSession, URLSession, Alamofire

Finally, mobile applications that utilize WebViews, which is to say applications using browser-type technologies to implement the app, such as support for Cascading Style Sheets or JavaScript, are not applicable to the F5 Bot Defense SDK approach. In some cases, entirely WebView-implemented applications may be candidates for support through the browser-style, JavaScript-oriented F5 Bot Defense telemetry gathering.

Summary

With the simplicity of the F5 and Promon workflow, this streamlined approach to integrating the anti-bot technology into a mobile app ecosystem allows for rapid, iterative usage. In development environments following modern CI/CD (continuous integration/continuous deployment) paradigms, with build servers creating frequently updated variants of native mobile apps, one could invoke the two steps of the Promon SDK Integrator daily; there are no volume-based consumption constraints.

Incident Remediation with Cisco Firepower and F5 SSL Orchestrator
SSL Orchestrator Configuration steps

This guide assumes you have a working SSL Orchestrator Topology, either Incoming or Outgoing, and you want to add a Cisco Firepower TAP Service. Both Topology types are supported, and configuration of the Cisco Remediation is the same. If you do not have a working SSL Orchestrator Topology you can refer to the BIG-IP SSL Orchestrator Dev Central article series for full configuration steps. In this guide we will outline the necessary steps to deploy the Cisco FTD with SSL Orchestrator. FTD can be deployed as a Layer 2/3 or TAP solution. SSL Orchestrator can be deployed as a Layer 2 or 3 solution. SSL Orchestrator gives you the flexibility to deploy in the manner that works best for you. As an example, SSL Orchestrator can be deployed in Layer 2 mode while FTD is deployed in Layer 3 mode, and vice versa. A familiarity with BIG-IP deployment concepts and technology as well as basic networking is essential for configuring and deploying the SSL Orchestrator components of the BIG-IP product portfolio. For further details on the configuration and networking setup of the BIG-IP, please visit the F5 support site at https://support.f5.com. The SSL Orchestrator Guided Configuration will walk you through configuration of the Services (Firepower nodes), Security Policy and more. Lastly, iRules will be applied.

Guided Configuration: Create Services

We will use the Guided Configuration wizard to configure most of this solution, though there are a few things that must be done outside of the Guided Configuration. In this example we will be working with an existing L2 Outbound Topology. From the BIG-IP Configuration Utility click SSL Orchestrator > Configuration > Services > Add. For Service properties, select Cisco Firepower Threat Defense TAP then click Add. Give it a name. Enter the Firepower MAC Address (or 12:12:12:12:12:12 if it is directly connected to the SSL Orchestrator).
For the VLAN choose Create New, give it a Name (Firepower in this example) and select the correct interface (2.2 in this example). If you configured the VLAN previously then choose Use Existing and select it from the drop-down menu.

Note: A VLAN Tag can be specified here if needed.

Enabling the Port Remap is optional. Click Save & Next.

Click the Service Chain name you wish to configure, sslo_SC_ServiceChain in this example.

Note: If you don’t have a Service Chain you can add one now.

Highlight the Firepower Service and click the arrow in the middle to move it to the Selected side. Click Save. Click Save & Next. Then click Deploy.

Configuration of iRules and Virtual Servers

We will create two iRules and two Virtual Servers. The first iRule listens for HTTP requests from the Firepower device. Firepower responds via its Remediation API by sending an HTTP request containing an IP address and a timeout value. The address is the source IP that is to be blocked by the SSL Orchestrator; the SSL Orchestrator will continue to block it for the duration of the timeout period.

For details and tutorials on iRules, please consult the F5 DevCentral site at https://devcentral.f5.com.

Create the first iRule on the SSL Orchestrator. Within the GUI, select Local Traffic > iRules then choose Create. Give it a name (FTD-Control in this example) then copy/paste the iRule text into the Definition field. Click Finished. This iRule will be associated with the Control Virtual Server.

iRule text:

    when HTTP_REQUEST {
        if { [URI::query [HTTP::uri] "action"] equals "blocklist" } {
            set blockingIP [URI::query [HTTP::uri] "sip"]
            set IPtimeout [URI::query [HTTP::uri] "timeout"]
            table add -subtable "blocklist" $blockingIP 1 $IPtimeout
            HTTP::respond 200 content "$blockingIP added to blocklist for $IPtimeout seconds"
            return
        }
        HTTP::respond 200 content "You need to include an ? action query"
    }

Create the second iRule by clicking Create again.
Give it a name (FTD-Protect in this example) then copy/paste the iRule text into the Definition field. Click Finished. This iRule will be associated with the Protect Virtual Server.

iRule text:

    when CLIENT_ACCEPTED {
        set srcip [IP::remote_addr]
        if { [table lookup -subtable "blocklist" $srcip] != "" } {
            drop
            log local0. "Source IP on block list "
            return
        }
    }

Create the Virtual Servers: from Local Traffic select Virtual Servers > Create.

Give it a name, FTD-Control in this example. The Type should be Standard. Enter 0.0.0.0/0 for the Source Address; this indicates any source address will match. The Destination Address/Mask is the IP address the SSL Orchestrator will listen on to accept API requests from Firepower. In this example it’s 10.5.9.77/32, which indicates that the SSL Orchestrator will only respond to connections TO that single IP address.

Note: The Destination Address/Mask must be in the same subnet as the 2nd Management Interface on the Firepower Management Center. We’ll go over this later.

For VLANs and Tunnels Traffic it is preferred for this to be enabled on the specific VLAN that the Firepower 2nd Management Interface will be using, rather than All VLANs and Tunnels. Choose Enabled on… and select the same VLAN that the Firepower 2nd Management Interface will be using, in this example vlan509. Click the double << to move the VLAN to Selected.

In the Resources section click the FTD-Control iRule created previously. Click the double << to move it to Enabled. Click Finished when done.

Click Create again. Give it a name, FTD-Protect in this example. Set the Type to Forwarding (IP). The Source Address in this example is set to 10.4.11.152/32. This Virtual Server will only accept connections with a Source IP of 10.4.11.152. It is being done this way for testing purposes to make sure everything works with a single test client. With an Incoming Topology the Source Address might be set to 0.0.0.0/0, which would allow connections from anywhere.
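The combined behavior of the two iRules can be reasoned about off-box. The following Python sketch is an illustration only (on BIG-IP this logic lives in the iRule `table` command, which ages entries out automatically): it mimics FTD-Control's query parsing and FTD-Protect's blocklist lookup, including the timeout semantics.

```python
import time
from urllib.parse import urlparse, parse_qs

# Simulated equivalent of the BIG-IP session table's "blocklist" subtable:
# maps a source IP to the time at which its block entry expires.
blocklist = {}

def handle_control_request(uri, now=None):
    """Mimic the FTD-Control iRule: parse ?action=blocklist&sip=...&timeout=..."""
    now = time.time() if now is None else now
    params = parse_qs(urlparse(uri).query)
    if params.get("action") == ["blocklist"]:
        sip = params["sip"][0]
        timeout = int(params["timeout"][0])
        # like: table add -subtable "blocklist" $blockingIP 1 $IPtimeout
        blocklist[sip] = now + timeout
        return f"{sip} added to blocklist for {timeout} seconds"
    return "You need to include an ? action query"

def should_drop(src_ip, now=None):
    """Mimic the FTD-Protect iRule: drop the connection if the source is listed."""
    now = time.time() if now is None else now
    expires = blocklist.get(src_ip)
    if expires is not None and now < expires:
        return True                   # the iRule would call: drop
    blocklist.pop(src_ip, None)       # expired; BIG-IP's table ages out on its own
    return False
```

In the real deployment the Remediation module sends exactly this kind of request to the Control Virtual Server, and entries age out on the BIG-IP itself; the explicit expiry handling here only illustrates the timeout behavior.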
The 10.5.11.0 network is the destination the 10.4.11.0 network must take to pass through SSL Orchestrator.

Under Available, select the ingress VLAN the SSL Orchestrator is receiving traffic on, Direct_all_vlan_511_2 in this example. Click the double << in the middle to move it from Available to Selected.

In the Resources section click the FTD-Protect iRule created previously. Click the double << to move it to Enabled. Click Finished when done.

Steps Performed:
1. Firepower TAP Service created
2. iRules created
3. Virtual Servers created
4. iRules attached to Virtual Servers

Cisco Firepower (FTD) Setup and Configuration

This guide assumes you have Cisco Firepower and Firepower Management Center (FMC) deployed, licensed and working properly. After logging into the Firepower Management Center you will see the Summary Dashboard.

Click System > Configuration to configure the Management settings. Click Management Interfaces on the left.

A Management Interface on FMC must be configured for Event Traffic. This interface MUST be on the same subnet as the Control Virtual Server on SSL Orchestrator (10.5.9.77). If using a Virtual Machine for FMC you need to add a 2nd NIC within the Hypervisor console. Refer to your Hypervisor admin guide for more information on how to do this.

To configure the 2nd Management Interface click the pencil icon. Click Save when done.

Firepower Access Policy

This guide assumes that Intrusion and Malware policies are enabled for the Firepower device. The Policy should look something like the image below.

Firepower Remediation Policies

Next, we need to create a Firepower Remediation Policy. A Remediation policy can take a variety of different actions based on an almost infinite set of criteria. For example, if an Intrusion Event is detected, Firepower can tell SSL Orchestrator to block the Source IP for a certain amount of time.

From FMC click Policies > Responses > Modules.
The F5 Remediation Module is installed here. Click Browse to install the Module. Locate the Module on your computer and select it, click Open then Install. Click the magnifying glass on the right after it’s installed.

Note: The F5 Remediation Module can be downloaded from a link at the bottom of this article.

Click Add to Configure an Instance. Give it a name, Block_Bad_Actors in this example. Specify the IP address of the SSL Orchestrator Control Virtual Server, 10.5.9.77 in this example. Optionally change the Timeout and click Create.

Next, configure a Remediation by clicking Add. Give it a name, RemediateBlockIP in this example, and click Create.

Select Policies > Correlation > Create Policy to create a Correlation Policy to define when/how to initiate the Remediation. Give it a name, Remediation in this example, and click Save.

From the Rule Management tab click Create Rule. Give it a name, RemediateRule in this example. For the type of event select ‘an intrusion event occurs’ from the drop-down menu. For the Condition select Source Country > is > North Korea > Save.

Note: FMC can trigger a Remediation for a variety of different events, not just for Intrusion. In fact, while configuring Remediation you might want to use a different Event Type to make it easier to trigger an event and verify it was successfully Remediated. For example, you could choose ‘a connection event occurs’ then set the Condition to URL > contains the string > “foo”. In this way the Remediation rule should trigger if you attempt to go to the URL foo.com.

Go back to Policy Management and click the Policy created previously, Remediation in this example. Click Add Rules. Select the RemediateRule and click Add. Click Save.

Correlation policies can be enabled or disabled using the toggle on the right. Make sure the correct policy is enabled.

Remediated Policy Reporting

The status of Remediation Events can be viewed from Analysis > Correlation > Status.
Here we can see the “Successful completion of remediation” message.

Conclusion

This concludes the recommended practices for configuring F5 BIG-IP SSL Orchestrator with Cisco FTD. The architecture has been demonstrated to address both the SSL visibility and control and the IPS policy-based traffic steering and blocking use cases.

With the termination of SSL on the SSL Orchestrator, FTD sensors gain visibility into both ingress and egress traffic so they can adapt and protect an organization’s applications, servers and other resources.

By leveraging security policy-based traffic steering, an organization can continue to scale this configuration through the addition of more FTD managed devices, providing more traffic capacity for the protected networks and applications. The policy-based flexibility provided by the SSL Orchestrator can also be leveraged to selectively direct traffic to different pools of resources based on business, security or compliance requirements.
Web 2.0 is about sharing content – user generated content. How do you enable that kind of collaboration without opening yourself up to the risk of infection? Turns out developers and administrators have a couple options…

The goal of many a miscreant is to get files onto your boxen. The second step after that is often remote execution, or merely the hope that someone else will look at/execute the file and spread chaos (and viruses) across your internal network. It’s a malicious intent, to be sure, and makes developing/deploying Web 2.0 applications a risky proposition. After all, Web 2.0 is about collaboration and sharing of content, and if you aren’t allowing the latter it’s hard to enable the former.

Most developers know about and have used the ability to upload files of just about any type through a web form. Photos, documents, presentations – these types of content are almost always shared through an application that takes advantage of the ability to upload data via a simple web form. But if you allow users to share legitimate content, it’s a sure bet (more sure even than answering “yes” to the question “Will it rain in Seattle today?”) that miscreants will find and exploit the ability to share content.

Needless to say, information security professionals are therefore not particularly fond of this particular “feature” and in some organizations it is strictly verboten (that’s forbidden for you non-German speakers). So wouldn’t it be nice if developers could continue to leverage this nifty capability to enable collaboration? Well, all you really need to do is integrate with an anti-virus scanning solution and only accept that content which is deemed safe, right? After all, that’s good enough for e-mail systems, and developers should be able to argue that the same should be good enough for web content, too.

The bigger problem is in the integration. Luckily, ICAP (Internet Content Adaptation Protocol) is a fairly ready answer to that problem.
SOLUTION: INTEGRATE ANTI-VIRUS SCANNING via ICAP

The Internet Content Adaptation Protocol (ICAP) is a lightweight HTTP-based protocol specified in RFC 3507, designed to off-load specific content to dedicated servers, thereby freeing up resources and standardizing the way in which features are implemented. ICAP is generally used in proxy servers to integrate with third party products like antivirus software, malicious content scanners and URL filters.

ICAP in its most basic form is a "lightweight" HTTP based remote procedure call protocol. In other words, ICAP allows its clients to pass HTTP based (HTML) messages (Content) to ICAP servers for adaptation. Adaptation refers to performing the particular value added service (content manipulation) for the associated client request/response.

-- Wikipedia, ICAP

Now obviously developers can directly take advantage of ICAP and integrate with an anti-virus scanning solution themselves. All that’s required is to extract every file in a multi-part request, send each of them to an AV-scanning service, and determine based on the result whether to continue processing or toss those bits into /dev/null. This assumes, of course, that the application can be modified: packaged applications may not offer the ability, and even open-source software which ostensibly does may be written in a language or use frameworks that require skills the organization simply does not have. Or perhaps the cost over time of constantly modifying the application after every upgrade/patch is just not worth the effort.

For applications to which you can add this integration, it should be fairly simple, as developers are generally familiar with HTTP and RPC and understand how to use “services” in their applications. Of course, this being an F5 Friday post, you can probably guess that I have an alternative (and of course more efficient) solution than integration into the code.
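A direct, in-code integration of the sort just described begins with extracting each upload from the multipart request. The Python sketch below does that extraction step; the actual ICAP REQMOD exchange (RFC 3507) is deliberately left as a stub, since the AV server address and service path are deployment-specific.

```python
from email import policy
from email.parser import BytesParser

def extract_uploads(content_type, body):
    """Pull every uploaded file out of a multipart/form-data request body.

    Returns a list of (filename, bytes) tuples - the pieces a developer
    would hand to an ICAP service for AV scanning.
    """
    # Re-wrap the body with its Content-Type header so the email parser
    # can split it on the multipart boundary.
    raw = b"Content-Type: " + content_type.encode("ascii") + b"\r\n\r\n" + body
    msg = BytesParser(policy=policy.default).parsebytes(raw)
    files = []
    for part in msg.iter_parts():
        name = part.get_filename()
        if name:  # only parts that are actual file uploads
            files.append((name, part.get_payload(decode=True)))
    return files

def scan_with_icap(filename, data):
    """Placeholder for the ICAP round-trip: a real implementation opens a
    TCP connection to the ICAP server (port 1344 by convention), sends a
    REQMOD request wrapping the upload, and inspects the response status
    to decide whether to keep processing or discard the file."""
    raise NotImplementedError
```

This is only a sketch of the developer-side work; as the next section shows, the same scanning can be had without touching application code at all.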
An external solution that works for custom as well as packaged applications and requires a lot less long-term maintenance – a WAF (Web Application Firewall).

BETTER SOLUTION: WEB APPLICATION FIREWALL INTEGRATION

The latest, greatest version (v10.2) of F5 BIG-IP Application Security Manager (ASM) includes a little-touted feature that makes integration with an ICAP-enabled anti-virus scanning solution take approximately 15.7 seconds to configure (YMMV). Most of that time is likely logging in and navigating to the right place. The rest is typing the information required (server host name, IP address, and port number) and hitting “save”.

F5 Application Security Manager (ASM) v10 includes easy integration with A/V solutions

It really is that simple. The configuration is actually an HTTP “class”, which can be thought of as a classification of sorts. In most BIG-IP products a “class” defines a type of traffic closely based on a specific application protocol, like HTTP. It’s quite polymorphic: a custom HTTP class inherits the behavior and attributes of the “parent” HTTP class, and your configuration extends that behavior and attributes, and in some cases allows you to override default (parent) behavior.

The ICAP integration is derived from an HTTP class, so it can be “assigned” to a virtual server, a URI, a cookie, etc… In most ASM configurations an HTTP class is assigned to a virtual server and therefore sees all requests sent to that server. In such a configuration ASM sees all traffic and thus every file uploaded in a multipart payload, and will automatically extract it and send it via ICAP to the designated anti-virus server where it is scanned.

The action taken upon a positive result, i.e. the file contains bad juju, is configurable. ASM can block the request and present an informational page to the user while logging the discovery internally, externally or both.
It can also forward the request, virus and all, to the web/application server and log it, allowing the developer to determine how best to proceed. ASM can be configured to never allow requests that have not been scanned for viruses to reach the web/application server, using the “Guarantee Enforcement” option. When configured, if the anti-virus server is unavailable or doesn’t respond, requests will be blocked. This allows administrators to configure a “fail closed” option that absolutely requires AV scanning before a request can be processed.

A STRATEGIC POINT of CONTROL

Leveraging a strategic point of control to provide AV scanning integration and apply security policies regarding the quality of content has several benefits over its application-modifying, code-based integration cousin:

- Allows integration of AV scanning in applications for which it is not feasible to modify the application, for whatever reason (third-party, lack of skills, lack of time, long-term maintenance after upgrades/patches)
- Reduces the resource requirements of web/application servers by offloading the integration process and only forwarding valid uploads to the application. In a cloud-based or other pay-per-use model this reduces costs by eliminating the processing of invalid requests by the application.
- Aggregates logging/auditing and provides consistency of logs for compliance and reporting, especially to prove “due diligence” in preventing infection.

Related Posts

All F5 Friday Entries on DevCentral
All About ASM
The economy of scale realized in enterprise cloud computing deployments is as much (if not more) about process as it is products. HP Cloud Maps simplify the former by automating the latter.

When the notion of “private” or “enterprise” cloud computing first appeared, it was dismissed as a non-viable model because the economy of scale necessary to realize the true benefits was simply not present in the data center. What was ignored in those arguments was that the economy of scale desired by enterprises large and small was not necessarily that of technical resources, but of people. The widening gap between people and budgets and data center components was a primary cause of data center inefficiency. Enterprise cloud computing promised to relieve the increasing burden on people by moving it back to technology through automation and orchestration.

Achieving such a feat – and it is a non-trivial feat – requires an ecosystem. No single vendor could hope to achieve the automation necessary to relieve the administrative and operational burden on enterprise IT staff, because no data center is ever comprised of components provided by a single vendor. Partnerships – technological and practical partnerships – were necessary to enable the automation of processes spanning multiple data center components and achieve the economy of scale promised by enterprise cloud computing models.

HP, while providing a wide variety of data center components itself, has nurtured such an ecosystem of partners. Combined with HP Operations Orchestration, these technologically-focused partnerships have built out an ecosystem enabling the automation of common operational processes, effectively shifting the burden from people to technology and resulting in a more responsive IT organization.

HP CLOUD MAPS

One of the ways in which HP enables customers to take advantage of such automation capabilities is through Cloud Maps.
Cloud Maps are similar in nature to F5’s Application Ready Solutions: a package of configuration templates, guides and scripts that enable repeatable architectures and deployments. Cloud Maps, according to HP’s description:

HP Cloud Maps are an easy-to-use navigation system which can save you days or weeks of time architecting infrastructure for applications and services. HP Cloud Maps accelerate automation of business applications on the BladeSystem Matrix so you can reliably and consistently fast-track the implementation of service catalogs.

HP Cloud Maps enable practitioners to navigate the complex operational tasks that must be accomplished to achieve even what seems like the simplest of tasks: server provisioning. They enable automation of incident resolution, change orchestration and routine maintenance tasks in the data center, providing the consistency necessary for more predictable and repeatable deployments and responses to data center incidents.

Key components of HP Cloud Maps include:

- Templates for hardware and software configuration that can be imported directly into BladeSystem Matrix
- Tools to help guide planning
- Workflows and scripts designed to automate installation more quickly and in a repeatable fashion
- Reference whitepapers to help customize Cloud Maps for specific implementations

HP CLOUD MAPS for F5 NETWORKS

The partnership between F5 and HP has resulted in many data center solutions and architectures. HP’s Cloud Maps for F5 Networks today focuses on what HP calls server flexing – the automation of server provisioning and de-provisioning on demand in the data center. It is designed specifically to work with F5 BIG-IP Local Traffic Manager (LTM) and provides the configuration and deployment templates, scripts and guides necessary to implement server flexing in the data center.
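Under the covers, server flexing amounts to adding (or removing) BIG-IP LTM pool members as servers are provisioned and de-provisioned; the Cloud Map drives this through HP Operations Orchestration and its Perl iControl script. Purely as an illustrative sketch of that one step, using the iControl REST API available on current BIG-IP versions (the management address, pool name and port below are hypothetical, and authentication is omitted):

```python
import json
import urllib.request

BIGIP = "https://192.0.2.10"   # hypothetical BIG-IP management address
POOL = "web_pool"              # hypothetical LTM pool

def member_payload(ip, port):
    """JSON body for adding one member to an LTM pool via iControl REST."""
    return {"name": f"{ip}:{port}", "address": ip}

def pool_members_url(base, pool, partition="Common"):
    """iControl REST collection holding a pool's members."""
    return f"{base}/mgmt/tm/ltm/pool/~{partition}~{pool}/members"

def flex_up(ip, port=80):
    """Add a freshly provisioned server to the pool (needs a live BIG-IP).

    Credential handling (basic auth or a token) is omitted for brevity.
    """
    req = urllib.request.Request(
        pool_members_url(BIGIP, POOL),
        data=json.dumps(member_payload(ip, port)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

An orchestration workflow would call something like flex_up after a new server boots; a corresponding DELETE on the member resource would implement flex-down.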
The Cloud Map for F5 Networks can be downloaded free of charge from HP and comprises:

- The F5 Networks BIG-IP reference template to be imported into HP Matrix infrastructure orchestration
- A workflow to be imported into HP Operations Orchestration (OO)
- An XSL file to be installed on the Matrix CMS (Central Management Server)
- A Perl configuration script for BIG-IP

White papers with specific instructions on importing reference templates, importing workflows and configuring BIG-IP LTM are also available from the same site. The result is an automation providing server flexing capabilities that greatly reduces the manual intervention necessary to auto-scale and respond to capacity-induced events within the data center.

Happy Flexing!

Server Flexing with F5 BIG-IP and HP BladeSystem Matrix
HP Cloud Maps for F5 Networks
F5 Friday: The Dynamic Control Plane
F5 Friday: The Evolution of Reference Architectures to Repeatable Architectures
All F5 Friday Posts on DevCentral
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
What is a Strategic Point of Control Anyway?
The F5 Dynamic Services Model
Unleashing the True Potential of On-Demand IT
Hi Guys, I'm new here and I will be doing some systems integration with F5. Would you know where I can get a copy of the assembly/jar file for an older version of iControl? 10.4.0 or 10.4.2 will do. I searched the net but I can only see the latest version, 11. Thanks in Advance! Aldrich