WAFaaS with SSL Orchestrator
Introduction

Note: This article applies to SSL Orchestrator versions prior to 11.0. If using version 11.0, refer to the article HERE.

This use case allows you to insert F5 WAF functionality as a Service in the SSL Orchestrator inspection zone. WAFaaS is the ability to insert ASM profiles into the SSL Orchestrator Service Chain for Inbound Topologies. This configuration is specific to a WAF policy running on the SSL Orchestrator device. WAF and SSL Orchestrator both consume significant CPU cycles, so care should be taken when deploying them together. It is also possible to deploy WAF as a service on a separate BIG-IP device, in which case you would simply configure an inline transparent proxy service. The ability to insert F5's WAF into the Service Chain presents a significant customer benefit.

This guide assumes you already have WAF/ASM profile(s) configured, licensed, and provisioned on BIG-IP and wish to add this functionality to an Inbound Topology. In order to run WAF and SSL Orchestrator on the same device you will need an LTM license with SSL Orchestrator as an add-on option. You cannot add a WAF license to an SSL Orchestrator stand-alone license.

SSL Orchestrator does not directly support inserting F5 WAF policies into the Service Chain. However, the F5 platform is flexible enough to handle many custom use cases. In this case, the ICAP service configuration exposes a framework that is useful for any number of specialized patterns, including adding a WAF policy to an SSL Orchestrator service chain. We will configure an ICAP Service and attach the WAF policy to it.

Steps:
Create ICAP Service
Disable Strictness on the Service
Disable the TCP monitor for the ICAP Pool
Remove the ICAP Adapt profiles from the Virtual Server
Enable the Application Security Policy and assign a policy under Security

Step #1: Create ICAP Service

Note: These instructions assume an SSL Orchestrator Topology and Service Chain are already deployed and working properly. These instructions simply add WAFaaS to the existing Service Chain. It is entirely possible to create the WAFaaS service during the initial Topology creation, in which case you would create the service during the workflow, then make the necessary changes after the topology has been created.

From the SSL Orchestrator Guided Configuration click Services, then Add.
Scroll to the bottom, select Generic ICAP Service, and click Add.
Give it a name, WAFaaS in this example.
For ICAP Devices click Add on the right.
Enter an IP Address, 198.19.97.1 in this example, and click Done.

Note: the IP address you use does not have to be the one above. It is just a local, non-routable address used as a placeholder in the service definition; it will never receive traffic. Addresses from 198.19.97.0 to 198.19.97.255 fall within the block reserved for network benchmark testing and are not routed on the public internet.

Scroll to the bottom and click Save & Next.

The next screen is the Services Chain List. Click the name of the Service Chain you wish to add WAF functionality to, ssloSC_ServiceChain in this example.

Note: The order of the Services in the Selected column is the order in which SSL Orchestrator will pass decrypted data to each device. This can be an important consideration if you want some devices to see, or not see, the actions taken by the WAF Service.

Select the WAFaaS Service and click the right arrow to move it to Selected. Click Save.
Click Save & Next.
Click Deploy.
You should receive a Success message.
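Deploying the service causes SSL Orchestrator to generate the underlying LTM objects, including the WAFaaS pool and the virtual server ending in "-t-4" that the later steps modify. If you want to confirm those objects exist before continuing, the following is a minimal sketch using Python and the iControl REST API. The management address, credentials, and the ssloS_ naming prefix shown here are assumptions; verify them against your own system.

```python
# A minimal verification sketch (not part of the official SSLO workflow).
# Assumptions: a reachable BIG-IP management address, admin credentials,
# and the "ssloS_<service name>" prefix used for generated objects.
import requests

BIGIP_HOST = "192.0.2.10"           # hypothetical management address
AUTH = ("admin", "admin-password")  # replace with real credentials
PREFIX = "ssloS_WAFaaS"             # assumed prefix for the WAFaaS service objects

session = requests.Session()
session.auth = AUTH
session.verify = False              # lab only; use proper certificates in production

# List the LTM pools and virtual servers and report any that match the prefix.
for collection in ("pool", "virtual"):
    url = f"https://{BIGIP_HOST}/mgmt/tm/ltm/{collection}"
    items = session.get(url).json().get("items", [])
    matches = [i["fullPath"] for i in items if PREFIX in i["name"]]
    print(f"{collection}: {matches or 'no matching objects found'}")
```

If the pool and the "-t-4" virtual server appear in the output, the service deployed and the changes in the next steps have something to attach to.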
Step #2: Disable Strictness on the Service

From the SSL Orchestrator Configuration screen select Services. Click the padlock to Unprotect Configuration.

Note: Disabling Strictness on the ICAP Service is needed in order to modify it and attach the WAF policy. Strictness must remain disabled on this service; disabling strictness on the service has no effect on any other part of the SSL Orchestrator configuration.

Click OK to Unprotect the Configuration.

Step #3: Disable the TCP monitor for the ICAP Pool

From Local Traffic select Pools > Pool List.
Select the WAFaaS Pool.
Under Active Health Monitors select tcp and click >> to move it to Available. This removes the Pool's monitor, which would otherwise mark the pool as down or unavailable.
Click Update.

Note: The Health Monitor needs to be removed because there is no actual ICAP service to monitor.

Step #4: Remove the ICAP Adapt profiles from the Virtual Server

From Local Traffic select Virtual Servers > Virtual Server List.
Locate the WAFaaS ICAP service virtual server (the one whose name ends in "-t-4") and select it.
Set the Request Adapt Profile and Response Adapt Profile to None to disable the default ICAP profiles.
Click Update.

Step #5: Enable the Application Security Policy and assign a policy under Security

For the WAFaaS-t-4 Virtual Server click the Security tab.
Set Application Security Policy to Enabled.
Select the Security Policy you wish to use. Click Update when done.

Note: In specific versions of SSL Orchestrator there is one extra configuration item that needs to be modified. This is NOT required in other versions. If this change has been made, it does not necessarily need to be backed out when performing an upgrade. Affected versions: SSL Orchestrator 5.9.15, available on TMOS 14.1.4, and SSL Orchestrator 6.0-6.5, available on TMOS 15.0.x.

Navigate to Local Traffic ›› Profiles : Other : Service.
Select the Service profile named "ssloS_WAFaaS-service".
Change the "Type" from "ICAP" to "F5 Module".

Conclusion

The configuration is now complete. Using WAFaaS this way is functionally the same as using WAF on its own, and there are no known limitations to this configuration. For administrators who prefer to script the post-deployment changes, a sketch of Steps 3 and 4 using the iControl REST API follows.
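The following is a minimal, unofficial sketch of Steps 3 and 4 using Python and the iControl REST API. The management address, credentials, and object names are assumptions: confirm the exact pool and virtual server names the Guided Configuration generated (objects created inside an application folder may also need "~"-encoded paths in REST URLs). Step 5, attaching the ASM policy, is still performed in the GUI as described above.

```python
# Minimal sketch of Steps 3 and 4 via iControl REST -- not an official F5 tool.
# Assumed values: management address, credentials, and the generated object
# names; confirm these against Local Traffic > Pools / Virtual Servers.
import requests

BIGIP = "https://192.0.2.10"          # hypothetical management address
AUTH = ("admin", "admin-password")    # replace with real credentials
POOL = "ssloS_WAFaaS"                 # assumed pool name -- confirm on your system
VIRTUAL = "ssloS_WAFaaS-t-4"          # assumed virtual server name -- confirm

s = requests.Session()
s.auth = AUTH
s.verify = False                      # lab only; use proper certificates in production

# Step 3: remove the tcp monitor so the placeholder pool is not marked down.
# (Some TMOS versions may expect "/Common/none" instead of "none".)
s.patch(f"{BIGIP}/mgmt/tm/ltm/pool/{POOL}", json={"monitor": "none"})

# Step 4: detach the default request/response ICAP adapt profiles.
adapt_names = set()
for kind in ("request-adapt", "response-adapt"):
    resp = s.get(f"{BIGIP}/mgmt/tm/ltm/profile/{kind}").json()
    adapt_names.update(p["name"] for p in resp.get("items", []))

attached = s.get(f"{BIGIP}/mgmt/tm/ltm/virtual/{VIRTUAL}/profiles").json()
for profile in attached.get("items", []):
    if profile["name"] in adapt_names:
        s.delete(f"{BIGIP}/mgmt/tm/ltm/virtual/{VIRTUAL}/profiles/{profile['name']}")
        print(f"Detached adapt profile {profile['name']} from {VIRTUAL}")
```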
F5 Friday: Lessons from (IT) Geese

Birds migrate in flocks, which means every individual has the support of others. IT often migrates alone – but it doesn't have to.

"Lessons from Geese" has been around a long time. It is often cited and referenced, particularly with respect to teamwork and collaboration. The very first "lesson" learned from geese migrations applied to human collaboration is this:

Fact #1: As each goose flaps its wings, it creates an uplift for the others behind it. By flying in a "V" formation, the whole flock adds 71% greater flying range than if each bird flew alone.

Lesson: People who share a common direction and sense of community can get where they are going quicker and easier because they are traveling on the thrust of another.

That's probably not surprising at all, and the basic lesson is one we're all familiar with, no doubt.

Fact #3: When the lead goose tires, it rotates back into the formation and another goose flies to the point position.

Lesson: It pays to take turns doing the hard tasks and sharing leadership; as with geese, people are interdependent on each other's skills, capabilities, and unique arrangement of gifts, talents or resources.

This lesson works well if every goose is similarly talented at flying. But within IT there are myriad skill sets that must come together to migrate implementations from one version to another. It's not just software – it's data stores, identity stores, switches, and application delivery systems. There are a lot of different skills required to successfully migrate large, business-critical systems. And we can't just pick a random goose to lead when it comes to migrating specific subsets and components; we need experts in various systems to assist. And sometimes, we don't have the right goose. So we have to find one.

"A Plan-Net survey found that 87% of organizations are currently using Exchange 2003 or earlier. There has been a reluctance to adopt the 2007 version, often considered to be the Vista of the server platform — faulty and dispensable." -- 10 reasons to migrate to Exchange 2010

This doesn't explain a reluctance to move to Exchange 2010. With larger mailboxes, virtualization support, voicemail transcription, and higher availability, what's not to like? Significant changes in the underlying architecture – which cascade into the infrastructure – may be one of them. Upgrading a business-critical service like Exchange requires more planning and forethought than upgrading to the latest version of Angry Birds, after all. Continuity of service is required even as the new version is put in place. And while there are plenty of experts who can help with the migration of Exchange, there are fewer who can help with the migration of its supporting infrastructure services.

F5 has an answer for that, a skilled goose, if you will, who can take the lead and keep the organization on track.

Introducing: F5 Architecture Design for Microsoft Exchange Service

The F5 Architecture Design for Microsoft Exchange service comprises an intense three days of discussion, information gathering, analysis, and knowledge-sharing on network considerations for the optimal deployment of Microsoft Exchange in an F5 network environment. F5 Professional Services consultants with Exchange expertise conduct assessments during which they review your current network and future needs to streamline your new implementation, upgrade, or migration to your preferred version of Microsoft Exchange.
Plan

During the project kick-off call, F5 Professional Services consultants make sure to understand your overall project goals, flag dependencies, and validate that all questionnaires and information requirements have been addressed prior to the initiation of the engagement.

Analyze

The F5 Architecture Design for Microsoft Exchange Service facilitates the discussion, analysis, and development of the network architecture requirements that best support your Exchange deployment. The engagement starts with an overview and whiteboard discussion of F5 technology, focusing on topics of high availability, scalability, security, and performance. Next, the consultants engage in conversations about mail deployment for legacy mail systems or new deployments, touching on sizing, security, and service-level agreements. Finally, they review the architectural components specific to your environment, including network flows, client access, unified messaging, and considerations of single- vs. multi-site deployments.

Design and Report

The F5 Professional Services consultants consolidate the results from the analysis phase and deliver a Proposed Microsoft Exchange Network Architecture and a Proposed Network Migration Plan report detailing the recommendations. F5 consultants intimately understand F5 BIG-IP® systems and their operation, and can draw on the F5 Solutions for Microsoft Exchange Server, so you can be assured of the thoroughness and relevance of their recommendations. The consultants' reports provide you with the blueprint for flexible and cost-effective communication and collaboration in your organization.

For more information about the F5 Architecture Design for Microsoft Exchange service, use the search function on f5.com or contact consulting@f5.com.

Additional Resources:
Microsoft Exchange 2010: HELO New Architecture
Deploying F5 with Microsoft Exchange 2010
F5 solution for Microsoft Exchange
F5 Friday: BIG-IP Solutions for Microsoft Private Cloud
Webcast - BIG-IP v11 and Microsoft Technologies
Social Forums - F5/Microsoft Solutions
Eliminating Data Center Vertigo with F5 and Microsoft
F5 Friday: Microsoft and F5 Lync Up on Unified Communications
F5 Friday: Playing in the Infrastructure Orchestra(tion)
Red Herring: Hardware versus Services

In a service-focused, platform-based infrastructure offering, the form factor is irrelevant.

One of the most difficult aspects of cloud, virtualization, and the rise of platform-oriented data centers is the separation of services from their implementation. This is SOA applied to infrastructure, and it is for some reason a foreign concept to most operational IT folks – with the sometimes exception of developers. But sometimes even developers are challenged by the notion, especially when it begins to include network hardware.

ARE YOU SERIOUSLY?

The headline read: WAN Optimization Hardware versus WAN Optimization Services. I read no further, because I was struck by the wrongness of the declaration in the first place. I'm certain that if I had read the entire piece I would have found it focused on the operational and financial benefits of leveraging WAN optimization as a service as opposed to deploying hardware (or software, a la virtual network appliances) in multiple locations. And while I've got a few things to say about that, too, today is not the day for that debate. Today is for focusing on the core premise of the headline: that hardware and services are somehow at odds. Today is for exposing the fallacy of a premise that is part of the larger transformational challenge IT organizations face as they journey toward IT as a Service and a dynamic data center.

This transformational challenge, often referenced by cloud and virtualization experts, requires a change in thinking as well as culture. It requires a shift from thinking of solutions as boxes with plugs and ports toward viewing them as services with interfaces and APIs. It does not matter one whit whether those services are implemented using hardware or software (or perhaps even a combination of the two, a la a hybrid infrastructure model). What does matter is the interface, the API, the accessibility, as Google's Steve Yegge emphatically put it in his recent from-the-gut-not-meant-to-be-public rant. What matters is that a product is also a platform, because as Yegge so insightfully noted:

A product is useless without a platform, or more precisely and accurately, a platform-less product will always be replaced by an equivalent platform-ized product.

A platform is accessible; it has APIs and interfaces via which developers (consumer, partner, customer) can access the functions and features of the product (services) to integrate, instruct, and automate in a more agile, dynamic architecture. Which brings us back to the red herring known generally as "hardware versus services."

HARDWARE is FORM-FACTOR. SERVICE is INTERFACE.

This misstatement implies that hardware is incapable of delivering services. This is simply not true, any more than a statement implying that only software is capable of delivering services would be true. That's because intrinsically nothing is actually a service – unless it is enabled to be one. Unless it is, as today's vernacular is wont to say, a platform. Delivering X as a service can be achieved via hardware as well as software. One need only look at the varied offerings of load balancing services by cloud providers to understand that both hardware and software can be service-enabled with equal alacrity, if unequal results in features and functionality. As long as the underlying platform provides the means by which services and their requisite interfaces can be created, the distinction between hardware and "services" is non-existent.
The definition of "service" neither includes nor precludes the use of hardware as the underlying implementation. Indeed, the value of a "service" is that it provides a consistent interface that abstracts (and therefore insulates) the service consumer from the underlying implementation. A true "service" ensures minimal disruption as well as continued compatibility in the face of upgrade/enhancement cycles. It provides flexibility and decreases the risk of lock-in to any given solution, because the implementation can be completely changed without requiring significant changes to the interface.

This is the transformational challenge that IT faces: to stop thinking of solutions in terms of deployment form factors and instead start looking at them with an eye toward the services they provide. Because ultimately IT needs to offer them "as a service" (which is a delivery and deployment model, not a form factor) to achieve the push-button IT envisioned by the term "IT as a Service."
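To make the interface-versus-implementation point concrete, here is a toy sketch (not drawn from the article; the class and method names are invented for illustration): the consumer codes against a stable service interface, and whether the backing implementation is a hardware appliance or a software instance is invisible to the caller.

```python
# A toy illustration of "service as interface": the consumer only sees the
# interface; the form factor behind it can change without touching the caller.
from abc import ABC, abstractmethod

class LoadBalancingService(ABC):
    """The 'service': a stable interface with no form factor baked in."""
    @abstractmethod
    def add_member(self, pool: str, member: str) -> None: ...

class HardwareADCBackend(LoadBalancingService):
    def add_member(self, pool: str, member: str) -> None:
        print(f"[hardware appliance] adding {member} to {pool}")

class VirtualApplianceBackend(LoadBalancingService):
    def add_member(self, pool: str, member: str) -> None:
        print(f"[virtual/software instance] adding {member} to {pool}")

def scale_out(service: LoadBalancingService, pool: str, new_member: str) -> None:
    # The consumer never knows (or cares) what sits behind the interface.
    service.add_member(pool, new_member)

scale_out(HardwareADCBackend(), "web_pool", "10.0.0.11:80")
scale_out(VirtualApplianceBackend(), "web_pool", "10.0.0.12:80")
```

Swapping one backend for the other requires no change to the consumer, which is exactly the flexibility and lock-in insulation the paragraph above describes.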
Once Again, I Can Haz Storage As A Service?

While plenty of people have had a mouthful (or page full, or pipe full) of things to say about the Amazon outage, the one thing it brings to the fore is not a problem with cloud, but a problem with storage.

Long ago, the default mechanism for "High Availability" was to have two complete copies of something (say, a network switch), and when one went down, the other was brought up with the same IP. It is sad to say that even this is far and away better than the level of redundancy most of us place in our storage. The reasons are pretty straightforward: you can put in a redundant pair of NAS heads, or a redundant pair of file/directory virtualization appliances like our own ARX, but a redundant pair of all of your storage? The cost alone would be prohibitive. Amazon's problems seem to stem from a controller problem, not a data redundancy problem, but I'm not an insider, so that is just appearances. Most of us suffer from the opposite: highly available entry points protecting data that is all too often a single point of failure. I know I lived through the sudden and irrevocable crashing of an AIX NAS once, and it wasn't pretty. When the third disk turned up bad, we were out of luck and had to wait for priority shipment of new disks and then do a full restore… the entire time being down in a business where time is money.

The critical importance of the data that is the engine of our enterprises these days makes building that cost-prohibitive, truly redundant architecture a necessity. If you don't already have a complete replica of your data somewhere, it is worth looking into file and database replication technologies. Honestly, if you choose to keep your replicas on cheaper, slower disk, you can save a bundle and still have the security that even if your entire storage system goes down, you'll have the ability to keep the business running.

But what I'd like to see is full-blown storage as a service. We couldn't call it SaaS, so I'll propose we name it Storage Tiers As A Service Host, just so we can use the acronym Staash. The worthy goal of this technology would be the ability to automatically, with no administrator interaction, redirect all traffic destined for device A over to device B, heterogeneously. So your core datacenter NAS goes down hard – let's call it a power failure to one or more racks – and Staash detects that the primary is offline and substitutes your secondary for it in the storage hierarchy. People might notice that files are served up more slowly, depending upon your configuration, but they'll also still be working. Given sufficient maturity, this model could even go so far as to let them save changes made to documents that were open at the time the primary NAS went down, though that would be a future iteration of the concept.

Today we have automated redundancy all the way up to this final step; it is high time we implemented redundancy on that last little bit and made our storage more agile. While I could reasonably argue that a file/directory virtualization device like F5's ARX is the perfect place to implement this functionality – it is already heterogeneous, it sits between users and data, and it is capable of being deployed in HA pairs, all the prerequisites for Staash – I don't think your average storage or server administrator much cares where it is implemented, as long as it is implemented.
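Since Staash is only a proposal, here is a toy sketch (not an existing product; hosts, ports, and function names are invented for illustration) of the failover decision it would have to make: poll the primary, and if it stops answering, repoint the shared namespace at the replica so clients keep working, if more slowly.

```python
# A toy sketch of the envisioned Staash failover logic -- not a real product.
# Assumed values: the primary/replica endpoints and the idea that some
# file-virtualization or naming layer can be repointed at a new target.
import socket
import time

PRIMARY = ("nas-primary.example.com", 2049)    # hypothetical NFS endpoint
SECONDARY = ("nas-replica.example.com", 2049)  # hypothetical replica endpoint

def is_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Crude reachability probe; a real implementation would check the protocol itself."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def repoint_namespace(target: tuple) -> None:
    # Placeholder: in a real system this would update the file/directory
    # virtualization layer (or equivalent) so clients follow the new target.
    print(f"namespace now maps to {target[0]}:{target[1]}")

active = PRIMARY
while True:
    if active == PRIMARY and not is_alive(*PRIMARY):
        print("primary unreachable; failing over to replica")
        active = SECONDARY
        repoint_namespace(active)
    time.sleep(10)
```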
We're about 90% there. You can replicate your data – and you can replicate it heterogeneously. You can set up an HA pair of NAS heads (if you are a single-vendor shop) or file/directory virtualization devices (whether you are single-vendor or heterogeneous), and with a file/directory virtualization tool you have already abstracted the user from the physical storage location in a very IT-friendly way: files are still saved together, storage is managed in a reasonable manner, only files with naming conflicts are renamed, and so on. All that is left is to auto-switch from your high-end primary to a replica created however your organization does these things… and then you are truly redundantly designed. It's been what, forty years? That's almost as long as I've been alive.

Of course, I think this would fit in well with my long-term vision of protocol independence too, but sometimes I want to pack too much into one idea or one blog, so I'll leave it with "let's start implementing our storage architecture like we do our server architecture… no single point of failure." No doubt someone out there is working on this configuration… here's hoping they call it Staash when it comes out.

The cat in the picture above is Jennifer Leggio's kitty Clarabelle. Thanks to Jen for letting me use the pic!