Orchestrated Infrastructure Security - Change at the speed of Business
Editor's Note: The F5 Beacon capabilities referenced in this article, hosted on F5 Cloud Services, are planned for migration to a new SaaS platform. Check out the latest here.

Introduction

This article is part of a series on implementing Orchestrated Infrastructure Security. The series covers High Availability, Central Management with BIG-IQ, Application Visibility with Beacon, and the protection of critical assets using F5 Advanced WAF and Protocol Inspection (IPS) with AFM. It is assumed that SSL Orchestrator is already deployed and basic network connectivity is working. If you need help setting up SSL Orchestrator for the first time, refer to the DevCentral article series on Implementing SSL Orchestrator here or the CloudDocs Deployment Guide here.

This article focuses on using SSL Orchestrator as a tool to simplify Change Management processes and procedures and to shorten the duration of the entire process. Configuration files for the BIG-IPs deployed as Advanced WAF and AFM can be downloaded here from GitLab. Please forgive me for using SSL and TLS interchangeably in this article.

This article is divided into the following high-level sections:

· Create a new Topology to perform testing
· Monitor server statistics – change the weight ratio – check server statistics again
· Remove a single AFM device from the Service
· Perform maintenance on the AFM device
· Add the AFM device to the new Topology
· Test functionality with a single client
· Add the AFM device back to the original Topology
· Test functionality again
· Repeat to perform maintenance on the other AFM device

Create a new Topology to perform testing

A new Topology will be used to safely test the Service after maintenance is performed. The Topology should be similar to the one used for production traffic, and it can be re-used in the future.

From the BIG-IP Configuration Utility select SSL Orchestrator > Configuration. Click Add under Topologies. Scroll to the bottom of the next screen and click Next.

Give it a name, Topology_Staging in this example. Select L2 Inbound as the Topology type, then click Save & Next.

For the SSL Configurations you can leave the default settings. Click Save & Next at the bottom, then click Save & Next at the bottom of the Services List.

Click the Add button under the Services Chain List. A new Service Chain is needed so we can remove AFM2 from the Production Service and add it here. Give the Service Chain a name, Staging_Chain in this example, and click Save at the bottom.

Note: The Service will be added to this Service Chain later.

Click Save & Next.

Click the Add button on the right to add a new rule. For Conditions select Client IP Subnet Match. Enter the Client IP and mask, 10.1.11.52/32 in this example, and click New to add the IP/Subnet. Set the SSL Proxy Action to Intercept. Set the Service Chain to the one created previously. Click OK.

Note: This rule is written so that a single client computer (10.1.11.52) will match and can be used for testing.

Select Save & Next at the bottom.

For the Interception Rule, set the Source Address to 10.1.11.52/32, the Destination Address/Mask to 10.4.11.0/24, and the port to 443. Select the VLAN for your Ingress Network and move it to Selected. Set the L7 Profile to Common/http. Click Save & Next.

For Log Settings, scroll to the bottom and select Save & Next. Click Deploy.

Monitor server statistics – change the weight ratio – check server statistics again

Check the Virtual Server statistics on the BIG-IP we will be performing maintenance on. It's "AFM2" in this example. Under Local Traffic click Virtual Servers, then select Statistics > Virtual Server. Set Auto Refresh to 10 seconds.

In this example you can see we have 5 Virtual Servers. The statistics counters should increment every time the screen refreshes. These servers appear to be healthy.
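The GUI steps above show the counters visually. If you would rather capture the same numbers from a script, below is a minimal sketch that reads the Virtual Server statistics over iControl REST. The management address and credentials are placeholders for this example's AFM2 device, and certificate verification is disabled only because a default self-signed device certificate is assumed.

```python
# Minimal sketch: read the LTM Virtual Server statistics from the AFM2 BIG-IP
# over iControl REST instead of watching the GUI counters. The management
# address and credentials below are placeholders for your environment.
import requests
import urllib3

urllib3.disable_warnings()  # assumes the default self-signed device certificate

BIGIP = "https://afm2.example.com"  # hypothetical management address of AFM2
AUTH = ("admin", "admin-password")  # placeholder credentials

resp = requests.get(f"{BIGIP}/mgmt/tm/ltm/virtual/stats", auth=AUTH, verify=False)
resp.raise_for_status()

for entry in resp.json().get("entries", {}).values():
    stats = entry["nestedStats"]["entries"]
    name = stats["tmName"]["description"]
    bits_in = stats["clientside.bitsIn"]["value"]
    pkts_in = stats["clientside.pktsIn"]["value"]
    cur_conns = stats["clientside.curConns"]["value"]
    print(f"{name}: bitsIn={bits_in} pktsIn={pkts_in} curConns={cur_conns}")
```

Run it a couple of times; on a healthy device the bitsIn and pktsIn values should climb between runs, just as the GUI counters do on each Auto Refresh.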
Change the Weight Ratio

Return to the SSL Orchestrator Configuration Utility. Click SSL Orchestrator > Configuration > Services, then the Service name, ssloS_IPS in this example. Click the pencil icon to edit the Service, then click the pencil icon to edit the Network Configuration for AFM1. Set the ratio to 65535 and click Done.

Note: Alternatively, you could disable the Pool Member from LTM > Pools.

Click Save & Next at the bottom. Click OK if presented with a warning. Click Deploy, then click OK when presented with the Success message.

Check Server Statistics Again

Check the Virtual Server statistics on "AFM2" again. With Auto Refresh on, the statistics should no longer increment, and Current Connections should eventually reach zero for all Virtual Servers.

Remove a single AFM device from the Service

Return to the SSL Orchestrator Configuration Utility. Click SSL Orchestrator > Configuration > Services, then the Service name, ssloS_IPS in this example. Click the pencil icon to edit the Service. Under Network Configuration, delete AFM2. Click Save & Next at the bottom. Click OK if presented with a warning. Click Deploy, then click OK when presented with the Success message.

Perform maintenance on the AFM device

At this point AFM2 has been removed from the Incoming_Security Topology and is no longer handling production traffic; AFM1 is now handling all of the production traffic. We can now perform a variety of maintenance tasks on AFM2 without disrupting production traffic. When done with the task(s) we can safely test and verify the health of AFM2 prior to moving it back into production.

Some examples of maintenance tasks:

· Perform a software upgrade to a newer version.
· Make policy changes and verify they work as expected.
· Physically move the device.
· Replace a hard drive, fan, and/or power supply.

Add the AFM device to the new Topology

This will allow us to test its functionality with a single client computer prior to moving it back to production.

From the SSL Orchestrator Configuration Utility click SSL Orchestrator > Configuration > Topologies > sslo_Topology_Staging. Click the pencil icon on the right to edit the Service. Click Add Service. Select the Generic Inline Layer 2 Service and click Add.

Give it a name or leave the default. Click Add under Network Configuration. Set the From and To VLANs for this device and click Done. Click Save at the bottom.

Click the Service Chain icon, then click the Staging_Chain. Move the GENERIC Service from Available to Selected and click Save. Click OK. Click Deploy. Click OK.

Test functionality with a single client

We created a policy with source IP 10.1.11.52 to use the AFM Service that we just performed maintenance on. Go to that client computer and verify that everything is still working as expected. From the test client (10.1.11.52), the page for one of the web servers still loads. You can view the Certificate and see that it is not the same as the Production Certificate.
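If you want to script this spot check from the test client instead of using a browser, here is a minimal sketch. It assumes the client can reach one of the example web servers (10.4.11.56) on port 443 and that the staging certificate is not trusted by the client, so verification is relaxed; adjust the address to your environment.

```python
# Minimal sketch, run from the test client (10.1.11.52): confirm the page still
# loads through the staging Topology and capture the certificate being served so
# it can be compared with the production certificate.
import ssl
import urllib.request

HOST = "10.4.11.56"  # one of the example web servers behind SSL Orchestrator

# Relax verification because the staging certificate is not expected to be
# trusted by this client.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen(f"https://{HOST}/", context=ctx, timeout=5) as resp:
    print("HTTP status:", resp.status)

# Print the PEM certificate presented on port 443 for manual comparison with
# the certificate used by the production Topology.
print(ssl.get_server_certificate((HOST, 443)))
```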
To ensure that everything is working as expected you can view the Virtual Server statistics on AFM2, which was the AFM device removed from the Production network. From Local Traffic select Virtual Servers > Statistics > Virtual Server. Statistics can be cleared by checking the box and selecting Reset. After a reset, you should see Bits and Packets for 10.4.11.56, assuming you reload the browser a few times from the test client. It is advisable to check that all of the Virtual Servers are working this way.

Add the AFM device back to the original Topology

From the SSL Orchestrator GUI select SSL Orchestrator > Configuration > Service Chains. Select the Staging_Chain. Select ssloS_Generic on the right and click the left arrow to remove it from Selected. Click Deploy when done. Click OK, then click OK to the Success message.

From the SSL Orchestrator Guided Configuration select SSL Orchestrator > Configuration > Services. Select the GENERIC Service and click Delete. Click OK to the Warning.

When that is done click the ssloS_IPS Service. Click the pencil icon to edit the Service. Under Network Configuration click Add. Set the Ratio to the same value as AFM1, 65535 in this example. Set the From and To VLANs for this device and click Done. Click Save & Next at the bottom. Click OK. Click Deploy. Click OK.

Test functionality again

Make sure AFM2 is working properly. To ensure that everything is working as expected you can view the Virtual Server statistics on AFM2. From Local Traffic select Virtual Servers > Statistics > Virtual Server. Click Refresh or set Auto Refresh to 10 seconds. When the statistics reload, the counters should be incrementing again for all Virtual Servers.

Repeat these steps to perform maintenance on the other AFM device (not covered in this guide).

The remainder of this article applies the same procedure to the Advanced WAF devices:

· Remove a single Adv.WAF device from the Service
· Perform maintenance on the Adv.WAF device
· Add the Adv.WAF device to the new Topology
· Test functionality with a single client
· Add the Adv.WAF device back to the original Topology
· Test functionality again
· Repeat to perform maintenance on the other Adv.WAF device

Monitor server statistics – change the weight ratio – check server statistics again

Check the Virtual Server statistics on the BIG-IP we will be performing maintenance on. It's "Adv.WAF2" in this example. Under Local Traffic click Virtual Servers, then select Statistics > Virtual Server. Set Auto Refresh to 10 seconds. In this example you can see we have 5 Virtual Servers. The statistics counters should increment every time the screen refreshes. These servers appear to be healthy.

Change the Weight Ratio

Return to the SSL Orchestrator Configuration Utility. Click SSL Orchestrator > Configuration > Services, then the Service name, ssloS_AdvWAF in this example. Click the pencil icon to edit the Service, then click the pencil icon to edit the Network Configuration for WAF1. Set the ratio to 65535 and click Done. Click Save & Next at the bottom. Click OK if presented with a warning. Click Deploy, then click OK when presented with the Success message.

Check Server Statistics Again

Check the Virtual Server statistics on "Adv.WAF2" again. With Auto Refresh on, the statistics should no longer increment, and Current Connections should eventually reach zero for all Virtual Servers.
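If you would rather wait for the drain from a script than watch the Auto Refresh counter, below is a minimal sketch that polls Adv.WAF2 over iControl REST until the client-side current connections reach zero on every Virtual Server, at which point the device can be removed from the Service and taken down for maintenance. The management address and credentials are placeholders.

```python
# Minimal sketch: poll the Adv.WAF2 BIG-IP over iControl REST until the
# client-side current connections on all Virtual Servers have drained to zero.
# Management address and credentials are placeholders for your environment.
import time
import requests
import urllib3

urllib3.disable_warnings()  # assumes the default self-signed device certificate

BIGIP = "https://advwaf2.example.com"  # hypothetical management address of Adv.WAF2
AUTH = ("admin", "admin-password")     # placeholder credentials

def current_connections() -> int:
    resp = requests.get(f"{BIGIP}/mgmt/tm/ltm/virtual/stats", auth=AUTH, verify=False)
    resp.raise_for_status()
    return sum(
        entry["nestedStats"]["entries"]["clientside.curConns"]["value"]
        for entry in resp.json().get("entries", {}).values()
    )

while (remaining := current_connections()) > 0:
    print(f"{remaining} connection(s) remaining, waiting...")
    time.sleep(10)

print("All Virtual Servers have drained.")
```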
Remove a single Adv.WAF device from the Service

Return to the SSL Orchestrator Configuration Utility. Click SSL Orchestrator > Configuration > Services, then the Service name, ssloS_AdvWAF in this example. Click the pencil icon to edit the Service. Under Network Configuration, delete WAF2. Click Save & Next at the bottom. Click OK if presented with a warning. Click Deploy, then click OK when presented with the Success message.

Perform maintenance on the Adv.WAF device

At this point Adv.WAF2 has been removed from the Incoming_Security Topology and is no longer handling production traffic; Adv.WAF1 is now handling all of the production traffic. We can now perform a variety of maintenance tasks on Adv.WAF2 without disrupting production traffic. When done with the task(s) we can safely test and verify the health of Adv.WAF2 prior to moving it back into production.

Some examples of maintenance tasks:

· Perform a software upgrade to a newer version.
· Make policy changes and verify they work as expected.
· Physically move the device.
· Replace a hard drive, fan, and/or power supply.

Add the Adv.WAF device to the new Topology

This will allow us to test its functionality with a single client computer prior to moving it back to production.

From the SSL Orchestrator Configuration Utility click SSL Orchestrator > Configuration > Topologies > sslo_Topology_Staging. Click the pencil icon on the right to edit the Service. Click Add Service. Select the Generic Inline Layer 2 Service and click Add.

Give it a name or leave the default. Click Add under Network Configuration. Set the From and To VLANs for this device and click Done. Click Save at the bottom.

Click the Service Chain icon, then click the Staging_Chain. Move the GENERIC Service from Available to Selected and click Save. Click OK. Click Deploy. Click OK.

Test functionality with a single client

We created a policy with source IP 10.1.11.52 to use the Adv.WAF Service that we just performed maintenance on. Go to that client computer and verify that everything is still working as expected. From the test client (10.1.11.52), the page for one of the web servers still loads. You can view the Certificate and see that it is not the same as the Production Certificate.

To ensure that everything is working as expected you can view the Virtual Server statistics on Adv.WAF2, which was the Adv.WAF device removed from the Production network. From Local Traffic select Virtual Servers > Statistics > Virtual Server. Statistics can be cleared by checking the box and selecting Reset. After a reset, you should see Bits and Packets for 10.4.11.56, assuming you reload the browser a few times from the test client. It is advisable to check that all of the Virtual Servers are working this way.

Add the Adv.WAF device back to the original Topology

From the SSL Orchestrator GUI select SSL Orchestrator > Configuration > Service Chains. Select the Staging_Chain. Select ssloS_Generic on the right and click the left arrow to remove it from Selected. Click Deploy when done. Click OK, then click OK to the Success message.

From the SSL Orchestrator Guided Configuration select SSL Orchestrator > Configuration > Services. Select the GENERIC Service and click Delete. Click OK to the Warning.

When that is done click the ssloS_AdvWAF Service. Click the pencil icon to edit the Service. Under Network Configuration click Add. Set the Ratio to the same value as Adv.WAF1, 65535 in this example. Set the From and To VLANs for this device and click Done. Click Save & Next at the bottom. Click OK. Click Deploy. Click OK.

Test functionality again

Make sure Adv.WAF2 is working properly. To ensure that everything is working as expected you can view the Virtual Server statistics on Adv.WAF2. From Local Traffic select Virtual Servers > Statistics > Virtual Server. Click Refresh or set Auto Refresh to 10 seconds. When the statistics reload, the counters should be incrementing again for all Virtual Servers.
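Both devices should now be sharing production traffic again. If you want to confirm (or script) the weight-ratio step used throughout this procedure, the sketch below reads and sets a pool member's ratio over iControl REST. The management address, pool name, and member name are placeholders; on an SSL Orchestrator deployment the pool behind an inline Service is created and owned by the Guided Configuration, and re-deploying the Service can overwrite changes made outside it, so treat this as a verification aid rather than a replacement for the GUI steps above.

```python
# Minimal sketch: read and set a pool member's ratio over iControl REST.
# Management address, pool name, and member name are placeholders; on an SSL
# Orchestrator deployment the Guided Configuration owns these objects and a
# Service re-deploy can overwrite manual changes.
import requests
import urllib3

urllib3.disable_warnings()  # assumes the default self-signed device certificate

BIGIP = "https://bigip.example.com"    # hypothetical management address
AUTH = ("admin", "admin-password")     # placeholder credentials
POOL = "~Common~example_service_pool"  # placeholder pool name
MEMBER = "~Common~198.19.64.10:0"      # placeholder pool member name

url = f"{BIGIP}/mgmt/tm/ltm/pool/{POOL}/members/{MEMBER}"

# Read the member's current ratio.
resp = requests.get(url, auth=AUTH, verify=False)
resp.raise_for_status()
print("current ratio:", resp.json().get("ratio"))

# Set the ratio to the value used in this article.
resp = requests.patch(url, json={"ratio": 65535}, auth=AUTH, verify=False)
resp.raise_for_status()
```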
Repeat these steps to perform maintenance on the other Adv.WAF device (not covered in this guide).

Summary

In this article you learned how to use SSL Orchestrator as a tool to simplify Change Management processes and procedures and to shorten the duration of the entire process.

Next Steps

That's it, you're done!

Don't Conflate Virtual with Dynamic
Focusing on form factor over function is as shallow and misguided as focusing on beauty over brains.

The saying goes that if all you have is a hammer, everything looks like a nail. I suppose then that it only makes sense that if the only tool you have for dealing with the rapid dynamism of today's architectural models is virtualization, then everything looks like a virtual image.

Virtualization is but one way of implementing a dynamic infrastructure capable of the rapid provisioning and configuration gyrations needed to address the fluidity of the "perimeter" of the network today. Dynamic is not a synonym for virtualization, and virtualization does not inherently provide the fluidity of the network architecture required to address the challenges associated with highly dynamic environments.

COMPLEXIFICATION

Consider for a moment the conclusion that the perimeter must become virtual because it is trying to contain a moving target:

In the world of cloud infrastructures (IaaS), it is not so easy to determine the "area" that is supposed to be surrounded. Resources are shared among different clients (multi-tenancy) and they are allocated in data-centers of external providers (outsourcing). Moreover, computing resources get virtual – physical resources are transparently shared – and elastic – they are allocated and destroyed on demand. Since this can be done via APIs in a programmable and automated way, cloud computing infrastructures are highly dynamic and volatile. How can one build a perimeter around a moving target? Well, the short answer is: the perimeter must also become virtual, highly dynamic, and automated.
-- Why The Perimeter Must Become Virtual

There are a number of issues this raises, not the least of which is the mechanism for scaling and managing such a virtual perimeter, especially given the topological sensitivity of a variety of network-hosted services, particularly those focused on security. I'll simply paraphrase Hoff at this point from his "The Four Horsemen of the Virtualization Security Apocalypse": there are issues with a fully virtualized approach to security around topology, routing, scalability, and resiliency. In short, there are myriad architectural challenges associated with a fully virtualized approach to enabling a dynamic data center model. [An easy answer as to why security and virtual network devices aren't always compatible is any situation in which FIPS 140-2 Level 2 compliance is necessary.]

That's in addition to the complexity introduced by replacing high-speed network components capable of handling upwards of 40 and 100 Gbps with commodity hardware, limited compute resources, and constrained network connections. Achieving similar throughput rates using virtual components will require multiple instances of the virtual network appliance, which introduces architectural and topological challenges that must be addressed, not the least of which is controlling flow, which in turn introduces overhead that will negatively impact those throughput rates. This also assumes that the protocols typically associated with the network perimeter will scale across multiple, dynamic instances without noticeable disruption to services. If you've ever changed a routing table or a VLAN on a router and then had to wait for spanning tree to converge, you'll know what I'm talking about. It's anything but rapid and will almost certainly have a detrimental effect on the availability of every dependent service (which, at the network perimeter, is everything).
IT'S NOT ABOUT THE FORM FACTOR

In order to implement the kind of dynamic network perimeter introduced by the author of "Why The Perimeter Must Become Virtual" we do, in fact, need a more flexible, automated perimeter. However, that perimeter does not have to be virtual; in fact, the key to implementing such a fluid network is the inherent dynamism of its components, not their form factor. If the components are dynamic themselves – programmable, if you will – and can be configured, deployed, modified and shut down automatically and on demand, then they can be leveraged to address the dynamism inherent in a cloud computing and highly virtualized architectural model. Because they can be integrated. Because they are collaborative.

The strategic points of control that exist in every data center model must be dynamic, both from a configuration and an execution point of view. Not only must the components that form a strategic net across the data center – effectively virtualizing business resources such as applications and storage – be dynamic in their management, they must themselves be contextually aware and capable of taking action at run-time. The kind of dynamic action required to address "moving targets" is not inherent in virtualization. Virtualizing a component only makes provisioning easier. Without a means to remotely invoke services (APIs), to modify configuration dynamically (APIs), and to dynamically adjust its behavior based on events within the data center, a virtualized component is little more than a virtual brick.

Fluidity of the network is not a result of virtualization. There are myriad examples already of how traditional "iron" not only enables but stabilizes the management and control of dynamic environments. Programmability, on-demand contextual awareness, APIs, scripting, policy-based networking. All these capabilities enable the fluidity necessary to address the "moving targets" comprising cloud-based and highly virtualized modern data center models, but without the instability created by the lack of topological and architectural control inherent in a "toss another virtual appliance at the problem" approach. It's more about designing an architecture composed of highly dynamic and interactive components that can be provisioned and managed on demand, as services.

Yes, dynamic, highly automated data centers are necessary to combat the issues arising from constantly changing infrastructure. But dynamism and automation do not require virtualization; they require collaboration and integration, and a platform capable of providing both.

Related blogs and articles:
Provisioning a Virtual Network is Only the Beginning
The Four Horsemen Of the Virtualization Security Apocalypse
Why The Perimeter Must Become Virtual
Are You Ready for the New Network?
VM Sprawl is Bad but Network Sprawl is Badder
The Devil is in the Details
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
The Question Shouldn't Be Where are the Network Virtual Appliances but Where is the Architecture?
A Fluid Network is the Result of Collaboration Not Virtualization
What is a Strategic Point of Control Anyway?

The days of IP-based management are numbered
The focus of cloud and virtualization discussions today revolves primarily around hypervisors, virtual machines, automation, network and application network infrastructure; that is, on the dynamic infrastructure necessary to enable a truly dynamic data center. In all the hype we've lost sight of the impact these changes will have on other critical IT systems such as network systems management (NSM) and application performance management (APM). You know their names: IBM, CA, Compuware, BMC, HP. There are likely one or more of their systems monitoring and managing applications and systems in your data center right now. They provide alerts, notifications, and the reports IT managers demand on a monthly or weekly basis to prove IT is meeting the service-level agreements around performance and availability made with business stakeholders.

In a truly dynamic data center, one in which resources are shared in order to provide the scalability and capacity needed to meet those service-level agreements, IP addresses are likely to become as mobile as the applications and infrastructure that need them. An application may or may not use the same IP address when it moves from one location to another; an application will use multiple IP addresses when it scales automatically, and those IP addresses may or may not be static. It is already apparent that DHCP will play a larger role in the dynamic data center than it does in a classic data center architecture. DHCP is not often used within the core data center precisely because it is not guaranteed. Oh, you can designate that *this* MAC address is always assigned *that* dynamic IP address, but essentially what you're doing is creating a static map that is in execution no different from a statically bound IP address. And in a dynamic data center the MAC address is not guaranteed, precisely because virtual instances of applications may move from hardware to hardware based on current performance, availability, and capacity needs.

The problem, then, is that NSM and APM are often tied to IP addresses: they use aging standards like SNMP to monitor infrastructure, and agents installed at the OS or application server layer to collect the performance data that is ultimately used to generate those eye-candy charts and reports for management. These systems can also generate dependency maps, tying applications to servers to network segments and their supporting infrastructure, such that if any one dependent component fails, an administrator is notified. And it is almost all monitored based on IP address.

When those IP addresses change, as more and more infrastructure is virtualized and applications become more mobile within the data center, the APM and NSM systems will either fail to recognize the change or, more likely, "cry wolf" with alerts and notifications stating an application is down when in truth it is running just fine. The potential to collect erroneous data is detrimental to the ability of IT to show its value to the business, prove its adherence to agreed-upon service-level agreements, and accurately forecast growth. NSM and APM will be affected by the dynamic data center; they will need to alter the basic premise upon which they have always acted: that every application, network device, and application network infrastructure solution is tied to an IP address.
The bonds between IP address and … everything are slowly being dissolved as we move into an architectural model that abstracts, and then ignores, the very network foundations upon which data centers have always been built. While in many cases the bond between a device or application and an IP address will remain, it cannot be assumed to be true. The days of IP-based management are numbered, necessarily, and while that sounds ominous it is really a blessing in disguise. Perhaps the "silver lining in the cloud", even.

All the monitoring and management that goes on in IT is centered around one thing: the application. How well is it performing, how much bandwidth does it need and use, is it available, is it secure, is it running? By forcing the issue of IP address management into the forefront by effectively dismissing the IP address as a primary method of identification, the cloud and virtualization have done the IT industry in general a huge favor. The dismissal of the IP address as an integral means by which an application is identified, managed, and monitored means there must be another way to do it. One that provides more information, better information, and increased visibility into the behavior and needs of that application.

NSM and APM, like so many other IT systems management and monitoring solutions, will need to adjust the way in which they monitor, correlate, and manage the infrastructure and applications in the new, dynamic data center. They will need to integrate with whatever means is used to orchestrate and manage the ebb and flow of infrastructure and applications within the data center.

The coming network and data center revolution – the move to a dynamic infrastructure and a dynamic data center – will have long-term effects on the systems and applications traditionally used to manage and monitor them. We need to start considering the ramifications now in order to be ready before it becomes an urgent need.