orchestration
Orchestrated Infrastructure Security - Advanced WAF
The F5 Beacon capabilities referenced in this article hosted on F5 Cloud Services are planning a migration to a new SaaS Platform - check out the latest here.

Introduction

This article is part of a series on implementing Orchestrated Infrastructure Security. It includes High Availability, Central Management with BIG-IQ, Application Visibility with Beacon, and the protection of critical assets using F5 Advanced WAF and Protocol Inspection (IPS) with AFM. It is assumed that SSL Orchestrator is already deployed and basic network connectivity is working. If you need help setting up SSL Orchestrator for the first time, refer to the DevCentral article series Implementing SSL Orchestrator here.

This article focuses on configuring F5 Advanced WAF deployed as a Layer 2 solution. It covers the configuration of Advanced WAF protection on an F5 BIG-IP running version 16.0.0. Configuration files for a BIG-IP deployed as Advanced WAF can be downloaded from GitLab here. Please forgive me for using SSL and TLS interchangeably in this article.

This article is divided into the following high-level sections:

Advanced WAF: Network Configuration
Attach Virtual Servers to an Advanced WAF Policy

Advanced WAF: Network Configuration

The BIG-IP will be deployed with VLAN Groups. This combines two interfaces to act as an L2 bridge, where data flows into one interface and is passed out the other. vWire configuration will be covered later.

From the F5 Configuration Utility go to Network > VLANs. Click Create on the right. Give it a name, ingress1 in this example. Set the Interface to 2.1. Set Tagging to Untagged, then click Add.

Note: In this example interface 2.1 will receive decrypted traffic from sslo1.

Interface 2.1 (untagged) should be visible like in the image below. Click Repeat at the bottom to create another VLAN. Give it a name, egress1 in this example. Set the Interface to 2.2. Set Tagging to Untagged, then click Add.

Note: In this example interface 2.2 will send decrypted traffic back to sslo1.

Interface 2.2 (untagged) should be visible like in the image below. Click Finished.

Note: This guide assumes you are setting up a redundant pair of SSL Orchestrators. Therefore, you should repeat these steps to configure VLANs for the two interfaces connected to sslo2. These VLANs should be named in a way that lets you differentiate them from the others, for example ingress2 and egress2. It should look something like this when done:

Note: In this example interfaces 2.3 and 2.4 are physically connected to sslo2.

Click VLAN Groups, then Create on the right. Give it a name, vlg1 in this example. Move ingress1 and egress1 from Available to Members. Set the Transparency Mode to Transparent. Check the box to Bridge All Traffic, then click Finished.

Note: This guide assumes you are setting up a redundant pair of SSL Orchestrators. Therefore, you should repeat these steps to configure a VLAN Group for the two interfaces connected to sslo2. This VLAN Group should be named in a way that lets you differentiate it from the other, for example vlg1 and vlg2. It should look like the image below:

For full Layer 2 transparency the following CLI option needs to be enabled:

(tmos)# modify sys db connection.vgl2transparent value enable
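If you prefer the CLI, the same objects can be created from tmsh. Here is a minimal sketch that mirrors the GUI steps above, using the interface numbers and object names from this example (adjust them to match your own wiring):

  # Create the ingress/egress VLANs for sslo1 (untagged on interfaces 2.1 and 2.2)
  (tmos)# create net vlan ingress1 interfaces add { 2.1 { untagged } }
  (tmos)# create net vlan egress1 interfaces add { 2.2 { untagged } }

  # Bridge the two interfaces together in a transparent VLAN group
  (tmos)# create net vlan-group vlg1 members add { ingress1 egress1 } mode transparent bridge-traffic enabled

  # Enable full Layer 2 transparency and save the configuration
  (tmos)# modify sys db connection.vgl2transparent value enable
  (tmos)# save sys config

Repeat the VLAN and VLAN group commands for the interfaces connected to sslo2 (ingress2, egress2, vlg2).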
Attach Virtual Servers to an Advanced WAF Policy

You can skip this step if you already have an Advanced WAF policy created and attached to one or more virtual servers. If not, we'll cover it briefly. In this example we configured Comprehensive Protection, which includes Bot Mitigation, Layer 7 DoS, and Application Security.

Give it a name, App_Protect1 in this example, then click Save & Next. Select the Enforcement Mode and Policy Type. Click Save & Next. Configure the desired Bot Defense options. Click Save & Next on the lower right. Configure the desired DoS Profile Properties. Click Save & Next. Assign the policy to your application server(s) by moving them to Selected. Click Save & Next. Click Finish/Deploy when done.

Summary

In this article you learned how to configure BIG-IP in Layer 2 transparency mode using VLAN groups. We also covered how to create an Advanced WAF policy and attach it to your virtual servers.

Next Steps

Click Next to proceed to the next article in the series.
Manage Infrastructure and Services Lifecycle with Terraform and Ansible + Demo

Working as a Solution Architect for F5, I often need access to a lab environment. 'Traditionally', the method to implement a lab was to leverage tools like Vagrant, VMware, or others. A lab environment on a laptop is limited by its computing capacity (CPU/memory/disk/...). Today we are often asked to show how we can integrate our solutions with many different tools (orchestration solutions, version control systems, CI servers, containerized environments, ...). Unless your laptop is a powerful one, it's difficult to build such an environment and have it run smoothly.

If the lab requirements are too demanding for my laptop, I would access one of our lab facilities to do my work. This approach itself is fine but brings some challenges:

If you travel like I do, latency can become a hindrance and be frustrating.
Lab facilities leverage "shared resources", which means you may face issues due to conflicting IP addresses, switch misconfiguration, maintenance operations, ...
Some resources may already be reserved/used by a fellow colleague and not be available.

You may also face other constraints making both deployment models difficult:

Need to share access to the lab. Not easy when it runs on your laptop or in a private cloud that is not always open to the outside world.
People may need to be able to replicate your lab in their own environment.
Stability/time needed for maintenance: using a lab over and over will make it messy. At some point, you'll reach a stage where you want to create a "new" environment that is clean and "trustworthy" (until you've played too much with it again).

I'm sure I've missed other constraints, but you get the idea: maintaining a lab and using it in a collaborative manner is challenging. Luckily, it's easier today to achieve those objectives: leverage public cloud!

Public cloud gives you access to "unlimited" computing services over the Internet that can be automated and orchestrated. With public cloud, you have access to an API allowing you to spin up a new environment with all the relevant tools deployed. This way, you can go straight to work (after enjoying a nice cup of coffee/tea while your infrastructure is being deployed!). Once your work is done, you can destroy this environment and save money. When you need a lab again, you'll be able to spin up a new/clean environment in a matter of minutes and be confident that it's a "healthy" lab.

When working on automation/orchestration of public cloud environments, I see two dominant tools: Terraform and Ansible.

https://www.terraform.io

Terraform is an open source command line tool that can be used to provision an infrastructure on dozens of different platforms and services (AWS, Azure, ...). One of the strengths of Terraform is that it is declarative: you specify the expected "state" of your infrastructure and Terraform will take care of all the underlying complexities (Does it need to be provisioned? Should I update the settings of a component? Which components should be created first? Do we need to delete resources that are not required anymore? ...). Terraform stores the "state" of your infrastructure and configuration to be more efficient in its work.
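In day-to-day use, that declarative workflow boils down to a handful of commands. Here is a minimal sketch of the lifecycle (standard Terraform CLI, nothing F5-specific):

  # Download the providers/modules referenced by the configuration
  terraform init

  # Preview what would be created, changed or destroyed
  terraform plan -out=lab.tfplan

  # Converge the infrastructure toward the declared state
  terraform apply lab.tfplan

  # List the resources Terraform is tracking in its state file
  terraform state list

  # Tear the lab down when you are done, and stop paying for it
  terraform destroy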
https://www.ansible.com

Ansible is a provisioning and configuration management tool. It is designed to automate application deployments. One of the strengths of Ansible is that it doesn't require any "agents" to run on the targeted systems. Ansible works by leveraging "Modules". Those modules are consumed to define the "state" of the targeted systems, and they are usually executed over SSH (by default).

So how do we leverage those tools to have a lab available on demand? In the following demo, we will:

Leverage Terraform to manage the lifecycle of a new AWS environment (a dedicated VPC with external/internal subnets, Ubuntu instances, and the F5 solution). In addition to deploying our infrastructure, it will generate the relevant files for Ansible: an inventory file to know the IPs of our systems, and Ansible variable files to know how to configure the F5 solution with AS3.
Use Ansible to manage the configuration of our systems: update our Ubuntu instances, install the NGINX web service on our Ubuntu instances, and deploy a standard F5 configuration to load balance our web application with AS3.

Here is a summary for the demo:

Demo time!

By leveraging tools like Terraform or Ansible (you can achieve the same results with other tools), it is easy to handle the lifecycle of an infrastructure and the services running on top of it. This is what people call IaC (Infrastructure as Code).
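To give you an idea of what running the demo looks like, here is a sketch of the two-step workflow. The file names below (inventory path, playbook name) are placeholders; the real ones are in the GitHub repository linked under 'Useful links':

  # 1. Provision the AWS environment; the Terraform configuration also
  #    writes out the Ansible inventory and the AS3 variable files
  terraform apply

  # 2. Configure the systems Terraform just created: update Ubuntu,
  #    install NGINX, and push the AS3 declaration to the BIG-IP
  ansible-playbook -i inventory/hosts site.yml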
Useful links:
- If you want to learn more about the setup of this demo, it is posted on GitHub: here
- F5 provides a list of templates to automate deployment in public cloud. It's available here: AWS Templates, Azure Templates, GCP Templates
- F5 Application Services 3 (AS3) documentation/examples: here
- If you want to learn more about our API and how to automate/orchestrate F5 solutions (free training): F5 A&O Training

Exploring Kubernetes API using Wireshark part 1: Creating, Listing and Deleting Pods

Related Articles:
Exploring Kubernetes API using Wireshark part 2: Namespaces
Exploring Kubernetes API using Wireshark part 3: Python Client API

Quick Intro

This article answers the following question: what happens when we create, list and delete pods under the hood? More specifically, on the wire. I used these 3 commands:

I'll show you on Wireshark the communication between the kubectl client and the master node (API) for each of the above commands. I used a proxy so we don't have to worry about the TLS layer and can focus on HTTP only.

Creating NGINX pod

pcap: creating_pod.pcap (use the http filter on Wireshark)

Here's our YAML file: Here's how we create this pod: Here's what we see on Wireshark:

Behind the scenes, the kubectl command sent an HTTP POST with our YAML file converted to JSON, but notice the same things were sent (kind, apiVersion, metadata, spec): You can even expand it if you want to, but I didn't, to keep it short. Then, the Kubernetes master (API) responds with HTTP 201 Created to confirm our pod has been created: Notice that the master node replies with similar data, with an additional status field, because after a pod is created it's supposed to have a status too.

Listing Pods

pcap: listing_pods.pcap (use the http filter on Wireshark)

When we list pods, kubectl just sends an HTTP GET request instead of POST, because we don't need to submit any data apart from headers: This is the full GET request: And here's the HTTP 200 OK with the JSON file that contains all information about all pods from the default namespace:

I just wanted to emphasise that when you list pods the resource type that comes back is PodList, and when we created our pod it was just Pod. Remember? The other thing I'd like to point out is that all of your pods' information should be listed under items. All kubectl does is display some of the API's info in a humanly readable way.

Deleting NGINX pod

pcap: deleting_pod.pcap (use the http filter on Wireshark)

Behind the scenes, we're just sending an HTTP DELETE to the Kubernetes master: Also notice that the pod's name is included in the URI:

/api/v1/namespaces/default/pods/nginx ← this is the pod's name

HTTP DELETE, just like HTTP GET, is pretty straightforward: Our master node replies with HTTP 200 OK as well as a JSON file with all the info about the pod, including about its termination: It's also good to emphasise here that when our pod is deleted, the master node returns a JSON file with all information available about the pod. I highlighted some interesting info. For example, the resource type is now just Pod (not PodList, which we only see when listing pods).
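If you want to reproduce these captures yourself, the three kubectl commands map to three plain HTTP calls. Here is a minimal sketch using kubectl proxy (which is how I avoided the TLS layer) and curl; nginx-pod.json stands in for the JSON-converted version of our YAML file:

  # Expose the API server over plain HTTP on localhost
  kubectl proxy --port=8001 &

  # Create the pod: HTTP POST of the pod manifest
  curl -s -X POST -H "Content-Type: application/json" \
    --data @nginx-pod.json \
    http://127.0.0.1:8001/api/v1/namespaces/default/pods

  # List pods in the default namespace: HTTP GET (returns a PodList)
  curl -s http://127.0.0.1:8001/api/v1/namespaces/default/pods

  # Delete the pod by name: HTTP DELETE with the pod's name in the URI
  curl -s -X DELETE http://127.0.0.1:8001/api/v1/namespaces/default/pods/nginx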
F5 Services in a CI/CD pipeline

Speeding up the delivery of new services (time to market) and decreasing OPEX are examples of why customers are investing in automation and orchestration. This has an impact on all the layers involved in the delivery of a service, infrastructure included. As a solution architect working at F5, I spend a fair amount of time with customers discussing how we can handle the F5 services lifecycle with automation and orchestration tools. Customers want to make sure that any technology they acquire can be orchestrated and won't be an inhibitor to their projects.

In the video below, I will show you how you can tie F5 services to the lifecycle of an application. For this demo, I used the F5 Automation Toolchain. The F5 Automation Toolchain allows our customers to deploy F5 application services easily and quickly via a declarative API interface. AS3 (Application Services 3) is the extension I used to deploy the service on top of our F5 platform running as software (a virtual machine).

Everything related to this demo is available here. It contains enough scripts/information for anyone to reproduce this demo and learn more about how to consume F5 services in a CI/CD pipeline.
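To make the AS3 part concrete, here is a hedged sketch of the kind of declaration such a pipeline POSTs to the BIG-IP's AS3 endpoint. The endpoint (/mgmt/shared/appsvcs/declare) is the standard AS3 one; the credentials, tenant/application names and IP addresses are illustrative, not the ones from the demo:

  # Deploy a simple HTTP virtual server and pool via AS3
  curl -sk -u admin:admin -X POST \
    -H "Content-Type: application/json" \
    https://192.0.2.10/mgmt/shared/appsvcs/declare \
    -d '{
      "class": "AS3",
      "action": "deploy",
      "declaration": {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "demo_tenant": {
          "class": "Tenant",
          "demo_app": {
            "class": "Application",
            "template": "http",
            "serviceMain": {
              "class": "Service_HTTP",
              "virtualAddresses": ["10.0.1.10"],
              "pool": "web_pool"
            },
            "web_pool": {
              "class": "Pool",
              "monitors": ["http"],
              "members": [{
                "servicePort": 80,
                "serverAddresses": ["10.0.2.10", "10.0.2.11"]
              }]
            }
          }
        }
      }
    }'

Because the API is declarative, re-POSTing the same declaration is idempotent: AS3 only changes what differs from the current configuration, which is what makes it a good fit for a CI/CD pipeline.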
You can find more information here regarding the F5 Automation Toolchain:
* Overview
* Whitepaper
* Technical documentation
* Download the F5 automation toolchain

NFV is Much More than Virtualization

There has been a lot of attention given to Network Functions Virtualization (NFV) and how it is driving a paradigm shift in the architecture of today's communications service providers' (CSPs) networks. NFV is a popular topic and has been getting a lot of visibility in recent months. Recently, I have been seeing announcements from vendors stating that they are delivering a software version of their application or solution, eliminating the need for proprietary hardware. These announcements always mention that since the product is now available as a 'virtualized' solution, it is NFV compliant or based on NFV standards.

Let's get something straight. First, there are no standards. There is an ETSI NFV working group that is defining the requirements for NFV. It has produced an initial white paper describing the benefits, enablers, and challenges for NFV in the CSP environment. Most of the industry is using this white paper as the basis for its concept of NFV. The working group is continuing to meet and determine what NFV consists of, with a goal of having the concept fully defined by the end of 2014.

Second, and more importantly, just because your solution can be installed and run on commoditized hardware, it does not mean that it is 'NFV enabled'. In a recent interview, Margaret Chiosi from AT&T said, 'Orchestration and management is a key part in realizing NFV'. While running on commoditized hardware is part of the NFV story, it is not the complete story. Virtualization does not achieve anything except reduced capital costs, due to the use of common off-the-shelf (COTS) hardware, and flexibility in the deployment of services, since there is no proprietary hardware dependency. We at F5 Networks believe that the orchestration and management of these virtualized services are critical to making NFV successful.

We have been delivering virtualized versions of our solutions since 2010. Because we have experience delivering virtualized solutions, we understand how important it is to deliver a framework allowing for the integration of these solutions with other key components of the architecture. Even though services such as DPI, carrier-grade NAT, PGW, and other Evolved Packet Core (EPC) functions have been virtualized, they are not automatically part of a flexible and dynamic architecture. Orchestration and management are necessary to enable the potential of the virtualized service. As I mentioned in a previous blog post, orchestration is where NFV can really shine, because it allows CSPs to automate service delivery and create an ecosystem where the infrastructure can react and respond to changing conditions. The infrastructure can monitor itself, determining the health of the services, and instigate changes based on policies defined by the architects and operators.

An Orchestra needs a Conductor

F5 has been providing the foundation for orchestration services ever since the company's inception. The concept of load balancing and providing application delivery controller (ADC) services is all about managing connections and sessions to pools of services. This functionality inherently provides a level of security, since the ADC is providing proxy services to the applications, as well as availability, since the ADC is also monitoring the health of the service through extensive application health checks. As Tom Nolle states in one of his blog posts, he likes the idea of 'creating a set of abstract service models that are instantiated by linking them to an API'.
This sounds like a great template for delivering orchestration services. Orchestration happens when these service models are linked to the services via APIs. Service models are defined via operator policies, and APIs allow for bi-directional communication between service components. The application delivery controller (ADC) is a key component for this orchestration to occur. It is the ADC that has insight into the resources and their availability. It is the ADC that manages the forwarding of sessions to these resources. In the CSP network architecture, especially as we evolve to LTE, this includes much more than the traditional load balancers and content management solutions that sit in front of server farms in data centers, which is what we typically think of when discussing ADCs. The high-value content is in the packet data network (Gi and SGi) and the control plane messaging (DNS, Diameter, SIP). With this critical role in the real-time traffic engineering of the services being virtualized, along with the visibility into the health and availability of the resources the service can access, it makes sense that the ADC plays a pivotal role in the management and orchestration of the virtualized infrastructure.

It takes more than content inspection, load balancing, and subscriber policy control via PCRF/PCEF to enable the full orchestration required for the NFV vision to come to fruition. There needs to be an intelligence that can correlate the data from these services and technologies to determine the health and state of the infrastructure. Based on this information, along with policies that have been defined by the operator and programmed into this intelligence, it becomes possible to 1) create a dynamic orchestration ecosystem that can monitor the different services and functions within the EPC, 2) collect data and provide analytics, and 3) proactively and reactively adjust the configuration and availability of resources based on the operator-defined policies. Along with the intelligence, it is necessary to have open APIs that allow for inbound and outbound communications, both to share the data collected and to serve as the conduit for the policy and configuration changes determined by the orchestration system. It is critical for the orchestration of an NFV architecture in the EPC to be open and allow for multiple, potentially disparate vendors and technologies to work together to create a dynamic environment that provides the flexibility and scalability that NFV is looking to achieve.

As an example to demonstrate this orchestration functionality and how it is able to take advantage of virtualization within the NFV architecture, I am reposting this video of value-added services (VAS) bursting in a virtual EPC (vEPC) environment. In this scenario, the F5 BIG-IP Policy Enforcement Manager (PEM) is identifying and tracking the connections being delivered to the VAS solution, video optimization in this case. Based on the data received, such as the number of concurrent connections, BIG-IP PEM is able to signal a virtual orchestration client, such as one of many varieties of virtual machine hypervisors, to enable additional servers for the video optimization solution and have them added to the available resources within the traffic steering policy.
This demonstration shows the initial potential of a virtualized infrastructure when one is able to deliver on the promise of the orchestration and management of the entire infrastructure as an ecosystem, not a pool of different technologies and vendor-specific implementations. It is critical for this collaboration and orchestration development to continue if we expect this NFV architecture to be successful. It is important for everyone (the CSPs, vendors, and technologists) to see and understand that NFV is much more than virtualization and the ability to deliver services as software. NFV requires a sound management and orchestration framework to be a proper success.
Orchestrate Your Infrastructure

The digital society has emerged. Today's always-connected world and the applications we interact with are changing the way we live. People are mobile, our devices are mobile, and by all accounts, everything that is a noun (a person, place or thing) will soon be connected and generating data... and all that traffic is destined for an application, which could also be portable, located somewhere in a data center. But not all data traffic is created equally, and critical information might need some action that requires automation of the deployment process. At the same time, organizations can't afford to manually make policy adjustments every time something needs attention. Automated coordination between applications, data and infrastructure, from provisioning to applying policies and services in line with business needs, must be in place. This is orchestration.

Humans have always differentiated ourselves from all other creatures by our ability to reason. Today, we're building reason into systems to make some of these decisions for us. Software that incorporates, 'What's the purpose?' 'What's the reason why?' Purpose-driven networking (programmability) means not just recognizing this is Thing 1 or Thing 2 and routing requests to the appropriate service, but recognizing what Thing 1 or Thing 2 is trying to do and delivering in such a way as to meet expectations with respect to its performance. The underlying infrastructure and architecture also need to understand the purpose or reason for the data traffic adjustment and enable the scale and speed of deployments necessary for business success.

There is a ton of communication between us, our devices and the things around us, along with the applications that support us. It takes an agile and programmable infrastructure to intercept, evaluate and interpret each request with an eye toward user, device, location and, now, purpose. Orchestration is the glue that holds together all the quick networking decisions, ensures the provisioning of policies goes where it needs to go, and provides the intelligence for the architecture to make automatic decisions and adjustments based on policy. There could be many good reasons to automatically adjust the system, and the F5 proxy architecture can augment application delivery functionality in tune with many other frameworks. Because everyone has a unique environment, we've built custom integrations for a variety of third-party solutions including Cisco APIC, Amazon EC2, VMware NSX, and OpenStack.

It begins when an administrator creates a custom integration based on Application Templates. These templates can contain any configuration for a BIG-IP, from firewalls to local traffic management or anything else. Many configurations are seamless, but with Cisco APIC, the configuration is turned into a custom plug-in. The device package can then be uploaded directly to Cisco APIC, where application developers can deploy their targeted configuration correctly without using lots of knobs, only the knobs they need to configure their application. The application developer only has to specify a couple of parameters, because when the administrator created the templates, they pre-configured everything the application developer needs in order to correctly deploy their application. This is different from other vendors' integrations, which simply expose a large series of configuration clicks that users then have to get correct... and they're easy to get wrong.
At this point, iWorkflow translates this small set of parameters into the complete configuration needed by the BIG-IP, and it deploys it on the BIG-IP. The BIG-IP is now completely configured for your application. But we're not done yet. This is a dynamic integration, since environments are always changing. When new application servers are added to, or removed from, your network, APIC will notice this and inform the BIG-IP, and the BIG-IP's configuration will update to reflect the new application servers and the associated application services. Now that the BIG-IP is aware of these application servers, it will immediately start directing traffic to them, allowing your application to expand. Likewise, when application servers are removed, the BIG-IP's configuration will immediately be updated and will stop passing traffic to those application servers, allowing you to take a maintenance window or decrease the capacity provided to your application. And while this is all happening, iWorkflow is collecting application-level statistics to provide a complete view of your infrastructure, reporting them upstream to the Cisco APIC in this example.

That's it, we're done, right?!? WRONG!! What about security? What happens when you're under attack?!? As you know, it is critically important that the security services dynamically follow the application too, no matter where it lives or how it got there. And in some cases, an old application needs a new home. The idea is that you start with the (figurative) castle protecting the queen's treasure, The Data, and we drop in the different service pieces to keep the application secure, available and resilient. The wall and moat around the castle represent BIG-IP AFM perimeter protection; there's a satellite dish for signaling to the Silverline DDoS Service; BIG-IP APM's drawbridge thwarts unauthorized access. The whole point is that F5 can add these services around all your 'castled' applications to protect them from threats. This is especially true for 'older' applications that may have issues adding security services. F5 can be deployed with the latest security services to protect your entire environment.

Orchestration gives organizations the automated provisioning processes for application policies in our hybrid, dynamic, mobile and risky world. And check out Nathan Pearce's great iWorkflow series!

ps
F5 Heat Plugins Release Announcement - v7.0.3, v8.0.2

Release Announcement
10 June 2016

We are pleased to announce the release of v7.0.3 and v8.0.2 of the F5 Heat Plugins for OpenStack.

Release Highlights

A new resource plugin, F5::Cm::Cluster, has been added for clustering. This resource allows a Heat user to cluster two BIG-IP® devices together in a sync-failover group. Please see the documentation and GitHub release page for more information.
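As a quick illustration, consuming the new resource from the CLI looks something like the sketch below; the template file and parameter names are hypothetical, so check the documentation for the actual resource properties:

  # Launch a stack from a template that declares an F5::Cm::Cluster resource
  openstack stack create --template f5_cluster.yaml \
    --parameter bigip1_ip=192.0.2.11 \
    --parameter bigip2_ip=192.0.2.12 \
    f5-sync-failover-demo

  # On Kilo-era clients the equivalent is:
  # heat stack-create -f f5_cluster.yaml f5-sync-failover-demo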
Compatibility

v7.0.3 is compatible with OpenStack Kilo.
v8.0.2 is compatible with OpenStack Liberty.

Issues

We welcome bug reports through the project issues page on GitHub.

- The F5 OpenStack Product Team

BIG-IQ Grows UP [End of Life]

The F5 and Cisco APIC integration based on the device package and iWorkflow is End of Life. The latest integration is based on the Cisco AppCenter app named 'F5 ACI ServiceCenter'. Visit https://f5.com/cisco for updated information on the integration.

Today F5 is announcing the new F5® BIG-IQ™ 4.5. This release includes a new BIG-IQ component, BIG-IQ ADC.

Why is 4.5 a big deal?

This release introduces a critical new BIG-IQ component, BIG-IQ ADC. With ADC management, BIG-IQ can finally control basic local traffic management (LTM) policies for all your BIG-IP devices from a single pane of glass. Better still, BIG-IQ's ADC function has been designed with the concept of "roles" deeply ingrained. In practice, this means that BIG-IQ offers application teams a "self-serve portal" through which they can manage load balancing of just the objects they are authorized to access and update. Their changes can be staged so that they don't go live until the network team has approved them.

We will post follow-up blogs that dive into the new functions in more detail. In truth, there are a few caveats around this release. Namely, BIG-IQ requires our customers to be using BIG-IP 11.4.1 or above, and many functions require 11.5 or above. Customers with older TMOS versions still require F5's legacy central management solution, Enterprise Manager. BIG-IQ still can't do some of the things Enterprise Manager provides, such as iHealth integration and advanced analytics. And BIG-IQ can't yet manage some advanced LTM options. Nevertheless, this release will be an essential component of many F5 deployments. And since BIG-IQ is a rapidly growing platform, the feature gaps will be filled before you know it. In short, it's time to take a long hard look at BIG-IQ.

What else is new?

There are hundreds of new or modified features in this release. Let me list a few of the highlights by component:

1. BIG-IQ ADC - Role-based central management of ADC functions across the network
· Centralized basic management of LTM configurations
· Monitoring of LTM objects
· High availability and clustering support for BIG-IP devices and application-centric manageability services
· Pool member management (enable/disable)
· Centralized iRules management (though not editing)
· Role-based management
· Staging and manual deployments

2. BIG-IQ Cloud - Enhanced connectivity and partner integration
· Expanded orchestration and management of cloud platforms via 3rd-party developers
· Connector for VMware NSX and (early access) connector for Cisco ACI
· Improved customer experience via workflows and integrations
· Improved tenant isolation on device and deployment

3. BIG-IQ Device - Manage physical and virtual BIG-IP devices from a single pane of glass
· Support for VE volume licensing
· Management of basic device configuration & templates
· UCS backup scheduling
· Enhanced upgrade advisor checks
4. BIG-IQ Security - Centralized security policy deployment, administration, and management
· Centralized feature support for BIG-IP AFM
· Centralized policy support for BIG-IP ASM
· Consolidated DDoS and logging profiles for AFM/ASM
· Enhanced visibility and notifications
· API documentation for ASM
· UI enhancements for AFM policy management

My next blog will include a video demonstrating the new BIG-IQ ADC component and showing how it enhances collaboration between the networking and application teams with fine-grained RBAC.
Software Defined Data Center Made Easy with F5 and VMware

Jared Cook, VMware's Lead Strategic Architect, Office of the CTO, visits #F5Agility15 and shares how F5 and VMware solutions can be used together in an orchestrated fashion to enable customers to spin up applications on demand, and to provision the F5 software-defined application services those applications need to run successfully, with greater ease and automation than before in the Software Defined Data Center.

ps

Related:
F5Agility15 - The Preview Video
Welcome to F5 Agility 2015
Innovate, Expand, Deliver with F5 CEO Manny Rivelo
Get F5 Certified at F5 Agility 2015
F5 Agility 2015
F5 YouTube Channel
The 2015 problem - Human Latency

What is human latency? No, I'm not referring to how quickly you can pull a tablecloth - something my mother insisted I stop trying, and failing miserably at, at a young age. In technology circles, human latency refers to the delays incurred when waiting for people to complete a task. I don't believe (I hope not) that the term was coined to suggest that certain employees are slow to meet expectations, but more to reflect the fact that people have finite bandwidth and, therefore, when the workload exceeds what they can deliver, businesses will experience delays. This is 'human latency': putting something in a request queue (the business) waiting for something to be done (human latency).

Human latency can be the result of many things:

Workload exceeding the capabilities of a delivery team.
Procedural delays (policy) caused by anything from budget constraints to key decision makers being unavailable, possibly even interdepartmental disputes over prioritization.
Knowledge limitations of engineers working beyond their comfort zone.

Why are we suddenly so interested?

In more recent times, the speed of business has increased significantly. IT departments have progressed far from the days of sitting around in sun-loungers (I jest) to today's more competitive, information-consumer-driven priorities. Driven largely by competition and the 'immediate gratification' generation, expectations around the delivery of 'everything now' have never been so prevalent.

What effect will this have?

Have you seen Terminator 3: Rise of the Machines? In case you haven't, it's the one about the robots deciding to eradicate the human race from planet earth. Turns out the robots realized that we, their creators, were the weakness in evolution. I'm hoping that said movie has socialized the importance that we (humans) stop before anything gets quite so bad <CR><LF>. However, back to the title of this section: it's the human latency problem that is driving technology integration and intelligent orchestration. Not this integration (man inserts RFID chip into hand), and not this (musicy thingy) kind of orchestration.

Five years ago (that's 2009, to the future alien race), I was talking with a customer who told me of a most excellent policy within their team: "NEVER do something more than 3 times without committing effort towards the automation of that task." Unfortunately, my suggestions toward building a shrine in honor of this customer were laughed off, but can you deny the genius behind this policy? In their environment, you counted as 'busy' and unavailable for other assignments if you were reviewing a historical task for the purpose of never performing it again. BRILLIANT!!!

What can we learn?

For a long time, cloud computing was heralded as the solution to IT's bottleneck. "Move workloads to the cloud so that IT can focus on strategic projects", they yelled from the rooftops (via Twitter). Cloud-based technology didn't solve problems; it merely moved them elsewhere, or created new ones. With IaaS we have the same service delivery problems as private data centers, just on someone else's hypervisor, and with a utility licensing model. With SaaS, it's great that someone else is handling feature requests and execution, but with SaaS also came access problems: increased password fatigue and reduced security integrity. And no, I didn't forget PaaS. PaaS just brought confusion, as it primarily solved problems for coders and, while I think PaaS is AWESOME, there's still an education phase for this one.
A new hope (coincidentally, Star Wars, Episode IV)

While pushing better business, the efficiency (automation/orchestration) drive is definitely creating headaches - how many different/conflicting SDN pitches did you hear in 2014? New terms like SDN create new stories, but let's keep the focus on reducing human latency. The best people I've ever worked with are those who have focused, primarily, on removing themselves from policy validation (safely) to ensure rapid execution. But there's always a line.... all your base are belong to us!