scripting
To Err is Human
#devops Automating incomplete or ineffective processes will only enable you to make mistakes faster – and more often.

Most folks probably remember the play on the "to err is human…" proverb that made the rounds when computers first began to take over, well, everything. The saying was only partially tongue-in-cheek, because as we've long since learned, computers allow us to make mistakes faster, more often, and with greater reach.

One of the statistics used to justify a devops initiative is the rate at which human error contributes to a variety of operational badness: downtime, poor performance, and long deployment life-cycles. Human error is a non-trivial cause of downtime and other operational interruptions. A recent Paragon Software survey found that human error was cited as a cause of downtime by 13.2% of respondents, and other surveys have indicated much higher rates. Gartner analysts Ronni J. Colville and George Spafford, in "Configuration Management for Virtual and Cloud Infrastructures," predict that as much as 80% of outages impacting mission-critical services through 2015 will be caused by "people and process" issues.

Regardless of the actual rate at which human error causes downtime or other operational disruptions, the reality is that it is a factor. One of the ways in which we hope to remediate the problem is through automation and devops. While that is certainly an appropriate course of action, adopters need to exercise caution when embarking on such an initiative, lest they codify incomplete or inefficient processes that simply promulgate errors faster and more often.

DISCOVER, REMEDIATE, REFINE, DEPLOY

Something that all too often seems to be falling by the wayside is the relationship between agile development and agile operations. Agile isn't just about fast(er) development cycles; it's about employing a rapid, iterative process throughout the development cycle. Similarly, operations must remember that it is unlikely they will "get it right" the first time and, following agile methodology, they are not expected to. Process iteration helps discover the errors, missing steps, and other potential sources of misconfiguration that are ultimately the source of outages or operational disruption. An organization that has experienced outages due to human error is practically assured of codifying those errors into its automation frameworks if it does not take the time to iteratively execute on those processes and find out where errors or missing steps may lie.

It is process that drives continuous delivery in development, and it is process that must drive continuous delivery in devops – process that must be perfected first through practice, through the application of iterative models of development to devops automation and orchestration.

What may appear to be tedious repetition is also an opportunity to refine the process: to discover and eliminate inefficiencies, streamlining the deployment process and enabling faster time to market. Such inefficiencies are generally only discovered when someone takes the time to clearly document all steps in the process – from beginning (build) to end (production). Cross-functional responsibilities are often the source of these inefficiencies, because of the overlap between development, operations, and administration.

The outage of Microsoft's cloud service for some customers in Western Europe on 26 July happened because the company's engineers had expanded capacity of one compute cluster but forgot to make all the necessary configuration adjustments in the network infrastructure.
-- Microsoft: error during capacity expansion led to Azure cloud outage

Applying an agile methodology to defining and refining devops processes around continuous delivery automation enables discovery of the errors, missing steps, and duplicated tasks that bog down or disrupt the entire chain of deployment tasks.

We all know that automation is a boon for operations, particularly in organizations employing virtualization and cloud computing to enable elasticity and improved provisioning. But what we need to remember is that if that automation simply encodes poor processes or errors, then automation just enables us to make mistakes a whole lot faster. Take care to pay attention to process, and to test early, test often.

Related blogs & articles:
How to Prevent Cascading Error From Causing Outage Storm In Your Cloud Environment?
DOWNTIME, OUTAGES AND FAILURES - UNDERSTANDING THEIR TRUE COSTS
Devops Proverb: Process Practice Makes Perfect
1024 Words: The Devops Butterfly Effect
Devops is a Verb
BMC DevOps Leadership Series
The Double Whammy of Scripting

Many of you are very familiar with iRules, our Tool Command Language (Tcl) based scripting language. Having a programmable proxy that allows you to manipulate – in real time – any network traffic passing through the BIG-IP is a powerful application delivery tool. Many BIG-IP fans have used it to address their specific needs, and some iRules have even been productized as features. For example, the cool ASM Data Mask feature that blocks sensitive info like SSNs or credit card numbers from leaking out was once an iRule. Aw, our baby made it to the BIGs.

And by now you may have heard the trumpets about iRules LX, available in our most recent BIG-IP v12.1 release. So I was wondering if you were wondering: what's the difference between iRules and iRules LX? Why would you use one or the other?

iRules is based on Tcl and is an extremely stable and well-documented solution. We introduced it in BIG-IP v9.0 and we continue ongoing feature development for it. iRules Language eXtensions (that's where the LX comes from) is the next generation of network programmability, based on JavaScript. iRules LX is not intended to replace or antiquate Tcl, but to provide additional functionality in certain situations.

Say you are writing a rule in Tcl that looks for some piece of data. When you find that data, you then need to make a database call to verify the parameters. That could get messy with many lines of code. You may even say to yourself, 'Geeze, this would be a whole lot easier if I had a parser…wouldn't that be nice.' This is where iRules LX can be handy. Toss it over to a Node.js extension and let it do the work. With the proper npm (node package manager) package – of which there are some 280,000 and counting – the extension does the processing and sends the result back to Tcl so you can go on your merry way. Essentially, that last 10% is 90% of the work, so why not have a proper engine run it? (A minimal sketch of what the Tcl side of that hand-off might look like appears at the end of this post.)

iRules LX is a simple way to solve tough challenges…another tool to use when you need it. Granted, it is not necessarily a hammer but that particular hex tool for precise jobs. It also bridges into the new world of programming. Tcl is still very relevant, yet Node.js is a popular, cutting-edge language that the development community has eaten up. iRules LX offers more flexibility when you need it and a new tool in your arsenal of application delivery solutions.

You should also check out Eric Flores' Getting Started with iRules LX series, which covers some concepts, use cases, configurations and workflows.

ps

Related:
Introducing iRules LX
Getting Started with iRules LX
June is Programmability Month!
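For illustration only, here is a minimal sketch of the Tcl side of that Tcl-to-Node.js hand-off. The plugin, extension, and method names ("db_plugin", "db_extension", "verify_account") and the header being inspected are hypothetical – they stand in for whatever your own iRules LX workspace actually defines.

```tcl
when HTTP_REQUEST {
    # Illustrative: pull the value we need to verify out of the request
    set account_id [HTTP::header value "X-Account-ID"]

    # Hand the messy part to the Node.js extension. "db_plugin" and
    # "db_extension" are hypothetical names from an iRules LX workspace;
    # "verify_account" is a method that extension would expose.
    set handle [ILX::init "db_plugin" "db_extension"]
    if {[catch {ILX::call $handle verify_account $account_id} result]} {
        # The extension call failed; fail closed (an illustrative choice)
        HTTP::respond 503 content "Service unavailable"
        return
    }

    # Act on whatever the Node.js side sent back
    if {$result ne "ok"} {
        HTTP::respond 403 content "Forbidden"
    }
}
```

On the Node.js side, the extension would register a verify_account method and use whichever npm database client fits your environment to do the actual lookup – that is the part that is far easier to express in JavaScript than in Tcl.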
WILS: Automation versus Orchestration

Infrastructure 2.0 is not just about automation, but rather about the orchestration of processes – and those are actually two different things: the former is little more than advanced scripting, while the latter requires participation and decision making on the part of the infrastructure involved.

Automation is the codification – usually, but not always, through a scripting language – of a specific task. This task usually has one goal, though it may have several steps that have to be performed to accomplish it. An example would be "bring this server down for maintenance." This may require quiescing connections if it is an application server, stopping specific processes, and then taking it offline. But the automation is of a specific task.

Orchestration, on the other hand, is the codification of a complete process. In the case of cloud computing and IT this can also be accomplished using scripts, but it more often involves the use of APIs – both RESTful and SOAPy. Orchestration ties together a set of automated tasks into a single process (operational in the case of IT, business in the case of many other solutions) and may span multiple devices, applications, solutions, and even data centers. "Bring this server down for maintenance" may actually be a single task in a larger process that is "deploy a new version of an application."

The subtle difference between automation and orchestration is important primarily because the former is focused on codifying a concrete set of steps normally handled manually – steps that are done to a device or component. The latter often requires participation and decision making on the part of the infrastructure being orchestrated: the infrastructure is an active participant, a collaborator, in orchestration, but is likely not in automation. The sketch at the end of this post illustrates the distinction.

Related:
The Context-Aware Cloud
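Here is a minimal, purely illustrative sketch of the distinction, in Tcl (since we are talking scripting anyway). Every helper proc in it – quiesce_connections, stop_services, capacity_ok, and friends – is hypothetical, standing in for whatever device command or API call your environment actually uses.

```tcl
# Automation: one task with one goal, "bring this server down for
# maintenance". The steps are codified, but it is still just a task
# performed *to* a device.
proc take_server_down {server} {
    quiesce_connections $server    ;# let in-flight requests drain
    stop_services       $server    ;# stop the application processes
    mark_offline        $server    ;# remove it from rotation
}

# Orchestration: a complete process, "deploy a new version of an
# application", in which the task above is only one step and the
# infrastructure is consulted before each decision.
proc deploy_new_version {servers version} {
    foreach server $servers {
        # Ask the infrastructure whether it is safe to proceed:
        # orchestration reacts to the current state of the system.
        if {![capacity_ok $servers $server]} {
            error "not enough spare capacity to continue the rollout"
        }
        take_server_down $server
        install_version  $server $version
        bring_server_up  $server
        verify_health    $server
    }
}
```

The point of the sketch is the shape, not the procs: take_server_down could live happily in a cron job, while deploy_new_version only makes sense as part of a process that spans the application, the network, and the people operating both.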
1024 Words: Why Devops is Hard

#devops #cloud #virtualization Just because you don't detail the complexity doesn't mean it isn't there.

It should be noted that this is one – just one – Java EE application, and even it is greatly simplified (architectural flowcharts coming from the dev side of the house are extraordinarily complicated, owing to the number of interconnects and integrations required). Most enterprises have multiple applications that require just as many interconnects, with large enterprises numbering their applications in the hundreds.

Related blogs & articles:
Devops Proverb: Process Practice Makes Perfect
Devops is a Verb
1024 Words: The Devops Butterfly Effect
Devops is Not All About Automation
Application Security is a Stack
Capacity in the Cloud: Concurrency versus Connections
Ecosystems are Always in Flux
The Pythagorean Theorem of Operational Risk
Will DevOps Fork?

An impassioned plea from a devops blogger and a reality check from a large enterprise highlight a growing problem with devops evolutions – not enough dev with the ops.

John E. Vincent recently offered a lengthy blog post on a subject near and dear to his heart: devops. His plea was not to be left behind as devops gains momentum and continues to barrel forward toward becoming a recognized IT discipline. The problem is that John, like many folks, works in an enterprise – an enterprise in which not only do legacy and traditional solutions require a bit more ingenuity to integrate, but in which the imposition of regulations breaks devops' ability to rely solely on script-based solutions to automate operations.

The whole point of this long-winded post is to say "Don't write us off". We know. You're preaching to the choir. It takes baby steps and we have to pursue it in a way that works with the structure we have in place. It's great that you're a startup and don't have the legacy issues older companies have. We're all on the same team. Don't leave us behind.
-- John E. Vincent, "No operations team left behind - Where DevOps misses the mark"

But it isn't just legacy solutions and regulations slowing down the mass adoption of cloud computing and devops; it's also the sheer rate of change present in very large enterprise operations.

Scripted provisioning is faster than the traditional approach and reduces technical and human costs, but it is not without drawbacks. First, it takes time to write a script that will shift an entire, sometimes complex workload seamlessly to the cloud. Second, scripts must be continually updated to accommodate the constant changes being made to dozens of host and targeted servers. "Script-based provisioning can be a pretty good solution for smaller companies that have low data volumes, or where speed in moving workloads around is the most important thing. But in a company of our size, the sheer number of machines and workloads we need to bring up quickly makes scripts irrelevant in the cloud age," said Jack Henderson, an IT administrator with a national transportation company based in Jacksonville, Fla. More aggressive enterprises that want to move their cloud and virtualization projects forward now are looking at more advanced provisioning methods.
-- Server provisioning methods holding back cloud computing initiatives

Scripting, it appears, just isn't going to cut it as the primary tool in the devops toolbox. Not that this is any surprise to those who've been watching, or who have been tackling this problem from the inevitable infrastructure-integration point of view. In order to accommodate policies and processes specific to regulations and simultaneously support a rapid rate of change, something larger than scripts and broader than automation is going to be necessary. You are going to need orchestration, and that means you're going to need integration. You're going to need Infrastructure 2.0.

AUTOMATED OPERATIONS versus DATA CENTER ORCHESTRATION

In order to properly scale automation along with a high volume of workload, you need more than just scripted automation. You need collaboration and dynamism, not codified automation and brittle configurations. What scripts provide now is the ability to configure (and update) the application deployment environment and – if you're lucky – a piece of the application network infrastructure, on an individual basis, using primarily codified configurations. What we need is twofold.
First, we need to be able to integrate and automate all applicable infrastructure components. This becomes apparent when you consider the number of network infrastructure components upon which an application relies today, especially in a highly virtualized or cloud computing environment.

Second is the ability to direct a piece of infrastructure to configure itself based on a set of parameters and known operational states – at the time it becomes active. We don't want to inject a new configuration every time a system comes up; we want to modify on the fly, to adapt in real time to what's happening in the network, in the application delivery channel, and in the application environment. While it may be acceptable in smaller environments (it isn't in very large ones) to use a reset approach – i.e. change the configuration and reboot, or reload a daemon to apply the changes – this is not generally an acceptable approach in the network. Other applications and infrastructure may be relying on that component, and rebooting or resetting its core processes will interrupt service to those dependent components. The best way to achieve the desired goal – real-time management – is to use the APIs provided for exactly that purpose (a minimal sketch of what that might look like appears at the end of this post).

On top of that we need to be able to orchestrate a process. And that process must be able to incorporate the human element when necessary, as may be the case with regulations that require approvals or "sign-offs". We need solutions that are based on open standards and that integrate with one another in such a way as to make it possible to arrive at a solution that can serve an organization of any size, any age, and any stage in the cloud maturity model.

Right now devops in practice is heading down a path that relegates it to little more than automation operators, which is really not all that different from what the practitioners were before. Virtualization and cloud computing have simply raised their visibility due to the increased reliance on automation as a means to an end. But treating automated operations as the end goal completely eliminates the "dev" in "devops" and ignores the concept of an integrated, collaborative network that is not a second-class citizen but a full-fledged participant in the application lifecycle and deployment process. That concept is integral to the evolution of highly virtualized implementations toward a mature, cloud-based environment that can leverage services whether they are local, remote, or a combination of both – or that change from day to day or hour to hour based on business and operational conditions and requirements.

DEVOPS NEEDS to MANAGE INFRASTRUCTURE not CONFIGURATIONS

The core concept behind infrastructure 2.0 is collaboration between all applicable constituents – from the end user to the network to the application infrastructure to the application itself, and from the provisioning systems to the catalog of services to the billing systems. It's collaborative and dynamic, which means adaptive: able to change the way in which policies are applied – from routing to switching to security to load balancing – based on the application and its right-now needs. Devops is – or should be – about enabling that integration. If that can be done with a script, great.
But the reality is that a single script, or set of scripts, focused on the automation of components rather than systems and architectures is not going to scale well; it will instead end up contributing to the diseconomy of scale that was, and still is, the primary driver behind the next-generation network. Scripts are also unlikely to address the very real need to codify the processes that drive an enterprise deployment, and they do not take into consideration the very real possibility that a new deployment may need to be "backed out" if something goes wrong.

Scripts are too focused on managing configurations and not focused enough on managing the infrastructure. It is the latter that will ultimately provide the most value, and the means by which the network will be elevated to a first-class citizen in the deployment process. Infrastructure 2.0 is the way in which organizations will move from aggregation to automation and toward the liberation of the data center, based on full-stack interoperability and portability: a services-based infrastructure whose services can be combined to form a dynamic control plane, one that allows infrastructure services to be integrated into the processes required not just to automate and ultimately orchestrate the data center, but to do so in a way that scales along with the implementation.

Collaboration and integration will require development; there's no way to avoid that. This should be obvious from the reliance on APIs (Application Programming Interfaces), a term which, if updated to reflect today's terminology, would be ADI (Application Development Interface). Devops needs to broaden past "ops" and start embracing the "dev" as a means to integrate and enable the collaboration necessary across the infrastructure, so that the maturation of emerging data center models can continue. If the ops in devops isn't balanced with dev, it's quite possible that, like many cross-discipline roles within IT, the concept of devops may have to fork in order to keep moving virtualization and cloud computing down its evolutionary path.

NOTE: Edited 8/4/2010 after a conversation with one of the "founding fathers" of devops, John Willis, to note that it's not the devops movement per se that's headed down the automation-and-scripts path, but devops in practice. As he notes in one of his discussions of devops, there's a lot more to devops than just automation. The call to "tear down the walls" between operations and development should be a familiar one to those who have embraced application delivery as a means to integrate these two disparate and yet complementary roles.
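To make the "manage the infrastructure, not the configuration" point a little more concrete, here is a minimal, hypothetical sketch in Tcl of the API-driven approach described above: instead of rewriting a configuration file and bouncing the device, the script asks the device's management API to change one piece of state in place. The endpoint, credentials, and payload are placeholders, not any particular vendor's API; the sketch assumes the http package shipped with Tcl 8.6 or later (which accepts -method) and the tcltls extension for HTTPS.

```tcl
package require http
package require tls

# Route https:// URLs through the tls extension
::http::register https 443 ::tls::socket

# Hypothetical management endpoint: disable one pool member in place,
# in real time, rather than editing a config file and reloading the
# device (which would interrupt everything else that depends on it).
set url  "https://lb.example.com/api/pools/web-pool/members/10.1.1.5"
set body {{"state": "disabled", "reason": "maintenance"}}
set hdrs [list \
    Authorization "Basic [binary encode base64 admin:placeholder]" \
    Content-Type  application/json]

# -method requires the http package from Tcl 8.6 or later
set tok [::http::geturl $url -method PATCH -query $body -headers $hdrs]
if {[::http::ncode $tok] != 200} {
    error "API call failed: [::http::code $tok]"
}
::http::cleanup $tok
```

The same call could just as easily be one step in a larger orchestration – gated by an approval, repeated across a pool, or rolled back if health checks fail – which is exactly the kind of process-level thinking that a pile of per-device scripts rarely gets around to.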