Devops Proverb: Process Practice Makes Perfect
#devops Tools for automating – and optimizing – processes are a must-have for enabling continuous delivery of application deployments.

Some idioms are cross-cultural and cross-temporal. They transcend cultures and time, remaining relevant no matter where or when they are spoken. These idioms are often referred to as proverbs, which carries with it a sense of enduring wisdom. One such idiom, “practice makes perfect”, can be found in just about every culture in some form. In Chinese, for example, the idiom is apparently properly read as “familiarity through doing creates high proficiency”, i.e. practice makes perfect.

This is a central tenet of devops, particularly where optimization of operational processes is concerned. The more often you execute a process, the more likely you are to get better at it and to discover which activities (steps) within that process need tweaking or improvement. Ergo, optimization. This tenet grows out of the agile methodology adopted by devops: application release cycles should be nearly continuous, with both developers and operations iterating over the same process – develop, test, deploy – with a high level of frequency. Eventually (one hopes) we achieve process perfection – or at least what we might call process perfection: repeatable, consistent deployment success.

It is implied that in order to achieve this, many processes will be automated, once we have discovered and defined them in such a way as to enable automation. But how does one automate a process such as an application release cycle? Business Process Management (BPM) works well for automating business workflows; such systems include adapters and plug-ins that allow communication between systems as well as people. But these systems are not designed for operations; there are no web server, database, or load balancer adapters for even the most widely adopted BPM systems.
One such solution can be found in Electric Cloud with its recently announced ElectricDeploy.

Process Automation for Operations

ElectricDeploy is built upon a better-known product from Electric Cloud (well, better known in developer circles, at least) called ElectricCommander, a build-test-deploy application deployment system. Its interface presents applications in terms of tiers – but extends beyond the traditional three tiers associated with development to include infrastructure services such as – you guessed it – load balancers (yes, including BIG-IP) and virtual infrastructure. The view enables operators to create the tiers appropriate to applications and then orchestrate deployment processes through fairly predictable phases – test, QA, pre-production and production.

What’s awesome about the tool is the ability to control the process – to roll back, to restore, and even to debug. The debugging capabilities enable operators to stop at specified tasks in order to examine output from systems, check log files, etc., to ensure the process is executing properly. While it’s not able to perform “step into” debugging (stepping into the configuration of the load balancer, for example, and manually executing changes line by line), it can perform what developers know as “step over” debugging: you can step through a process at the highest layer and pause at breakpoints, but you can’t yet dive into the actual task. Still, the ability to pause an executing process and examine output, as well as roll back or restore specific process versions (yes, it versions the processes as well, just as you’d expect), would certainly be a boon to operations in the quest to adopt tools and methodologies from development that can improve the time and consistency of deployments. The tool also enables operations to define what constitutes failure during a deployment.
For example, you may want to stop and roll back the deployment when a server fails to launch if your deployment comprises only 2 or 3 servers, but when it comprises 1000s it may be acceptable that a few fail to launch. Success and failure of individual tasks, as well as of the overall process, are defined by the organization, which allows for flexibility. This is more than just automation; it’s managed automation; it’s agile in action; it’s focusing on the processes, not the plumbing.

MANUAL still RULES

Electric Cloud recently (June 2012) conducted a survey on the “state of application deployments today” and found some not unexpected but still frustrating results, including that 75% of application deployments are still performed manually or with little to no automation. While automation may not be the goal of devops, it is a tool enabling operations to achieve its goals, and thus it should be more broadly considered standard operating procedure to automate as much of the deployment process as possible. This is particularly true when operations fully adopts not only the premise of devops but the conclusion resulting from its agile roots. Tighter, faster, more frequent release cycles necessarily put an additional burden on operations to execute the same processes over and over again. Trying to accomplish this manually may set operations up for failure, leaving it focused more on simply going through the motions and getting the application into production successfully than on streamlining and optimizing the processes it is executing. Electric Cloud’s ElectricDeploy is one of the ways in which process optimization can be achieved, and it justifies its purchase by promising operations better control over application deployment processes across development and infrastructure.
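An organization-defined failure policy like this is simple to express in code. The sketch below is plain JavaScript and purely illustrative – the function name and thresholds are invented for this example, not anything from ElectricDeploy – but it shows how a failure tolerance might drive the rollback decision:

```javascript
// Sketch of an organization-defined failure policy for a deployment
// step: stop and roll back only when the failure rate exceeds a
// tolerance the organization chooses.
function shouldRollback(succeeded, failed, maxFailureRate) {
  const total = succeeded + failed;
  if (total === 0) return false; // nothing attempted yet
  return failed / total > maxFailureRate;
}

// A 3-server deployment tolerates no failures...
shouldRollback(2, 1, 0);      // → true: roll back
// ...while a 1000-server deployment may accept a few.
shouldRollback(995, 5, 0.01); // → false: 0.5% is within tolerance
```

The same two-line predicate, parameterized differently per application, is what lets one process serve both the 3-server and the 1000-server case.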
Devops is a Verb
1024 Words: The Devops Butterfly Effect
Devops is Not All About Automation
Application Security is a Stack
Capacity in the Cloud: Concurrency versus Connections
Ecosystems are Always in Flux
The Pythagorean Theorem of Operational Risk

1024 Words: IT Syzygy
#SDN #DevOps #Agile A syzygy is an astronomical term that describes the alignment of three celestial bodies within a gravitational system. The alignment of three celestial bodies (app dev, ops, the network) within a gravitational system (IT). The data seems to show some alignment on a very important business driver: accelerating time to market. As Mr. Spock would say, "Fascinating."

Devops is a Verb
#devops Devops is not something you build, it’s something you do.

Operations is increasingly responsible for deploying and managing applications within this architecture, requiring traditionally developer-oriented skills like integration, programming and testing, as well as greater collaboration to meet business and operational goals for performance, security, and availability. To maintain the economy of scale necessary to keep up with the volatility of modern data center environments, operations is adopting modern development methodologies and practices. Cloud computing and virtualization have elevated the API as the next-generation management paradigm across IT, driven by pressure on IT to become more efficient. In response, infrastructure is becoming more programmable, allowing IT to automate, integrate and manage continuous delivery of applications within the context of an overarching operational framework.

The role of infrastructure vendors in devops is to enable the automation, integration, and lifecycle management of applications and infrastructure services through APIs, programmable interfaces and reusable services. By embracing the toolsets, APIs, and methodologies of devops, infrastructure vendors can enable IT to create repeatable processes with faster feedback mechanisms that support the continuous and dynamic delivery cycle required to achieve efficiency and stability within operations.

DEVOPS IS MORE THAN ORCHESTRATING VM PROVISIONING

Most of the attention paid to devops today is focused on automating the virtual machine provisioning process. Do you use scripts? Cloned images? Boot scripts or APIs? Open source tools? But devops is more than that, and it’s not about what you use. You don’t suddenly get to claim you’re “doing devops” because you use a framework instead of custom scripts, or vice versa.
Devops is a broader, iterative agile methodology that enables refinement and eventually optimization of operational processes. Devops is lifecycle management with the goal of continuous delivery of applications, achieved through the discovery, refinement and optimization of repeatable processes. Those processes must necessarily extend beyond the virtual machine. The bulk of the time required to deploy an application to the end-user lies not in provisioning it, but in provisioning it in the context of the entire application delivery chain: security, access, web application security, load balancing, acceleration, optimization. These are the services that comprise an application delivery network, through which the application is secured, optimized and accelerated. These services must be defined and provisioned as well.

Through the iterative development of the appropriate (read: most optimal) policies to deliver specific applications, devops is able to refine the policies and the process until it is repeatable. Like enterprise architects, devops practitioners will see patterns emerge from the repetition that clearly indicate an ability to reuse operational processes and make them repeatable. Codifying these patterns in some way shortens the overall process. Iterations refine until the process is optimized and applications can be completely deployed in as short a time as possible. And like enterprise architects, devops practitioners know that these processes span the silos that exist in data centers today. From development to security to the network: the process of deploying an application to the end-user requires components from each of these concerns, and thus devops must figure out how to build bridges between the ivory towers of the data center. Devops must discern how best to integrate processes from each concern into a holistic, application-focused operational deployment process.
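One way to picture what "codifying" such a process means: describe the tiers and their steps as data, so the same deployment can be replayed, versioned and refined release after release. This is a hypothetical sketch in JavaScript – the schema, tier names and step names are invented for illustration, not any vendor's actual format:

```javascript
// Hypothetical sketch: a deployment process codified as data.
const deployment = {
  app: 'storefront',
  tiers: [
    { name: 'web',      steps: ['provision', 'configure', 'smoke-test'] },
    { name: 'app',      steps: ['provision', 'configure', 'smoke-test'] },
    // the delivery tier covers services like security, load
    // balancing, acceleration and optimization
    { name: 'delivery', steps: ['configure-lb', 'apply-security-policy'] },
  ],
};

// Replaying the same description yields the same process each time;
// refining the data refines every future deployment.
function run(deployment, execute) {
  for (const tier of deployment.tiers) {
    for (const step of tier.steps) execute(tier.name, step);
  }
}
```

Because the process is data rather than tribal knowledge, the emergent patterns the text describes become literally reusable: copy the tier definition, adjust the steps, redeploy.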
To achieve this, infrastructure must be programmable; it must present the means by which it can be included in the processes. We know, for example, that there are over 1200 network attributes spanning multiple concerns that must be configured in the application delivery network to successfully deploy Microsoft Exchange and ensure it is secure, fast and available. Codifying that piece of the deployment equation as a repeatable, automated process goes a long way toward reducing the average time to end-user from 3 months down to something more acceptable.

Infrastructure vendors must seek to aid those on their devops journey not only by providing the APIs and programmable interfaces, but by actively building an ecosystem of devops-focused solutions that can be delivered to devops practitioners. It is not enough to say “here is an API, go forth and integrate.” Devops practitioners are not developers, and while an API may in some cases be exactly what is required, more often than not organizations are adopting platforms and frameworks through which devops will be executed. Infrastructure vendors must recognize this reality and cooperatively develop the integrations and the means to codify repeatable patterns. Collaboration across silos in the data center is difficult, but necessary. Infrastructure vendors who cross market lines, as it were, to cooperatively develop integrations that address the technological concerns of collaboration will make the people-and-process collaboration responsibility of devops a much less difficult task. Devops is not something you build, it’s something you do.

Will DevOps Fork?
SDN, OpenFlow, and Infrastructure 2.0
The Conspecific Hybrid Cloud
Ecosystems are Always in Flux
The Infrastructure Turk: Lessons in Services
This is Why We Can’t Have Nice Things

Six Lines of Code
The fallacy of security is that simplicity or availability of the solution has anything to do with time to resolution.

The announcement of the discovery of a way in which an old vulnerability might be exploited gained a lot of attention because of the potential impact on Web 2.0 and social networking sites that rely upon OAuth and OpenID, both of which use affected libraries. What was more interesting to me, however, was the admission by developers that the “fix” for this vulnerability would take only “six lines of code”, essentially implying a “quick fix.”

For most of the libraries affected, the fix is simple: Program the system to take the same amount of time to return both correct and incorrect passwords. This can be done in about six lines of code, Lawson said.

It sounds simple enough. Six lines of code. If you’re wondering (and I know you are) why it is that I’m stuck on “six lines of code”, it’s because I think this perfectly sums up the problem with application security today. After all, it’s just six lines of code, right? Shouldn’t take too long to implement, and even with testing and deployment it couldn’t possibly take more than a few hours, right? Try thirty-eight days, on average. That’d be 6.3 days per line of code, in case you were wondering.

SIMPLICITY OF THE SOLUTION DOES NOT IMPLY RAPID RESOLUTION

Turns out that responsiveness of third-party vendors isn’t all that important, either.

But a new policy announced on Wednesday by TippingPoint, which runs the Zero Day Initiative, is expected to change this situation and push software vendors to move more quickly in fixing the flaws. Vendors will now have six months to fix vulnerabilities, after which time the Zero Day Initiative will release limited details on the vulnerability, along with mitigation information so organizations and consumers who are at risk from the hole can protect themselves.
-- Forcing vendors to fix bugs under deadline, C|Net News, August 2010

To which I say: six lines of code, six months. Six of one, half a dozen of the other. Neither is necessarily all that pertinent to whether or not the fix actually gets implemented. Really. Let’s examine reality for a moment, shall we?

The least amount of time taken by enterprises to address a vulnerability is 38 days, according to WhiteHat Security’s 9th Website Security Statistics Report. Only one of the eight reasons cited by organizations in the report for not resolving a vulnerability is external to the organization: the affected code is owned by an unresponsive third-party vendor. The others are all internal, revolving around budget, skills, prioritization, or simply that the risk of exploitation is acceptable. Particularly of note is that in some cases, the “fix” for the vulnerability conflicts with a business use case. Guess who wins in that argument?

What WhiteHat’s research shows, and what most people who’ve been inside the enterprise know, is that there are a lot of reasons why vulnerabilities aren’t patched or fixed. We can say it’s only six lines of code, and yes, it’s pretty easy, but that doesn’t take into consideration all the other factors that go into deciding when, if ever, to resolve the vulnerability. Consider that one of the reasons cited in WhiteHat’s report for security features of underlying frameworks being disabled is that they break functionality; enabling security for one application can break others. Sometimes it’s not that the development folks don’t care; it’s that their hands are essentially tied, too. They can’t fix it, because that would break critical business functions that directly impact the bottom line, and not in a good way. For information security professionals this must certainly appear to be little more than gambling; a game the security team is almost certain to lose if/when the vulnerability is actually exploited.
But the truth is that information security doesn’t get to set business and development priorities, and yet they’re the ones held responsible. All the accountability and none of the authority. It’s no wonder these folks are high-strung.

INFOSEC NEEDS ITS OWN INFRASTRUCTURE TOOLBOX

This is one of the places where a well-rounded security toolbox can give security teams some control over their own destiny, and some of the peace of mind needed for them to get some well-deserved sleep. If the development team can’t/won’t address a vulnerability, then perhaps it’s time to explore other options. IPS solutions with an automatically updated signature database can block known vulnerabilities that can be identified by a signature. For exploits that may be too variable or subtle, a web application firewall or an application delivery controller enabled with network-side scripting can provide the means by which the infosec professional can write their own “six lines of code” and at least stop-gap the risk of actively being exploited.

Infosec also needs visibility into the effectiveness of its mitigating solutions. If a solution involves a web application firewall, then that web application firewall ought to provide an accurate report on the number of exploits, attacks, or even probing attempts it stopped. There’s no easy way for an application to report that data – infosec and operators end up combing through log files or investing in event correlation solutions to try to figure out what should be a standard option. Infosec needs to be able to report on the amount of risk mitigated in the course of a month, or a quarter, or a year. Being able to quantify that in terms of hard dollars provides management and the rest of the organization (particularly the business) with what they consider real “proof” of the value of not just infosec but the solutions in which it invests to protect data, applications and business concerns.
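To illustrate the "write your own six lines of code" stop-gap, here is a hedged sketch of a virtual patch: a tiny request filter sitting in front of a vulnerable application that rejects exploit-shaped input until developers can fix the root cause. The pattern and shape of the request object are invented for illustration; real WAF signatures and network-side scripting facilities are considerably more sophisticated:

```javascript
// Hedged sketch of a "virtual patch": reject exploit-shaped requests
// before they reach the vulnerable application. The pattern below is
// illustrative only, not a production-quality signature.
const suspect = /(\.\.\/|<script|union\s+select)/i;

function virtualPatch(request) {
  if (suspect.test(request.path) || suspect.test(request.body || '')) {
    return { status: 403, blocked: true };  // stop the bleeding now
  }
  return { status: 200, blocked: false };   // pass through untouched
}
```

The point is control: infosec can deploy, tune and remove this mitigation on its own schedule, independent of the development backlog.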
Every other group in IT has, in some way, embraced the notion of “agile” as an overarching theme. Infosec needs to embrace agile not only as an overarching theme but as a methodology for addressing vulnerabilities. Because the “fix” may be a simple “six lines of code”, but who implements that code and where is less important than when. An iterative approach that initially focuses on treating the symptoms (stop the bleeding now) and then more carefully considers long-term treatment of the disease (let’s fix the cause) may result in a better overall security posture for the organization.

Related Posts
When Is More Important Than Where in Web Application Security

Infrastructure 2.0: Aligning the network with the business (and the rest of IT)
When SOA was the hot topic of the day (not that long ago) everyone was pumped up about the ability to finally align IT with the business. Reusability, agility, and risk mitigation were benefits that would enable the business itself to be more agile and react dynamically to the constant maelstrom that is "the market". But only half of IT saw those benefits: the application half. Even though pundits tried to remind folks that the "A" in SOA stood for "architecture", and that it necessarily included more than just applications, the primary beneficiary of SOA has still been applications and, through their newfound agility and reusability, the business.

The network has remained, for many, just as brittle and unchanging (and thus not agile) as it has ever been, mired in its own "hardwired" architectures, unable to flex or extend its abilities to support the applications it is tasked with delivering. And no one seemed to mind, really, because the benefits of SOA were being realized anyway, and no one could really quantify the benefits of also rearchitecting the network infrastructure to be as flexible and agile as the application infrastructure.

But along come virtualization and cloud computing, and an epiphany is had by many: the network and application delivery infrastructure must be as agile and flexible as the application infrastructure in order to achieve the full measure of benefits from this newest technology. Without an application delivery infrastructure that can adapt just as dynamically, the infrastructure is the wall between a successful deployment and failure. In order to truly align the network with the business - and the other half of IT - it becomes necessary to dig deeper into the network stack and really take a look at how you're delivering those agile applications and services. It's important to consider the ramifications of a static, brittle delivery infrastructure on the successful deployment and delivery of virtually hosted applications and services.
It's necessary to look at your delivery infrastructure and evaluate its abilities in terms of reusability, scalability, and dynamism. Analyst and research firm Gartner said it as succinctly as it can be said: You Can't Do Cloud Computing Without the Right Cloud (Network). The same holds true for virtualization efforts: you can't efficiently deliver virtualized applications without the right network infrastructure. Until your network and application delivery infrastructure is as agile and reusable as your application infrastructure, you won't be able to align all of IT with the business. Until you have a completely agile architecture that spans all of IT, you're not truly aligned with the business.

How AJAX can make a more agile enterprise
In general, we talk a lot about the benefits of SOA in terms of agility, aligning IT with the business, and risk mitigation. Then we talk about WOA (web-oriented architecture) separately from SOA (service-oriented architecture), but go on to discuss how the two architectures can be blended to create a giant application architecture milkshake that not only tastes good, but looks good.

AJAX (Asynchronous JavaScript and XML) gets lumped under the umbrella of "Web 2.0" technologies. It's neither WOA nor SOA, being capable of participating in both architectural models easily. Some might argue that AJAX, being bound to the browser and therefore the web, is WOA. But WOA and SOA are both architectural models, and AJAX can participate in both - it is neither one nor the other. It's seen as a tool; a means to an end, rather than as an enabling facet of either architectural model. It's seen as a mechanism for building interactive and more responsive user interfaces, as a cool tool to implement interesting tricks in the browser, and as yet another cross-browser-incompatible scripting technology that makes developers' lives miserable.

But AJAX, when used to build enterprise applications, can actually enable and encourage a more agile application environment. When AJAX is applied to user-interface elements to manipulate corporate data, the applications or scripts on the server side that interact with the GUI are often distilled into discrete blocks of functionality that can be reused in other applications and scripts in which that particular functionality is required. And thus services are born. Services that are themselves agile and thus enable broader agility within the application architecture. They aren't SOA services, at least that's what purists would say, but they are services, empowered with the same characteristics as their SOA-based cousins: reusable and granular. The problem is that AJAX is still seen as an Allen wrench in an architecture that requires screwdrivers.
It's often viewed only in terms of building a user interface, and the services it creates or takes advantage of on the back end as being unequal to those specifically architected for inclusion in the enterprise SOA. Because AJAX drives the development of discrete services on the server side, it can be a valued assistant in decomposing applications into their composite services. It can force you to think about the services and the operations required, because AJAX necessarily interacts with the granular functions of a service in a singular fashion. If we force AJAX development to focus only on the user interface, we lose some of the benefits we can derive from the design and development process by ignoring how well AJAX fits into the service-oriented paradigm. We lose the time and effort that goes into defining the discrete services that will be used by an AJAX-enabled component in the user interface, and the possibility of reusing those services in the broader SOA.

An SOA necessarily compels us to ignore platform and language and concentrate on the service. Services deployed on a web server using PHP or ASP or Ruby as their implementation language are no different from those deployed on heavy application servers using JSP or Java or .NET. They can and should be included in the architectural design process to ensure they can be reused when possible. AJAX forces you to think in a service-oriented way. The services required by an AJAX-enabled user interface should be consistent with the enterprise's architectural model and incorporated into that architecture whenever possible in order to derive agility and reuse from those services. AJAX is inherently an agile technology. Recognizing that early, and incorporating the services required by AJAX-enabled components, can help build a more agile, more consistent, more SOA-like application infrastructure.
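The point about granular, reusable services can be made concrete in a few lines of JavaScript. The endpoint and response shape below are hypothetical; what matters is that the AJAX-enabled component and any other consumer call the same discrete operation, and that the transport is injectable so the service contract stays testable outside the browser:

```javascript
// Sketch: an AJAX-style component calling a discrete, reusable
// service. The endpoint and response shape are hypothetical. The
// injectable fetch implementation lets any caller (browser UI,
// tests, other services) consume the same granular operation.
async function getCustomer(id, fetchImpl = fetch) {
  const res = await fetchImpl(`/services/customer/${id}`);
  if (!res.ok) throw new Error(`service returned ${res.status}`);
  return res.json();
}
```

A server-side sibling in PHP, Ruby or Java exposing the same `/services/customer/{id}` contract would be exactly the kind of service the broader SOA can then reuse.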