Devops Proverb: Process Practice Makes Perfect
#devops Tools for automating – and optimizing – processes are a must-have for enabling continuous delivery of application deployments.

Some idioms are cross-cultural and cross-temporal. They transcend cultures and time, remaining relevant no matter where or when they are spoken. These idioms are often referred to as proverbs, which carries with it a sense of enduring wisdom. One such idiom, “practice makes perfect”, can be found in just about every culture in some form. In Chinese, for example, the idiom is apparently properly read as “familiarity through doing creates high proficiency” – i.e., practice makes perfect.

This is a central tenet of devops, particularly where optimization of operational processes is concerned. The more often you execute a process, the more likely you are to get better at it and to discover which activities (steps) within that process need tweaking or improvement. Ergo, optimization.

This tenet grows out of the agile methodology adopted by devops: application release cycles should be nearly continuous, with both developers and operations iterating over the same process – develop, test, deploy – with a high level of frequency. Eventually (one hopes) we achieve process perfection – or at least what passes for it: repeatable, consistent deployment success. It is implied that in order to achieve this, many processes will be automated, once we have discovered and defined them in such a way as to enable them to be automated.

But how does one automate a process such as an application release cycle? Business Process Management (BPM) works well for automating business workflows; such systems include adapters and plug-ins that allow communication between systems as well as people. But these systems are not designed for operations; there are no web server, database, or load balancer adapters for even the most widely adopted BPM systems. One solution designed for operations can be found in Electric Cloud with its recently announced ElectricDeploy.

Process Automation for Operations

ElectricDeploy is built upon a better-known product from Electric Cloud (better known in developer circles, at least) called ElectricCommander, a build-test-deploy application deployment system. Its interface presents applications in terms of tiers – but extends beyond the traditional three tiers associated with development to include infrastructure services such as – you guessed it – load balancers (yes, including BIG-IP) and virtual infrastructure. The view enables operators to create the tiers appropriate to applications and then orchestrate deployment processes through fairly predictable phases – test, QA, pre-production, and production.

What’s awesome about the tool is the ability to control the process – to roll back, to restore, and even to debug. The debugging capabilities enable operators to stop at specified tasks in order to examine output from systems, check log files, etc., to ensure the process is executing properly. While it’s not able to perform “step into” debugging (stepping into the configuration of the load balancer, for example, and manually executing line-by-line changes), it can perform what developers know as “step over” debugging, which means you can step through a process at the highest layer and pause at breakpoints, but you can’t yet dive into the actual task.
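To make the “step over” analogy concrete, here is a minimal, purely illustrative PowerShell sketch of a deployment runner that pauses at breakpoints between tasks. This is not ElectricDeploy code; the task names and the Invoke-DeployTask helper are hypothetical placeholders.

# Illustrative only: "step over" style debugging of an operational process.
# Invoke-DeployTask is a hypothetical stand-in for whatever actually runs a step.
function Invoke-DeployTask { param([string]$Name) Write-Host "  (running $Name)" }

$tasks = @(
    @{ Name = "Provision web tier";      Breakpoint = $false },
    @{ Name = "Configure load balancer"; Breakpoint = $true  },
    @{ Name = "Deploy application";      Breakpoint = $true  },
    @{ Name = "Smoke test";              Breakpoint = $false }
)

foreach ($task in $tasks) {
    Write-Host "Task: $($task.Name)"
    Invoke-DeployTask -Name $task.Name

    if ($task.Breakpoint) {
        # Pause between tasks to inspect logs and system output, then continue.
        Read-Host "Breakpoint after '$($task.Name)' - press Enter to continue"
    }
}

The pause happens between tasks, never inside one – which is exactly the “step over but not step into” behavior described above.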
Still, the ability to pause an executing process and examine output, as well as roll back or restore specific process versions (yes, it versions the processes as well, just as you’d expect), would certainly be a boon to operations in the quest to adopt tools and methodologies from development that can help improve the speed and consistency of deployments.

The tool also enables operations to define what constitutes failure during a deployment. For example, you may want to stop and roll back the deployment when a server fails to launch if your deployment comprises only 2 or 3 servers, but when it comprises thousands it may be acceptable that a few fail to launch. Success and failure of individual tasks, as well as of the overall process, are defined by the organization, which allows for flexibility. This is more than just automation; it’s managed automation; it’s agile in action; it’s focusing on the processes, not the plumbing.

MANUAL still RULES

Electric Cloud recently (June 2012) conducted a survey on the “state of application deployments today” and found some unsurprising but still frustrating results, including that 75% of application deployments are still performed manually or with little to no automation. Automation may not be the goal of devops, but it is a tool that enables operations to achieve its goals, and thus it should become standard operating procedure to automate as much of the deployment process as possible. This is particularly true when operations fully adopts not only the premise of devops but also the conclusion that follows from its agile roots. Tighter, faster, more frequent release cycles necessarily put an additional burden on operations to execute the same processes over and over again. Trying to accomplish this manually may set operations up for failure, leaving it focused more on simply going through the motions and getting the application into production successfully than on streamlining and optimizing the processes being executed. Electric Cloud’s ElectricDeploy is one of the ways in which process optimization can be achieved, and it justifies its purchase by promising operations better control over application deployment processes across development and infrastructure.

Introducing PoshTweet - The PowerShell Twitter Script Library
It’s probably no surprise to those of you who follow my blog and tech tips here on DevCentral that I’m a fan of Windows PowerShell. I’ve written a set of Cmdlets that allow you to manage and control your BIG-IP application delivery controllers from within PowerShell, along with a whole set of articles around those Cmdlets. I’ve been a Twitter user for a few years now, and over the holidays I noticed that Jeffrey Snover from the PowerShell team has hopped aboard the Twitter bandwagon. That got me to thinking... Since I live so much of my time in the PowerShell command prompt, wouldn’t it be great to be able to tweet from there too? Of course it would!

HTTP Requests

So, last night I went ahead and whipped up a first draft of a set of PowerShell functions that allow access to the Twitter services. I implemented the functions based on Twitter’s REST-based methods, so all that was really needed to get things going was to implement the HTTP GET and POST requests needed for the different API methods. Here’s what I came up with.

function Execute-HTTPGetCommand() {
    param([string] $url = $null);
    if ( $url ) {
        [System.Net.WebClient]$webClient = New-Object System.Net.WebClient
        $webClient.Credentials = Get-TwitterCredentials
        [System.IO.Stream]$stream = $webClient.OpenRead($url);
        [System.IO.StreamReader]$sr = New-Object System.IO.StreamReader -argumentList $stream;
        [string]$results = $sr.ReadToEnd();
        $results;
    }
}

function Execute-HTTPPostCommand() {
    param([string] $url = $null, [string] $data = $null);
    if ( $url -and $data ) {
        [System.Net.WebRequest]$webRequest = [System.Net.WebRequest]::Create($url);
        $webRequest.Credentials = Get-TwitterCredentials
        $webRequest.PreAuthenticate = $true;
        $webRequest.ContentType = "application/x-www-form-urlencoded";
        $webRequest.Method = "POST";
        $webRequest.Headers.Add("X-Twitter-Client", "PoshTweet");
        $webRequest.Headers.Add("X-Twitter-Version", "1.0");
        $webRequest.Headers.Add("X-Twitter-URL", "http://devcentral.f5.com/s/poshtweet");

        [byte[]]$bytes = [System.Text.Encoding]::UTF8.GetBytes($data);
        $webRequest.ContentLength = $bytes.Length;
        [System.IO.Stream]$reqStream = $webRequest.GetRequestStream();
        $reqStream.Write($bytes, 0, $bytes.Length);
        $reqStream.Flush();

        [System.Net.WebResponse]$resp = $webRequest.GetResponse();
        $rs = $resp.GetResponseStream();
        [System.IO.StreamReader]$sr = New-Object System.IO.StreamReader -argumentList $rs;
        [string]$results = $sr.ReadToEnd();
        $results;
    }
}

Credentials

Once those were completed, it was relatively simple to get the Status methods for public_timeline, friends_timeline, user_timeline, show, update, replies, and destroy going. But for several of those services, user credentials were required. I opted to store them in a script-scoped variable and provided a few functions to get/set the username/password for Twitter.
$script:g_creds = $null;

function Set-TwitterCredentials() {
    param([string]$user = $null, [string]$pass = $null);
    if ( $user -and $pass ) {
        $script:g_creds = New-Object System.Net.NetworkCredential -argumentList ($user, $pass);
    } else {
        $creds = Get-TwitterCredentials;
    }
}

function Get-TwitterCredentials() {
    if ( $null -eq $g_creds ) {
        trap {
            Write-Error "ERROR: You must enter your Twitter credentials for PoshTweet to work!";
            continue;
        }
        $c = Get-Credential
        if ( $c ) {
            $user = $c.GetNetworkCredential().Username;
            $pass = $c.GetNetworkCredential().Password;
            $script:g_creds = New-Object System.Net.NetworkCredential -argumentList ($user, $pass);
        }
    }
    $script:g_creds;
}

The Status functions

Now that the credentials were out of the way, it was time to tackle the Status methods. These methods are a combination of HTTP GETs and POSTs that return an array of status entries. For those interested in the raw underlying XML that’s returned, I’ve included the $raw parameter, which, when set to $true, skips the user-friendly display and dumps the full XML response. This would be handy if you want to customize the output beyond what I’ve done.

#----------------------------------------------------------------------------
# public_timeline
#----------------------------------------------------------------------------
function Get-TwitterPublicTimeline() {
    param([bool]$raw = $false);
    $results = Execute-HTTPGetCommand "http://twitter.com/statuses/public_timeline.xml";
    Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# friends_timeline
#----------------------------------------------------------------------------
function Get-TwitterFriendsTimeline() {
    param([bool]$raw = $false);
    $results = Execute-HTTPGetCommand "http://twitter.com/statuses/friends_timeline.xml";
    Process-TwitterStatus $results $raw
}

#----------------------------------------------------------------------------
# user_timeline
#----------------------------------------------------------------------------
function Get-TwitterUserTimeline() {
    param([string]$username = $null, [bool]$raw = $false);
    if ( $username ) {
        $username = "/$username";
    }
    $results = Execute-HTTPGetCommand "http://twitter.com/statuses/user_timeline$username.xml";
    Process-TwitterStatus $results $raw
}

#----------------------------------------------------------------------------
# show
#----------------------------------------------------------------------------
function Get-TwitterStatus() {
    param([string]$id, [bool]$raw = $false);
    if ( $id ) {
        $results = Execute-HTTPGetCommand ("http://twitter.com/statuses/show/" + $id + ".xml");
        Process-TwitterStatus $results $raw;
    }
}

#----------------------------------------------------------------------------
# update
#----------------------------------------------------------------------------
function Set-TwitterStatus() {
    param([string]$status);
    $encstatus = [System.Web.HttpUtility]::UrlEncode("$status");
    $results = Execute-HTTPPostCommand "http://twitter.com/statuses/update.xml" "status=$encstatus";
    Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# replies
#----------------------------------------------------------------------------
function Get-TwitterReplies() {
    param([bool]$raw = $false);
    $results = Execute-HTTPGetCommand "http://twitter.com/statuses/replies.xml";
    Process-TwitterStatus $results $raw;
}

#----------------------------------------------------------------------------
# destroy
#----------------------------------------------------------------------------
function Destroy-TwitterStatus() {
    param([string]$id = $null);
    if ( $id ) {
        Execute-HTTPPostCommand "http://twitter.com/statuses/destroy/$id.xml" "id=$id";
    }
}

You may notice the Process-TwitterStatus function. Since there was a lot of duplicate code in each of these functions, I went ahead and implemented it in its own function below:

function Process-TwitterStatus() {
    param([string]$sxml = $null, [bool]$raw = $false);
    if ( $sxml ) {
        if ( $raw ) {
            $sxml;
        } else {
            [xml]$xml = $sxml;
            if ( $xml.statuses.status ) {
                $stats = $xml.statuses.status;
            } elseif ( $xml.status ) {
                $stats = $xml.status;
            }
            $stats | Foreach-Object -process {
                $info = "by " + $_.user.screen_name + ", " + $_.created_at;
                if ( $_.source ) {
                    $info = $info + " via " + $_.source;
                }
                if ( $_.in_reply_to_screen_name ) {
                    $info = $info + " in reply to " + $_.in_reply_to_screen_name;
                }
                "-------------------------";
                $_.text;
                $info;
            };
            "-------------------------";
        }
    }
}

A few hurdles

Nothing goes without a hitch, and I found myself pounding my head over why my POST commands were all getting HTTP 417 errors back from Twitter. A quick search brought up a post on Phil Haack’s website as well as a Google Group discussing an update in Twitter’s services in how they react to the Expect 100 HTTP header. A simple setting in the ServicePointManager at the top of the script was all that was needed to get things working again.

[System.Net.ServicePointManager]::Expect100Continue = $false;

PoshTweet in Action

So, now it’s time to try it out. First you’ll need to dot-source the script and then set your Twitter credentials. This can be done in your $profile file if you wish. Then you can access all of the included functions. Below, I’ll call Set-TwitterStatus to update my current status, and then Get-TwitterUserTimeline and Get-TwitterFriendsTimeline to get my current timeline as well as that of my friends.

PS> . .\PoshTweet.ps1
PS> Set-TwitterCredentials
PS> Set-TwitterStatus "Hacking away with PoshTweet"
PS> Get-TwitterUserTimeline
-------------------------
Hacking away with PoshTweet
by joepruitt, Tue Dec 30, 12:33:04 +0000 2008 via web
-------------------------
PS> Get-TwitterFriendsTimeline
-------------------------
@astrout Yay, thanks!
by mediaphyter, Tue Dec 30 20:37:15 +0000 2008 via web in reply to astrout
-------------------------
RT @robconery: Headed to a Portland Nerd Dinner tonite - should be fun! http://bit.ly/EUFC
by shanselman, Tue Dec 30 20:37:07 +0000 2008 via TweetDeck
-------------------------
...

Things Left To Do

As I said, this was implemented in an hour or so last night, so it definitely needs some more work, but I believe I’ve got the Status methods pretty much covered. Next I’ll move on to the other services of User, Direct Message, Friendship, Account, Favorite, Notification, Block, and Help when I’ve got time. I’d also like to add support for the “source” field. I’ll need to set up a public-facing landing page for this library so the folks at Twitter will add it to their system. Once I get all the services implemented, I’ll move forward with formalizing this as an application and submit it for consideration.

Collaboration

I’ve posted the source to this set of functions on the DevCentral wiki under PsTwitterApi. You’ll need to create an account to get to it, but I promise it will be worth it! Feel free to contribute and add to it if you have the time. Everyone is welcome and encouraged to tear my code apart, optimize it, enhance it.
Just as long as it gets better in the process. B-)

Virtualize This.
#ApplicationMobility holds a place in IT’s future. Check out this app virtualization and movement tool.

We in IT have spent a ton of time, ink, and electrons discussing server virtualization, and with good reason. Server virtualization did wonders for IT as an industry, offering hardware independence for older applications – many an OS/2 app that was necessary but not “cool” ended up on VMware to relieve worries that the hardware it was running on might break – and a lot of poorly utilized servers were consolidated. Meanwhile, we largely ignored all the other bits of virtualization while they were growing up. Application virtualization has been around forever, and yet we don’t spill barrels of ink about it. Many organizations use app virtualization, yet it gets third billing, mentioned only in passing when discussing overall virtualization strategy. That might just be about to end.

I recently had the opportunity to chat with Greg O’Connor of AppZero about their solution to application virtualization. It’s not the application virtualization of a decade ago, that’s for certain. AppZero wraps up an application in a device-independent package. As long as you’re moving from like OS to like OS, you can move the application across the globe. This may not sound like a big deal in the age of virtualizing everything (did you see F5’s press release about virtualizing the network for VMware?), but in practice what AppZero is doing is certainly the type of thing that IT admins need, even if they don’t yet know they need it.

Consider moving an application from cloud A to cloud B. Do you copy the entire VM across the Internet? Do you reinstall everything and just copy the application bits across the Internet? Both are inefficient. Copying an entire VM – even with compression – can be expensive in dollar terms because of the sheer volume of bits leaving your cloud, and both approaches take an inordinate amount of time. In the case of installing everything and then just copying the app files, there’s also the risk of human error. But what if you could install the operating system on the target, and then simply say “move my app”? That’s what AppZero is building toward. And from what I’ve seen, they’re doing a good job of it. Moving only the application means that you’re moving less across the network, but they also compress, so you’re moving really very little. Depending on the app, the savings can be huge.

While I no longer have the full-fledged test lab that we used to use to test out vendors’ claims, I did pop out to their “enterprise app store” and install OpenOffice directly. I also sat through a demo where an entire web application was shifted from Amazon to IBM clouds. The entire web app. While we were on the phone. For my part, I prefer to talk about the parts that I’ve touched more than the parts I’ve seen. I’ve been through enough dog-n-pony shows to know there are a million ways for marketing folks to show something that’s not there yet… or not there at all. So what I can touch is a much better gauge of product readiness.

The OpenOffice install was the fastest I have ever done. I’ve installed OpenOffice a few bazillion times, and this was the fastest. The amazing part about that statement is that all of my previous installs were from local disk (CD or hard disk, depending), while this one was over a hotel network. I was attending meetings at corporate HQ, so sitting in my hotel room at night, I ran the installer over hotel wireless. Not the fastest environment in the world. Yet it was the fastest install I’ve done.
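To put the “move only the app” argument in rough numbers, here is a small illustrative PowerShell calculation. The image size, package size, and link speed are assumptions chosen purely for the example – they are not AppZero measurements.

# Illustrative only: compare moving a full VM image vs. a compressed app package.
$linkMbps     = 100   # assumed WAN link speed in megabits per second
$vmImageGB    = 40    # assumed size of a full VM image
$appPackageMB = 400   # assumed size of a compressed application package

$vmMinutes  = ($vmImageGB * 1024 * 8) / $linkMbps / 60
$appMinutes = ($appPackageMB * 8) / $linkMbps / 60

"Full VM image:  {0:N0} minutes" -f $vmMinutes
"App package:    {0:N1} minutes" -f $appMinutes

Under those assumptions that works out to roughly 55 minutes versus about half a minute – before per-gigabyte transfer charges, protocol overhead, or retries even enter the picture.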
So what use do we have for someone like AppZero? It is time to start asking those questions. The “limitations” that Greg admitted to are not, IMO, all that limiting. First is the “like to like” requirement. I was (and you will be) unsurprised to discover that you can’t move an app running on Windows to a Linux server. While I’d love to see the day when we have that level of portability, first you crawl, then you walk. Second, in the web app world, the “app” you are moving is the web server, and it takes the directory structure with it, so you might end up with several web apps moved when you only intended to move one. Knowing about that one means you can plan around it.

The mobility falls into two categories also. They wrap the application in a container for movement, and that container will run on your machine as-is. But it’s not running native, which causes some support staff to get touchy. So they provide a “dissolve” function that unwraps it and moves it to a 100% native install – registry modifications, copying to default directories, etc. The one issue I did have a bit of concern about was that you have to choose which services move with the app. When moving, you are presented with a list of services and you have to pick which ones go along. Hopefully they’re working on making that more mobile. Again, that does not figure into their “Enterprise App Store”, which offers pre-packaged applications; it only applies to moving a live app.

Cloud mobility requires that you be able to bring up processing power on a new cloud to avoid lock-in. AppZero is young yet, but they show promise of filling that gap by allowing you to package applications and move them along. Integration for large applications might well be problematic – if you move the web app but not the database, or if you move the entire application and need to merge databases, for example. But cloud mobility had to get started, and this is a start. AppZero is relatively new, as is the “application mobility” space that analysts have placed them in. Lori and I were discussing how cool technology like this would be for enabling “I have application X; it can run in Amazon, IBM, Rackspace, or the datacenter… what are the costs, strengths, and weaknesses of each?” It’s going to be an interesting ride. We certainly need this market segment to grow and mature, and it will be fun seeing where it ends up. I’ll certainly be paying more attention.

Of course, F5 gives me a lot of leeway about what I choose to cover in my blog but, in the end, pays me to consider things in light of our organization, so I can say unequivocally that it doesn’t hurt at all that you’ll need global DNS and global server load balancing (GSLB) to take advantage of moving applications around the globe. Particularly the GSLB part, where a wide IP can represent whatever you need it to, dynamically, without waiting for DNS propagation. But that’s only for the server side. The desktop application side is very cool too, and I’ll be watching both. Meanwhile, Greg tells me they are taking the Enterprise App Store into beta next month. If you have questions, you can contact him at go connor /at/ app zero /dot/ com. After you remove the spaces and s// the //.

When VDI Comes Calling
It makes for an interesting point/counterpoint to read up on Virtual Desktop Infrastructure (VDI) deployment – or the lack thereof. The industry definitely seems to be split on whether VDI is the wave of the future, what its actual level of deployment is, and whether VDI is more secure than traditional desktops. There seems to be very little consensus on any of these points, and yet VDI deployments keep rolling out. Meanwhile, many IT folks are worried about all of these “issues” and more. Lots more. Like where the heck to get started even evaluating VDI needs for a given organization.

There’s a lot written about who actually needs VDI, and that is a good place to start. Contrary to what some would have you believe, not every employee should be dumped into VDI. Some employees will garner a lot more benefit from VDI than others, depending upon work patterns, access needs, and the software tools they use. There are some excellent discussions of use cases out there; I won’t link to a specific one because you’ll need to find one that clearly suits your needs, but searching on VDI use cases will get you started.

Then the hard part begins. It is relatively easy to identify groups in your organization that share most of the same software and could either benefit from, or at least not be harmed by, virtualizing their desktop. Note that in this particular blog post I am ignoring application virtualization in favor of the more complete desktop virtualization – just thought I’d mention that for clarity. The trick, once you’ve identified users that are generally the same, is to figure out what applications they actually use, what their usage patterns are (if they’re maxing out the CPU of a dedicated machine, that particular user might not be a great choice for VDI unless all the other users that share a server with them are low-usage), and how access from locations other than their desktop could help them work better/smarter/faster.

A plan to plan. I don’t usually blog about toolsets that I’ve never even installed; working at Network Computing Magazine made me leery of “reviews” by people who’ve never touched a product. But sometimes (like with Oracle DataGuard about a year ago), an idea strikes so squarely at the heart of what enterprise IT needs to resolve a given problem that I think it’s worth talking about. Sometimes – like with DataGuard – lots of readers reap the benefits; sometimes – like with Cirtas – I look like a fool. Those are the risks of talking about toys you don’t touch. That is indeed an introduction to products I haven’t touched.

Centrix Software’s Workspace IQ and Lakeside Software’s Systrack Virtual Machine Planner are tools that can help you evaluate usage patterns, the software actually run, and usage volumes. Software actually run is a good indicator of what users actually need because, as everyone in IT knows, software is often installed and then never removed when it is no longer needed. Usage patterns help you group VMs together on servers. The user that is active at night can share a VM with daytime users without any risk of oversubscribing the CPU or memory. Usage volumes also help you figure out who/how many users you can put on a server. For one group it may be very few heavy users; for another group it may be very many light users. And that’s knowledge you need to get started.
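As a rough illustration of how usage volumes translate into desktops per host, here is a small PowerShell sketch. Every capacity figure and per-user footprint below is an assumption picked for the example, not a measurement from the tools mentioned above.

# Illustrative only: back-of-the-envelope desktops-per-host estimate.
$hostRamGB    = 192   # assumed usable RAM per virtualization host
$hostCpuCores = 24    # assumed physical cores per host
$ramPerUserGB = 2     # assumed average per-desktop memory footprint
$usersPerCore = 6     # assumed consolidation ratio for light users

$byRam = [math]::Floor($hostRamGB / $ramPerUserGB)
$byCpu = $hostCpuCores * $usersPerCore

"RAM-bound estimate: $byRam desktops per host"
"CPU-bound estimate: $byCpu desktops per host"
"Plan around the lower of the two: $([math]::Min($byRam, $byCpu))"

A group of heavy users changes those assumptions dramatically, which is exactly why measuring real usage first matters.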
That knowledge helps you scope the entire project, including licenses and servers; it helps you identify the groups that will have to be trained – and yes, coddled – before, during, and shortly after the rollout; and it helps you talk with vendors about their products’ capabilities. One nice bit about Systrack VMP is that it suggests the correct VDI vendor for your environment. If it does that well, it certainly is a nice feature.

These aren’t new tools by any means, but as more and more enterprises look into VDI, talking about solutions like this will give hapless architects and analysts who were just thrown into VDI projects a place to start when tackling a very large project. It wouldn’t hurt to read the blog over at Moose Logic either, specifically the Top Ten VDI Mistakes entry.

And when you’re planning for VDI, plan for the network too. There’s a lot more traffic on the LAN in a VDI deployment than there was before you started, and for certain (or maybe all) users you’re going to want high availability. We can help with that when the time comes; just hit DevCentral or our website and search for VDI. Hopefully this is a help to those of you who are put onto a VDI project and expected to deliver quickly. It’s not all there is out there by any stretch, but it can get you started.

IBM Rational AppScan
In my last post, I introduced my role as Solution Engineer for our IBM partnership and the many exciting solutions we have coming out of it. Today I’m going to briefly cover one of our latest releases, the IBM Rational AppScan parser.

AppScan

IBM’s Rational AppScan implements the latest scanning technology to test your web applications for vulnerabilities. I’ve run this scanner many times, and the complexity and depth of its scans are mind-boggling. There are something like 30,000 tests that it can run in comprehensive mode, looking for all types of attacks against a website. When launching a new application or reviewing the security of an existing site, an investment like Rational AppScan may save your entire organization enormous amounts of pain and expense.

So how does AppScan work? You simply point it at your website and go. In a recent run, I tested a sample ecommerce site (designed to have flaws) and found over 129 problems, 37 of them critical exploits such as SQL injection and cross-site scripting. The beautiful thing with AppScan is that you see exactly where the exploit took place, how to repeat it, and how to mitigate it. It’s an amazing tool and you should definitely check out the trial.

Once you have your scan, the next step is to fix the issues. In the example above, the 37 critical vulnerabilities might take days or weeks to solve. And that doesn’t even address the four dozen other medium- and low-priority issues. So how do you help speed this along? This is where BIG-IP ASM enters the picture. As of version 11.1, our IBM AppScan integration allows you to export your reports from AppScan, import them into ASM, and immediately remediate the critical problems. In my test, I was able to remediate 21 of the 37 critical vulnerabilities, leaving just a small handful to be worked on by the developers.

F5 Friday: A War of Ecosystems
Nokia’s brutally honest assessment of its situation identifies what is not always obvious in the data center – it’s about an ecosystem.

In what was certainly a wake-up call for many, Nokia’s CEO Stephen Elop tells his organization its “platform is burning.” In a leaked memo reprinted by Engadget and picked up by many others, Elop explained the analogy as well as why he believes Nokia is in trouble. Through careful analysis of its competitors and their successes, he finds the answer in the ecosystem those competitors have built – comprising developers, applications, and more.

“The battle of devices has now become a war of ecosystems, where ecosystems include not only the hardware and software of the device, but developers, applications, ecommerce, advertising, search, social applications, location-based services, unified communications and many other things. Our competitors aren’t taking our market share with devices; they are taking our market share with an entire ecosystem. This means we’re going to have to decide how we either build, catalyse or join an ecosystem.”

If you’re wondering what this could possibly have to do with networking and application delivery, well, the analysis Elop provides regarding the successes of a mobile device vendor can be directly applied to the data center. The nature of data centers and networks is changing. They are becoming more dynamic, more integrated, more dependent upon collaboration and connections between devices (components) that have traditionally stood alone. As data center models evolve and the demands placed upon them increase the need for contextual awareness, for collaboration, and for the ability to be both reactive and proactive in applying policies across a wide spectrum of data center concerns, success becomes increasingly dependent on a component’s ability to support – and be supported by – an ecosystem. Not just the success of vendors, which was Elop’s focus, but the success of data center architecture implementations.

Countering the rising cost and complexity introduced by new computing and networking models requires automation, orchestration, and collaboration across data center components. Cloud computing and virtualization have turned the focus from technology-focused components to process-oriented platforms – from individual point solutions to integrated, collaborative systems that encourage development and innovation as a means to address the challenges arising from extreme dynamism.

F5 Networks Wins VMware Global Technology Innovator Award

Yesterday we took home top honors for enhancing the value of VMware virtualization solutions for companies worldwide. At VMware Partner Exchange 2011, VMware’s annual worldwide partner event, F5 was recognized with VMware’s Technology Innovator Partner of the Year Award. Why is that important? Because it recognizes the significant value placed on building a platform and developing an ecosystem in which that platform can be leveraged to integrate and collaborate on solutions with partners and customers alike. And it is about an ecosystem; it is about collaborative solutions that address key data center challenges that may otherwise hinder the adoption of emerging technologies like cloud computing and virtualization.
A robust and flexible application delivery platform provides not only the means by which data and traffic can be dynamically delivered and secured, but also the means through which a more comprehensive strategy for addressing the operational challenges of increasingly dynamic data center architectures can be implemented. The collaboration between VMware and F5’s BIG-IP platforms is enabled through integration – through Infrastructure 2.0-enabled systems that create an environment in which flexible architectures and dynamism can be managed efficiently. In 2010 alone, F5 and VMware collaborated on a number of solutions leveraging the versatile capabilities of F5’s BIG-IP product portfolio, including:

Accelerated long-distance live migration with VMware vMotion – The joint solution helps solve latency, bandwidth, and packet-loss issues, which historically have prevented customers from performing live migrations between data centers over long distances.

An integrated enterprise cloudbursting solution with VMware vCloud Director – The joint solution simplifies and automates the use of cloud resources to enhance application delivery performance and availability while minimizing capital investment.

Optimized user experience and secure access capabilities with VMware View – The solution enhances the VMware View user experience with secure access, single sign-on, high performance, and scalability.

“Since joining VMware’s Technology Alliance Partner program in 2008, F5 has driven a number of integration and interoperability efforts aimed at enhancing the value of customers’ virtualization and cloud deployments,” said Jim Ritchings, VP of Business Development at F5. “We’re extremely proud of the industry-leading work accomplished with VMware in 2010, and we look forward to continued collaboration to deliver new innovations around server and desktop virtualization, cloud solutions, and more.”

It is just such collaboration that builds the robust ecosystem necessary to move forward successfully with dynamic data center models built upon virtualization and cloud computing principles. Without this type of collaboration, and the platforms that enable it, the efficiencies of private cloud computing and the economy of scale of public cloud computing simply wouldn’t be possible. F5 has always been focused on delivering applications, and that has meant not just partnering extensively with application providers like Oracle, Microsoft, and IBM, but also partnering and collaborating with infrastructure providers like HP, Dell, and VMware to create solutions that address the very real challenges associated with data center and traffic management.

Elop is exactly right when he points to ecosystems as the key to the future. In the case of network and application networking solutions, that ecosystem is as much about vendor relationships and partnerships as it is about solutions that enable IT to better align with business and operational goals and to reduce the complexity introduced by increasingly dynamic operations. VMware’s recognition of the value of that ecosystem – of the joint solutions designed and developed through partnerships – is great validation of the important role the ecosystem plays in the successful implementation of emerging data center models.
What Do Database Connectivity Standards and the Pirate’s Code Have in Common?
A: They’re both more what you’d call “guidelines” than actual rules.

An almost irrefutable fact of application design today is the need for a database, or at a minimum a data store – i.e., a place to store the data generated and manipulated by the application. A second reality is that despite the existence of database access “standards”, no two database solutions support exactly the same syntax and protocols. Connectivity standards like JDBC and ODBC exist, yes, but like SQL they are variable, resulting in implementations just different enough to effectively cause vendor lock-in at the database layer. You simply can’t take an application developed to use an Oracle database, point it at a Microsoft or IBM database, and expect it to work. Life’s like that in the development world. Database connectivity “standards” are a lot like the pirate’s Code, described well by Captain Barbossa in Pirates of the Caribbean as “more what you’d call ‘guidelines’ than actual rules.”

It shouldn’t be a surprise, then, to see the rise of solutions that address this problem, especially in light of an increasing awareness of (in)compatibility at the database layer and its impact on interoperability, particularly as it relates to cloud computing. Forrester analyst Noel Yuhanna recently penned a report on what is being called the Database Compatibility Layer (DCL). The focus of DCLs at the moment is on migration across database platforms because, as Noel points out, such migrations are expensive and time-consuming.

“Database migrations have always been complex, time-consuming, and costly due to proprietary data structures and data types, SQL extensions, and procedural languages. It can take up to several months to migrate a database, depending on database size, complexity, and usage of these proprietary features. A new technology has recently emerged for solving this problem: the database compatibility layer, a database access layer that supports another database management system’s (DBMS’s) proprietary extensions natively, allowing existing applications to access the new database transparently.”
– Simpler Database Migrations Have Arrived (Forrester Research Report)

Anecdotally, having been on the implementation end of such a migration, I can’t disagree with the assessment. Whether the right answer is to sit down and force some common standards on database connectivity or to build a compatibility layer is a debate for another day. Suffice it to say that right now the former is unlikely given the penetration and pervasiveness of existing database connectivity, so the latter is probably the most efficient and cost-effective solution. After all, any change in the core connectivity would require the same level of application modification as a migration – not an inexpensive proposition at all.

According to Forrester, a Database Compatibility Layer (DCL) is a “database layer that supports another DBMS’s proprietary SQL extensions, data types, and data structures natively. Existing applications can transparently access the newly migrated database with zero or minimal changes.” By extension, this should also mean that an application could easily access one database and a completely different one using the same code base (assuming zero changes, of course). For the sake of discussion, let’s assume that a DCL exists that exhibits just that characteristic – complete interoperability at the connectivity layer. Not just for migration, which is of course the desired use, but for day-to-day use.
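As a rough analogy for the kind of abstraction a DCL generalizes, consider how .NET’s provider factory model already lets code target a database through a provider-neutral interface, with only the provider name and connection string changing. A minimal PowerShell sketch – the provider name, connection string, and table are placeholder assumptions:

# Illustrative only: provider-neutral data access via ADO.NET provider factories.
# Swapping databases means changing the provider name and connection string,
# not the code that runs the query - the idea a DCL extends to proprietary
# SQL extensions, data types, and structures.
$providerName     = "System.Data.SqlClient"                                  # assumption
$connectionString = "Server=dbhost;Database=appdb;Integrated Security=True"  # assumption

$factory = [System.Data.Common.DbProviderFactories]::GetFactory($providerName)
$conn    = $factory.CreateConnection()
$conn.ConnectionString = $connectionString
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT COUNT(*) FROM orders"   # assumes the common SQL subset only
"Order count: " + $cmd.ExecuteScalar()

$conn.Close()

The catch, of course, is that this only holds as long as the SQL itself sticks to the common subset – exactly the gap a DCL aims to close.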
What would that mean for cloud computing providers – both internal and external?

ENABLING IT as a SERVICE

Based on our assumption that a DCL exists and is implemented by multiple database solution vendors, a veritable cornucopia of options for moving enterprise architectures toward IT as a Service opens up – more than might at first be obvious. Consider that applications have variable needs in terms of performance, redundancy, disaster recovery, and scalability. Some applications require higher performance, others just need a nightly or even weekly backup, and some, well, some are simply not important enough to need more than the general IT operations backups to restore if something goes wrong. In some cases the applications might have varying needs based on the business unit deploying them. The same application used by finance, for example, might have different requirements than the same one used by developers. How could that be? Because the developers may only be using that application for integration or testing while finance is using it for realz. It happens.

What’s more interesting, however, is how a DCL could enable a more flexible, service-oriented buffet of database choices, especially if the organization used different database solutions to support different transactional, availability, and performance goals. If a universal DCL (or a near-universal one, at least) existed, business stakeholders – together with their IT counterparts – could pick and choose the database “service” they wished to employ based not only on the technical characteristics and operational support but also on the costs and business requirements. It would also allow them to “migrate” over time as applications became more critical, without requiring a massive investment in upgrading or modifying the application to support a different back-end database.

Obviously I’m picking just a few examples that may or may not be applicable to every organization. The bigger point here, I think, is the flexibility in architecture and design afforded by such a model – one that balances costs with operational characteristics. Monitoring of database resource availability, too, could be greatly simplified by such a layer, providing solutions natively supported by the upstream devices responsible for availability at the application layer – which ultimately depends on the database, but which is often an ignored component because of the complexity currently inherent in supporting such a varied set of connectivity standards.

It should also be obvious that this model would work for a PaaS-style provider that is not tied to any given database technology. A PaaS-style vendor today must either invest effort in developing and maintaining a services layer for database connectivity or restrict customers to a single database service. The latter is fine if you’re creating a single-stack environment such as Microsoft Azure, but not so fine if you’re trying to build a more flexible set of offerings to attract a wider customer base. Again, the same note as above: providers would have a much more flexible set of options if they could rely upon what is effectively a single database interface regardless of the specific database implementation. More important for providers, perhaps, is the migration capability noted by the Forrester report in the first place, as one of the inhibitors of moving existing applications to a cloud computing provider is support for the same database across both enterprise and cloud computing environments.
While services layers are certainly a means to the same end, such layers are not universally supported. There’s no “standard” for them, not even a set of best-practice guidelines, and the resulting application code suffers exactly the same issue as the use of proprietary database connectivity: lock-in. You can’t pick an application up and move it to the cloud – or to another database – without changing some code. Granted, a services layer is more efficient in this sense, as it serves as an architectural strategic point of control at which connectivity is aggregated and thus database implementation specifics are abstracted from the application. That means the database can be changed without impacting end-user applications; only the services layer need be modified. But even that approach is problematic for packaged applications that rely upon database connectivity directly and do not support such service layers. A DCL, ostensibly, would support packaged and custom applications alike – if it were implemented properly in all commercial database offerings.

CONNECTIVITY CARTEL

And therein lies the problem: if it were implemented properly in all commercial database offerings. There is a risk here of a connectivity cartel arising, where database vendors form alliances with other database vendors to support a DCL while “locking out” vendors whom they have decided do not belong. Because the DCL depends on supporting “proprietary SQL extensions, data types, and data structures natively”, there may be a need for database vendors to collaborate in order to properly support those proprietary features. If collaboration is required, it is possible to deny that collaboration as a means to control who plays in the market. It’s also possible for a vendor to slightly change some proprietary feature in order to “break” the others’ support. And of course the sheer volume of work necessary for a database vendor to support all other database vendors could overwhelm smaller database vendors, leaving them with no real way to support everyone else.

The idea of a DCL is an interesting one, and it has its appeal as a means to forward compatibility for migration – both temporary and permanent. Will it gain in popularity? For the latter, perhaps; but for the former? Less likely. The inherent difficulty and scope of supporting such a wide variety of databases natively will certainly inhibit any such efforts. Solutions such as a RESTful interface, à la PHP REST SQL, or a JSON-over-HTTP solution like DBSlayer may be more appropriate in the long run – if they were to be standardized. And by standardized I mean standardized with industry-wide and agreed-upon specifications. Not more of the “more what you’d call ‘guidelines’ than actual rules” that we already have.

An Aristotelian Approach to Devops and Infrastructure Integration
Aristotle’s famous four questions can be applied to infrastructure integration as a means to determine whether an API or an SDK is the right tool for the job.

While bouncing back and forth last week with Patrick Debois on the role of devops, vendors, and infrastructure integration, he left a comment on the blog post that started the discussion that included the following assertion:

“On a side note: vendors should treat their API's as first class citizens. Too often (and i personally feel iControl too) API's expose a thinking model based upon the internal implementation of the product and they are not focused on using it from a business perspective. Simplicity to understand Load balancer -> create_network, ... vs. understanding all the objects. There is real work to be done there! Object Oriented languages are great, but sometimes a scripted language goes around easier.”

Which was distilled down to: APIs need to be more than a service-enabled SDK. Nothing new there; I’ve made that assertion before (and so have many, many, many other pundits, experts, and architects). What Patrick is saying, I think, is that today an infrastructure developer must understand not only the concept of, and relationship between, a load balancer, the network, and the resources it is managing, but also each individual object that comprises those entities within the SDK. In order to create a “load balancer”, for example, you have to understand not only what a “load balancer” is, but also the difference between a pool and a member, monitoring objects, virtual servers, and a lengthy list of options that can be used to configure each of those objects. What’s needed is an operationally focused API in addition to a component- and object-focused SDK.

One of the failings of SOA was that it too often failed to move beyond service-enablement into true architecture. It failed to adequately represent business objects and too often simply wrapped up programmatic design components with cross-platform protocols like SOAP and HTTP. It made integration easier in some ways, and in others did very little to encourage the efficiency through re-use necessary for SOA to make the impact it was predicted to make. Okay, enough of the lamentation for SOA. The point of an API – even in the infrastructure world – should be to abstract and ultimately encapsulate the business or operational tasks that comprise a process. Which is a fairly wordy way to say “an API call should do everything necessary to achieve a single operational task.” What we often have today in the infrastructure world is still a service-enabled SDK; every function you could ever want to perform is available, but those functions are not aggregated or collected into discrete, reusable, task-oriented API calls. The former are methods; the latter are process integration and invocation points. Where SOA encapsulated business functions, APIs for infrastructure encapsulate operational tasks.

That said, the more I thought about it, the more I realized we really do need both. Basically, I think what we have here is a “right tool for the job” issue. The question is which tool is right for which job?

LET’S ASK ARISTOTLE

(Illustration: Toothpaste for Dinner)

Aristotle (384–322 BC) is in part known for his teleological philosophy. He more or less invented the rules of logic and was most certainly one of the most influential polymaths of his era (and likely beyond). In other words, he was really, really smart.
One of his most famous examples is his four causes, in which four questions are asked about a “thing” as a means to identify and understand it. These causes were directly related to his biological research and shaped our understanding of the nature of life and animals for many eons.

MATERIAL CAUSE: What is it made of?
FORMAL CAUSE: What sort of thing is it?
EFFICIENT CAUSE: What brought it into being?
FINAL CAUSE: What is it for?

These may, for a moment, seem more applicable to determining the nature of a table – a question most commonly debated by students of philosophy late at night in coffee shops, and not something that weighs on the minds of those who are more concerned with meeting deadlines, taking out the garbage, or how to make it to the kids’ basketball game if that meeting runs late. But they are, in fact, more applicable to IT – and in particular to the emerging devops discipline – than it might first appear, especially when we start discussing the methods by which infrastructure and systems are integrated and managed by such a discipline.

There’s a place, I think, for both interface mechanisms – API or service-enabled SDK – but in order to determine which one is best in any given situation, you’ll need to get Aristotelian and ask a few questions. Not about the integration method (API, SDK) but about the integration itself, i.e., what you’re trying to do and how that fits with the integration and invocation points provided by the infrastructure.

The reason such questions are necessary is that the SDK provides a very granular set of entry points into the infrastructure. The API is then (often) layered atop the SDK, aggregating and codifying the specific methods/functions needed to implement a specific operational task, which is what an infrastructure API should encapsulate. That means it’s abstracted and generalized by the implementers to represent a set of common operational tasks. The API should be more general than the SDK. So if your specific operational process has unique needs, it may be necessary to leverage the SDK instead to achieve such a process integration. This matters because the SDK often comes first: inter-vendor and even intra-vendor infrastructure integration is often accomplished using the same SDK that is offered to devops. The granularity of an SDK is necessary to accomplish specific inter-vendor integration because it is highly specific to the vendors, the products, and the integration being implemented. So the SDK is necessary to promote the integration of infrastructure components as a means to collaborate and share context across data center architectures.

Similarly, the use case of the integration needs to be considered. Run-time (dynamic policy enforcement) is a different beast than configuration-time (provisioning) methods and may require the granular control offered by an SDK. Consider that dynamic policy enforcement may involve tweaking a specific “application” for one response but not another, or in response to the application of a downstream policy. An application or other infrastructure solution may deem a user/client/request to be malicious, for example, and need the means by which it can instruct the upstream infrastructure to deny the request, block the user, or redirect the client. Such “one time” actions are generally implemented through specific SDK calls because they are highly customized and unique to the implementation and/or the solutions’ integration.

CONCLUSION: Standardized (i.e., commoditized) operational process: API. Unique operational process: SDK.
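To give that distinction some shape, here is a minimal PowerShell sketch of a task-oriented “API” layered over granular “SDK” calls. Every function name below is a hypothetical placeholder – this is not iControl or any vendor’s actual interface, just the pattern of aggregation being described. The stubs exist only so the sketch runs on its own.

# Illustrative only: a task-oriented "API" composed of granular "SDK" calls.
# All function names are hypothetical; the stubs stand in for real device calls.
function New-AppPool       { param($Name) Write-Host "SDK: create pool $Name" }
function Add-PoolMember    { param($Pool, $Members, $Port) Write-Host "SDK: add $($Members -join ', ') to $Pool on port $Port" }
function New-HealthMonitor { param($Pool, $Type) Write-Host "SDK: attach $Type monitor to $Pool" }
function New-VirtualServer { param($Name, $Pool, $Port) Write-Host "SDK: create virtual server $Name -> $Pool" }

function Publish-LoadBalancedApplication {
    param([string]$AppName, [string[]]$ServerAddresses, [int]$Port = 80)
    # The single operational task the caller cares about: "make this app available."
    New-AppPool       -Name "$AppName-pool"
    Add-PoolMember    -Pool "$AppName-pool" -Members $ServerAddresses -Port $Port
    New-HealthMonitor -Pool "$AppName-pool" -Type "http"
    New-VirtualServer -Name "$AppName-vs" -Pool "$AppName-pool" -Port $Port
}

# The API consumer never touches pools, members, monitors, or virtual servers directly:
Publish-LoadBalancedApplication -AppName "store" -ServerAddresses "10.1.1.10", "10.1.1.11"

The operational API is the one call at the bottom; the SDK is everything it hides.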
FLEXIBILITY REQUIRES OPTIONS

Because the very nature of codifying processes and integrating infrastructure implies myriad use cases, scenarios, and requirements, there is a need for flexibility. That means options for integration and remote management of infrastructure components. We need both SDKs and APIs to ensure that the drive for simplicity does not eliminate the opportunity – and the need – for granularity in creating unique integrations supporting operational and business interests. Many infrastructure solutions today lack an SDK (one of the reasons cloud, specifically IaaS, makes it difficult to replicate an established data center architecture), and those with an SDK often lack an API.

Do we need service-enabled SDKs? Yes. Do we need operational APIs? Yes. APIs are absolutely necessary for enterprise devops to fully realize its goals of operational efficiency and codification of common provisioning and deployment processes. They’re necessary to create repeatable deployments and architectures that reduce errors and time to deploy. Simply implementing an API as a RESTful or scripting-friendly version of the SDK – i.e., highly granular function calls encapsulated using ubiquitous protocols – is not enough. What’s necessary is to recognize that there is a difference between an operational API and a service-enabled SDK. The API can then be integrated into “recipes” or what-have-you for devops tools such as Puppet and Chef, which can be distributed and, ultimately, improved upon or modified to fit the specific needs of a given organization. But we do need both, because without the ability to get granular we may lose the flexibility – and ultimately the control over the infrastructure – necessary to continue migrating from the traditional, static data centers of yesterday toward the dynamic and agile data centers of tomorrow. Without operationally commoditized APIs it is less likely that data centers will be able to leverage Infrastructure 2.0 as one of the means to bridge the growing gap between the cost of managing infrastructure components and the static budgets and resources that ultimately constrain data center innovation.

Distributing SAP Load using BIG-IP Advanced Monitoring
Related blogs & articles:
Standardizing Cloud APIs is Useless
Standardized Cloud APIs? Yes.
What Do Database Connectivity Standards and the Pirate’s Code Have in Common?
The Impact of Security on Infrastructure Integration
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
The World Doesn’t Care About APIs
Cloud, Standards, and Pants
Infrastructure 2.0: Squishy Name for a Squishy Concept
Infrastructure Integration: Metadata versus API
The API Is the New CLI

Distributing SAP Load using BIG-IP Advanced Monitoring

Several recent posts on the DevCentral forums have commented on the fact that SAP landscapes often have asynchronous batch jobs that cause higher CPU loads on certain servers. This causes problems for application delivery controllers because load balancing methods are typically based on connection counts. Picture the scenario where one connection causes a big CPU or memory spike and then goes away: the load balancer keeps sending the same share of new connections to that server even while it is slammed. The solution to this problem is relatively straightforward, and I recently documented it for everyone in our “Deploying F5 Networks with SAP NetWeaver” deployment guide, located here: SAP NetWeaver and Enterprise SOA: Enterprise Portal (BIG-IP v10.1, WOM, Edge, WA). The solution is based around using SNMP in conjunction with application-based monitors. The BIG-IP SNMP monitor provides the ability to perform dynamic load balancing based on CPU, memory or disk utilization, while the advanced monitors test the J2EE stack, the authentication system and the database. With this combination, SAP administrators should be able to sleep better at night knowing that their customers and users are getting to a live system that is best prepared to service the request.

So, how does layered monitoring work? If you are not aware, it’s possible to have two monitors for a particular pool or node. In the UI, the pool is simply configured with both monitors: in this example there are two, SAP-CPU and ICMP. In the real world, ICMP would be replaced with the advanced application monitor.

So, what does the SNMP monitor configuration look like? Here we have an SNMP monitor with a CPU threshold of 80%, a memory threshold of 0% and a disk threshold of 10%. Obviously this is from my testing, to ensure the monitor is working properly. What this defines is that if the disk is more than 10% full, or memory utilization is above 0%, or CPU utilization is above 80%, then the number of new connections sent to this node (server) is de-weighted. The coefficients allow further granular control over the traffic-weighting determination. This is not a config you would probably run in production, but it’s great for testing!

By logging into the BIG-IP advanced shell and enabling logging, I can see exactly what weight is being assigned. This is accomplished through the command:

bigpipe db Snmp.SNMPDCA.Log true

and then by tailing the snmpdca.log located in /var/tmp:

tail -f /var/tmp/snmpdca.log

There you have it. Now all we have to do is change the load balancing method for the pool to a dynamic one, apply the advanced application monitor, and we have a fully dynamic decision-making system. You can play with the thresholds and coefficients until you have a desired mix. The SNMP monitor will not mark a host down, but it will set the weight (between 1 and 100) in a manner that very few connections will get to a node that has exceeded all thresholds.
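If you want a mental model of how that weighting might behave, the sketch below is a rough approximation only (it is not F5's actual SNMP DCA algorithm, and snmp_get() is a hypothetical stand-in for a real SNMP query). It mirrors the example configuration above: metrics under their thresholds contribute nothing, metrics over their thresholds pull the weight down in proportion to the overage and the coefficient, and the weight never drops below 1, so a struggling node still receives a trickle of new connections rather than being marked down.

```python
# Rough mental model of threshold/coefficient-based de-weighting; this is
# NOT F5's actual SNMP DCA algorithm. snmp_get() is a hypothetical helper
# standing in for a real SNMP query against the host's agent.

def snmp_get(host: str, metric: str) -> float:
    """Hypothetical stand-in: return current utilization (0-100) for
    'cpu', 'memory' or 'disk' as reported by the host's SNMP agent."""
    raise NotImplementedError("replace with a real SNMP query")

# Thresholds mirror the example monitor above: CPU 80%, memory 0%, disk 10%.
THRESHOLDS = {"cpu": 80.0, "memory": 0.0, "disk": 10.0}
# Coefficient values are illustrative; they scale each metric's influence.
COEFFICIENTS = {"cpu": 1.5, "memory": 1.0, "disk": 2.0}

def dynamic_weight(host: str) -> int:
    """Return a load-balancing weight between 1 and 100 for the host.

    Metrics at or under their threshold contribute nothing; metrics over
    their threshold reduce the weight in proportion to how far over they
    are, scaled by their coefficient. The node is never marked down; at
    worst it keeps a weight of 1, so very few new connections reach it.
    """
    penalty = 0.0
    for metric, threshold in THRESHOLDS.items():
        value = snmp_get(host, metric)
        if value > threshold:
            overage = (value - threshold) / max(100.0 - threshold, 1.0)
            penalty += COEFFICIENTS[metric] * overage * 100.0
    return max(1, min(100, round(100.0 - penalty)))
```

The advanced application monitor still provides the up/down verdict; the SNMP-derived weight only shapes how much of the new traffic an up node receives.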
A quick note on the advanced health monitor: I can’t stress how important it is to have layered monitoring in this and other dynamic load balancing scenarios. Especially in an SAP NetWeaver J2EE stack installation (or even a dual-stack implementation), many things can go wrong. Just because the CPU, memory and disk are normal doesn’t mean that your J2EE stack hasn’t crashed, or that your authentication system hasn’t gone down. By layering monitors, you cover all BASIS. :-) I hope this post has been helpful, and as always, please email me if you have any questions. Remember that detailed installation instructions, including step-by-step configuration, are in the deployment guide linked at the top, or through f5.com --> Resources --> Deployment Guides --> SAP NetWeaver and Enterprise SOA: Enterprise Portal (BIG-IP v10.1, WOM, Edge, WA)

F5 Friday: The Evolution of Reference Architectures to Repeatable Architectures
A reference architecture is a solution with the “some assembly required” instructions missing. As a developer and later an enterprise architect, I evaluated and leveraged an untold number of “reference architectures.” Reference architectures, in and of themselves, are a valuable resource for organizations as they provide a foundational framework around which a concrete architecture can be derived and ultimately deployed. As data center architecture becomes more complex, employing emerging technologies like cloud computing and virtualization, this process becomes fraught with difficulty. The sheer number of moving parts and building blocks upon which such a framework must be laid is growing, and it is rarely the case that a single vendor has all the components necessary to implement such an architecture. Integration and collaboration across infrastructure solutions alone, a necessary component of a dynamic data center capable of providing the economy of scale desired, becomes a challenge on top of the expected topological design and configuration of individual components required to successfully deploy an enterprise infrastructure architecture from the blueprint of a reference architecture. It is becoming increasingly important to provide not only reference architectures, but repeatable architectures: architectural guidelines that not only provide the abstraction of a reference architecture but also offer the kind of detailed topological and integration guidance necessary for enterprise architects to move from concept to concrete implementation. Andre Kindness of Forrester Research said it well in a recent post titled “Don’t Underestimate The Value Of Information, Documentation, And Expertise!”:

Support documentation and availability to knowledge is especially critical in networking design, deployment, maintenance, and upgrades. Some pundits have relegated networking to a commodity play, but networking is more than plumbing. It’s the fabric that supports a dynamic business connecting users to services that are relevant to the moment, are aggregated at the point of use, and originate from multiple locations. The complexity has evolved from designing in a few links to tens of hundreds of relationships (security, acceleration, prioritization, etc.) along the flow of apps and data through a network. Virtualization, convergence, consolidation, and the evolving data center networks are prime examples of today’s network complexity.

REPEATABLE ARCHITECTURE

For many years one of F5’s differentiators has been the development and subsequent offering of “Application Ready Solutions”. The focus early on was on providing optimal deployment configuration of F5 solutions for specific applications including IBM, Oracle, Microsoft and, more recently, VMware. These deployment guides are step-by-step, detailed documentation developed through collaborative testing with the application provider that offers the expertise of both organizations in deploying F5 solutions for optimal performance and efficiency. As the data center grows more complex, so do the challenges associated with architecting a firm foundation. It requires more than application-specific guidance; it now requires architectural guidance. While reference architectures are certainly still germane and useful, there also needs to be an evolution toward repeatable architectures such that the replication of proposed solutions derived from the collaborative efforts of vendors is achievable.
It’s not enough to throw up an architecture composed of multiple solutions from multiple vendors without providing the insight and guidance necessary to actually replicate that architecture in the data center. That’s why it’s exciting to see our collaborative efforts with vendors of key data center solutions like IBM and VMware result in what are “repeatable architectures.” These are not simply white papers and PowerPoint decks that came out of joint meetings; these are architectural blueprints that can be repeated in the data center. These are the missing instructions for the “some assembly required” architecture. These jointly designed and developed architectures have already been implemented and tested – and then tested again and again. The repeatable architectures that emerge from such efforts are based on the combined knowledge and expertise of the engineers involved from both organizations, providing insight normally not discovered – and certainly not validated – by an isolated implementation. This same collaboration, this cooperative and joint design and implementation of architectures, is required within the enterprise as well. It’s not enough for architects to design and subsequently “toss over the wall” an enterprise reference architecture. It’s not enough for application specialists in the enterprise to toss a deployment over the wall to the network and security operations teams. Collaboration across compute, network and storage infrastructure requires collaboration across the teams responsible for their management, implementation and optimal configuration.

THE FUTURE is REPEATABLE

This F5-IBM solution is the tangible representation of an emerging model of collaborative, documented and repeatable architectures. It’s an extension of an existing model F5 has used for years to provide the expertise and insight of the engineers and architects inside the organization who know the products best, and who understand how to integrate, optimize and successfully deploy such joint solutions. Repeatable architectures are as important an evolution in the support of jointly developed solutions as APIs and dynamic control planes are to the successful implementation of data center automation.

More information on the F5-IBM repeatable enterprise cloud architecture:
Why You Need a Cloud to Call Your Own – F5 and IBM White Paper
Building an Enterprise Cloud with F5 and IBM – F5 Tech Brief
SlideShare Presentation F5 and IBM: Cloud Computing Architecture – Demo

Related blogs & articles:
F5 Application Ready Solutions
F5 and IBM Help Enterprise Customers Confidently Deploy Private Clouds
F5 Friday: A War of Ecosystems
Data Center Feng Shui: Process Equally Important as Preparation
Don’t Underestimate The Value Of Information, Documentation, And Expertise!
Service Provider Series: Managing the IPv6 Migration