F5 Friday: HP Cloud Maps Help Navigate Server Flexing with BIG-IP
The economy of scale realized in enterprise cloud computing deployments is as much (if not more) about process as it is about products. HP Cloud Maps simplify the former by automating it.

When the notion of "private" or "enterprise" cloud computing first appeared, it was dismissed as a non-viable model on the grounds that the economy of scale necessary to realize its true benefits simply wasn't present in the data center. What those arguments ignored was that the economy of scale desired by enterprises large and small was not necessarily one of technical resources, but of people. The widening gap between people and budgets on one side and data center components on the other was a primary cause of data center inefficiency. Enterprise cloud computing promised to relieve the increasing burden on people by moving it back to technology through automation and orchestration.

Achieving such a feat – and it is a non-trivial feat – required an ecosystem. No single vendor could hope to achieve the automation necessary to relieve the administrative and operational burden on enterprise IT staff, because no data center is ever comprised of components provided by a single vendor. Partnerships – technological and practical – were necessary to enable the automation of processes spanning multiple data center components and achieve the economy of scale promised by enterprise cloud computing models. HP, while providing a wide variety of data center components itself, has nurtured such an ecosystem of partners. Combined with HP Operations Orchestration, these technologically-focused partnerships have built out an ecosystem enabling the automation of common operational processes, effectively shifting the burden from people to technology and resulting in a more responsive IT organization.

HP CLOUD MAPS

One of the ways in which HP enables customers to take advantage of such automation capabilities is through Cloud Maps. Cloud Maps are similar in nature to F5's Application Ready Solutions: a package of configuration templates, guides and scripts that enable repeatable architectures and deployments. According to HP's description:

"HP Cloud Maps are an easy-to-use navigation system which can save you days or weeks of time architecting infrastructure for applications and services. HP Cloud Maps accelerate automation of business applications on the BladeSystem Matrix so you can reliably and consistently fast-track the implementation of service catalogs."

HP Cloud Maps enable practitioners to navigate the complex operational tasks that must be accomplished to achieve even what seems like the simplest of tasks: server provisioning. They enable automation of incident resolution, change orchestration and routine maintenance tasks in the data center, providing the consistency necessary for more predictable and repeatable deployments and responses to data center incidents.

Key components of HP Cloud Maps include:

- Templates for hardware and software configuration that can be imported directly into BladeSystem Matrix
- Tools to help guide planning
- Workflows and scripts designed to automate installation more quickly and in a repeatable fashion
- Reference whitepapers to help customize Cloud Maps for specific implementations

HP CLOUD MAPS for F5 NETWORKS

The partnership between F5 and HP has resulted in many data center solutions and architectures.
HP's Cloud Map for F5 Networks today focuses on what HP calls server flexing – the automation of server provisioning and de-provisioning on demand in the data center. It is designed specifically to work with F5 BIG-IP Local Traffic Manager (LTM) and provides the configuration and deployment templates, scripts and guides necessary to implement server flexing in the data center.

The Cloud Map for F5 Networks can be downloaded free of charge from HP and comprises:

- The F5 Networks BIG-IP reference template to be imported into HP Matrix infrastructure orchestration
- A workflow to be imported into HP Operations Orchestration (OO)
- An XSL file to be installed on the Matrix CMS (Central Management Server)
- A Perl configuration script for BIG-IP

White papers with specific instructions on importing reference templates, importing workflows and configuring BIG-IP LTM are also available from the same site. The result is automated server flexing that greatly reduces the manual intervention necessary to auto-scale and respond to capacity-induced events within the data center. To make the flow concrete, a hedged sketch of the scale-out/scale-in logic appears below.
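The actual templates and workflows ship with the Cloud Map itself; purely as an illustration of what server flexing automates, here is a minimal Python sketch of the scale-out/scale-in decision an orchestration workflow makes. The function names (provision_blade, add_pool_member, and so on) are hypothetical placeholders, not HP Operations Orchestration or F5 iControl calls.

```python
# Hypothetical sketch of a server-flexing workflow: the names below are
# illustrative placeholders, not actual HP or F5 API calls.

CPU_HIGH = 0.80   # scale out above 80% average pool utilization
CPU_LOW = 0.20    # scale in below 20%

def flex(pool, metrics, provisioner, ltm):
    """One pass of the flexing loop: grow or shrink the pool as needed."""
    utilization = metrics.average_cpu(pool)

    if utilization > CPU_HIGH:
        # Scale out: provision a new server, then tell the load balancer
        # about it so traffic can reach the new instance.
        server = provisioner.provision_blade(template="app-server")
        ltm.add_pool_member(pool, server.ip, port=80)

    elif utilization < CPU_LOW and metrics.member_count(pool) > 1:
        # Scale in: drain and remove a member first, then release the
        # hardware back to the BladeSystem Matrix resource pool.
        server = ltm.least_loaded_member(pool)
        ltm.remove_pool_member(pool, server.ip, port=80)
        provisioner.deprovision_blade(server)
```

The point of the Cloud Map is that this loop, and the configuration behind it, is packaged and repeatable rather than hand-built for every deployment.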
Happy Flexing!

Related Articles and Blogs

- Server Flexing with F5 BIG-IP and HP BladeSystem Matrix
- HP Cloud Maps for F5 Networks
- F5 Friday: The Dynamic Control Plane
- F5 Friday: The Evolution of Reference Architectures to Repeatable Architectures
- All F5 Friday Posts on DevCentral
- Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
- What is a Strategic Point of Control Anyway?
- The F5 Dynamic Services Model
- Unleashing the True Potential of On-Demand IT

F5 Friday: BIG DDoS Umbrella powered by the HP VAN SDN Controller

#SDN #DDoS #infosec Integration and collaboration are the secret sauce to breaking down barriers between security and networking

Most of the focus of SDN apps has been, to date, on taking layer 4-7 services and making them into extensions of the SDN controller. But HP is taking a different approach, and the results are tantalizing. HP's approach, as indicated by the recent announcement of its HP SDN App Store, focuses more on the use of SDN apps as a way to share data across IT silos and create a more robust architecture. These apps are capable of analyzing events and information that enable the HP VAN SDN Controller to prescriptively modify network behavior to address issues and concerns that impact networks and the applications that traverse them.

One such concern is security (rarely mentioned in the context of SDN) – for example, how the network might respond more rapidly to threat events, such as an in-progress DDoS attack. Which is where the F5 BIG DDoS Umbrella for HP's VAN (Virtual Application Network) comes into play. The focus of F5 BIG DDoS Umbrella is on mitigating in-progress attacks, and the implementation depends on collaboration between two disparate devices: the HP VAN SDN Controller and F5 BIG-IP. The two devices communicate via an F5 SDN app deployed on the HP VAN SDN Controller. The controller is all about the network, while the F5 SDN app is focused on processing and acting on information obtained from F5 security services deployed on the BIG-IP. This is collaboration and integration at work, breaking down barriers between groups (security and network operations) by sharing data and automating processes*.

F5 BIG DDoS Umbrella

The BIG DDoS Umbrella relies upon the ability of F5 BIG-IP to intelligently intercept, inspect and identify DDoS attacks in flight. BIG-IP is able to identify DDoS events targeting the network, application layers, DNS or SSL. Configuration (available as an iApp upon request) is flexible, enabling the trigger to be one, a combination, or all of these events. This is where collaboration between security and network operations is critical to ensure the response to a DDoS event meets defined business and operational goals.

When BIG-IP identifies a threat, it sends the relevant information with a prescribed action to the HP VAN SDN Controller. The BIG DDoS Umbrella agent (the SDN "app") on the HP VAN SDN Controller processes the information, and once the attacker's point of entry is isolated, the prescribed action is implemented on the device closest to the attacker.

The BIG DDoS Umbrella App is free, and designed to extend the existing DDoS protection capabilities of BIG-IP to the edge of the network. It is a community framework which users may use, enhance or improve. A rough, purely illustrative sketch of the kind of event hand-off involved appears below.
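To make the hand-off concrete, here is a minimal, purely illustrative sketch of the kind of event a detection tier might publish and an SDN-side agent might act on. The payload fields and the topology/controller interfaces are assumptions for illustration; they are not the actual message format or APIs exchanged between BIG-IP and the HP VAN SDN Controller.

```python
# Purely illustrative: the event fields and agent interface below are
# assumptions, not the actual BIG-IP / HP VAN SDN Controller message format.

ddos_event = {
    "event": "ddos_detected",
    "vector": "dns_flood",          # network, application, DNS or SSL layer
    "attacker": "203.0.113.50",     # source identified by the detection tier
    "victim_vip": "198.51.100.10",
    "prescribed_action": "block",   # action agreed on by secops and netops
}

def handle_event(event, topology, controller):
    """Hypothetical SDN-app handler: find the ingress point closest to the
    attacker and apply the prescribed action there."""
    ingress = topology.closest_edge_port(event["attacker"])
    if event["prescribed_action"] == "block":
        controller.install_drop_rule(
            switch=ingress.switch,
            port=ingress.port,
            src_ip=event["attacker"],
        )
```

The design point is the division of labor: the detection tier decides *what* happened and *what* to do about it, while the controller decides *where* in the network to do it.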
Additional Resources:

- DDoS Umbrella for HP SDN AppStore - Configuration Guide
- HP SDN App Store - F5 BIG DDoS Umbrella App Community

* If that sounds more like DevOps than SDN, you're right. It's kind of both, isn't it? Interesting, that...

IT Chaos Theory: The PeopleSoft Effect

#cloud #devops A robust integration ecosystem is critical to prevent the PeopleSoft effect within the network

"In chaos theory, the butterfly effect is the sensitive dependence on initial conditions, where a small change at one place in a nonlinear system can result in large differences to a later state. The name of the effect, coined by Edward Lorenz, is derived from the theoretical example of a hurricane's formation being contingent on whether or not a distant butterfly had flapped its wings several weeks before." -- Wikipedia, Butterfly Effect

Many may not recognize the field of IT chaos theory (because technically I made it up for this post), but its premise is similar in nature to that of chaos theory. The big difference is that while chaos theory has a butterfly effect, IT chaos theory has a PeopleSoft effect.

In IT chaos theory, the PeopleSoft effect is the sensitive dependence on initial integrations between operational components, where a small change in one place results in large amounts of technical debt when any single component is upgraded. The name was chosen because of the history of PeopleSoft implementations, in which even small customizations of one version generally led to ever-increasing amounts of time and effort spent reproducing them after upgrades that obliterated the original customizations. Lest you think I jest with respect to the heartache caused by PeopleSoft in the past, consider this excerpt from an article on PeopleSoft Planet regarding customization of the software:

"Although you are currently succeeding in resisting the temptation to customize the software, your excuse of 'let's get the routine established first' is losing ground. Experienced users can imagine countless ways to 'tweak' the system to do everything the previous solution did, plus take advantage of all the features that were why you purchased this software originally. In truth, you have already made a few customizations, but those you had to do. Nagging you is the persistent worry that once you customize, that your options to upgrade will be seriously jeopardized, or at least the prospect of a relatively simple, smooth, even seamless upgrade is reduced to a myth. How can you guarantee that when you upgrade, your customizations will not be lost? That days of productivity will not be compromised? Yet, if you do not make a few more concessions, how many man hours will be spent, attempting to recreate the solutions you had previously?"

There is a reason there is an entire sub-market of PeopleSoft developers within the larger development community. There are legions of folks out there who focus solely on PeopleSoft and whose daily grind is, in fact, to maintain and continually update new versions of PeopleSoft. If you've worked within an enterprise you will recognize these dedicated teams as a reality.

These are the kinds of situations and realities we want to – nay, must – avoid within operations. While integration of infrastructure with automation frameworks and orchestration systems is critical to the successful implementation of cloud computing models, we must be sensitive to the impact of customization and integration downstream. The reason is that the technical debt incurred by each small change grows non-linearly over time. As that becomes reality, rigidity begins to take hold and agility rapidly declines, as operations becomes increasingly resistant to changes in the underlying integrations.
Rigidity takes root in the slowness – or outright refusal – to enact change by those reluctant to take on the task of identifying its impact across an ever-broadening set of integrated systems.

AVOIDING the PEOPLESOFT EFFECT: A ROBUST INTEGRATION ECOSYSTEM

One of the ways in which IT can avoid the PeopleSoft effect is to take advantage of existing integration ecosystems whenever possible, so as to minimize the amount of custom integration that must be managed. One of the benefits of automation and orchestration – of cloud computing, really – is to reduce the burden of manual procedures and processes, which in turn reduces the already high burden on IT operations – on admins. A recent GFI stress survey found that IT admins are a particularly stressed-out lot already, and anything that can be done to reduce this burden should be viewed as a positive step. Automation and orchestration enhance the scalability of processes as well as the speed with which they can be executed, and have the additional benefit of reducing the potential for human error to cause delays or outages. And perhaps they're the thing that ensures the 67% of admins considering switching careers due to job stress don't actually follow through.

A mixture of pre-packaged integrations for automation purposes affords operations the ability to focus on process codification via orchestration engines, not on writing or tweaking code to fit APIs and SDKs. Codification of customization should occur as much as possible in the processes and policies that govern automation, not in the integration layer that interconnects the systems and environments being controlled. Taking advantage of pre-existing integrations with automation frameworks and provisioning systems enables IT to avoid the PeopleSoft effect that otherwise occurs when APIs, SDKs or frameworks invariably change.

Cloud is ultimately built on an ecosystem: a robust integration ecosystem wherein the focus lies on process engineering and policy development as a means to create repeatable deployment processes and automation objects that form the foundation for IT as a Service. When evaluating infrastructure, in particular, pay careful attention to the integration available with frameworks and orchestration engines, especially those upon which you have standardized or are considering standardizing.

Popular frameworks and orchestration managers include:

Some customization is always necessary, but application integration nightmares have taught us that minimizing the amount of customization is the best strategy for minimizing the potential impact of changes later on. This is especially true for cloud computing environments, where integration and the processes orchestrated atop it may start out simple but rapidly grow more complex in terms of interdependencies and interrelationships. The more intertwined these systems become, the more likely it is that a small change in one part of the system will have a dramatic impact on another later on.
Related Articles and Blogs

- F5 Friday: Addressing the Unintended Consequences of Cloud
- At the Intersection of Cloud and Control…
- Cloud is an Exercise in Infrastructure Integration
- The Impact of Security on Infrastructure Integration
- The API is the Center of the Application (Integration) Universe
- An Aristotlean Approach to Devops and Infrastructure Integration
- The Importance Of Integration In The Future Of The Cloud With BlueLock
- Web 2.0: Integration, APIs, and Scalability

Standardized Cloud APIs? Yes.
Mike Fratto over at Network Computing has a blog that declares the need for standards in Cloud Management APIs is non-existent, or at least premature. Now Mike is a smart guy and has enough experience to have a clue what he's writing about, unlike many cloud pundits out there, but like all smart people whose information I like to read, I reserve the right to completely disagree. And in this case I am going to have to.

He's right that cloud management is immature, and he's right that it is not a simple topic. Neither was the conquering of standardized APIs for graphical monitors back in the day, or the adoption of XML standards for a zillion things. And he's right that the point of standards is interoperability. But in the case of cloud, there's more to it than that. Cloud is infrastructure. Imagine if you couldn't pull out a Cisco switch and drop in the equivalent HP switch? That's what we're talking about here: infrastructure. There's a reason that storage, networks, servers, etc. all have interoperability standards, and those reasons apply to cloud also.

If you're a regular reader, you no doubt have heard my disdain for cloud storage vendors who implemented CLOUD storage and thereby guaranteed that enterprises would need cloud storage gateways just to make use of the cloud storage offerings – at least in the short term, while standards-compliant cloud interfaces or drivers for servers are implemented. The same is true of all cloud services, and for many of the same reasons. Do not tell an enterprise that they should put their applications out in your space by using a proprietary API that locks them into your solutions. Tell them they should put their applications out on your cloud space because it is competitively the best available. And the way to do that is through standards.

Mike gets 20 or so steps ahead of himself by listing the problems without considering the minimum cost of entry. To start, you don't need an API for every single possible option that might ever be considered to bring up a VM. How about "startVM ImageName Priority IPAddress Netmask" or something similar? That tells the cloud provider to start a VM using the image file named, giving it a certain priority (priority is a placeholder for number of CPUs, memory, etc.), using the specified IP address and network mask. That way clones can be started with unique networking addresses. Is it all-encompassing? No. Is it the last API we'll ever need? No. Does it mean that I can be with Amazon today and move to Rackspace tomorrow? Yes. And that's all the industry needs – the ability for an enterprise to keep its options open. A minimal sketch of what such a provider-neutral interface might look like follows.
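Purely as an illustration of the idea (not a real or proposed specification), here is a minimal sketch of a provider-neutral startVM interface, with each provider supplying its own implementation behind the same signature. The class and method names are hypothetical.

```python
# Hypothetical sketch of a provider-neutral cloud API: the class and method
# names are illustrative, not part of any real or proposed standard.
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """The standardized surface an enterprise would code against."""

    @abstractmethod
    def start_vm(self, image: str, priority: int,
                 ip_address: str, netmask: str) -> str:
        """Boot a VM from the named image with the given sizing priority
        and network identity; return a provider-assigned instance ID."""

class AcmeCloud(CloudProvider):
    """One provider's implementation; switching providers means swapping
    this class, not rewriting the calling code."""

    def start_vm(self, image, priority, ip_address, netmask):
        # Translate the standard call into this provider's native API here.
        return "acme-instance-0001"

# Calling code stays the same no matter which provider is plugged in.
def scale_out(cloud: CloudProvider):
    return cloud.start_vm("web-tier", priority=2,
                          ip_address="10.0.0.15", netmask="255.255.255.0")
```

That is the whole argument in miniature: the enterprise writes to the interface once, and the provider-specific translation lives behind it.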
There's another huge benefit to standardization – employee reusability and mobility. Once you know how to implement the standard for your enterprise, you can implement it on any provider, rather than having to gain experience with each new provider. That makes employees more productive, and keeps the pool of available cloud developers and devops people large enough to fulfill staffing needs without having to train or retrain everyone. The burden on IT training budgets is minimized, and the choices when hiring are broadened. That doesn't mean they'll come cheap – it's still going to be a small, in-demand crowd – but it does mean you won't have to say "must have experience programming for Rackspace". The way standards work, there will still be benefits to finding someone specialized in the vendor you're using, but it will only be a "nice to have", not a "requirement", broadening the pool of prospective employees.

And as long as users are involved in the standards process, it is never too early to standardize something that is out there being utilized. Indeed, the longer you wait to standardize, the more inertia builds to resist standardization, because each vendor's customers have a ton of stuff built in the pre-standards manner. Until you start the standardization process and get user input into what's working and what's not, you can't move the ball down the court, so to speak, and standards written in the absence of those who have to use them do not have a huge track record of success. The ones that do work in that manner tend to have tiny communities where it's a badge of honor to overcome the idiosyncrasies of the standard (NAS standards spring to mind here).

So do we need standardized cloud APIs? I'll say yes. Customers need mobility not just for their apps, but for their developers, to keep the cost of cloud out of the clouds. And it's not simple, but the first step is. Let's take it, and get this infrastructure choice closer to being an actual option that can be laid on the table next to "buy more hardware" and considered equally.
Store Storing Stored? Or Blocked?

Now that Lori has her new HP TouchSmart for an upcoming holiday gift, we are finally digitizing our DVD collection. You would think that since our tastes are somewhat similar, we'd be good to go with a relatively small number of DVDs… We're not. I'm a huge fan of well-done war movies and documentaries, we share history and fantasy interests, and she likes a pretty eclectic list of pop-culture movies, so the pile is pretty big. I'm working out how to store them all on the NAS such that we can play them on any TV on the network, and that got me pondering the nature of storage access these days. We own a SAN; it never occurred to me to put these shows on it – that would limit access to those devices with an FC card… or we'd end up creating a share to run them all through one machine with an FC card acting as a NAS head of sorts.

In the long litany of different ways that we store things – direct attached or networked, cloud or WAN, object store or hierarchical – the one that stands out as the most glaring, and the one that has traditionally gotten the most attention, is file versus block. For at least a decade the argument has raged over which is more suited to enterprise use, while most of us have watched from the sidelines, somewhat bemused by the conversation, because the enterprise is using both. As a rule of thumb, if you need to boot from it or write sectors of data to it, you need block. Everything else is generally file.

And that's where I'm starting to wonder. I know there was a movement not too many years ago to make databases file based instead of block based, and that the big vendors were going in that direction, but I do wonder if maybe it's time for block to retire at the OS level. Of course, for old disks to be compatible the OS would still have to handle block, but restricting sector reads and writes to OS-level calls (I know, it's harder with each release; that's death by a thousand cuts, though) would resolve much of the problem. Then a VMware-style boot-from-file structure would resolve the last bit. Soon we could cut our storage protocols in half.

Seriously, at this point in time, what does block give us? Not much, actually. Thin/auto provisioning is available on NAS, high-end performance tweaks are available on NAS, and the extensive secondary network (be it FC or IP) is not necessary for NAS. Though there are some cases where throughput may demand a dedicated network, those are not your everyday case in a world of 1 Gig networks with multi-Gig backplanes on most devices, and 10 Gig is available pretty readily these days. SAN has been slowly dying; I'm just pondering the question of whether it should be finished off. Seriously, people say "SAN is the only thing for high performance!" but I can guarantee you that I can find plenty of NAS boxes that perform better than plenty of SAN networks – it's just a question of vendor and connectivity. I'm a big fan of iSCSI, but am no longer sure there's a need for it out there.

Our storage environment, as I've blogged before, has become horribly complex, with choices at every turn, many of which are more tied to vendors and profits than to needs and customer desires. Strip away the marketing and I wonder if SAN has a use in the future of the enterprise. I'm starting to think not, but I won't declare it dead, as I am still laughing at those who have declared tape dead for the last 20 years – and still are, regardless of what tape vendors' sales look like. It would be hypocritical of me to laugh at them and then make the same type of pronouncement.
SAN will be dead when customers stop buying it, not before. Block will end when vendors stop supporting it, not before… So I really am just pondering the state of the market, playing devil's advocate a bit. I have heard people proclaim that block is much faster for database access. I have written and optimized B-Tree code, and yeah, it is. But that's because we write databases to work on blocks. If we used a different mechanism, we'd get a different result. It is no trivial thing to move to a different storage method, but if the DB already supports file access, the work is half done; only optimizing for the new method or introducing shims to make chunks of files look like blocks would be required. If you think about it, if your DB is running in a VM, this is already essentially the case. The VM is in a file, the DB is in that file… so though the DB might think it's directly accessing disk blocks, it is not. Food for thought. A toy illustration of that shim idea follows.
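As a purely illustrative sketch (not how any particular database or hypervisor actually implements it), here is the shim idea in miniature: presenting fixed-size "blocks" that are really just offsets into an ordinary file. The block size and file name are arbitrary choices for the example.

```python
# Illustrative only: a toy shim that exposes block-style reads and writes
# on top of an ordinary file, the way a VM disk image already does.
import os

BLOCK_SIZE = 4096  # a common block size; the exact value is an assumption

class FileBackedBlockDevice:
    def __init__(self, path, num_blocks):
        # Pre-size the backing file so every block offset exists.
        self.f = open(path, "r+b" if os.path.exists(path) else "w+b")
        self.f.truncate(num_blocks * BLOCK_SIZE)

    def read_block(self, block_number):
        self.f.seek(block_number * BLOCK_SIZE)
        return self.f.read(BLOCK_SIZE)

    def write_block(self, block_number, data):
        assert len(data) == BLOCK_SIZE
        self.f.seek(block_number * BLOCK_SIZE)
        self.f.write(data)
        self.f.flush()

# The "database" thinks it is addressing blocks; it is really addressing
# byte ranges of a plain file that could just as easily sit on a NAS share.
dev = FileBackedBlockDevice("disk.img", num_blocks=1024)
dev.write_block(7, b"\x00" * BLOCK_SIZE)
print(len(dev.read_block(7)))  # 4096
```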
A Storage (Capacity) Optimization Buying Spree!

Remember when Beanie Babies were free in Happy Meals, and tons of people ran out to buy the Happy Meals but only really wanted the Beanie Babies? Yeah, that's what the storage compression/dedupe market is starting to look like these days. Lots of big names are out snatching up at-rest de-duplication and compression vendors to get the products onto their sales sheets. We'll have to see if they wanted the real value of such an acquisition – the bright staff that brought these products to fruition – or were buying for the product and are going to give or throw away the meat of the transaction. Yeah, that sentence is so pun laden that I think I'll leave it like that. Except there is no actual meat in a Happy Meal, I'm pretty certain of that.

Today IBM announced that it is formally purchasing Storwize, whose file compression products are designed to compress data on NAS devices. That leaves few enough players in the storage optimization space, and only one – Permabit – whose name I readily recognize. Since I wrote the blog about Dell picking up Ocarina, and that blog is still being read pretty avidly, I figured I'd weigh in on this one also.

Storwize is a pretty smart purchase for IBM, on the surface. The products support NAS at the protocol level – they claim to be "storage agnostic", but personal experience in the space is that there's no such thing… CIFS and NFS tend to require tweaks from vendor A to vendor B, meaning that to be "agnostic" you have to "write to the device". An interesting conundrum. Regardless, they support CIFS and NFS, are stand-alone appliances that the vendors claim are simple to set up and require little or no downtime, and offer straight-up compression. Again, Storwize and IBM are both claiming zero performance impact; I cannot imagine how that is possible in a compression engine, but that's their claim. The key here is that they work on everyone's NAS devices. If IBM is smart, the products still will work on everyone's devices in a year. For a toy illustration of the trade-off those claims are about – capacity saved versus CPU time spent – see the sketch below.
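As a toy illustration of that trade-off, here is a minimal sketch using Python's standard zlib module: it measures a compression ratio and the time it takes, nothing more. It says nothing about how Storwize's appliances (or anyone else's) actually work.

```python
# Toy illustration of the compression trade-off: capacity saved vs. CPU time.
# This says nothing about how any vendor's appliance is implemented.
import time
import zlib

def compression_report(data: bytes, level: int = 6):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(data)
    return ratio, elapsed

# Highly repetitive data compresses well; already-compressed media barely at all.
sample = b"the quick brown fox jumps over the lazy dog\n" * 10_000
ratio, seconds = compression_report(sample)
print(f"compressed to {ratio:.1%} of original size in {seconds * 1000:.1f} ms")
```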
Related Articles and Blogs

- IBM Buys Storwize
- Dell Buys Ocarina Networks
- Wikipedia definition – Capacity Optimization
- Capacity Optimization – A Core Storage Technology (PDF)

Dell Buys Ocarina Networks. Dedupe For All?

At-rest storage de-duplication has been a growing point of interest for most IT staffs over the last year or so, simply because de-duplication allows you to purchase less hardware over time – and if that hardware is a big old storage array sucking a ton of power and costing a not-insignificant amount to install and maintain, well, it's appealing. Most of the recent buzz has been about primary storage de-duplication, but that is merely a reflection of where the market is. Backup de-duplication has existed for a good long while, and secondary storage de-duplication is not new. Only recently have people decided that at-rest de-dupe is stable enough to give it a go on their primary storage – where all the most important and/or most active information is kept. I don't think I'd call it a "movement" yet, but it does seem that the market's resistance to anything that obfuscates data storage is eroding at a rapid rate, driven by the cost of the hardware (and attendant maintenance) needed to keep up with storage growth. For readers unfamiliar with how de-duplication earns those savings, a minimal sketch of the general idea follows.
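This sketch shows the general idea only – fixed-size chunks identified by a content hash, with duplicate chunks stored once. Real products (Ocarina's included) use far more sophisticated chunking, indexing and integrity checks, so treat this purely as an illustration.

```python
# Minimal sketch of block-level de-duplication: store each unique chunk once,
# keyed by its content hash. Real products are far more sophisticated.
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking; many products use variable-size chunks

def dedupe(data: bytes):
    store = {}    # content hash -> chunk bytes (stored once)
    recipe = []   # ordered list of hashes needed to rebuild the original
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe):
    return b"".join(store[d] for d in recipe)

data = (b"A" * CHUNK_SIZE) * 50 + (b"B" * CHUNK_SIZE) * 50  # highly redundant
store, recipe = dedupe(data)
assert rebuild(store, recipe) == data
print(f"logical: {len(data)} bytes, physical: {sum(map(len, store.values()))} bytes")
```

The savings come entirely from redundancy in the data, which is why backup sets and virtual machine images (lots of near-identical copies) were the first places dedupe paid off.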
Related Articles and Blogs

- Dell-Ocarina deal will alter landscape of primary storage deduplication
- Data dedupe technology helps curb virtual server sprawl
- Expanding Role of Data Deduplication
- The Reality of Primary Storage Deduplication

Get out your dice! It's time for a game of Datacenters & Dragons

It's the all-new revised fifth edition of the popular real-life fantasy game we call Datacenters and Dragons.

DM (Datacenter Manager): "Through the increasingly cloudy windows of the datacenter you see empty racks and abandoned servers where once there were rumored to be blinking lights and application consoles. Only a few brave and stalwart applications remain, somehow immune to the siren-like call of the Cloud Empire through the ancient and long-forgotten secret rituals found only in the now-lost COBOL copybook. As you stand, awestruck at the destructive power of the Empire, a shadow falls across the remaining rack, dimming the few remaining fluorescent lights. It is… a cloud dragon. As you stand, powerless to move in your abject terror, the cloud dragon breathes on another rack and its case dissolves. A huge claw lifts the application server and clutches it to its breast, another treasure to add to its growing hoard. And then, just as you are finally able to move, it reaches out with the other claw and bats aside the operators with a powerful blow, scattering them beyond the now-ethereal walls of the datacenter. Then it turns its cloudy eye on you and rears back, drawing in its breath as it prepares to breathe on you. Roll initiative."

The cries of "change or die" and "IT is dead" and "cloud is a threat to IT" are becoming more and more common across the greater kingdoms of IT, casting cloud as the evil dragon you will either agree to serve as part of a much larger, nebulous empire known as "the cloud", or you'll find yourself asking "would you like fries with that?" According to some industry pundits, cloud computing has already passed from the realm of hype into a technology that is seriously impacting the business of IT. The basis for such claims points to small organizations, for whom cloud computing makes the most sense (at least early on), and to large organizations like HP that are reducing the size of their IT staff based on their cloud computing efforts. IT as we know it, some say, is doomed*.

Yet surveys and research conducted in the past year tell a very different story – cloud computing is an intriguing option that is more interesting as a way to transform IT into a more efficient business resource than as an off-premise, wash-your-hands-of-the-problem outsourcing option. A Vanson Bourne survey conducted on behalf of cloud provider Rackspace found that at the beginning of 2009 less than a third of small businesses were even considering cloud computing, and only 11 percent of UK mid-sized businesses were using cloud as part of their strategy, though more than half indicated cloud would be incorporated in the future.
F5 Friday: Playing in the Infrastructure Orchestra(tion)

I'm sure you've noticed that there have been quite a few posts here on the topics of automation, orchestration, and Infrastructure 2.0. Aside from the fact that an integrated, collaborative infrastructure is necessary to achieve many of the operational efficiencies associated with cloud computing and highly virtualized data centers, it's also a fascinating topic from the perspective of understanding how network and infrastructure providers are dealing with some of the same issues enterprise software has long faced while navigating the enterprise application integration (EAI) landscape.

One of the ways in which vendors like F5 are addressing the need for automation – and ultimately the orchestration required to implement a fluid, dynamic infrastructure – is through strategic partnerships. These partnerships allow for tighter integration of solutions like BIG-IP (and its myriad feature and product modules) with the emerging infrastructure management solutions coming from industry leaders like HP, Microsoft, and VMware. A fully integrated, automated IT operations center isn't, as anyone who's been through the EAI nightmare knows, something that happens overnight. You have to start with the automation and management of key components and get them to the point that the automation can be trusted by customers before you expand outward.

One of the first components of a dynamic, scalable infrastructure to be automated through such integration should be the load balancing solution, because without load balancing, well, you really can't implement elastic scalability. While auto-scaling looks easy from an administrative point of view, what's really happening under the hood is a more complex set of operations initiated when a new application instance (usually in a virtual machine these days) is launched. The load balancer has to be notified that a new node (application instance) is available, and the node has to be inserted into the appropriate pool (farm, cluster) so that the system can begin sending requests to it. That happens through iControl, F5's standards-based control plane API. That same API is available to any F5 customer, by the way, and it works the same for the Virtual Edition as it does for the hardware versions. Anyone can use the API to develop whatever kind of interesting out-of-band management, monitoring or control application they'd like. That's always been the case, since the first API call saw the light of day many years ago. For a flavor of what "notifying the load balancer" can look like, see the hedged sketch below.
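Here is a minimal sketch that adds a new pool member over iControl REST using Python's requests library. Note the hedge: the post above refers to iControl generally (historically a SOAP interface), and the management address, credentials, pool name and member address below are illustrative assumptions, not a definitive recipe for any particular deployment.

```python
# Hedged sketch: add a newly provisioned instance to a BIG-IP pool via
# iControl REST. Host, credentials, pool name and member address are
# illustrative assumptions; adjust for your environment.
import requests

BIGIP = "https://bigip.example.com"   # hypothetical management address
AUTH = ("admin", "admin-password")    # placeholder credentials
POOL = "~Common~web_pool"             # hypothetical pool name

def add_pool_member(address: str, port: int = 80):
    """Tell BIG-IP a new application instance exists so it can receive traffic."""
    resp = requests.post(
        f"{BIGIP}/mgmt/tm/ltm/pool/{POOL}/members",
        json={"name": f"{address}:{port}", "address": address},
        auth=AUTH,
        verify=False,  # lab-style example only; verify certificates in production
    )
    resp.raise_for_status()
    return resp.json()

# Called by an orchestration workflow right after a new VM comes online.
add_pool_member("10.0.0.15")
```

The orchestration integrations described in this post wrap exactly this kind of call so that operators work with workflows and templates rather than hand-written scripts.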
By providing out-of-the-box integration between F5 solutions and three of the most widely used orchestration tools in the industry (HP Operations Orchestration, VMware vCenter Orchestrator, and Microsoft Virtual Machine Manager), a more complete solution for automated infrastructure adaptation with VM provisioning and de-provisioning can be realized. We're not at the point where it'll make your dinner and do the laundry, but it's a decent step toward automating enough of the operational tasks associated with a dynamic data center that at least you'll have more time to make dinner or do the laundry.

Why IT Needs to Take Control of Public Cloud Computing

IT organizations that fail to provide guidance for and governance over public cloud computing usage will be unhappy with the results…

While it is highly unlikely that business users will "control their own destiny" by provisioning servers in cloud computing environments themselves, that doesn't mean they won't be involved. In fact, it's likely that IaaS (Infrastructure as a Service) cloud computing environments will be leveraged by business users to avoid the hassles they perceive (and oftentimes actually do) exist in their quest to deploy a given business application. It's just that they won't be the ones pushing the buttons. Many experts have expounded upon the ways in which cloud computing is forcing a shift within IT and in the way assets are provisioned, acquired, and managed. One of those shifts is likely to also occur "outside" of IT, with external IT-focused services such as system integrators like CSC and HP Enterprise Services (formerly EDS).