Q/A with Rackspace Network Architect Vijay Emarose - DevCentral's Featured Member for November
Koman Vijay Emarose works as a Network Architect with the Strategic Accounts team at Rackspace. He has been a "Racker" (Rackspace employee) for 7+ years and is currently adapting to a networking world that is pivoting toward automation. In his free time, he likes to identify DevCentral site bugs and incessantly torment Chase Abbott to fix them – particularly the badges – and he is DevCentral's Featured Member for November! Vijay's other hobbies include traveling; he has been to more than eleven countries and is looking to increase that number in the future. Personal finance blogs and binge-watching documentaries are his guilty pleasures. DevCentral got an opportunity to talk with Vijay about his work, life and blog.

DevCentral: You've been an active contributor to the DevCentral community. What keeps you involved?

Vijay Emarose: I have been a passive DevCentral user for quite a while and relied heavily on DevCentral to improve my iRule skills. The continued support for the DevCentral community among F5 employees and other BIG-IP administrators provided me with the motivation to start sharing the knowledge that I have gained over the years. Answering questions raised by other members helps me reinforce my knowledge and opens me up to alternate solutions that I had not considered. Rest assured, I will strive to keep the momentum going.

DC: Tell us a little about the areas of BIG-IP expertise you have.

VE: I started working on F5 during the transition from the 9.x to the 10.x code version in 2010. BIG-IP LTM and GTM are my strong points. I have some experience with AFM, APM and ASM, but not as much as I would like. Working with clients of various sizes at Rackspace, from small shops to large enterprises, exposed me to a wide variety of F5 platforms, from the 1600s to the VIPRION. I am sporadically active in the LinkedIn community for F5 Certified Professionals. I took the beta versions of the F5 Certification exams and I am currently an F5 Certified Technology Specialist in LTM and GTM. I am eagerly looking forward to the upcoming F5 402 exam.

I have been fortunate enough to work with the F5 Certification Team (Ken Salchow, Heidi Schreifels, et al.) in the Item Development Workshop (IDW) for F5's 201 TMOS Administration certification exam, and it was an eye-opener to understand the amount of thought and effort that goes into creating a certification exam. The 2016 F5 Agility in Chicago was my very first F5 Agility conference, and I enjoyed meeting with and learning from Jason Rahm, Chase Abbott and other DevCentral members. I look forward to participating in future F5 Agility conferences.

DC: You are a Network Architect with Rackspace, the largest managed cloud provider. Where does BIG-IP fit in the services you offer or within your own infrastructure?

VE: Rackspace is a leader in the Gartner Magic Quadrant for Cloud-Enabled Managed Hosting and participates in the F5 UNITY Managed Service Provider Partner Program at the Global Gold level. Various F5 platforms, from the 1600s to the VIPRIONs, are offered to customers requiring a dedicated ADC, depending on their requirements. LTM and GTM are widely supported. In the past, I was a member of the RackConnect product team within Rackspace. RackConnect is a product that allows automated hybrid connections between a customer's dedicated environment and Rackspace's public cloud. F5 platforms were utilized as the gateway devices in this product. There is a DevCentral article on RackConnect by Lori MacVittie.
I would like to take this opportunity to thank the F5 employees who support Rackspace and whom I have had the pleasure of working with – Richard Tocci, Scott Huddy and Kurt Lanthier. They have been of massive help to me whenever I required clarification or assistance with F5.

DC: Your blog, Network-Maven.com, documents your experiences in the field of Network Engineering, Application Delivery, Security and Cloud Computing. What are some of the highlights that the community might find interesting?

VE: This is a recent blog that I started in order to share my knowledge and experience working in the networking field. Application delivery controllers are a niche area within networking, and I was fortunate enough to learn from some of the best at Rackspace. My idea is to share some of my experiences that could potentially help someone new to the field. Working with thousands of customer environments running different code versions on various F5 platforms has provided me with a rich variety of experience that could be of help to fellow F5 aficionados who are executing an F5 maintenance or implementing a new feature or function in their F5 environments.

DC: Describe one of your biggest challenges and how DevCentral helped in that situation.

VE: DevCentral has been a great resource for me on multiple occasions, and it is tough to pinpoint a single challenge. I rely on it to learn from others' experiences and to develop my iRule and iControl REST skills. I have benefited from the iRules: 20 Lines or Less series, and I am an avid follower of the articles published by community members. For someone starting new with F5, I would certainly recommend following the articles and catching up on the iRules: 20 Lines or Less series.

DC: Lastly, if you weren't working in IT, what would be your dream job?

VE: I haven't figured it out yet. Tech, finance and travel interest me. Maybe some combination of these interests would be the answer.

DC: Thanks Vijay, and congratulations! You can find Vijay on LinkedIn, check out his DevCentral contributions and follow @Rackspace.

Related:
Q/A with Yann Desmarest - DevCentral's Featured Member for July
Q/A with SpringCM's Joel Newton - DevCentral's Featured Member for August
Q/A with Secure-24's Josh Becigneul - DevCentral's Featured Member for September
Q/A with ExITeam's Security Engineer Stanislas Piron - DevCentral's Featured Member for October
When The Walls Come Tumbling Down.

When horrid disasters strike and both people and corporations are put on notice that they suddenly have a lot more important things to do, will you be ready? It is a testament to man's optimism that, with very few exceptions, we really aren't – not at the personal level, not at the corporate level. I've worked a lot of places, and none of them had a complete, ready-to-rock DR plan. The insurance company I worked at was the closest – they had an entire duplicate datacenter sitting dark in a location very remote from HQ, awaiting need. Every few years they would refresh it to make certain that the standby DC had the correct equipment to take over, but they counted on relocating staff from what would be a ravaged area in the event of a catastrophe, and were going to restore thousands of systems from backups before the remote DC could start running. At the time it was a good plan. Today it sounds quaint. And it wasn't that long ago.

There are also a lot of you who have yet to launch a cloud initiative of any kind. This is not from lack of interest, but more because you have important things to do that are taking up your time. Most organizations are dragging their feet replacing people, and few – according to a recent survey, very few – are looking to add headcount (proud plug: F5 is – check out our careers page if you're looking). It's tough to run off and try new things when you can barely keep up with the day-to-day workloads. Some organizations are lucky enough to have R&D time set aside. I've worked at a couple of those too, and honestly, they're better about making use of technology than those who do not have such policies. Though we could debate whether they're better because they take the time, or take the time because they're better.

And the combination of these two items brings us to a possible pilot project. You want to be able to keep your organization online, or be able to bring it back online quickly, in the event of an emergency. Technology is making it easier and easier to complete this arrangement without investing in an entire datacenter and constantly refreshing the hardware to have quick recovery times. Global DNS in various forms is available to redirect users from the disabled datacenter to a datacenter that is still capable of handling the load; if you don't have multiple datacenters, it can redirect elsewhere – like to virtual servers running in the cloud. ADCs are starting to be able to work similarly whether they are cloud-deployed or DC-deployed. That leaves keeping a copy of your necessary data and applications in the cloud, and cloud storage combined with a cloud storage gateway – such as the Cloud Extender functionality in our ARX product – allows this to be done with a minimum of muss and fuss. These technologies, used together, yield a DR architecture that looks something like this: global DNS in front, redirecting users to cloud-hosted ADCs and virtual servers backed by cloud storage. Notice that the cloud extender isn't listed here, because it is useful for getting the data copied, but would most likely reside in your damaged datacenter. Assuming that the cloud provider was one like our partner Rackspace, who does both cloud VMs and cloud storage, this architecture is completely viable. You'll still have to work some things out, like guaranteeing that security in the cloud is acceptable, but we're talking about an emergency DR architecture here, not a long-running solution, so app-level security and functionality to block malicious attacks at the ADC layer will cover most of what you need. AND it's a cloud project.
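To make the redirect-on-failure idea concrete, here is a minimal sketch of the decision logic that a global DNS tier automates in this kind of architecture. It is illustrative only: the hostnames, the standby address and the update_dns_record() helper are hypothetical placeholders, and in a real deployment a GSLB product such as BIG-IP GTM performs the health checking and DNS answering for you rather than a script.

```python
# Minimal sketch of the failover decision a global DNS tier automates in
# this DR design: watch the primary datacenter and, once it stops
# answering, repoint the application's DNS record at the cloud standby.
# Hostnames, addresses and update_dns_record() are hypothetical placeholders.
import time

import requests

PRIMARY_HEALTH_URL = "https://app.primary-dc.example.com/health"  # placeholder
CLOUD_STANDBY_IP = "203.0.113.10"                                  # placeholder
FAILURES_BEFORE_FAILOVER = 3
POLL_INTERVAL_SECONDS = 15


def primary_is_healthy() -> bool:
    """Return True if the primary datacenter answers its health check."""
    try:
        return requests.get(PRIMARY_HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False


def update_dns_record(hostname: str, new_address: str) -> None:
    """Hypothetical stub: push the new answer to whatever DNS you control."""
    print(f"Repointing {hostname} -> {new_address}")


def monitor() -> None:
    failures = 0
    while True:
        failures = 0 if primary_is_healthy() else failures + 1
        if failures >= FAILURES_BEFORE_FAILOVER:
            update_dns_record("app.example.com", CLOUD_STANDBY_IP)
            break
        time.sleep(POLL_INTERVAL_SECONDS)


if __name__ == "__main__":
    monitor()
```

The point is simply that the health check and the decision to repoint users live outside the damaged datacenter, which is exactly what the global DNS tier in this architecture provides.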
The cost is far, far lower than a full-blown DR project, and you'll be prepared in case you need it. This buys you time to ingest the fact that your datacenter has been wiped out. I've lived through it; there is so much that must be done immediately – finding a new location, dealing with insurance, digging up purchase documentation, recovering what can be recovered... Having a plan like this one in place is worth your while. Seriously. It's a strangely emotional time, and having a plan is a huge help in keeping people focused.

Simply put, disasters come, often without warning – mine was a flood caused by a broken pipe. We found out when our monitoring equipment fried from being soaked and sent out a raft of bogus messages. The monitoring equipment was six feet above the floor at the time. You can't plan for everything, but to steal and twist a famous phrase, "he who plans for nothing protects nothing."
Your Call is Important to Us at CloudCo: Please Press 1 for Product, 2 for OS, 3 for Hypervisor, or 4 for Management Troubles

When there's a problem with a virtual network appliance installed in "the cloud", who do you call first? An interesting thing happened on the way to troubleshooting a problem with a cloud-deployed application – no one wanted to take up the mantle of front-line support. With all the moving parts involved, it's easy to see why. The problem could be with any number of layers in the deployment: operating system, web server, hypervisor or the nebulous "cloud" itself. With no way to know where it is – the cloud has limited visibility, after all – where do you start?

Consider a deployment into ESX where the guest OS (hosting a load balancing solution) isn't keeping its time within the VM. Time synchronization is a Very Important aspect of high-availability architectures. Synchronization of time across redundant pairs of load balancers (and really any infrastructure configured in HA mode) is necessary to ensure that a failover event isn't triggered by a difference caused simply by an error in timekeeping. If a pair of HA devices is configured to fail over based on a failure to communicate after X seconds, and their clocks are off by almost X seconds... well, you can probably guess that this can result in a failover event (a quick sketch of the math appears below). Failover events in traditional HA architectures are disruptive; the entire device (virtual or physical) basically dumps in favor of the backup, causing a loss of connectivity and requiring a quick re-convergence at the network layer. A time discrepancy can also wreak havoc with the configuration synchronization processes while the two instances flip back and forth.

So where was the time discrepancy coming from? How do you track that down and, with a lack of visibility into and ultimately control of the lower layers of the cloud "stack", who do you call for help? The OS vendor? The infrastructure vendor? The cloud computing provider? Your mom?

We've all experienced frustrating support calls – not just in technology but in other areas, too, such as banking and insurance – in which the pat answer is "not my department" and "please hold while I transfer you to yet another person who will disavow responsibility to help you." The time and, in business situations, the money spent trying to troubleshoot such an issue can be a definite downer in the face of what's purportedly an effortless, black-box deployment. This is why the black-box mentality marketed by some cloud computing providers is a ridiculous "benefit": it assumes the abrogation of accountability on the part of IT, something that is certainly not in line with reality. Making it more difficult for those responsible within IT to troubleshoot, and providing no real recourse for technical support, makes cloud computing far more unappealing than marketing would have you believe with their rainbow-and-unicorn picture of how great black boxes really are.

The bottom line is that the longer it takes to troubleshoot, the more it costs. The benefits of the increased responsiveness of "IT" are lost when it takes days to figure out where an issue might be. Black boxes are great in airplanes, and yes, airplanes fly in clouds, but that doesn't mean that black boxes and clouds go together. There are myriad odd little issues, like time synchronization across infrastructure components and even applications, that must be considered as we attempt to move more infrastructure into public cloud computing environments. So choose your provider wisely, with careful attention paid to support, especially with respect to escalation and resolution procedures.
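Coming back to the timekeeping example, here is the promised back-of-the-napkin sketch of why a clock discrepancy that approaches the failover deadline is enough to trigger a spurious failover. The timeout, skew and timestamps are made-up numbers for illustration, not values from any particular HA product.

```python
# Back-of-the-napkin illustration of the timekeeping problem above: the
# standby unit judges heartbeat age with ITS OWN clock, so clock skew
# between the peers eats directly into the failover deadline. All numbers
# are invented for illustration.

FAILOVER_TIMEOUT_SECONDS = 3.0  # "fail over if no heartbeat for X seconds"


def peer_appears_dead(heartbeat_sent_at: float, now_on_standby: float,
                      clock_skew: float) -> bool:
    """Skew is how far the standby's clock runs ahead of the active unit's."""
    apparent_age = (now_on_standby + clock_skew) - heartbeat_sent_at
    return apparent_age > FAILOVER_TIMEOUT_SECONDS


# A heartbeat sent one second ago, with clocks in sync: clearly alive.
print(peer_appears_dead(heartbeat_sent_at=100.0, now_on_standby=101.0,
                        clock_skew=0.0))   # False

# The same one-second-old heartbeat, but the clocks disagree by 2.5 seconds:
# it "ages" past the 3-second deadline and triggers a spurious failover.
print(peer_appears_dead(heartbeat_sent_at=100.0, now_on_standby=101.0,
                        clock_skew=2.5))   # True
```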
You'll need the full partnership of your provider to ferret out issues that may crop up, and only a communicative, partnership-oriented provider should be chosen to ensure ultimate success. Also consider more carefully which applications you may be moving to "the cloud." Those with complex supporting infrastructure may simply not be a good fit, based on the difficulties inherent not only in managing them and their topological dependencies but also in their potentially more demanding troubleshooting needs. Rackspace put it well recently when they stated, "Cloud is for everyone, not everything." That's because it simply isn't that easy to move an architecture into an environment in which you have very little or no control, let alone visibility. This is ultimately why hybrid or private cloud computing will stay dominant as long as such issues continue to exist.

Related:
Cloud Control Does Not Always Mean 'Do it yourself'
Control, choice, and cost: The conflict in the cloud
They're Called Black Boxes Not Invisible Boxes
Dynamic Infrastructure: The Cloud within the Cloud
When Black Boxes Fail: Amazon, Cloud and the Need to Know
On Cloud, Integration and Performance
What CIOs Can Learn from the Spartans
Reliability? We've got your reliability right here...

When talking about IT performance and rating "must haves", data center reliability is often right near the top of the list, and for good reason. Performance and scalability, features and functionality don't matter much unless the application is up and available. We here at F5 tend to hold availability in pretty high regard, and recent info from Netcraft seems to show that this effort has not been in vain.

Netcraft likes to study and analyze many things, among which is the reliability of different hosting companies. The way they do this is by polling around forty different hosting providers' websites at 15-minute intervals from different locations around the net, then crunching those numbers into something meaningful. Often near the top of the list of the most reliable hosting companies is Rackspace. I hear what you're asking: "As cool as they are, what does Rackspace have to do with F5, and why are you yammering on about them?" Pictures, as they say, are worth quite a few words, so feast your eyes on this:

(Chart: most reliable hosting company sites in April 2011. Source: http://news.netcraft.com/archives/2011/05/02/most-reliable-hosting-company-sites-in-april-2011.html)

Still don't see it? Of special interest, to me at least, is the "OS" listed for the Rackspace entry. While F5 BIG-IP might not technically be an OS (it's oh so much more!), it's still wicked fun to see it at the top of a reliability list. So thanks, Rackspace, for maintaining a highly available architecture and using F5 gear to help do it. Keep up the good work. #Colin
Standardized Cloud APIs? Yes.

Mike Fratto over at Network Computing has a blog declaring that the need for standards in Cloud Management APIs is non-existent, or at least premature. Now Mike is a smart guy and has enough experience to have a clue what he's writing about, unlike many cloud pundits out there, but like all smart people whose information I like to read, I reserve the right to completely disagree. And in this case I am going to have to.

He's right that cloud management is immature, and he's right that it is not a simple topic. Neither was the conquering of standardized APIs for graphical monitors back in the day, or the adoption of XML standards for a zillion things. And he's right that the point of standards is interoperability. But in the case of cloud, there's more to it than that.

Cloud is infrastructure. Imagine if you couldn't pull out a Cisco switch and drop in the equivalent HP switch. That's what we're talking about here: infrastructure. There's a reason that storage, networks, servers, etc. all have interoperability standards. And those reasons apply to cloud also. If you're a regular reader, you no doubt have heard my disdain for cloud storage vendors who implemented CLOUD storage and thereby guaranteed that enterprises would need cloud storage gateways just to make use of the cloud storage offerings – at least in the short term, while standards-compliant cloud interfaces or drivers for servers are implemented. The same is true of all cloud services, and for many of the same reasons. Do not tell an enterprise that they should put their applications out in your space by using a proprietary API that locks them into your solutions. Tell them they should put their applications out on your cloud space because it is competitively the best available. And the way to do that is through standards.

Mike gets 20 or so steps ahead of himself by listing the problems without considering the minimum cost of entry. To start, you don't need an API for every single possible option that might ever be considered to bring up a VM. How about "startVM ImageName Priority IPAddress Netmask" or something similar? (A rough sketch of such a call appears at the end of this post.) That tells the cloud provider to start a VM using the named image file, giving it a certain priority (priority being a placeholder for number of CPUs, memory, etc.), using the given IP address and network mask. That way clones can be started with unique networking addresses. Is it all-encompassing? No. Is it the last API we'll ever need? No. Does it mean that I can be with Amazon today and move to Rackspace tomorrow? Yes. And that's all the industry needs – the ability for an enterprise to keep their options open.

There's another huge benefit to standardization – employee reusability and mobility. Once you know how to implement the standard for your enterprise, you can implement it on any provider, rather than having to gain experience with each new provider. That makes employees more productive, and keeps the pool of available cloud developers and devops people large enough to fulfill staffing needs without having to train or retrain everyone. The burden on IT training budgets is minimized, and the choices when hiring are broadened. That doesn't mean they'll come cheap – it's still going to be a small, in-demand crowd – but it does mean you won't have to say "must have experience programming for Rackspace".
Though, the way standards work, there will still be benefits to finding someone specialized in the vendor you're using; it will just be a "nice to have", not a "requirement", broadening the pool of prospective employees. And as long as users are involved in the standards process, it is never too early to standardize something that is out there being utilized. Indeed, the longer you wait to standardize, the more inertia builds to resist standardization, because each vendor's customers have a ton of stuff built in the pre-standards manner. Until you start the standardization process and get user input into what's working and what's not, you can't move the ball down the court, so to speak, and standards written in the absence of those who have to use them do not have a huge track record of success. The ones that work in this manner tend to have tiny communities where it's a badge of honor to overcome the idiosyncrasies of the standard (NAS standards spring to mind here).

So do we need standardized cloud APIs? I'll say yes. Customers need mobility not just for their apps, but for their developers, to keep the cost of cloud out of the clouds. And it's not simple, but the first step is. Let's take it, and get this infrastructure choice closer to being an actual option that can be laid on the table next to "buy more hardware" and considered equally.
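To close, here is the rough sketch promised above of what a minimal, provider-neutral startVM call could look like. The request fields mirror the proposal in this post (image, priority as a stand-in for CPU and memory sizing, IP address, netmask); the two provider adapters are hypothetical stand-ins, since today each provider exposes its own proprietary API, which is exactly the problem.

```python
# Rough sketch of a minimal, provider-neutral "startVM" call. The request
# fields mirror the proposal above. Each adapter would translate the
# standard call into that vendor's proprietary API behind the scenes.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class StartVMRequest:
    image_name: str   # which image to boot
    priority: int     # shorthand for CPU / memory sizing
    ip_address: str   # unique address so clones don't collide
    netmask: str


class CloudProvider(Protocol):
    def start_vm(self, request: StartVMRequest) -> str:
        """Boot the VM and return a provider-assigned instance id."""


class ProviderA:
    def start_vm(self, request: StartVMRequest) -> str:
        # ...translate into provider A's proprietary API here...
        return f"providerA-{request.image_name}"


class ProviderB:
    def start_vm(self, request: StartVMRequest) -> str:
        # ...translate into provider B's proprietary API here...
        return f"providerB-{request.image_name}"


def start_vm(provider: CloudProvider, request: StartVMRequest) -> str:
    """The only call an enterprise script would need to know."""
    return provider.start_vm(request)


if __name__ == "__main__":
    request = StartVMRequest("web-tier-image", priority=2,
                             ip_address="10.0.5.21", netmask="255.255.255.0")
    # Moving from one provider to another is a one-line change, not a rewrite.
    print(start_vm(ProviderA(), request))
    print(start_vm(ProviderB(), request))
```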
F5 Friday: Rackspace CloudConnect - Hybrid Architecture in Action

Rackspace steps up to the plate with a new hybrid architectural solution. Earlier this year we talked about the "other" hybrid architecture: the one that lives out there, in the cloud, but that combines two different deployment models – applications deployed on co-located servers that are imbued with elasticity by taking advantage of the same provider's cloud computing offering. Throughout the year I've posited (nearly harped upon) the reality that because most organizations are not greenfields, hybrid architectures will be the norm. This is especially true with applications that have consistent workloads and that may only benefit periodically from the elasticity enabled by cloud computing. Some organizations prefer the benefits of a hosted environment for applications but only need to take advantage of elasticity once in a while, or perhaps they need that elasticity as part of a longer-term strategy to manage potential growth and scale. Such an architecture, as proven out by Terremark, is not only possible but realistic, and the excellent folks at Rackspace recently posted a more detailed description of such an architecture.

This is an F5 Friday post, so if you're wondering where F5 fits in the picture: Rackspace's solution leverages BIG-IP in their hybrid architecture to provide the dynamism required for hosted applications to take advantage of its cloud computing resources seamlessly.

Cloud Connect: Where Dedicated and Cloud Hosting Come Together
by Angela Bartels on October 6, 2010

This post comes to you from Toby Owen, Rackspace Product Manager for Hybrid Hosting Solutions. As discussed in a previous post, Rackspace offers a suite of computing services, from Managed Dedicated Servers, to Private Cloud, to the Rackspace Public Cloud. Many of our Managed Dedicated hosting customers utilize cloud services for various tasks. In today's post, I'd like to discuss how you can utilize both dedicated and cloud platforms at Rackspace in a more integrated fashion.

Customers running multiple web applications – from marketing sites to test sites to e-commerce – have been able to utilize Rackspace Dedicated Servers for some of those apps and Cloud Servers for others. Keeping some applications separated can allow you to test new applications without affecting your production environment. Other applications can benefit from using the Cloud and Dedicated environments in a connected way. With Cloud Connect, you have the option to connect these platforms to build a scalable, flexible compute solution that offers the performance of dedicated servers with the flexibility and scalability of the Cloud.

Up until now, Rackspace Dedicated and Cloud environments have not had the ability to talk to each other over a secure, private network. This is now a possibility with Cloud Connect (currently in Beta). Since that connection stays within the Rackspace datacenter, your servers can talk at wire speed with the added security of never leaving Rackspace's network. You can even load balance between Dedicated Managed and Cloud servers, perfect for scaling web sites on demand. Here's what this might look like with an F5 load balancer:
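The original post includes an architecture diagram at this point. As a rough stand-in, here is a minimal sketch of the load-balancing piece: one BIG-IP pool whose members are a dedicated server and a Cloud Server reachable over the private Cloud Connect network. It is shown against the iControl REST interface, which arrived in TMOS versions later than this 2010 post describes; the management address, credentials, member IPs and monitor are placeholders, so treat it as an assumption-laden illustration rather than a drop-in configuration.

```python
# Minimal sketch of the "load balance between dedicated and cloud servers"
# idea: one BIG-IP pool with a dedicated server and a Cloud Server as
# members. All addresses and credentials are placeholders.
import requests

BIGIP_MGMT = "https://192.0.2.1"        # BIG-IP management address (placeholder)
AUTH = ("admin", "change-me")           # placeholder credentials

pool = {
    "name": "hybrid_web_pool",
    "monitor": "http",
    "members": [
        {"name": "10.10.0.10:80"},      # dedicated managed server
        {"name": "10.180.0.20:80"},     # Cloud Server over the Cloud Connect link
    ],
}

response = requests.post(
    f"{BIGIP_MGMT}/mgmt/tm/ltm/pool",
    json=pool,
    auth=AUTH,
    verify=False,  # sketch only; validate certificates in real deployments
)
response.raise_for_status()
print(response.json().get("fullPath"))
```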
Toby goes on to provide several use cases for this hybrid architecture, including the traditional "dev and test" environments, seasonal traffic spikes, and an intriguing PCI-compliant solution that leverages the dedicated server offering for PCI-specific application workloads in conjunction with "cloud servers" for the more variable-load web application portions of such solutions. It's one solution to the "cloud security" issue that is often raised in conjunction with PCI DSS. Not mentioned as a scenario, but certainly possible in a combined dedicated-plus-cloud architecture, is the ability to leverage Rackspace "cloud servers" to augment capacity for applications hosted on its dedicated (physical) servers until it becomes clear that the capacity increase is permanent. Such an architecture allows for an immediate response to increases in demand as a temporary stop-gap solution while budget is freed, or while trends are collected and the allocation of new dedicated resources to scale can be accomplished.

HYBRID for the WIN

Hybrid architectures are going to be the norm for all but the most aggressive organizations. With the exception of startups, who are lucky enough to have a green field in which to build their data center architectures, organizations will continue to have and support a variety of technological solutions that must be integrated and managed together. Whether that's legacy mainframe applications and client-server combined with Web 2.0 and SaaS, or some other combination thereof, there will be applications that for some reason either cannot be deployed or will not benefit long-term from being deployed in cloud computing environments. Whether that hybrid architecture comprises a local data center and public cloud computing, or a hosted/managed data center and public cloud computing, is not as important as the resulting architecture, which is, after all, a hybrid. This kind of flexibility will better support organizations moving forward, as it is a rare organization that does not have a variety of computing needs that must be met and that cannot be met with one deployment model. A hybrid, dedicated-cloud architecture provides another option for organizations to better meet their computing and operational needs.

Related blogs & articles:
All F5 Friday Entries on DevCentral
F5 Friday: The 2048-bit Keys to the Kingdom
F5 Friday: Gracefully Scaling Down
F5 Friday: Beyond the VPN to VAN
F5 Friday: Eavesdropping on Availability
F5 Friday: Elastic Applications are Enabled by Dynamic Infrastructure
The Other Hybrid Cloud Architecture
The Goldfish Effect
Load Balancing in a Cloud
Applying Scalability Patterns to Infrastructure Architecture
The Storage Future is Cloudy, and it is about time.

One of the things I have talked about quite a bit in the last couple of months is the disjoint between the needs of enterprise IT and the offerings of a wide swath of the cloud marketplace. Sometimes it seems like many cloud vendors are telling customers "here's what we choose to offer you, deal with it". The problem is, oftentimes what they're offering is not what the enterprise needs. There are, of course, some great examples of how to do cloud for the enterprise; Rackspace (among others) has done a smashing job of offering users a server, with added services to install a database or web server on those servers. There are still some security concerns that the enterprise needs to address, but at least it's a solid start toward giving IT the option of using the cloud in the same manner that they use VMs. Microsoft has done a good job of setting up databases that follow a similar approach: if you use MS database products, you know almost all you need to know to run an Azure database.

The storage market has been a bit behind the rest of the cloud movement, with several offerings that aren't terribly useful to the enterprise, and a few that are, more or less. That is changing, and I personally am thrilled. The most recent delve into cloud storage that is actually useful to the enterprise – without rewriting their entire systems or buying a cloud storage gateway – comes from Hitachi Data Systems (HDS). HDS announced their low-risk cloud storage services on the 29th of June, and the press largely yawned. I'm not terribly certain why they didn't get as excited as I am, other than the fact that they are currently suffering information overload where cloud is concerned.

With the style of offering that HDS has set up, you can use your HDS gear as an "internal" cloud and their services as an "external" cloud, all managed from the same place. And most importantly, all of it presents as the storage you're used to dealing with. The trend of many companies to offer storage as an API is self-defeating, as it seriously limits usefulness to the enterprise and has spawned the entire (mondo-cool, in context) cloud storage gateway market. The HDS solution allows you to hook up your disk like it was disk – not write extra code in every application to utilize disk space "in the cloud" – and use the same methods you always have "in the DC". To do the same with most other offerings requires the purchase of a cloud storage gateway. So you can have your cloud and internal too.

The future of storage is indeed looking cloudy these days, and I'm glad for it. Let the enterprise use cloud, and instead of telling them "everyone is doing it", give them a way to use it that makes sense in the real world. The key here is enabling. Now that we're past early offerings, the winners' circle will be filled with those that can make cloud storage accessible to IT. After that, the winners' circle will slowly be filled with those who make it accessible to IT processes and secured against everything else. And it's coming, so IT actually can take advantage of what is an astounding concept for storage. If you're not an HDS customer (and don't want to be), cloud storage gateways can give you similar functionality, or you can hang out for a little bit. Looking like local storage is what must happen to cloud storage for it to be accessible, so more is on its way.
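The gap being described is easy to see in code. Below, the same kilobyte of data is written the way existing applications and backup tools already write it (a file on a mounted path, whether that path is local disk, NAS, HDS gear presented as disk, or a gateway), and then the way API-first cloud storage expects it (an HTTP PUT to an object endpoint). Both the mount point and the object URL are hypothetical placeholders, not any specific vendor's interface.

```python
# The same kilobyte of data, stored two ways. The first is how existing
# applications and backup jobs already work; the second is what API-first
# cloud storage asks every application to learn. Paths and URLs are
# hypothetical placeholders.
import requests

data = b"x" * 1024

# "Looks like local storage": works unchanged whether the path is local
# disk, a NAS mount, HDS gear presented as disk, or a cloud storage gateway.
with open("/mnt/cloud_volume/reports/q3.dat", "wb") as handle:
    handle.write(data)

# "Storage as an API": every application has to be taught to do this instead.
response = requests.put(
    "https://objects.example-provider.invalid/my-container/reports/q3.dat",
    data=data,
    headers={"X-Auth-Token": "placeholder-token"},
)
response.raise_for_status()
```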
And contrary to some of the hype out there, most of the big storage vendors have the technology to make an environment like – or even better than – the one Hitachi Data Systems has put together, so have no doubt that they will, since the market seems to be going there. There are some bright people working for these companies, so likely they're almost there now. You might say that EMC has "been there, done that", but not in a unified manner, in my estimation. I for one am looking forward to how this all pans out. Now that cloud storage is doing for customers instead of to them, it should be an interesting ride. Until then, cloud storage vendors take note: storage is not primarily accessed, data files created or copied, or backups performed through an API.

Related Articles and Blogs:
Hitachi Data Systems to Help Customers Deploy…
EMC Announces New End-to-End Storage Provisioning Solution
EMC Cans Atmos Online Service
Cloud Today is Just Capacity On-Demand

We won't have true cloud computing until we have a services-based infrastructure and standardization of cloud management frameworks. We may call it "cloud" today, but what we really have with today's offerings is "capacity on demand." We don't actually have all the pieces necessary to execute on the vision that is "cloud computing." We've almost completed server standardization through virtualization, but we haven't really begun to standardize network and infrastructure services. And we're certainly nowhere near ready to standardize on the cloud and application frameworks that will enable a seamless Intercloud.

The term "utility" has many meanings. One of them is an "economic term referring to the total satisfaction received from consuming a good or service." The utility of cloud computing – make that compute-on-demand – services today is fairly middling on the "W00T" scale from an enterprise consumer perspective. On a scale of 1 to 10, I'd say we're at about 3 today. We're at compute resources as a service, at capacity on demand, but we're not at infrastructure as a service. We're not even really close yet.