on-demand
New SSL Certificate update breaks F5 On-Demand
Greetings,

Our SSL certificate for our virtual server entry expired yesterday, so I replaced it per our normal process. However, On-Demand VPN users on the iOS application can no longer log in via certificate auth. According to the F5 device, the On-Demand certificate doesn't expire until 2017, yet since switching to the new web certificate the iOS client now throws "Server rejected the supplied client certificate or one was not sent". The caveat to this is that we are using MobileIron to manage the certificates and policies on the iDevices. If I switch back to the expired SSL certificate, it all works flawlessly. I am failing to understand how switching the virtual/web SSL certificate breaks the On-Demand feature.

With SSL debug enabled, the only errors that crop up are:

Peer cert verify error: unable to get local issuer certificate (depth 0; cert /CN=F5SSLONDEMAND2-clientname/OU=appSetting:a9146cd7-011a-441e-xxxx-xxxxxx)
Connection error: ssl_shim_vfycerterr:4249: unable to get local issuer certificate (48)

I'm failing to find any information on this. I've tried creating a new policy within MobileIron and uploading the new root CA certificate that it creates, but I still get the same error. While I understand MobileIron is a completely different system and likely out of F5's scope, it is the F5 producing the error, and my google-fu fails to find anything that rectifies it. Any assistance would seriously be appreciated.

Cheers,
David
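The depth-0 "unable to get local issuer certificate" error suggests the BIG-IP can no longer build a chain for the client certificate; one plausible cause worth checking is that the client SSL profile swapped in with the new cert no longer carries the On-Demand issuing CA in its trusted CA bundle. A quick way to take MobileIron out of the equation is to reproduce the mutual-TLS handshake from a workstation. A minimal sketch using Python's standard ssl module; the host name and file names are placeholders for your environment:

```python
# Minimal mutual-TLS probe: present the client certificate to the virtual
# server and report whether the handshake completes. HOST and the file
# names below are placeholders, not values from the original post.
import socket
import ssl

HOST = "vpn.example.com"   # hypothetical virtual server
PORT = 443

# Trust store used to validate the *server* certificate.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                 cafile="server-ca-bundle.pem")

# The same client identity MobileIron pushes to the devices, exported as PEM.
ctx.load_cert_chain(certfile="ondemand-client.pem", keyfile="ondemand-client.key")

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    try:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Handshake OK:", tls.version(), tls.cipher())
    except ssl.SSLError as exc:
        # A server-side verification failure typically surfaces here as a
        # handshake failure or certificate-required alert.
        print("Handshake failed:", exc)
```

If this probe fails the same way, compare the issuer of the client certificate against the CA bundle configured for client-cert verification on the virtual server's client SSL profile; if the probe succeeds, the problem is more likely in what MobileIron is (or isn't) sending.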
The Conflation of Pay-as-you-Grow Hardware with On-Demand

#cloud Today's post is brought to you by the Law of Diminishing Returns

The conflation of "pay-as-you-grow" with "on-demand" tends to cause confusion in the realm of networking and hardware. This is because of the way in which networking vendors have attempted to address the demand of organizations to pay only for what they use and to expand on-demand. The premise is that costs grow proportionally with capacity. In cloud computing, organizations achieve this: as more capacity (hardware resources) is needed, it is provisioned and paid for. On-demand scale. The cost per transaction (or user) remains consistent with growth because there is a direct relationship between cost and capacity (hardware resources such as memory and CPU).

Networking vendors have attempted to simulate this capability through licensing-based restrictions, allowing customers to initially provision resources at a much lower cost per transaction. The fallacy in this scheme is that, unlike cloud computing, no additional capacity (hardware resources) is ever provisioned. It is only the artificial limitation on the use of that capacity that is lifted, at a price, during the "growth" stage. Regardless of form factor, this has a profound impact on the cost per transaction (or user) and, it turns out, on the scalability of performance.

The difference between the two models is significant. A "pay-as-you-grow" licensing-based model is like having a great kitchen that is segmented: you can only use a portion of it initially, and if you need more because you're giving a dinner party, you can pay for another segment. The capabilities of the kitchen don't change, just how much of it you can use. Conversely, an on-demand model such as is offered by cloud lets you start out with a standard-sized kitchen, and if you need more room you tack on another kitchen, increasing not only size but capability. If you've ever cooked for a large number of people, you know that one oven is likely not enough, but that's what you get with "pay-as-you-grow": one oven with initially limited access to it. The on-demand model gives you two. Or three, or as many as you need to make dinner for your guests.

SCALE of PERFORMANCE

While appearing more cost effective at the outset, "pay-as-you-grow" strategies do not always provide for the scalability of all performance metrics. This is because licensing restrictions do not increase the underlying hardware capacity, and it is hardware capacity and load that are always the most constraining factors for performance. As utilization of hardware increases, performance degrades, albeit in some cases more slowly than others. The end result is that scale-by-license produces increasingly diminishing returns on performance. This is true whether we're considering layer 4 throughput or layer 7 requests per second, two common key performance metrics for application delivery solutions. The reason for this is simple: you aren't increasing the underlying speed or capacity, you're only raising the limit on the load that can be handled by the device. That means overall utilization is higher, and it is nearly a priori knowledge in networking that as utilization (load) increases, performance and capacity degrade. The result is uneven scalability as you progress through the "upgrade" of licenses. You're still paying the same amount per increase, but each increase nets you less capacity and slower performance than the upgrade before.
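To make the diminishing-returns argument concrete, here is a toy model in Python. All of the numbers (the raw capacity, the load steps, the 40% efficiency penalty at saturation) are invented for illustration; real devices have their own degradation curves, but any monotonically degrading curve makes the same point.

```python
# Toy model of scale-by-license vs. scale-by-hardware. The degradation
# curve and all numbers are illustrative assumptions, not measurements.

RAW_CAPACITY = 100_000  # requests/sec one device could serve at zero load


def effective_throughput(offered_load: float, raw_capacity: float) -> float:
    """Usable throughput under load: efficiency falls as utilization rises."""
    utilization = min(offered_load / raw_capacity, 1.0)
    efficiency = 1.0 - 0.4 * utilization  # lose up to 40% at saturation (assumed)
    return min(offered_load, raw_capacity) * efficiency


for step in range(1, 5):
    licensed_load = step * 25_000  # each license step admits more load...
    lic = effective_throughput(licensed_load, RAW_CAPACITY)  # ...on ONE device
    hw = step * effective_throughput(25_000, RAW_CAPACITY)   # vs. one device per step
    print(f"step {step}: license model {lic:>9,.0f} rps | hardware model {hw:>9,.0f} rps")
```

In this toy run each license step adds less usable throughput than the one before (roughly 22,500, then 17,500, then 12,500, then 7,500 requests per second), while each added device contributes the same amount, because every device stays lightly loaded.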
Conversely, a true on-demand model, based on the same premises as cloud computing, scales more linearly. Upgrading four times nets you four times the performance at four times the cost, because the resources available also increase four times. Cost and performance scale equally with a platform-based model. Licensing-based models do not, nay they cannot, because they aren't scaling out resources; they're only scaling out what portion of the resources you have access to. It's a subtle difference, but one that has a significant impact on capacity and performance.

ECONOMY of SCALE

As has been noted, as utilization of hardware increases, performance degrades. When we look at total cost compared to the scaling value received, it becomes apparent that the pay-as-you-grow model produces increasing costs per transaction while the platform-based model produces decreasing, or at worst consistent, costs per transaction. This is simply a matter of math. If each upgrade in a pay-as-you-grow model increases the overall cost by a quarter but returns increasingly smaller performance and capacity gains, you end up with a higher cost per transaction. Conversely, a more linear on-demand approach ends up producing slightly lower or consistent costs per transaction. The economy of scale is important, as it is a fairly common financial metric used to evaluate infrastructure: it translates directly into business costs and can be used to adjust pricing and estimate expenses. This disparity is not often considered up front, as it is usually the up-front capital investment that matters most to the initial decision. This oversight almost always proves problematic, because it is rarely the case that an organization never needs additional capacity and performance; thus the long-term costs of pay-as-you-grow yield a much poorer return on investment, in terms of performance, than a platform-based scalability model.

DISRUPTION and CapEx

The arguments against a platform-based model generally consist of the disruptiveness of upgrades and initial costs. Disruption is a valid concern, and it is almost always true that hardware-based devices require a certain amount of disruption to upgrade. The lifting of an artificially imposed limitation on the use of existing hardware, conversely, does not. This is where cloud computing's on-demand model (i.e., throw more (virtual) hardware at the problem) usually diverges from the on-demand model used to scale out networking hardware, such as an application delivery controller. The introduction of virtual application delivery controllers and the ability to seamlessly scale out in a model similar to cloud computing eliminates the disruption-based argument. There do exist models and technology which closely follow a cloud computing on-demand scalability strategy and are as non-disruptive as scaling out via a licensing-based model. This leaves the initial-cost argument, which generally boils down to CapEx versus OpEx. You are going to pay over the long run; the question is whether you pay up front or over time, and what the return on those investments will ultimately be. Just don't let the conflation of cloud computing's on-demand with pay-as-you-grow licensing-based models obscure what those real costs will be.
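The "matter of math" above can be checked with the same toy degradation curve as the earlier sketch: the marginal cost of each license step rises as returns shrink, while the platform model's marginal cost stays flat. All prices and numbers here are assumed for illustration.

```python
# Marginal cost of each upgrade step: dollars per additional request/sec.
# Same toy degradation curve as the earlier sketch; the capacity, load
# steps, and $25,000-per-step price are illustrative assumptions.

RAW = 100_000.0
UPGRADE_PRICE = 25_000.0


def effective(load: float) -> float:
    u = min(load / RAW, 1.0)
    return min(load, RAW) * (1.0 - 0.4 * u)


prev_lic = prev_hw = 0.0
for step in range(1, 5):
    lic = effective(step * 25_000)   # one device, license raised each step
    hw = step * effective(25_000)    # one lightly loaded device per step
    print(f"step {step}: license ${UPGRADE_PRICE / (lic - prev_lic):.2f}"
          f" vs hardware ${UPGRADE_PRICE / (hw - prev_hw):.2f} per rps gained")
    prev_lic, prev_hw = lic, hw
```

Under these assumed numbers the license model's marginal cost climbs from about $1.11 to $3.33 per request per second gained, while the platform model's stays at $1.11 throughout: the increasing versus consistent cost per transaction described above.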
IT as a Service: A Stateless Infrastructure Architecture Model
If a Network Can't Go Virtual Then Virtual Must Come to the Network
You Can't Have IT as a Service Until IT Has Infrastructure as a Service
Desktop VDI May Be Ready for Prime Time but Is the Network?
The Cloud API is Pseudo-Consolidation of Infrastructure
The Pythagorean Theorem of Operational Risk
At the Intersection of Cloud and Control…
The Battle of Economy of Scale versus Control and Flexibility

Bursting the Cloud
The cloud computing craze is leading to some interesting new terms. Cloudware and cloudbursting are two terms I particularly like for their ability to describe specific computing models based on cloud computing. Today we're going to look at cloudbursting, which is basically a new twist on an old concept. Cloudbursting marries the traditional safe enterprise computing model with cloud computing: in essence, bursting into the cloud when necessary, or using the cloud when additional compute resources are required temporarily. Jeff at the Amazon Web Services Blog talks about the inception of this term as applied to the latter, describing in his blog post how Thomas Brox Røst used it to regenerate a number of dynamic pages in 5 hours rather than the 7 hours that would have been required had he attempted such a feat internally. His approach is further described on The High Scalability Blog.

Cloudbursting can also be used to shoulder the burden of some of an application's processing. For example, basic application functionality could be provided from within the cloud while more critical (e.g. revenue-generating) applications continue to be served from within the controlled enterprise data center. This assumes that only a portion of consumers will actually be interacting with the data-driven side of a web site (customer management, process visibility, etc...) while the greater portion will simply be browsing around on the non-interactive, as it were, side of the site.

Bursting has traditionally been applied to resource allocation and automated provisioning/de-provisioning of resources, historically focused on bandwidth. Today, in the cloud, it is being applied to resources such as servers, application servers, application delivery systems, and other infrastructure required to provide on-demand computing environments that expand and contract as necessary, without manual intervention. This requires the ability to automate the cloud's data center. Data center automation in a cloud computing environment, regardless of the opacity of the model, requires more than simple workflow systems. It requires on-demand control and management over all devices in the delivery chain, from the storage to the application and web servers to the load balancers and acceleration offerings that deliver the applications to end users. This is more akin to data center orchestration than automation, as it requires that many moving parts and pieces be coordinated to perform a highly complex set of tasks seamlessly and with as little manual intervention as possible. This is one of the foundational requirements of a cloud computing infrastructure: on-demand, automated scalability.

Data center automation is nothing new. Hosting and service providers have long automated their data centers in order to reduce the cost of customer acquisition and management, and to improve the efficiency of provisioning and de-provisioning processes. These benefits can also be realized inside the data center, regardless of the model being employed. The same automation required for smooth, cost-effective management of a cloud computing data center can be used to achieve smooth, cost-effective management of an enterprise data center. The hybrid application deployment model involving cloud computing, however, requires additional intelligence on the part of the application delivery network.
The application delivery network must be able to understand what is being requested and where it resides; it must be able to intelligently route requests. This, too, is a fundamental attribute of cloud computing infrastructure: intelligence. When distributing an application across multiple locations, whether local servers, remote data centers, or "in the cloud", it becomes necessary for a controlling node to properly route those requests based on application data. In a less sophisticated model, global load balancing could be substituted as a means of directing requests to the appropriate site, a task for which global load balancers seem a perfect fit.

A hybrid approach like cloudbursting seems particularly appealing. Enterprises seem reluctant to move business-critical applications into the cloud at this juncture, but are likely more willing to assign responsibility to an outsourced provider for less critical application functionality with variable volume requirements, which fits well with an on-demand resource-bursting model. Cloudbursting may be one solution that makes everyone happy.
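As a rough sketch of the kind of intelligence described here: a controlling node that sends requests to the local pool until utilization crosses a burst threshold, then overflows to a cloud pool. The pool members, the threshold, and the utilization probe below are all invented for illustration; a real controller would poll actual member load.

```python
# Toy cloudbursting router: prefer the local data center, overflow to the
# cloud when local utilization crosses a threshold. Pools, threshold, and
# the utilization probe are illustrative assumptions.
import random
from dataclasses import dataclass


@dataclass
class Pool:
    name: str
    members: list[str]

    def utilization(self) -> float:
        # Placeholder: a real controller would poll member load
        # (connections, CPU, response time) rather than guess.
        return random.random()

    def pick(self) -> str:
        return random.choice(self.members)


LOCAL = Pool("local-dc", ["10.0.0.11", "10.0.0.12"])
CLOUD = Pool("cloud", ["cloud-a.example.com", "cloud-b.example.com"])
BURST_THRESHOLD = 0.80  # assumed


def route(request_path: str) -> str:
    """Route to the local pool unless it is running hot."""
    pool = LOCAL if LOCAL.utilization() < BURST_THRESHOLD else CLOUD
    return f"{request_path} -> {pool.name}:{pool.pick()}"


for path in ("/", "/catalog", "/account"):
    print(route(path))
```

The same decision point is where application-data routing would live: inspecting the request (path, host, cookie) to keep revenue-generating traffic in the data center while bursting the browse-only traffic.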
Cloud Computing: Vertical Scalability is Still Your Problem

Horizontal scalability achieved through the implementation of a load balancing solution is easy. It's vertical scalability that has always been, and remains, difficult to achieve, and it's even more important in a cloud computing or virtualized environment because now it can hurt you where it counts: the bottom line.

Horizontal scalability is the ability of an application to be scaled up to meet demand through replication and the distribution of requests across a pool or farm of servers. It's the traditional load-balanced model, and it's an integral component of cloud computing environments. Vertical scalability is the ability of an application to scale under load: to maintain performance levels as the number of concurrent requests increases. While load balancing solutions can certainly assist in optimizing the environment in which an application needs to scale, by reducing overhead that can negatively impact performance (such as TCP session management, SSL operations, and compression/caching functionality), they can't solve core problems that prevent vertical scalability.

The problem is that a single poorly constructed database table or SQL query can destroy vertical scalability and actually increase the cost of deploying in the cloud. Because you generally pay on a resource basis, if the application isn't scaling up well it will require more resources to maintain performance levels and thus cost a lot more. Cloud computing isn't going to magically optimize code or database queries or design database tables with performance in mind; that's still squarely in the hands of the developers, regardless of whether cloud computing is used as the deployment model.

The issue of vertical scalability is very important when considering the use of cloud computing because you're often charged based on compute resources used, much like the old mainframe model. If an application doesn't vertically scale well, it's going to increase the cost of running in the cloud. Cloud computing providers can't, and probably wouldn't if they could (it makes them money, after all), address vertical scalability issues, because those issues are peculiar to the application. No external solution can optimize code such that the application will magically scale up vertically. External solutions can certainly improve overall performance by optimizing protocols, reducing protocol and application overhead, and reducing bandwidth requirements, but they can't dig into the application code and rearrange the order in which joins are performed inside an SQL query, rewrite a particularly poorly written loop, or refactor code to use a more efficient data structure.

Vertical scalability, whether the application is deployed inside the local data center or out there in the cloud, is still the domain of the application developer. While developers can certainly take advantage of technologies like network-side scripting and inherent features in application delivery solutions to assist with efforts to increase vertical scalability, there is a limit to what such solutions can do to address the root cause of an application's failure to scale vertically.

Improving the vertical scalability of applications is important in achieving the cost reductions associated with cloud computing and virtualization. Applications that fail to vertically scale well may end up costing more when deployed in the cloud because of the additional compute resources required as demand increases.
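As one concrete example of the "poorly constructed query" problem (the first item in the checklist below): the classic N+1 pattern issues one query per row instead of a single join, and those extra round trips consume exactly the kind of metered compute a cloud bills for. A minimal sketch with a hypothetical orders schema:

```python
# N+1 queries vs. a single join: same result, very different scalability.
# The schema and data are hypothetical, for illustration only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL);
    INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 15.0), (3, 2, 7.5);
""")

# Anti-pattern: one query per customer. Work grows linearly with rows,
# and every round trip burns metered compute in the cloud.
for cust_id, name in db.execute("SELECT id, name FROM customers"):
    (total,) = db.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer_id = ?",
        (cust_id,),
    ).fetchone()
    print("n+1:", name, total)

# Better: let the database do the aggregation in a single query.
for name, total in db.execute("""
    SELECT c.name, COALESCE(SUM(o.total), 0)
    FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
"""):
    print("join:", name, total)
```

With two customers the difference is invisible; with a million it is the difference between an application that scales under load and one that simply bills you for failing to.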
Things you can do to improve vertical scalability:
- Optimize SQL / database queries
- Take advantage of the offload capabilities of available application delivery solutions
- Be aware of the impact on performance and scalability of decomposing applications into too finely grained services
- Remember that API usage will impact vertical scalability
- Understand the bottlenecks associated with the programming language(s) used, and address them

Cloud computing and virtualization can certainly address vertical scalability limitations by using horizontal scaling techniques to ensure capacity meets demand and performance level agreements are met. But doing so may cost you dearly and eliminate many of the financial incentives that led you to adopt cloud computing or virtualization in the first place.

Related articles by Zemanta
Infrastructure 2.0: The Diseconomy of Scale Virus
Why you should not use clustering to scale an application
Top 10 Concepts That Every Software Engineer Should Know
Can Today's Hardware Handle the Cloud?
Vendors air the cloud's pros and cons
Twitter and the Architectural Challenges of Life Streaming Applications

If Load Balancers Are Dead Why Do We Keep Talking About Them?
Commoditized from solution to feature, and from feature to function, load balancing is no longer a solution in itself but rather a function of more advanced solutions, yet it remains an integral component of highly available, fault-tolerant applications.

Unashamed Parody of Monty Python and the Holy Grail

Load balancers: I'm not dead.
The Market: 'Ere, it says it's not dead.
Analysts: Yes it is.
Load balancers: I'm not.
The Market: It isn't.
Analysts: Well, it will be soon, it's very ill.
Load balancers: I'm getting better.
Analysts: No you're not, you'll be stone dead in a moment.

Earlier this year, amidst all the other (perhaps exaggerated) technology deaths, Gartner declared that Load Balancers are Dead. It may come as a surprise, then, that application delivery network folks keep talking about them. As do users, customers, partners, and everyone else under the sun. In fact, with the increased interest in cloud computing, it seems that load balancers are enjoying a short reprieve from death.

LOAD BALANCERS REALLY ARE SO LAST CENTURY

They aren't. Trust me, load balancers aren't enjoying anything. Load balancing, on the other hand, is very much in the spotlight as scalability, infrastructure 2.0, and availability in the cloud are highlighted as issues today's IT staff must deal with. And if it seems that we keep mentioning load balancers despite their apparent demise, it's only because understanding what a load balancer does is useful in slowly moving people toward what is taking its place: application delivery.

Load balancing is an integral component of any high-availability and/or on-demand architecture. The ability to direct application requests across a (cluster|pool|farm|bank) of servers (physical or virtual) is an inherent property of cloud computing and on-demand architectures in general. But it is not the be-all and end-all of application delivery; it's just the point at which application delivery begins, and an integral function of application delivery controllers.

Load balancers, back in their day, were "teh bomb." These simple but powerful pieces of software (which later grew into appliances, and later still into full-fledged application switches) offered a way for companies to address the growing demand for Web-based access to everything from their news stories to their products to their services to their kids' pictures. But as traffic demands grew, so did the load on servers, and eventually new functionality began to be added to load balancers: caching, SSL offload and acceleration, and even security-focused functionality. From the core that was load balancing grew an entire catalog of application-rich features focused on keeping applications available while delivering them fast and securely. At that point we were no longer simply load balancing applications, we were delivering them. Optimizing them. Accelerating them. Securing them.

LET THEM REST IN PEACE…

So it made sense that, in order to encapsulate the concept of application delivery and move people away from focusing on load balancing, we'd give the product and market a new name. Thus arose the terms "application delivery network" and "application delivery controller." But at the core of both is still load balancing. Not load balancers, but load balancing. A function, if you will, of application delivery. But not the whole enchilada; not by a long shot.
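To underline the "function, not product" point: the core of load balancing is a small request-distribution routine, the kind of thing a toy sketch captures in a few lines, while everything else an application delivery controller does is layered around it. A minimal illustration, with made-up server names and health states:

```python
# Load balancing reduced to its core function: pick a member, hand off the
# request. Server names and health states are invented for illustration.
from itertools import cycle

POOL = ["web-1", "web-2", "web-3"]
healthy = {"web-1": True, "web-2": True, "web-3": False}  # assumed probe results
_rotation = cycle(POOL)


def pick_member() -> str:
    """Round-robin over healthy members: the heart of a load balancer."""
    for _ in range(len(POOL)):
        member = next(_rotation)
        if healthy[member]:
            return member
    raise RuntimeError("no healthy members in pool")


def deliver(request: str) -> str:
    # An application delivery controller wraps this same core function with
    # SSL offload, caching, compression, security inspection, and so on.
    return f"{request} -> {pick_member()}"


for req in ("GET /", "GET /news", "GET /products"):
    print(deliver(req))
```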
If we're still mentioning load balancing (and even load balancers, as incorrect as that term may be today) it's because the function is very, very, very important (I could add a few more "verys" but I think you get the point) to so many different architectures and to meeting business goals around availability and performance and security that it should be mentioned, if not centrally then at least peripherally.

So yes. Load balancers are very much outdated and no longer able to provide the biggest bang for your buck. But load balancing, particularly when leveraged as a core component in an application delivery network, is very much in vogue (it's trendy, like iPhones) and very much a necessary part of a successfully implemented high-availability or on-demand architecture. Long live load balancing.

The House that Load Balancing Built
A new era in application delivery
Infrastructure 2.0: The Diseconomy of Scale Virus
The Politics of Load Balancing
Don't just balance the load, distribute it
WILS: Network Load Balancing versus Application Load Balancing
Cloud computing is not Burger King. You can't have it your way. Yet.
The Revolution Continues: Let Them Eat Cloud

Building an elastic environment requires elastic infrastructure
One of the reasons behind some folks pushing for infrastructure as virtual appliances is the on-demand nature of a virtualized environment. When network and application delivery infrastructure hits capacity in terms of throughput, regardless of the layer of the application stack at which it happens, it's frustrating to think you might need to upgrade the hardware rather than just add more compute power via a virtual image. The truth is that this makes sense. The infrastructure supporting a virtualized environment should be elastic. It should be able to dynamically expand without requiring a new network architecture, a higher-performing platform, or new configuration. You should be able to just add more compute resources and walk away. The good news is that this is possible today. It just requires that you carefully consider your choices in network and application network infrastructure when you build out your virtualized infrastructure.

ELASTIC APPLICATION DELIVERY INFRASTRUCTURE

Last year F5 introduced VIPRION, an elastic, dynamic application networking delivery platform capable of expanding capacity without requiring any changes to the infrastructure. VIPRION is a chassis-based, bladed application delivery controller, and its bladed system behaves much the way a virtualized equivalent would. Say you start with one blade in the system, and soon after you discover you need more throughput and more processing power. Rather than bring online a new virtual image of such an appliance to increase capacity, you add a blade to the system and voila! VIPRION immediately recognizes the blade and simply adds it to its pools of processing power and capacity. There's no need to reconfigure anything; VIPRION essentially treats each blade like a virtual image and automatically distributes requests and traffic across the network and application delivery capacity available on the blade. Just like a virtual appliance model would, but without concern for the reliability and security of the platform.

Traditional application delivery controllers can also be scaled out horizontally to provide similar functionality and behavior. By deploying additional application delivery controllers in what is often called an active-active model, you can rapidly deploy and synchronize configuration from the master system to add more throughput and capacity. Meshed deployments comprising more than a pair of application delivery controllers can also provide additional network compute resources beyond what is offered by a single system. The latter option (the traditional scaling model) requires more work to deploy than the former (VIPRION), simply because it requires additional hardware and all the overhead such a solution entails. The elastic option, with bladed, chassis-based hardware, is really the best option in terms of elasticity and the ability to grow on-demand as your infrastructure needs increase over time.
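A rough way to picture the blade model in code: request distribution iterates over whatever members the chassis currently reports, so adding a blade changes capacity without touching configuration. A toy sketch; the blade names and the dispatch logic are invented and are not meant to depict VIPRION's internals.

```python
# Toy model of chassis-style elasticity: the dispatcher draws from whatever
# blades are present, so inserting a blade adds capacity with no
# reconfiguration. Blade names and behavior are invented for illustration.
from itertools import cycle


class Chassis:
    def __init__(self) -> None:
        self.blades: list[str] = []

    def insert_blade(self, name: str) -> None:
        # Analogous to sliding a blade into the chassis: it simply joins
        # the pool the dispatcher draws from.
        self.blades.append(name)

    def dispatch(self, requests: list[str]) -> None:
        rotation = cycle(self.blades)
        for req in requests:
            print(f"{req} handled by {next(rotation)}")


chassis = Chassis()
chassis.insert_blade("blade-1")
chassis.dispatch(["req-1", "req-2"])           # all traffic on one blade

chassis.insert_blade("blade-2")                # capacity grows, no config change
chassis.dispatch(["req-3", "req-4", "req-5"])  # traffic now spreads across both
```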
ELASTIC STORAGE INFRASTRUCTURE

Often overlooked in the network diagrams detailing virtualized infrastructures is the storage layer. The increase in storage needs in a virtualized environment can be overwhelming, as there is a need to standardize the storage access layer so that virtual images of applications can be deployed in a common, unified way regardless of which server they need to execute on at any given time. This means a shared, unified storage layer on which to store images that are necessarily large, and that unified storage layer must also be expandable. As the number of applications and associated images grows, storage needs increase. What's needed is a system in which additional storage can be added in a non-disruptive manner. If you have to modify the automation and orchestration systems driving your virtualized environment whenever additional storage is added, you've lost some of the benefits of a virtualized storage infrastructure.

F5's ARX series of storage virtualization provides that layer of unified storage infrastructure. By normalizing the namespaces through which files (images) are accessed, the systems driving a virtualized environment can be assured that images are available via the same access method regardless of where the file or image is physically located. Virtualized storage infrastructure systems are dynamic; additional storage can be added to the infrastructure and "plugged in" to the global namespace to increase the storage available in a non-disruptive manner. An intelligent virtualized storage infrastructure can make the use of available storage more efficient still by tiering it: images and files accessed more frequently can be stored on fast, tier-one storage so they load and execute more quickly, while less frequently accessed files and images can be moved to less expensive, and perhaps less performant, storage systems. (A toy sketch of this namespace normalization follows the related articles below.)

By deploying elastic application delivery network infrastructure instead of virtual appliances, you maintain stability, reliability, security, and performance across your virtualized environment. Elastic application delivery network infrastructure is already dynamic, and offers a variety of options for integration into automation and orchestration systems via standards-based control planes, many of which are nearly turn-key solutions. The reasons why some folks might desire a virtual appliance model for their application delivery network infrastructure are valid. But the reality is that the elasticity and on-demand capacity offered by a virtual appliance are already available in proven, reliable hardware solutions today that do not require sacrificing performance, security, or flexibility.

Related articles by Zemanta
How to instrument your Java EE applications for a virtualized environment
Storage Virtualization Fundamentals
Automating scalability and high availability services
Building a Cloudbursting Capable Infrastructure
EMC unveils Atmos cloud offering
Are you (and your infrastructure) ready for virtualization?
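As promised above, a toy sketch of namespace normalization: clients resolve one virtual path while the mapping behind it can gain tiers or move files without the access method changing. This is a generic illustration of the concept, not the ARX interface, and all paths are invented.

```python
# Toy global namespace: clients use one virtual path, while the physical
# location (and tier) behind it can change non-disruptively. Generic
# illustration only; not an ARX API, and all paths are invented.

class GlobalNamespace:
    def __init__(self) -> None:
        self._map: dict[str, str] = {}

    def mount(self, virtual: str, physical: str) -> None:
        """Plug a physical share in behind a virtual prefix."""
        self._map[virtual] = physical

    def resolve(self, path: str) -> str:
        # Checks prefixes in mount order; a real system would do
        # longest-prefix matching and track per-file placement.
        for virtual, physical in self._map.items():
            if path.startswith(virtual):
                return path.replace(virtual, physical, 1)
        raise FileNotFoundError(path)


ns = GlobalNamespace()
ns.mount("/images/hot", "nas-tier1:/vol0")   # fast tier for busy images
ns.mount("/images", "nas-tier2:/archive")    # cheaper tier for the rest

print(ns.resolve("/images/hot/web-frontend.img"))  # -> nas-tier1:/vol0/...
print(ns.resolve("/images/old-build.img"))         # -> nas-tier2:/archive/...

# Adding storage later is just another mount; clients keep the same paths.
ns.mount("/images/2009", "nas-tier2:/expansion")
```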
Cloud Computing: It's the destination, not the journey that is important

How the cloud acts and is used is more important than where it physically resides.

Cloud computing and SOA suffer from the same lack of prescriptive architectures. They are defined by how they act rather than by what they are or what they are composed of. They are, in a way, existential technology that cannot be confined to a simple architectural diagram but instead requires a set of properties, or ways of acting, in order to be recognized. To oversimplify and paraphrase Jean-Paul Sartre's concepts of existentialism, we define ourselves (mankind) through our actions. Applying this to technology is a fairly easy thing: some technology is defined through what it does rather than what it is. Cloud computing is nothing but the way in which an infrastructure deploys and delivers applications. That will surely irritate cloud purists as much as the impure use of object-oriented principles used to annoy me when I was first developing applications. But with age and experience comes wisdom, and the hindsight to see that there are many roads which lead to the same end. Unlike many philosophical theories, with technology it often is the destination and not the journey that is important.

Definition of Cloud Computing
"Gartner defines cloud computing (hereafter referred to as "cloud") as a style of computing where massively scalable IT-related functions and information are provided as a service across the Internet, potentially to multiple external customers, where the consumers of the services need only care about what the service does for them, not how it is implemented. Cloud is not an architecture, a platform, a tool, an infrastructure, a Web site or a vendor. It is a style of computing. Many architectures can be used to support its implementation and use. For example, it is possible to use cloud in private enterprises to build private clouds, but there is only one public cloud based on the Internet."
SOURCE: GARTNER RESEARCH, ID: G00157908, 28 MAY 2008

The First Principle of Existentialism
"Man is nothing but what he makes of himself."
SOURCE: "ESSAYS IN EXISTENTIALISM," JEAN-PAUL SARTRE, 1965

The First Principle of Cloud Computing
Cloud computing is nothing but the way in which an infrastructure deploys and delivers applications.

Many pundits argue that the "cloud" in "cloud computing" is the Internet, and only the Internet. But it's telling that almost every application architecture diagram offered up uses the same "cloud" to represent the network, whether it's internal or external to the organization. That "cloud" represents abstraction, obfuscation; it's meant to show that there is a network responsible for delivering the applications depicted, it's just too complex (or sensitive) to be depicted in a diagram, or, as is more often the case, the folks responsible for application infrastructure aren't concerned about the network infrastructure supporting the applications. Which brings us back full circle to the definition of cloud computing, which includes a lack of concern regarding the implementation details of how applications get delivered.

As Gartner has posited, the cloud is not an architecture, a platform, a tool, or an infrastructure. It's not a web site, it's not a vendor. "It is a style of computing." It is a deployment model, much in the same way SOA is a style of computing; it is a deployment model and not a prescriptive architecture. It defines itself by how it acts, not what it is.
If it is used to deliver applications in a way that is transparent, that does not require the end user to understand (or be concerned with) the underlying infrastructure, application and network alike, then it likely fits under the moniker "cloud computing". If the same principles used by vendors like Amazon, BlueLock, Joyent, and Microsoft are used by organizations to implement a dynamic, scalable, on-demand application and network infrastructure, does it really matter where that infrastructure physically resides? If Microsoft deploys an application in its own cloud, in its data center, and then makes use of that cloud for internal organizational applications, is it still cloud computing? Yes, of course it is. So why should it matter if an enterprise does the same thing? It's still cloud computing, based on how the infrastructure acts and what it delivers, not where it is or who uses it.

What's important is what the cloud infrastructure does. Scalability, transparency, supporting an on-demand computing model. That's what cloud computing is, whether it's implemented as SaaS (Software as a Service), as IaaS (Infrastructure as a Service), or using automated virtualization solutions within the data center. As long as the infrastructure you build out is capable of providing the benefits of cloud computing (efficiency, scalability, and agility), you're doing cloud computing. That sounds a lot easier than it is, because you have to scale while being efficient, you have to be agile without sacrificing scalability, and you have to do it all in a way that the end user (who may be a developer) doesn't need to know how it's implemented. So rather than worry about how, worry about what. At the end of your implementation, is your infrastructure agile? Is it scalable (transparently)? Is it efficient? Is it abstracted? Does it support on-demand computing without sacrificing those properties? If it is, then you've reached your cloud computing destination.