Is IoT Hype For Real?
It is only fitting that the 20th anniversary of the Gartner Hype Cycle has the Internet of Things right at the top of the coaster. IoT is currently at the Peak of Inflated Expectations. The Gartner Hype Cycle gives organizations an assessment of the maturity, business benefit and future direction of more than 2,000 technologies. The theme for this year's Emerging Technologies Hype Cycle is Digital Business.

As you can see, being at the top really means that there is a ton of media coverage about the technology, so much so that it starts to get a little silly. Everyone is talking about it, including this author. What you can also see is the downward trend that follows: the Trough of Disillusionment. Gamification, Mobile Health Monitoring and Big Data all fall into this area. It means that they have already hit their big hype point, but it doesn't necessarily mean that it's over. The Slope of Enlightenment shows technologies that are finally mature enough to actually have reasonable expectations about.

Each technology also has a timeline for when it will mature. For IoT, it looks like 5 to 10 years. So while we're hearing all the noise about IoT now, society probably won't be fully immersed for another decade...even though we'll see gradual steps toward it over the next few years. Once all our people, places and things are connected, you can also get a sense of what else is coming in the Innovation Trigger area. Come the 2025 time frame, things like Smart Robots, Human Augmentation and a Brain-Computer Interface could be the headlines of the day. Just imagine: instead of having to type this blog out on a keyboard, I could simply (and wirelessly) connect my brain chip to the computer and just think this. Hey, stop reading my mind!!
ps

Related:
Gartner's 2014 Hype Cycle for Emerging Technologies Maps the Journey to Digital Business
Chart of the Week: The hype cycle of emerging technologies
The Internet of Things and DNS
F5 Predicts: Internet of Things Drives Demand for 'Social Intelligence'
Internet of Things OWASP Top 10
The Icebox Cometh

The death of SOA has been greatly exaggerated
Amidst the hype of cloud computing and virtualization have come several research notes regarding SOA. Adoption, they say, is slowing. Oh noes! Break out the generators, stock up on water and canned food! An article from JavaWorld quotes research firm Gartner as saying:

The number of organizations planning to adopt SOA for the first time decreased to 25 percent; it had been 53 percent in last year's survey. Also, the number of organizations with no plans to adopt SOA doubled from 7 percent in 2007 to 16 percent in 2008. This dramatic falloff has been happening since the beginning of 2008, Gartner said.

Some have reacted with much drama to the news, as if the reports indicate that SOA has lost its shine and is disappearing into the realm of legacy technology along with COBOL and fat clients and CORBA. Not true at all. The reports indicate a drop in adoption of SOA, not in the use of SOA. That should be unsurprising. At some point the number of organizations who have implemented SOA reaches critical mass, and the number of new organizations adopting the technology slows down simply because there are fewer of them than there are folks who have already adopted SOA.

As Don pointed out when this discussion came up, the economy is factoring in heavily for IT and technology, and the percentages cited by Gartner are not nearly as bad as they look when applied to real numbers. For example, if you ask 100 organizations about their plans for SOA and 16 say "we're not doing anything with it next year," that doesn't sound nearly as dramatic as 16%, especially considering it means that 84% are going to be doing something with SOA next year. As with most surveys and polls, it's all about how the numbers are presented. Statistics are the devil's playground.

It is also true that most organizations don't consider that by adopting or piloting cloud computing in the next year they will likely be taking advantage of SOA.
Whether it's because their public cloud computing provider requires the use of Web Services (SOA) to deploy and manage applications in the cloud, or because they are building a private cloud environment and will utilize service-enabled APIs and SOA to integrate virtualization technology with application delivery solutions, SOA remains an integral part of the IT equation. SOA simply isn't the paradigm shift it was five years ago.

Organizations who've implemented SOA are still using it. It's still growing in their organizations as they continue to build new functionality and features for their applications, and as they integrate new partners, distributors, and applications from inside and outside the data center. As organizations continue to get comfortable with SOA and their implementations, they will inevitably look to governance, management, and delivery solutions with which to better manage the architecture.

SOA is not dead yet; it's merely reached the beginning of its productive life, and if the benefits of SOA are real (and they are) then organizations are likely to start truly realizing the return on their investments.

Related articles by Zemanta
HP puts more automation into SOA governance
Gartner reports slowdown in SOA adoption
Gartner picks tech top 10 for 2009
SOA growth projections shrinking

DevCentral Top5 09/25/2009
Side-projects and behind-the-scenes activities abound as the DevCentral team works towards the next goal in our plans for world domination, carefully sketched on Jeff's whiteboard. I'm glad to say that the extended DC team has been helping, as always, to keep the content flowing, and there's plenty to highlight this week. Take a look at this week's Top5:

Closing in on the iRules Contest Deadline
http://devcentral.f5.com/s/weblogs/jason/archive/2009/09/15/closing-in-on-the-irules-contest-deadline.aspx

Jason points out a very important, timely fact: it's nearly the end of your window to submit killer iRules for great prizes! The iRules contest is coming to a close. We've gotten some awesome entries so far and I've personally loved seeing them flow in from all over the world. There is still time, though. If you've got an iRule that you use that is cool and unique and warrants sharing, now is the time! Get it submitted and put your bid in for one of the pretty killer prizes offered to the winners. Check out Jason's post to get the details of what they are, where to apply, and a cool example iRule from the forums that could easily be submitted.

Despite Rumors to the Contrary F5 Remains In the Lead
http://devcentral.f5.com/s/weblogs/macvittie/archive/2009/09/25/despite-rumors-to-the-contrary-f5-remains-in-the-lead.aspx

Lori comes to you this week with an important news bulletin: F5 is still leading the charge in the ADC market, despite the mutterings you may have heard recently. With the release of the new Magic Quadrant from Gartner there is always a fair amount of posturing and hubbub. Lucky are we that our positioning continues to speak for itself, well in the lead. I'm not usually one to go in for marketing-type stuff, but the geek in me loves that we have the coolest technology at the party, bar none. This is one of the many indicators of that, and I was glad to see Lori point it out.
DevCentral Weekly Roundup Episode 104 - Guru, Guy, and My BIG-IP
http://devcentral.f5.com/s/weblogs/dcpodcast/archive/2009/09/24/devcentral-weekly-roundup-episode-104-guru-guy-and-my.aspx

This week's podcast was a particularly cool one, thanks to the caller that decided to join us. A few weeks ago we started dabbling in live-streaming our podcasts as we record them. This week Joe added the functionality to allow users to call in and chat with us in real time, while we record. I was pleasantly surprised that we had a community member do precisely that, and share with us what they're currently doing with our tech. If you ever doubt that DevCentral is a far-reaching community with active members, an impromptu call from an international user to chat with us about what they're doing should cure what ails you.

Turn Your Podcast Into An Interactive Live Streaming Experience
http://devcentral.f5.com/s/weblogs/Joe/archive/2009/09/25/turn-your-podcast-into-a-interactive-live-streaming-experience.aspx

As I mentioned above, the past few weeks we've been adding functionality to our podcasts. This once simple process has become increasingly more complex as we've tried to leverage new and cool features to make them more engaging and interactive for our users. With Joe at the helm we've incorporated several tools that make this possible. Today he put out a blog post detailing just how these all work together and exactly how it is that he crafted this bigger, better mousetrap. I found it quite interesting, and it's a neat peek behind the curtains at one of the things we do here in DC Land.

Reduce your Risk
http://devcentral.f5.com/s/weblogs/psilva/archive/2009/09/24/reduce-your-risk.aspx

In Pete's 13th of 26 short topics about security he discusses mitigation. He touches on the fact that you should generally assume, if you're dealing with a publicly facing application, that you will eventually be the target of some malicious activity.
He also details a few ways in which we all help to mitigate those risks on a daily basis. From firewalls to strong passwords to access cards to secure facilities, there are many hoops we all jump through daily, whether we think about it or not, to try and mitigate the risks inherent in today's IT world. This series is an interesting one and the pieces are easy to digest. I intend to keep following it as it moves towards topic #26, and I recommend you do the same.

There you have it, my Top5 picks from DevCentral for the week. Hopefully you enjoyed them, and I'll be back with more soon. Be sure to check out previous editions of the Top5 here - http://devcentral.f5.com/s/Default.aspx?tabid=101

#Colin

Cloud Computing: The Last Definition You'll Ever Need
The VirtualDC has asked the same question that's been roaming about in every technophile's head since the beginning of the cloud computing craze: what defines a cloud? We've chatted internally about this very question, which led to Alan's questions in a recent blog post.

Lori and others have suggested that the cloud comes down to how a service is delivered rather than what is delivered, and I'm fine with that as a long-term definition or categorization. I don't think it's narrow enough, though, to answer the question "Is Gmail a cloud service?" because if Gmail is delivered over the web, my internet connection is my work infrastructure, so therefore…Gmail is a cloud service for me?

No, it's not. It may be for the developers, if they're using cloud computing to develop and deploy Gmail, but for you it's naught but cloudware, an application accessed through the cloud. From the end-user perspective it's a hosted application, it's software as a service (SaaS), but it isn't cloud computing or a cloud service. The problem here, I think, is that we're using the same terms to describe two completely different things - and perspectives.

The real users of cloud computing are IT folks: developers, architects, administrators. Unfortunately, too many definitions include verbiage indicating that the "user" should not need any knowledge of the infrastructure. Take, for example, Wikipedia's definition:

It is a style of computing in which IT-related capabilities are provided "as a service", allowing users to access technology-enabled services from the Internet ("in the cloud") without knowledge of, expertise with, or control over the technology infrastructure that supports them.

It's the use of "user" that's problematic. I would argue that it is almost never the case that the end-user of an application has knowledge of the infrastructure.
Ask your mom, ask your dad, ask any Internet neophyte and you'll quickly find that they probably have no understanding or knowledge (and certainly no control) of the underlying infrastructure for any application. If we used the term "user" to mean the traditional end-user, then every application and web site on the Internet has been "cloud computing" for more than a decade.

FINALLY, IT REALLY IS ALL ABOUT *US*

The "users" in cloud computing definitions are developers, administrators, and IT folks: folks who are involved in the development and deployment of applications, not necessarily using them. It is from IT's perspective, not that of the end-user or consumer of the application, that cloud computing can be - and must be - defined. We are the users, the consumers, of cloud computing services; not our customers or consumers. We are the center of the vortex around which cloud computing revolves, because we are the ones who will consume and make use of those services in order to develop and deploy applications.

Cloud computing is not about the application itself; it is about how the application is deployed and how it is delivered. Cloud computing is a deployment model leveraged by IT in order to reduce infrastructure costs and/or address capacity/scalability concerns. Just as an end-user cannot "do" SOA, they can't "do" cloud computing. End-users use applications, and an application is not cloud computing. It is the infrastructure and model of deployment that defines whether it is cloud computing, and even then, it's never cloud computing to the end-user, only to the folks involved in developing and deploying that application.

Cloud computing is about how an application or service is deployed and delivered. But defining how it is deployed and delivered could be problematic, because when we talk about how we often tend to get prescriptive and start talking in absolute checklists. With a fluid concept like cloud computing that doesn't work.
There's just not one single model, nor is there one single architecture, that you can definitively point to and say "We are doing that, ergo we are doing cloud computing."

THE FOUR BEHAVIORS THAT DEFINE CLOUD COMPUTING

It's really the behavior of the entire infrastructure - how the cloud delivers an application - that's important. The good thing is that we can define that behavior; we can determine whether an application infrastructure is behaving in a cloud computing manner in order to categorize it as cloud computing or something else. This is not dissimilar to SOA (Service Oriented Architecture), a deployment model in which we look to the way in which applications are architected and subsequently delivered to determine whether we are or are not "doing SOA."

DYNAMISM. Amazon calls this "elasticity", but it means the same thing: this is the ability of the application delivery infrastructure to expand and contract automatically based on capacity needs. Note that this does not require virtualization technology, though many providers are using virtualization to build this capability. There are other means of implementing dynamism in an architecture.

ABSTRACTION. Do you need to care about the underlying infrastructure when developing an application for deployment in the cloud? If you have to care about the operating system or any piece of the infrastructure, it's not abstracted enough to be cloud computing.

RESOURCE SHARING. The architecture must be such that the compute and network resources of the cloud infrastructure are sharable among applications. This ties back to dynamism and the ability to expand and contract as needed. If an application's method of scaling is to simply add more servers on which it is deployed, rather than being able to consume resources on other servers as needed, the infrastructure is not capable of resource sharing.

PROVIDES A PLATFORM. Cloud computing is essentially a deployment model.
If it provides a platform on which you can develop and/or deploy an application and meets the other three criteria, it is cloud computing. Dynamism and resource sharing are the key architectural indicators of cloud computing. Without these two properties you're simply engaging in remote hosting and outsourcing, which is not a bad thing; it's just not cloud computing.

Hosted services like Gmail are cloudware, but not necessarily cloud computing, because they are merely accessed through the cloud and don't actually provide a platform on which applications can be deployed. Salesforce.com, however, which provides such a platform - albeit a somewhat restricted one - fits into the definition of cloud computing. Cloudware is an extension of cloud computing, but it does not enable businesses to leverage cloud computing in the same way as an Amazon or BlueLock or Joyent. Cloudware may grow into cloud computing, as Salesforce.com has done over the years. Remember, when Salesforce.com started it was purely SaaS - it simply provided a hosted CRM (Customer Relationship Management) solution. Over the years it has expanded and begun to offer a platform on which organizations can develop and deploy their own applications.

Cloud computing, as Gartner analysts have recently put forth, is a "style of computing". That style of computing is defined from the perspective of IT, and has specific properties which make something cloud computing - or not cloud computing, as the case may be.

Infrastructure 2.0: As a matter of fact that isn't what it means
We've been talking a lot about the benefits of Infrastructure 2.0, or Dynamic Infrastructure: why it's necessary, and what's required to make it all work. But we've never really laid out what it is, and that's beginning to lead to some misconceptions.

As Daryl Plummer of Gartner pointed out recently, the definition of cloud computing is still, well, cloudy. Multiple experts can't agree on the definition, and the same is quickly becoming true of dynamic infrastructure. That's no surprise; we're at the beginning of what Gartner would call the hype cycle for both concepts, so there's some work to be done on fleshing out exactly what each means. That dynamic infrastructure is tied to cloud computing is no surprise, either, as dynamic infrastructure is very much an enabler of such elastic models of application deployment. But dynamic infrastructure is applicable to all kinds of models of application deployment: so-called legacy deployments, cloud computing and its many faces, and likely new models that have yet to be defined.

The biggest confusion out there seems to be that dynamic infrastructure is being viewed as Infrastructure as a Service (IaaS). Dynamic infrastructure is not the same thing as IaaS. IaaS is a deployment model in which application infrastructure resides elsewhere, in the cloud, and is leveraged by organizations desiring an affordable option for scalability that reduces operating and capital expenses by sharing compute resources "out there" somewhere, at a provider. Dynamic infrastructure is very much a foundational technology for IaaS, but it is not, in and of itself, IaaS. Indeed, simply providing network or application network solution services "as a service" has never required dynamic infrastructure.
CDNs (Content Delivery Networks), managed VPNs, secure remote access, and DNS services have long been available as services to be used by organizations as a means by which they can employ a variety of "infrastructure services" without the capital expenditure in hardware and the time/effort required to configure, deploy, and maintain such solutions. Simply residing "in the cloud" is not enough. A CDN is not "dynamic infrastructure", nor are hosted DNS servers. They are infrastructure 1.0, legacy infrastructure, whose very nature is such that physical location has never been important to their deployment. Indeed, these services were designed without physical location as a requirement, as their core functions are supposed to work in a distributed, location-agnostic manner.

Dynamic infrastructure is an evolution of traditional network and application network solutions to be more adaptable, support integration with its environment and other foundational technologies, and be aware of context (connectivity intelligence).

Adaptable

Dynamic infrastructure is able to understand its environment and react to conditions in that environment in order to provide scale, security, and optimal performance for applications. This adaptability comes in many forms, from the ability to make management and configuration changes on the fly as necessary, to providing the means by which administrators and developers can manually or automatically change the way in which applications are being delivered. The configuration and policies applied by dynamic infrastructure are not static; they are able to change based on predefined criteria or events that occur in the environment such that the security, scalability, or performance of an application and its environs are preserved. Some solutions implement this capability through event-driven architectures, with events such as "IP_ADDRESS_ASSIGNED" or "HTTP_REQUEST_MADE".
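To make the event-driven idea concrete, here is a minimal sketch of an event-driven configuration hook in Python. The event names echo the examples above, but the dispatcher, handler, and pool-update behavior are invented for illustration and are not any vendor's actual API:

```python
from collections import defaultdict

# Registry mapping event names to the handlers that react to them.
handlers = defaultdict(list)

def on(event_name):
    """Decorator: register a handler to run when the named event fires."""
    def register(fn):
        handlers[event_name].append(fn)
        return fn
    return register

def fire(event_name, **context):
    """Invoke every handler registered for the event, passing context."""
    for fn in handlers[event_name]:
        fn(**context)

@on("IP_ADDRESS_ASSIGNED")
def update_pool(address, **_):
    # When a new instance comes online, the infrastructure could react by
    # adding it to a load balancing pool automatically.
    print(f"adding {address} to pool")

fire("IP_ADDRESS_ASSIGNED", address="10.0.0.42")
```

The point is not the dispatcher itself but the shape: configuration changes happen as reactions to environmental events rather than through static, hand-edited config.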
Some provide network-side scripting capabilities to extend the ability to react and adapt to situations requiring flexibility, while others provide the means by which third-party solutions can be deployed on the solution to address the need for application- and user-specific capabilities at specific touch-points in the architecture.

Context Aware

Dynamic infrastructure is able to understand the context that surrounds an application, its deployment environment, and its users, and apply relevant policies based on that information. Being context aware means being able to recognize that a user accessing Application X from a coffee shop has different needs than the same user accessing Application X from home or from the corporate office. It is able to recognize that a user accessing an application over a WAN or high-latency connection requires different policies than one accessing that application via a LAN or from close physical proximity over the Internet. Being context aware means being able to recognize the current conditions of the network and the application, and then leveraging its adaptable nature to choose the right policies at the time the request is made, such that the application is delivered most efficiently and quickly.

Collaborative

Dynamic infrastructure is capable of integrating with other application network and network infrastructure, as well as the management and control solutions required to manage both the infrastructure and the applications it is tasked with delivering. The integration capabilities of dynamic infrastructure require that the solution be able to direct, and take direction from, other solutions such that changes in the infrastructure at all layers of the stack can be recognized and acted upon. This integration allows network and application network solutions to leverage their awareness of context in a way that ensures they are adaptable and can support the delivery of applications in an elastic, flexible manner.
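The context-aware behavior described above can be sketched as a simple policy-selection function. The locations, link types, and policy settings here are invented for illustration; real context-aware infrastructure draws on far richer signals (device type, measured latency, user identity, and so on):

```python
def select_policy(location: str, link_type: str) -> dict:
    """Choose delivery policies based on where and how the user connects."""
    policy = {"compression": False, "step_up_auth": False, "tcp_profile": "lan"}
    if link_type == "wan":
        # High-latency link: compress responses and use a WAN-tuned TCP profile.
        policy["compression"] = True
        policy["tcp_profile"] = "wan-optimized"
    if location == "coffee_shop":
        # Untrusted network: require additional authentication.
        policy["step_up_auth"] = True
    return policy

print(select_policy("coffee_shop", "wan"))
print(select_policy("office", "lan"))
```

The same user hitting the same application gets different treatment depending on context, which is exactly the coffee-shop-versus-corporate-office distinction made above.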
Most solutions use a standards-based control plane through which they can be integrated with other systems to provide the connectivity intelligence necessary to implement IaaS, virtualized architectures, and other cloud computing models in such a way that the perceived benefits of reduced operating expenses and increased productivity through automation can actually be realized.

These three properties of dynamic infrastructure work together, in concert, to provide connectivity intelligence and the ability to act on information gathered through that intelligence. All three together form the basis for a fluid, adaptable, dynamic application infrastructure foundation on which emerging compute models such as cloud computing and virtualized architectures can be implemented. But dynamic infrastructure is not exclusively tied to emerging compute models and next-generation application architectures. It can be leveraged to benefit traditional architectures as well: its connectivity intelligence and adaptable nature improve the security, availability, and performance of applications in so-called legacy architectures too.

Dynamic infrastructure is a set of capabilities implemented by network and application network solutions that provide the means by which an organization can improve the efficiency of their application delivery and network architecture. That's why it's just not accurate to equate Infrastructure 2.0/Dynamic Infrastructure with Infrastructure as a Service cloud computing models. The former is a description of the next generation of network and application network infrastructure solutions: the evolution from static, brittle solutions to fluid, dynamic, adaptable ones. The latter is a deployment model that, while likely built atop dynamic infrastructure solutions, is not wholly comprised of dynamic infrastructure. IaaS is not a product, it's a service.
Dynamic infrastructure is a product that may or may not be delivered "as a service". Glad we got that straightened out.

How Sears Could Have Used the Cloud to Stay Available Black Friday
Predictions of the death of online shopping this holiday season were, apparently, greatly exaggerated. As it's been reported, Sears, along with several other well-known retailers, fell victim to heavy traffic on Black Friday. One wonders if reports of a dismal shopping season this year due to economic concerns led retailers to believe that there would be no seasonal rush to online sites, and therefore that preparation to deal with sudden spikes in traffic was unnecessary.

Most of the 63 objects (375 KB of total data) comprising the sears.com home page are served from sears.com and are either images, scripts, or stylesheets. The rest of their site is similar, with static data comprising a large portion of the objects. That's a lot of static data being served, and a lot of connections required on the servers for just one page.

Not knowing Sears' internal architecture, it's quite possible they are already using application delivery and acceleration solutions to ensure availability and responsiveness of their site. If they aren't, they should, because even the simple connection optimizations available in today's application delivery controllers would likely have drastically reduced the burden on the servers and increased the capacity of their entire infrastructure. But let's assume they are already using application delivery to its fullest and simply expended all possible capacity on their servers, despite their best efforts, due to the unexpectedly high volume of visitors. It happens. After all, server resources are limited in the data center, and when the servers are full up, they're full up.

Assuming that Sears, like most IT shops, isn't willing to purchase additional hardware and incur the associated management, power, and maintenance costs over the entire year simply to handle a seasonal rush, they still could have prepared for the onslaught by taking advantage of cloud computing.
Cloudbursting is an obvious solution: visitors who pushed Sears' servers over capacity would have been automatically directed, via global load balancing techniques, to a cloud-hosted version of their site. Not only could they have managed to stay available, this would also have improved performance of the site for all visitors, as cloudbursting can use a wide array of variables to determine when requests should be directed to the cloud, including performance-based parameters.

A second option would have been a hybrid cloud model, where certain files and objects are served from the local data center while others are served from the cloud. Instead of serving up static stylesheets and images from sears.com internal servers, those objects could easily have been hosted in the cloud. Doing so would translate into fewer requests to sears.com internal servers, which reduces the processing power required and results in higher server capacity.

I suppose a third option would have been to commit fully to the cloud and move their entire application infrastructure there, but even though adoption appears to be imminent for many enterprises according to attendees at the Gartner Data Center Conference, 2008 is certainly not "the year of the cloud" and there are still quite a few kinks - such as compliance and integration concerns - that need to be ironed out before folks can commit fully.

Still, there are ways that Sears, and any organization with a web presence, could take advantage of the cloud without committing fully, to ensure availability under exceedingly high volume. It just takes some forethought and planning. Yeah, I'm thinking it too, but I'm not going to say it either.
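The cloudbursting decision itself can be sketched as a small routing function of the kind a global load balancer might evaluate per request. The thresholds and site names below are hypothetical; a real implementation would draw on live health monitoring and many more variables:

```python
# Hypothetical thresholds for illustration only.
LOCAL_CAPACITY = 1000   # concurrent connections the data center pool can absorb
RESPONSE_SLA_MS = 500   # response-time threshold before bursting to the cloud

def route_request(active_connections: int, avg_response_ms: float) -> str:
    """Return which site should serve the next request."""
    if active_connections >= LOCAL_CAPACITY:
        return "cloud"       # local pool is saturated: capacity-based burst
    if avg_response_ms > RESPONSE_SLA_MS:
        return "cloud"       # local pool is slow: performance-based burst
    return "datacenter"

print(route_request(400, 120))    # normal load: stay local
print(route_request(1200, 120))   # over capacity: burst
print(route_request(400, 900))    # SLA breached: burst
```

The second condition is the performance-based parameter mentioned above: requests can burst to the cloud before the local pool is actually full, purely because response times have degraded.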
Related articles by Zemanta
Online retailers overloaded on Black Friday
Online-only outlets see Black Friday boost over 2007
Sears.com out on Black Friday [Breakdowns]
The Context-Aware Cloud
Top 10 Reasons for NOT Using a Cloud
Half of Enterprises See Cloud Presence by 2010

Cloud Computing: It's the destination, not the journey that is important
How the cloud acts and is used is more important than where it physically resides.

Cloud computing and SOA suffer from the same lack of prescriptive architectures. They are defined by how they act rather than by what they are, or from what they are composed. They are, in a way, existential technology that cannot be confined to a simple architectural diagram but instead require a set of properties, or ways of acting, in order to be recognized. To oversimplify and paraphrase Jean-Paul Sartre's concept of existentialism, we define ourselves (mankind) through our actions. To apply this to technology is a fairly easy thing: some technology is defined through what it does rather than what it is. Cloud computing is nothing but the way in which an infrastructure deploys and delivers applications.

That will surely irritate cloud purists, as much as the impure use of object-oriented principles used to annoy me when I was first developing applications. But with age and experience comes wisdom, and the hindsight to see that there are many roads which lead to the same end. Unlike many philosophical theories, with technology it often is the destination and not the journey that is important.

Definition of Cloud Computing

"Gartner defines cloud computing (hereafter referred to as "cloud") as a style of computing where massively scalable IT-related functions and information are provided as a service across the Internet, potentially to multiple external customers, where the consumers of the services need only care about what the service does for them, not how it is implemented. Cloud is not an architecture, a platform, a tool, an infrastructure, a Web site or a vendor. It is a style of computing. Many architectures can be used to support its implementation and use. For example, it is possible to use cloud in private enterprises to build private clouds, but there is only one public cloud based on the Internet."
SOURCE: GARTNER RESEARCH ID: G00157908, 28 MAY 2008

The First Principle of Existentialism

"Man is nothing but what he makes of himself."
SOURCE: "ESSAYS IN EXISTENTIALISM", JEAN-PAUL SARTRE, 1965

The First Principle of Cloud Computing

Cloud computing is nothing but the way in which an infrastructure deploys and delivers applications.

Many pundits argue that the "cloud" in "cloud computing" is the Internet, and only the Internet. But it's telling that almost every application architecture diagram offered up uses the same "cloud" to represent the network, whether it's internal or external to the organization. That "cloud" represents abstraction and obfuscation, and is meant to show that there is a network responsible for delivering the applications depicted; it's just too complex (or sensitive) to be depicted in a diagram or, as is more often the case, the folks responsible for application infrastructure aren't concerned with the network infrastructure supporting the applications.

Which brings us back full circle to the definition of cloud computing, which includes a lack of concern regarding the implementation details of how applications get delivered. As Gartner has posited, the cloud is not an architecture, a platform, a tool, or an infrastructure. It's not a web site, it's not a vendor. "It is a style of computing." It is a deployment model, much in the same way SOA is a style of computing; it is a deployment model and not a prescriptive architecture. It defines itself by how it acts, not what it is. If it is used to deliver applications in a way that is transparent, that does not require the end-user to understand (or be concerned with) the underlying infrastructure - application and network - then it likely fits under the moniker "cloud computing".
If the same principles used by vendors like Amazon, BlueLock, Joyent, and Microsoft are used by organizations to implement a dynamic, scalable, on-demand application and network infrastructure, does it really matter where that infrastructure physically resides? If Microsoft deploys an application in its own cloud, in its data center, and then makes use of that cloud for internal organizational applications, is it still cloud computing? Yes, of course it is. So why should it matter if an enterprise does the same thing? It's still cloud computing based on how the infrastructure acts and what it delivers, not where it is or who uses it. What's important is what the cloud infrastructure does: scalability, transparency, supporting an on-demand computing model. That's what cloud computing is, whether it's implemented as SaaS (Software as a Service) or as IaaS (Infrastructure as a Service) or using automated virtualization solutions within the data center. As long as the infrastructure you build out is capable of providing the benefits of cloud computing - efficiency, scalability, and agility - you're doing cloud computing. That sounds a lot easier than it is, because you have to scale while being efficient, you have to be agile without sacrificing scalability, and you have to do it in a way that the end-user (who may be a developer) doesn't need to know how it's implemented. So rather than worry about how, worry about what. At the end of your implementation, is your infrastructure agile? Is it scalable (transparently)? Is it efficient? Is it abstracted? Does it support on-demand computing without sacrificing those properties? If it is, then you've reached your cloud computing destination.
ArsTechnica has an interesting little article on what Windows Azure is and is not. During the course of discussion with Steven Martin, Microsoft's senior director of Developer Platform Product Management, a fascinating – or disturbing, in my opinion – statement was made: There is a distinction between the hosting world and the cloud world that Martin wanted to underline. Whereas hosting means simply the purchase of space under certain conditions (as opposed to buying the actual hardware), the cloud completely hides all issues of clustering and/or load balancing, and it offers an entirely virtualized instance that takes care of all your application's needs. [emphasis added] The reason this is disturbing is that not all application requests are created equal and therefore should not necessarily be handled in the same way by a "clustering and/or load balancing solution". But that's exactly what hiding clustering and/or load balancing ends up doing. While it's nice that the nitty-gritty details are obscured in the cloud from developers and, in most cases today, the administrators as well, the lack of control over how application requests are distributed actually makes the cloud and its automatic scalability (elasticity) less effective. To understand why, you need a bit of background regarding industry-standard load balancing algorithms. In the beginning there was Round Robin, an algorithm that is completely application-agnostic and simply distributes requests based on a list of servers, one after the other. If there are five servers in a pool/farm/cluster, then each one gets a turn. It's an egalitarian algorithm that treats all servers and all requests the same. Round Robin achieves availability, but often at the cost of application performance. When application performance became an issue we got new algorithms like Least Connections and Fastest Response Time.
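Round Robin is simple enough to sketch in a few lines. This is just an illustration of the rotation itself, not any load balancer's actual implementation, and the server names are hypothetical:

```python
from itertools import cycle

# Round Robin: every server in the pool gets a turn, in order,
# regardless of how busy it is or what the request asks for.
servers = ["web1", "web2", "web3", "web4", "web5"]
rotation = cycle(servers)

def round_robin():
    """Return the next server in strict rotation."""
    return next(rotation)

# Ten requests are spread evenly across the five servers, one after
# another -- every request is treated exactly the same.
assignments = [round_robin() for _ in range(10)]
print(assignments)
```

Note what's missing: the function never looks at the request or at the server's load, which is precisely why Round Robin delivers availability but not necessarily performance.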
These algorithms tried to take into account the load on the servers in the pool/farm/cluster before making a decision, and could therefore improve utilization such that application performance started getting better. But these algorithms only consider the server and its load, and don't take into consideration the actual request itself. And therein lies the problem, for not all requests are created equal. A request for an image requires X processing on a server and Y memory, and is usually consistent across time and users. But a request that actually invokes application logic and perhaps executes a database query is variable in its processing time and memory utilization. Some may take longer than others, and require more memory than others. Each request is a unique snowflake whose characteristics are determined by user, by resource, and by the conditions that exist at the time it was made. It turns out that in order to effectively determine how to load balance requests in a way that optimizes utilization on servers and offers the best application performance, you actually have to understand the request. That epiphany gave rise to layer 7 load balancing and the ability to exert finer-grained control over load balancing. Between understanding the request and digging deeper into the server – understanding CPU utilization, memory, network capacity – load balancers were suddenly very effective at distributing load in a way that made sense on a per-request basis. The result was better architectures, better performing applications, and better overall utilization of the resources available. Now comes the cloud and its "we hide all the dirty infrastructure details from you" mantra. The problem with this approach is simple: a generic load balancing algorithm is not the most effective method of distributing load across servers, but a cloud provider is not prescient and therefore has no idea what algorithm might be best for your application.
Therefore the provider has very little choice in which algorithm is used for load balancing, and any choice made will certainly provide availability but will likely not be the most effective for your specific application. So while it may sound nice that all the dirty details of load balancing and clustering are "taken care of for you" in the cloud, it's actually doing you and your application a disservice. Hiding the load balancing and/or clustering capabilities of the cloud, in this case Azure, from the developer is not necessarily the bonus Martin portrays it to be. The ability to control how requests are distributed is just as important in the cloud as it is in your own data center. As Gartner analyst Daryl Plummer points out, underutilizing resources in the cloud, as may happen when using simplistic load balancing algorithms, can be as expensive as running your own data center and may negatively impact application performance. Without some input into the configuration of load balancers and other relevant infrastructure, there isn't much you can do about that, either, but start up another instance and hope that horizontal scalability will improve performance – at the expense of your budget. Remember that when someone else makes decisions for you, you are necessarily giving up control. That's not always a bad thing. But it's important for you to understand what you are giving up before you hand over the reins. So do your research. You may not have direct control, but you can ask about the "clustering and/or load balancing" provided and understand what effect that may – or may not – have on the performance of your application and the effectiveness of the utilization of the resources for which you are paying.
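The difference between a load-aware algorithm and a request-aware (layer 7) decision can be sketched quickly. The pool structures, connection counts, and file-extension rule below are illustrative assumptions for the sake of the sketch, not any vendor's API:

```python
# Least Connections: pick the server currently handling the fewest
# active connections. Load-aware, but still blind to the request itself.
def least_connections(pool):
    """pool maps server name -> active connection count."""
    return min(pool, key=pool.get)

# Layer 7: inspect the request first, then choose a pool suited to it.
# Static images and application logic have very different cost profiles,
# so they are routed to different groups of servers.
def layer7_route(path, image_pool, app_pool):
    static = (".png", ".jpg", ".gif", ".css", ".js")
    pool = image_pool if path.endswith(static) else app_pool
    return least_connections(pool)

image_pool = {"img1": 12, "img2": 3}   # name -> active connections
app_pool = {"app1": 45, "app2": 20}

print(layer7_route("/images/logo.png", image_pool, app_pool))
print(layer7_route("/checkout", image_pool, app_pool))
```

A generic cloud load balancer effectively stops at the first function; the second is the kind of per-request decision you lose when "clustering and/or load balancing" is hidden from you.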
It seems that every time a new technology breaks through the surface a hundred "experts", vendors, and standards-bodies appear like moths to a flame attempting to define the term such that only "they" have the answer, the solution, the standard, or the product. When my son mentioned a research paper he wrote on cloud computing (which you still haven't sent me, by the way) he did so while disagreeing with a previous post of mine on the subject. He was quite vehement that grid computing did not equal cloud computing, and seemed almost shocked that I would dare to associate the two in any way. I tell you, if a family can't agree on the definition of cloud computing, the industry certainly won't do so any time soon. There are a lot of other folks out there trying to specify what you can and cannot call cloud computing - and arguing with them is like poking a badger. They get real mean and ornery when you disagree and if you aren't careful, you might get your eyes scratched out. The first bit of sanity I've seen in these discussions comes from Gartner fellow David Mitchell Smith, quoted in a Data Center Knowledge post: “The term cloud computing has come to mean two very different things: a broader use that focuses on ‘cloud,’ and a more-focused use on system infrastructure and virtualization,” said David Mitchell Smith, vice president and Gartner Fellow. “Mixing the discussion of ‘cloud-enabling technologies’ with ‘cloud computing services’ creates confusion.” But even as I nod in agreement with the rationality of his statement I have to ask: if you use cloud-enabling technologies to build out an infrastructure that delivers services, what do you end up with? Yeah, sure seems like you'd end up with cloud computing services, wouldn't you? Looks to me like we end up right back at square one, regardless of whether we separate the two concepts or not. I see cloud computing architecture in the same light as I see SOA. 
It's conceptual; it's a reference architecture, a set of best practices. There isn't an RFC specifying what you MUST or SHOULD implement - and how - in order to qualify as a cloud computing architecture. I'm not going to try to define cloud computing, or determine what is or is not cloud computing, because in the end I don't think it really matters all that much to the folks in the trenches who have a job to get done. And whether we're talking about Google App style cloud computing services (cloudware), enterprise cloud computing architectures, hybrid cloudbursting architectures, or cloud computing service providers, there's one thing that remains certain in my mind: without the right infrastructure, cloud computing won't work no matter where it is or what it's called.
You have just been promoted to CTO of Widgets, Inc. (Congratulations, by the way!) In your new role, on which of the following will you focus the most attention (and budget): (a) the network (b) the applications (c) the data Trick question. The answer is (d) all of the above. I know I didn't list that choice (I did in the poll, however) but you've taken multiple choice quizzes before; there's always a choice (d) all of the above. Always. And 25% of the time it's the right answer. In the case of technology today, it's always the right answer. As an industry we spend a lot of time (and money) worrying about how to protect our applications and the data that drives them. We spend a lot of time (and money) agonizing over how to scale and keep available those applications which are crucial not only to our business users but our customers and partners. We spend a lot of time (and money) trying to make those applications faster and better and easier to use. The network, however, is often left out of the discussion (and the budget). Like the local road, most people don't think much about it until there's a pothole that slows you down or causes a flat tire. Then it gets a lot of attention - and most of it negative. This lack of attention to the network and its importance to scale and reliability and security is problematic and pervasive. So pervasive that research firm Gartner recently penned a research note on the subject of the network and cloud computing entitled, "You Can't Do Cloud Computing Without the Right Cloud (Network)." [reprint courtesy of F5] There are four different types of networks supporting "cloud computing" and each react differently to different application characteristics. These differences are critical to the selection of a cloud network vendor. Four different types of networks supporting cloud computing. I'm guessing that holds true for non-cloud computing architectures as well. 
There are many different types of networks and architectures, and this is, in part, why the network is always, always depicted as nothing more than an amorphous cloud in application architecture diagrams. Many of Gartner's key findings mirror what we've been asserting are four key attributes of infrastructure supporting cloud computing, whether that cloud is a big one "in the sky" (Internet) or a smaller one "in the backyard" (in your data center). In general, it mirrors what we've been saying for years with regard to improving application performance and scalability: the network is more important than most people tend to believe. It's not just a "pipe"; it's an integral part of your architecture and is inherently important to the successful deployment and delivery of applications. Data is the lifeblood of an organization, and applications are the heart of the data center. On that there is no argument. But the network is the complex system of veins and arteries that carries that lifeblood to and from the heart (applications), and without it, well, your business is likely faltering if not dead. Just as doctors try to impart upon us the importance of preventive medicine and a healthy diet, it's just as important to apply that practice to IT in general. We need to be proactive and take preventive action whenever possible in order to deliver applications that are fast, secure, and available. And that means keeping the network on the same level of importance as the data and applications it is serving. That means investing in technologies that make the network better, faster, and more able to continue transporting data between the applications and the users that need them. The network can inhibit the delivery of applications, or it can enhance it. Which it does is ultimately up to you, based upon the role you assign it within your application infrastructure.
The first step is recognizing that the network is as important as the applications and data it is delivering. The second step is discovering how the network might help in deploying and delivering those applications and providing value above and beyond its traditional role of "just a pipe." The third step is to do something about it.199Views0likes0Comments