Is IoT Hype For Real?
It is only fitting that the 20th anniversary of the Gartner Hype Cycle has the Internet of Things right at the top of the coaster. IoT is currently at the Peak of Inflated Expectations. The Gartner Hype Cycle gives organizations an assessment of the maturity, business benefit and future direction of more than 2,000 technologies. The theme for this year's Emerging Technologies Hype Cycle is Digital Business.

As you can see, being at the top really means that there is a ton of media coverage about the technology, so much so that it starts to get a little silly. Everyone is talking about it, including this author. What you can also see is the downward trend that follows: the Trough of Disillusionment. Gamification, Mobile Health Monitoring and Big Data all fall into this area. It means they have already hit their big hype point, but it doesn't necessarily mean that it's over. The Slope of Enlightenment shows technologies that are finally mature enough to actually have reasonable expectations about. Each of the technologies also has a timeline for when it will mature. For IoT, it looks like 5 to 10 years. So while we're hearing all the noise about IoT now, society probably won't be fully immersed for another decade...even though we'll see gradual steps toward it over the next few years.

Once all our people, places and things are connected, you can also get a sense of what else is coming in the Innovation Trigger area. Come the 2025 time frame, things like Smart Robots, Human Augmentation and Brain-Computer Interfaces could be the headlines of the day. Just imagine: instead of having to type this blog out on a keyboard, I could simply (and wirelessly) connect my brain chip to the computer and just think it. Hey, stop reading my mind!!

ps

Related:
Gartner's 2014 Hype Cycle for Emerging Technologies Maps the Journey to Digital Business
Chart of the Week: The hype cycle of emerging technologies
The Internet of Things and DNS
F5 Predicts: Internet of Things Drives Demand for 'Social Intelligence'
Internet of Things OWASP Top 10
The Icebox Cometh

Technorati Tags: iot, things, sensors, nouns, gartner, hypecycle, media, silva, f5

Cloud Computing: The Last Definition You'll Ever Need
The VirtualDC has asked the same question that's been roaming about in every technophile's head since the beginning of the cloud computing craze: what defines a cloud? We've chatted internally about this very question, which led to Alan's questions in a recent blog post. Lori and others have suggested that the cloud comes down to how a service is delivered rather than what is delivered, and I'm fine with that as a long-term definition or categorization. I don't think it's narrow enough, though, to answer the question "Is Gmail a cloud service?" because if Gmail is delivered over the web, my internet connection is my work infrastructure, so therefore…Gmail is a cloud service for me? No, it's not. It may be for the developers, if they're using cloud computing to develop and deploy Gmail, but for you it's naught but cloudware, an application accessed through the cloud. From the end-user perspective it's a hosted application, it's software as a service (SaaS), but it isn't cloud computing or a cloud service.

The problem here, I think, is that we're using the same terms to describe two completely different things - and perspectives. The real users of cloud computing are IT folks: developers, architects, administrators. Unfortunately, too many definitions include verbiage indicating that the "user" should not need any knowledge of the infrastructure. Take, for example, Wikipedia's definition: It is a style of computing in which IT-related capabilities are provided "as a service", allowing users to access technology-enabled services from the Internet ("in the cloud") without knowledge of, expertise with, or control over the technology infrastructure that supports them.

It's the use of "user" that's problematic. I would argue that it is almost never the case that the end-user of an application has knowledge of the infrastructure. Ask your mom, ask your dad, ask any Internet neophyte and you'll quickly find that they have no understanding or knowledge (and certainly no control) of the underlying infrastructure for any application. If we used the term "user" to mean the traditional end-user, then every application and web site on the Internet is "cloud computing" and has been for more than a decade.

FINALLY, IT REALLY IS ALL ABOUT *US*

The "users" in cloud computing definitions are developers, administrators, and IT folks: folks who are involved in the development and deployment of applications, not necessarily using them. It is from IT's perspective, not the end-user or consumer of the application, that cloud computing can be - and must be - defined. We are the users, the consumers, of cloud computing services; not our customers or consumers. We are the center of the vortex around which cloud computing revolves, because we are the ones who will consume and make use of those services in order to develop and deploy applications.

Cloud computing is not about the application itself; it is about how the application is deployed and how it is delivered. Cloud computing is a deployment model leveraged by IT in order to reduce infrastructure costs and/or address capacity/scalability concerns. Just as an end-user cannot "do" SOA, they can't "do" cloud computing. End-users use applications, and an application is not cloud computing. It is the infrastructure and model of deployment that defines whether it is cloud computing, and even then, it's never cloud computing to the end-user, only to the folks involved in developing and deploying that application.
Cloud computing is about how an application or service is deployed and delivered. But defining how it is deployed and delivered can be problematic, because when we talk about how we often tend to get prescriptive and start talking in absolute checklists. With a fluid concept like cloud computing that doesn't work. There's just not one single model, nor one single architecture, that you can definitively point to and say "We are doing that, ergo we are doing cloud computing."

THE FOUR BEHAVIORS THAT DEFINE CLOUD COMPUTING

It's really the behavior of the entire infrastructure, how the cloud delivers an application, that's important. The good thing is that we can define that behavior; we can determine whether an application infrastructure is behaving in a cloud computing manner in order to categorize it as cloud computing or something else. This is not dissimilar to SOA (Service Oriented Architecture), a deployment model in which we look to the way in which applications are architected and subsequently delivered to determine whether we are or are not "doing SOA."

DYNAMISM. Amazon calls this "elasticity", but it means the same thing: this is the ability of the application delivery infrastructure to expand and contract automatically based on capacity needs. Note that this does not require virtualization technology, though many providers are using virtualization to build this capability. There are other means of implementing dynamism in an architecture. (A minimal sketch of what this expand-and-contract behavior might look like follows below.)

ABSTRACTION. Do you need to care about the underlying infrastructure when developing an application for deployment in the cloud? If you have to care about the operating system or any piece of the infrastructure, it's not abstracted enough to be cloud computing.

RESOURCE SHARING. The architecture must be such that the compute and network resources of the cloud infrastructure are sharable among applications. This ties back to dynamism and the ability to expand and contract as needed. If an application's method of scaling is to simply add more servers on which it is deployed rather than being able to consume resources on other servers as needed, the infrastructure is not capable of resource sharing.

PROVIDES A PLATFORM. Cloud computing is essentially a deployment model. If it provides a platform on which you can develop and/or deploy an application and meets the other three criteria, it is cloud computing.

Dynamism and resource sharing are the key architectural indicators of cloud computing. Without these two properties you're simply engaging in remote hosting and outsourcing, which is not a bad thing; it's just not cloud computing. Hosted services like Gmail are cloudware, but not necessarily cloud computing, because they are merely accessed through the cloud and don't actually provide a platform on which applications can be deployed. Salesforce.com, however, which provides such a platform - albeit a somewhat restricted one - fits into the definition of cloud computing. Cloudware is an extension of cloud computing, but it does not enable businesses to leverage cloud computing in the same way as an Amazon or BlueLock or Joyent. Cloudware may grow into cloud computing, as Salesforce.com has done over the years. Remember that when Salesforce.com started it was purely SaaS - it simply provided a hosted CRM (Customer Relationship Management) solution. Over the years it has expanded and begun to offer a platform on which organizations can develop and deploy their own applications.
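To make the dynamism criterion a bit more concrete, here is a minimal sketch in Python (purely illustrative, not tied to any particular provider's API; the thresholds and the decision logic are hypothetical assumptions) of the kind of expand-and-contract decision a cloud infrastructure might make from capacity metrics:

```python
# Illustrative sketch of "dynamism" (elasticity): expand or contract capacity
# automatically based on observed load. Thresholds are hypothetical.

def scale_decision(avg_utilization, instance_count,
                   scale_up_at=0.80, scale_down_at=0.30,
                   min_instances=2, max_instances=20):
    """Return how many instances to add (positive) or remove (negative)."""
    if avg_utilization > scale_up_at and instance_count < max_instances:
        return 1   # expand: capacity is nearly exhausted
    if avg_utilization < scale_down_at and instance_count > min_instances:
        return -1  # contract: resources are sitting idle
    return 0       # steady state

# Example: 85% average utilization across 4 instances -> add one more.
print(scale_decision(0.85, 4))   # 1
print(scale_decision(0.20, 4))   # -1
print(scale_decision(0.50, 4))   # 0
```

The point isn't the specific numbers; it's that the infrastructure itself makes and acts on this decision without someone racking a new server.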
Cloud computing, as Gartner analysts have recently put forth, is a "style of computing". That style of computing is defined from the perspective of IT, and has specific properties which make something cloud computing - or not cloud computing as the case may be.

Out, Damn'd Bot! Out, I Say!
Exorcising your digital demons

Most people are familiar with Shakespeare's The Tragedy of Macbeth. Of particularly common usage is the famous line uttered repeatedly by Lady Macbeth, "Out, damn'd spot! Out, I say," as she tries to wash imaginary bloodstains from her hands, wracked with the guilt of the many murders of innocent men, women, and children she and her husband have committed.

It might be no surprise to find a similar situation in the data center, late at night. With the background of humming servers and cozily blinking lights shedding a soft glow upon the floor, you might hear some of your infosecurity staff roaming the racks and crying out "Out, damn'd bot! Out I say!" as they try to exorcise digital demons from their applications and infrastructure. Because once those bots get in, they tend to take up permanent residence. Getting rid of them is harder than you'd think because, like Lady Macbeth's imaginary bloodstains, they just keep coming back – until you address the source.

Don't Throw the Baby out with the Bath Water
Or, in modern technical terms, don't throw the software out with the hardware

Geva Perry recently questioned one of Gartner's core predictions for 2010, namely that "By 2012, 20 percent of businesses will own no IT assets." Geva asks a few (very pertinent) questions regarding this prediction that got me re-reading it. Let's all look at it one more time, shall we?

By 2012, 20 percent of businesses will own no IT assets. Several interrelated trends are driving the movement toward decreased IT hardware assets, such as virtualization, cloud-enabled services, and employees running personal desktops and notebook systems on corporate networks. The need for computing hardware, either in a data center or on an employee's desk, will not go away. However, if the ownership of hardware shifts to third parties, then there will be major shifts throughout every facet of the IT hardware industry. For example, enterprise IT budgets will either be shrunk or reallocated to more-strategic projects; enterprise IT staff will either be reduced or reskilled to meet new requirements, and/or hardware distribution will have to change radically to meet the requirements of the new IT hardware buying points. [emphasis added]

Geva asks: "'IT assets' - They probably mean IT assets in the data center because aren't personal desktops and notebooks also IT assets?" That would have been my answer at first as well, but the explanation clearly states that "the need for computing hardware either in a data center or on an employee's desk will not go away." Is Gartner saying, then, that "computing hardware" is not an IT asset? If the need for it – in the data center and on the employee's desk – will not go away, as it asserts, then how can this prediction be accurate?

Even if every commoditized business function is enabled via SaaS and any custom solutions are deployed and managed via IaaS or PaaS solutions, employees still need a way to access them, and certainly they've got telecommunications equipment of some kind – Blackberries and iPhones abound – and those are, if distributed by the organization, considered IT assets and must be managed accordingly. As Geva subtly points out, even if an organization moves to a BYOH (bring your own hardware) approach to the problem of client access to remotely (cloud) hosted applications, those devices still must be – or should be – managed. Without proper management of network access, the risk of spreading viruses and other malware to every other employee is much higher. Without proper understanding of what organizational data is being accessed, how, and where it's being stored, the business is at risk. Without proper controls on employees' "personal" hardware it is difficult to audit security policies that govern who can and cannot access that laptop and, subsequently, who might have access to credentials that would allow access to sensitive corporate information.

What Gartner seems to be saying with this prediction is not that hardware will go away, but that the ownership of the hardware will go away. Organizations will still need the hardware – both on the desktop and in the data center – but they will not need to "own" the IT hardware assets. Notice that it says "will own no IT assets," not "need no IT assets." That said, the prediction is still not accurate, as it completely ignores that there are more "IT assets" than just hardware.

Not all application requests are created equal
ArsTechnica has an interesting little article on what Windows Azure is and is not. During the course of discussion with Steven Martin, Microsoft's senior director of Developer Platform Product Management, a fascinating – or disturbing, in my opinion – statement was made:

There is a distinction between the hosting world and the cloud world that Martin wanted to underline. Whereas hosting means simply the purchase of space under certain conditions (as opposed to buying the actual hardware), the cloud completely hides all issues of clustering and/or load balancing, and it offers an entirely virtualized instance that takes care of all your application's needs. [emphasis added]

The reason this is disturbing is that not all application requests are created equal and therefore should not necessarily be handled in the same way by a "clustering and/or load balancing solution". But that's exactly what hiding clustering and/or load balancing ends up doing. While it's nice that the nitty-gritty details are obscured in the cloud from developers and, in most cases today, the administrators as well, the lack of control over how application requests are distributed actually makes the cloud and its automatic scalability (elasticity) less effective. To understand why, you need a bit of background regarding industry-standard load balancing algorithms.

In the beginning there was Round Robin, an algorithm that is completely application agnostic and simply distributes requests based on a list of servers, one after the other. If there are five servers in a pool/farm/cluster, then each one gets a turn. It's an egalitarian algorithm that treats all servers and all requests the same. Round Robin achieves availability, but often at the cost of application performance.

When application performance became an issue we got new algorithms like Least Connections and Fastest Response Time. These algorithms tried to take into account the load on the servers in the pool/farm/cluster before making a decision, and could therefore better improve utilization such that application performance started getting better. But these algorithms only consider the server and its load, and don't take into consideration the actual request itself. And therein lies the problem, for not all requests are created equal. A request for an image requires X processing on a server and Y memory and is usually consistent across time and users. But a request that actually invokes application logic and perhaps executes a database query is variable in its processing time and memory utilization. Some requests take longer than others, and require more memory than others. Each request is a unique snowflake whose characteristics are determined by the user, by the resource requested, and by the conditions that exist at the time it was made.

It turns out that in order to effectively determine how to load balance requests in a way that optimizes utilization on servers and offers the best application performance, you actually have to understand the request. That epiphany gave rise to layer 7 load balancing and the ability to exact finer-grained control over load balancing. Between understanding the request and digging deeper into the server – understanding CPU utilization, memory, network capacity – load balancers were suddenly very effective at distributing load in a way that made sense on a per-request basis. The result was better architectures, better performing applications, and better overall utilization of the resources available.
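To ground the discussion, here is a minimal Python sketch (purely illustrative, and not how BIG-IP or any particular load balancer implements these algorithms) contrasting application-agnostic Round Robin with server-aware Least Connections, plus a simple request-aware layer 7 decision on top. The pool members and the "/api/" routing rule are hypothetical.

```python
from itertools import cycle

# Simplified pool: member name -> current active connections.
# Real load balancers track live connection counts and health monitors.
servers = {"web1": 0, "web2": 0, "web3": 0}
rr = cycle(servers)  # round robin rotation over the member list

def round_robin():
    """Application-agnostic: every server gets a turn, load is ignored."""
    return next(rr)

def least_connections():
    """Server-aware: pick the member currently handling the fewest connections."""
    return min(servers, key=servers.get)

def layer7_pick(path):
    """Request-aware: inspect the request itself before choosing an algorithm,
    e.g. send variable-cost API calls to the least busy member, images anywhere."""
    return least_connections() if path.startswith("/api/") else round_robin()

servers["web1"] = 12; servers["web2"] = 3; servers["web3"] = 7
print(round_robin())             # next in rotation, regardless of load
print(least_connections())       # web2, the least busy member
print(layer7_pick("/api/cart"))  # request-aware: API call goes to web2
```

The difference looks trivial at this scale, but across thousands of heterogeneous requests it is exactly the gap between "available" and "performing well".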
Now comes the cloud and its "we hide all the dirty infrastructure details from you" mantra. The problem with this approach is simple: a generic load balancing algorithm is not the most effective method of distributing load across servers, and a cloud provider is not prescient; it has no idea what algorithm might be best for your application. The provider therefore has very little choice in which algorithm is used for load balancing, and any choice made will certainly provide availability, but will likely not be the most effective for your specific application.

So while it may sound nice that all the dirty details of load balancing and clustering are "taken care of for you" in the cloud, it's actually doing you and your application a disservice. Hiding the load balancing and/or clustering capabilities of the cloud, in this case Azure, from the developer is not necessarily the bonus Martin portrays it to be. The ability to control how requests are distributed is just as important in the cloud as it is in your own data center. As Gartner analyst Daryl Plummer points out, underutilizing resources in the cloud, as may happen when using simplistic load balancing algorithms, can be as expensive as running your own data center and may negatively impact application performance. Without some input into the configuration of load balancers and other relevant infrastructure, there isn't much you can do about that, either, but start up another instance and hope that horizontal scalability will improve performance – at the expense of your budget.

Remember that when someone else makes decisions for you, you are necessarily giving up control. That's not always a bad thing. But it's important for you to understand what you are giving up before you hand over the reins. So do your research. You may not have direct control, but you can ask about the "clustering and/or load balancing" provided and understand what effect that may – or may not – have on the performance of your application and the effectiveness of the utilization of the resources for which you are paying.

DevCentral Top5 09/25/2009
Side-projects and behind-the-scenes activities abound as the DevCentral team works towards the next goal in our plans for world domination, carefully sketched on Jeff's whiteboard. I'm glad to say that the extended DC team has been helping, as always, to keep the content flowing, and there's plenty to highlight this week. Take a look at this week's Top5:

Closing in on the iRules Contest Deadline
http://devcentral.f5.com/s/weblogs/jason/archive/2009/09/15/closing-in-on-the-irules-contest-deadline.aspx
Jason points out a very important, timely fact: it's nearly the end of your window to submit killer iRules for great prizes! The iRules contest is coming to a close. We've gotten some awesome entries so far and I've personally loved seeing them flow in from all over the world. There is still time, though. If you've got an iRule that you use that is cool and unique and warrants sharing, now is the time! Get it submitted and put your bid in for one of the pretty killer prizes offered to the winners. Check out Jason's post to get the details of what they are, where to apply, and a cool example iRule from the forums that could easily be submitted.

Despite Rumors to the Contrary F5 Remains In the Lead
http://devcentral.f5.com/s/weblogs/macvittie/archive/2009/09/25/despite-rumors-to-the-contrary-f5-remains-in-the-lead.aspx
Lori comes to you this week with an important news bulletin: F5 is still leading the charge in the ADC market, despite the mutterings you may have heard recently. With the release of the new Magic Quadrant from Gartner there is always a fair amount of posturing and hubbub. Lucky are we that our positioning continues to speak for itself, well in the lead. I'm not usually one to go in for marketing-type stuff, but the geek in me loves that we have the coolest technology at the party, bar none. This is one of the many indicators of that, and I was glad to see Lori point it out.

DevCentral Weekly Roundup Episode 104 - Guru, Guy, and My BIG-IP
http://devcentral.f5.com/s/weblogs/dcpodcast/archive/2009/09/24/devcentral-weekly-roundup-episode-104-guru-guy-and-my.aspx
This week's podcast was a particularly cool one, thanks to the caller who decided to join us. A few weeks ago we started dabbling in live-streaming our podcasts as we record them. This week Joe added the functionality to allow users to call in and chat with us in real time, while we record. I was pleasantly surprised that we had a community member do precisely that, and share with us what they're currently doing with our tech. If you ever doubt that DevCentral is a far-reaching community with active members, an impromptu call from an international user to chat with us about what they're doing should cure what ails you.

Turn Your Podcast Into An Interactive Live Streaming Experience
http://devcentral.f5.com/s/weblogs/Joe/archive/2009/09/25/turn-your-podcast-into-a-interactive-live-streaming-experience.aspx
As I mentioned above, over the past few weeks we've been adding functionality to our podcasts. This once-simple process has become increasingly complex as we've tried to leverage new and cool features to make the podcasts more engaging and interactive for our users. With Joe at the helm we've incorporated several tools that make this possible. Today he put out a blog post detailing just how these all work together and exactly how it is that he crafted this bigger, better mousetrap. I found it quite interesting, and it's a neat peek behind the curtains into one of the things we do here in DC Land.
Reduce your Risk
http://devcentral.f5.com/s/weblogs/psilva/archive/2009/09/24/reduce-your-risk.aspx
In Pete's 13th of 26 short topics about security he discusses mitigation. He touches on the fact that you should generally assume, if you're dealing with a publicly facing application, that you will eventually be the target of some malicious activity. He also details a few ways in which we all help to mitigate those risks on a daily basis. From firewalls to strong passwords to access cards to secure facilities, there are many hoops we all jump through daily, whether we think about it or not, to try and mitigate the risks inherent in today's IT world. This series is an interesting one and the pieces are easy to digest. I intend to keep following it as it moves towards topic #26, and I recommend you do the same.

There you have it, my Top5 picks from DevCentral for the week. Hopefully you enjoyed them, and I'll be back with more soon. Be sure to check out previous editions of the Top5 here - http://devcentral.f5.com/s/Default.aspx?tabid=101

#Colin

Infrastructure 2.0: As a matter of fact that isn't what it means
We've been talking a lot about the benefits of Infrastructure 2.0, or dynamic infrastructure: why it's necessary and what's required to make it all work. But we've never really laid out what it is, and that's beginning to lead to some misconceptions. As Daryl Plummer of Gartner pointed out recently, the definition of cloud computing is still, well, cloudy. Multiple experts can't agree on the definition, and the same is quickly becoming true of dynamic infrastructure. That's no surprise; we're at the beginning of what Gartner would call the hype cycle for both concepts, so there's some work to be done on fleshing out exactly what each means.

That dynamic infrastructure is tied to cloud computing is no surprise, either, as dynamic infrastructure is very much an enabler of such elastic models of application deployment. But dynamic infrastructure is applicable to all kinds of models of application deployment: so-called legacy deployments, cloud computing and its many faces, and likely new models that have yet to be defined.

The biggest confusion out there seems to be that dynamic infrastructure is being viewed as Infrastructure as a Service (IaaS). Dynamic infrastructure is not the same thing as IaaS. IaaS is a deployment model in which application infrastructure resides elsewhere, in the cloud, and is leveraged by organizations desiring an affordable option for scalability that reduces operating and capital expenses by sharing compute resources "out there" somewhere, at a provider. Dynamic infrastructure is very much a foundational technology for IaaS, but it is not, in and of itself, IaaS. Indeed, simply providing network or application network solution services "as a service" has never required dynamic infrastructure. CDNs (Content Delivery Networks), managed VPNs, secure remote access, and DNS services have long been available as services to be used by organizations as a means by which they can employ a variety of "infrastructure services" without the capital expenditure in hardware and the time/effort required to configure, deploy, and maintain such solutions. Simply residing "in the cloud" is not enough. A CDN is not "dynamic infrastructure," nor are hosted DNS servers. They are Infrastructure 1.0, legacy infrastructure, whose very nature is such that physical location has never been important to their deployment. Indeed, these services were designed without physical location as a requirement, as their core functions are supposed to work in a distributed, location-agnostic manner.

Dynamic infrastructure is an evolution of traditional network and application network solutions to be more adaptable, to support integration with its environment and other foundational technologies, and to be aware of context (connectivity intelligence).

Adaptable

It is able to understand its environment and react to conditions in that environment in order to provide scale, security, and optimal performance for applications. This adaptability comes in many forms, from the ability to make management and configuration changes on the fly as necessary to providing the means by which administrators and developers can manually or automatically make changes to the way in which applications are being delivered. The configuration and policies applied by dynamic infrastructure are not static; they are able to change based on predefined criteria or events that occur in the environment such that the security, scalability, or performance of an application and its environs are preserved.
Some solutions implement this capability through event-driven architectures, with events such as "IP_ADDRESS_ASSIGNED" or "HTTP_REQUEST_MADE". Some provide network-side scripting capabilities to extend the ability to react and adapt to situations requiring flexibility, while others provide the means by which third-party solutions can be deployed on the solution to address the need for application- and user-specific capabilities at specific touch-points in the architecture.

Context Aware

Dynamic infrastructure is able to understand the context that surrounds an application, its deployment environment, and its users, and to apply relevant policies based on that information. Being context aware means being able to recognize that a user accessing Application X from a coffee shop has different needs than the same user accessing Application X from home or from the corporate office. It is able to recognize that a user accessing an application over a WAN or high-latency connection requires different policies than one accessing that application via a LAN or from close physical proximity over the Internet. Being context aware means being able to recognize the current conditions of the network and the application, and then leveraging its adaptable nature to choose the right policies at the time the request is made such that the application is delivered most efficiently and quickly.

Collaborative

Dynamic infrastructure is capable of integrating with other application network and network infrastructure, as well as the management and control solutions required to manage both the infrastructure and the applications it is tasked with delivering. The integration capabilities of dynamic infrastructure require that the solution be able to direct and take direction from other solutions such that changes in the infrastructure at all layers of the stack can be recognized and acted upon. This integration allows network and application network solutions to leverage their awareness of context in a way that ensures they are adaptable and can support the delivery of applications in an elastic, flexible manner. Most solutions use a standards-based control plane through which they can be integrated with other systems to provide the connectivity intelligence necessary to implement IaaS, virtualized architectures, and other cloud computing models in such a way that the perceived benefits of reduced operating expenses and increased productivity through automation can actually be realized.

These three properties of dynamic infrastructure work together, in concert, to provide the connectivity intelligence and the ability to act on information gathered through that intelligence. All three together form the basis for a fluid, adaptable, dynamic application infrastructure foundation on which emerging compute models such as cloud computing and virtualized architectures can be implemented. But dynamic infrastructure is not exclusively tied to emerging compute models and next-generation application architectures. Dynamic infrastructure can be leveraged to provide benefit to traditional architectures, as well. The connectivity intelligence and adaptable nature of dynamic infrastructure improve the security, availability, and performance of applications in so-called legacy architectures, too. Dynamic infrastructure is a set of capabilities implemented by network and application network solutions that provide the means by which an organization can improve the efficiency of its application delivery and network architecture.
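To give the context-aware property a bit of shape, here is a small, purely illustrative Python sketch (not an actual F5 or vendor API; the policy names and thresholds are hypothetical) showing how delivery policies might be selected from the context surrounding a request: user location, link quality, and current application conditions.

```python
# Illustrative sketch of context-aware policy selection: the same user and
# application get different delivery policies depending on location, link
# quality, and current application health. All names are hypothetical.

def select_policies(location, link_latency_ms, app_response_ms):
    policies = []
    # Where the request comes from drives the security posture.
    if location in ("coffee_shop", "public_wifi"):
        policies += ["force_ssl", "strict_auth"]
    elif location == "corporate_office":
        policies += ["standard_auth"]
    # High-latency links (WAN, mobile) get acceleration and compression.
    if link_latency_ms > 100:
        policies += ["compression", "tcp_optimization"]
    # If the application itself is slow right now, shift new requests elsewhere.
    if app_response_ms > 500:
        policies += ["route_to_alternate_pool"]
    return policies

print(select_policies("coffee_shop", link_latency_ms=180, app_response_ms=220))
# ['force_ssl', 'strict_auth', 'compression', 'tcp_optimization']
```

The adaptable and collaborative properties are what make this practical: the infrastructure has to be able to change those policies on the fly and to learn the conditions (latency, response time, location) from the systems around it.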
That's why it's just not accurate to equate Infrastructure 2.0/dynamic infrastructure with Infrastructure as a Service cloud computing models. The former is a description of the next generation of network and application network infrastructure solutions: the evolution from static, brittle solutions to fluid, dynamic, adaptable ones. The latter is a deployment model that, while likely built atop dynamic infrastructure solutions, is not wholly comprised of dynamic infrastructure. IaaS is not a product, it's a service. Dynamic infrastructure is a product that may or may not be delivered "as a service". Glad we got that straightened out.

The death of SOA has been greatly exaggerated
Amidst the hype of cloud computing and virtualization has come the publication of several research notes regarding SOA. Adoption, they say, is slowing. Oh noes! Break out the generators, stock up on water and canned food! An article from JavaWorld quotes research firm Gartner as saying:

The number of organizations planning to adopt SOA for the first time decreased to 25 percent; it had been 53 percent in last year's survey. Also, the number of organizations with no plans to adopt SOA doubled from 7 percent in 2007 to 16 percent in 2008. This dramatic falloff has been happening since the beginning of 2008, Gartner said.

Some have reacted with much drama to the news, as if the reports indicate that SOA has lost its shine and is disappearing into the realm of legacy technology along with COBOL and fat clients and CORBA. Not true at all. The reports indicate a drop in adoption of SOA, not in the use of SOA. That should be unsurprising. At some point the number of organizations who have implemented SOA should reach critical mass, and the number of new organizations adopting the technology will slow down simply because there are fewer of them than there are folks who have already adopted SOA.

As Don pointed out when this discussion came up, the economy is factoring in heavily for IT and technology, and the percentages cited by Gartner are not nearly as bad as they look when applied to real numbers. For example, if you ask 100 organizations about their plans for SOA and 16 say "we're not doing anything with it next year," that doesn't sound nearly as impressive as 16%, especially considering it means that 84% are going to be doing something with SOA next year. As with most surveys and polls, it's all about how the numbers are presented. Statistics are the devil's playground.

It is also true that most organizations don't consider that by adopting or piloting cloud computing in the next year they will likely be taking advantage of SOA. Whether it's because their public cloud computing provider requires the use of Web Services (SOA) to deploy and manage applications in the cloud, or because they are building a private cloud environment and will utilize service-enabled APIs and SOA to integrate virtualization technology with application delivery solutions, SOA remains an integral part of the IT equation.

SOA simply isn't the paradigm shift it was five years ago. Organizations who've implemented SOA are still using it, and it's still growing in their organizations as they continue to build new functionality and features for their applications, and as they integrate new partners and distributors and applications from inside and outside the data center. As organizations continue to get comfortable with SOA and their implementations, they will inevitably look to governance and management and delivery solutions with which to better manage the architecture. SOA is not dead yet; it's merely reached the beginning of its productive life, and if the benefits of SOA are real (and they are), then organizations are likely to start truly realizing the return on their investments.

Related articles by Zemanta
HP puts more automation into SOA governance
Gartner reports slowdown in SOA adoption
Gartner picks tech top 10 for 2009
SOA growth projections shrinking

How Sears Could Have Used the Cloud to Stay Available Black Friday
The predictions of the death of online shopping this holiday season were, apparently, greatly exaggerated. As has been reported, Sears, along with several other well-known retailers, was a victim of heavy traffic on Black Friday. One wonders if the reports of a dismal shopping season this year due to economic concerns led retailers to believe that there would be no seasonal rush to online sites and that preparation to deal with sudden spikes in traffic was therefore unnecessary.

Most of the 63 objects (375 KB of total data) comprising the sears.com home page are served from sears.com and are either images, scripts, or stylesheets. The rest of their site is similar, with static data comprising a large portion of the objects. That's a lot of static data being served, and a lot of connections required on the servers just for one page. Not knowing Sears' internal architecture, it's quite possible they are already using application delivery and acceleration solutions to ensure availability and responsiveness of their site. If they aren't, they should be, because even the simple connection optimizations available in today's application delivery controllers would likely have drastically reduced the burden on servers and increased the capacity of their entire infrastructure.

But let's assume they are already using application delivery to its fullest and simply expended all possible capacity on their servers, despite their best efforts, due to the unexpectedly high volume of visitors. It happens. After all, server resources are limited in the data center, and when the servers are full up, they're full up. Assuming that Sears, like most IT shops, isn't willing to purchase additional hardware and incur the associated management, power, and maintenance costs over the entire year simply to handle a seasonal rush, they still could have prepared for the onslaught by taking advantage of cloud computing.

Cloudbursting is an obvious solution: visitors who pushed Sears' servers over capacity would have been automatically directed, via global load balancing techniques, to a cloud-hosted version of their site. Not only could they have managed to stay available, this would have also improved performance of their site for all visitors, as cloudbursting can use a wide array of variables to determine when requests should be directed to the cloud, including performance-based parameters.

A second option would have been a hybrid cloud model, where certain files and objects are served from the local data center while others are served from the cloud. Instead of serving up static stylesheets and images from sears.com internal servers, they could easily have been hosted in the cloud. Doing so would translate into fewer requests to sears.com internal servers, which reduces the processing power required and results in higher server capacity.

I suppose a third option would have been to commit fully to the cloud and move their entire application infrastructure there, but even though adoption appears to be imminent for many enterprises, according to attendees at the Gartner Data Center Conference, 2008 is certainly not "the year of the cloud" and there are still quite a few kinks in full adoption plans that need to be ironed out before folks can commit fully, such as compliance and integration concerns. Still, there are ways that Sears, and any organization with a web presence, could take advantage of the cloud without committing fully in order to ensure availability under exceedingly high volume.
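As a rough sketch of the cloudbursting decision described above (hypothetical thresholds and hostnames, written in Python rather than as an actual GSLB configuration), a global load balancing tier might resolve new visitors to a cloud-hosted copy of the site whenever the data center nears capacity or stops meeting its performance target:

```python
# Illustrative cloudbursting sketch: when the data center nears capacity or
# response times degrade, direct new visitors to a cloud-hosted copy of the
# site via global load balancing. Endpoints and thresholds are hypothetical.

DATACENTER = "www.datacenter.example.com"
CLOUD = "www.cloud-burst.example.com"

def resolve_site(active_connections, max_connections,
                 avg_response_ms, response_sla_ms=800):
    utilization = active_connections / max_connections
    # Burst to the cloud when capacity is nearly exhausted...
    if utilization > 0.90:
        return CLOUD
    # ...or when the origin is no longer meeting its performance target.
    if avg_response_ms > response_sla_ms:
        return CLOUD
    return DATACENTER

# Black Friday morning: 9,500 of 10,000 connections in use, pages slowing down.
print(resolve_site(9500, 10000, avg_response_ms=1200))  # cloud copy of the site
print(resolve_site(4000, 10000, avg_response_ms=300))   # normal day: data center
```

The same kind of check applied per object type rather than per site is essentially the hybrid model: static stylesheets and images resolve to the cloud while dynamic pages stay in the data center.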
It just takes some forethought and planning. Yeah, I'm thinking it too, but I'm not going to say it either.

Related articles by Zemanta
Online retailers overloaded on Black Friday
Online-only outlets see Black Friday boost over 2007
Sears.com out on Black Friday [Breakdowns]
The Context-Aware Cloud
Top 10 Reasons for NOT Using a Cloud
Half of Enterprises See Cloud Presence by 2010

Infrastructure 2.0: Aligning the network with the business (and the rest of IT)
When SOA was the hot topic of the day (not that long ago), everyone was pumped up about the ability to finally align IT with the business. Reusability, agility, and risk mitigation were benefits that would enable the business itself to be more agile and react dynamically to the constant maelstrom that is "the market". But only half of IT saw those benefits: the application half. Even though pundits tried to remind folks that the "A" in SOA stood for "architecture", and that it necessarily included more than just applications, still the primary beneficiaries of SOA have been applications and, through their newfound agility and reusability, the business. The network has remained, for many, just as brittle and unchanging (and thus not agile) as it has ever been, mired in its own "hardwired" architectures, unable to flex or extend its abilities to support the applications it is tasked with delivering. And no one seemed to mind, really, because the benefits of SOA were being realized anyway, and no one could really quantify the benefits of also rearchitecting the network infrastructure to be as flexible and agile as the application infrastructure.

But along come virtualization and cloud computing, and an epiphany was had by many: the network and application delivery infrastructure must be as agile and flexible as the application infrastructure in order to achieve the full measure of benefits from this newest technology. Without an application delivery infrastructure that can adapt just as dynamically, the infrastructure becomes the wall between a successful deployment and failure.

In order to truly align the network with the business - and the other half of IT - it becomes necessary to dig deeper into the network stack and really take a look at how you're delivering those agile applications and services. It's important to consider the ramifications of a static, brittle delivery infrastructure on the successful deployment and delivery of virtually hosted applications and services. It's necessary to look at your delivery infrastructure and evaluate its abilities in terms of reusability, scalability, and dynamism. Analyst and research firm Gartner said it as succinctly as it can be said: You Can't Do Cloud Computing Without the Right Cloud (Network). The same holds true for virtualization efforts: you can't efficiently deliver virtualized applications without the right network infrastructure.

Until your network and application delivery infrastructure is as agile and reusable as your application infrastructure, you won't be able to align all of IT with the business. Until you have a completely agile architecture that spans all of IT, you're not truly aligned with the business.