iaas
Application Security in the Cloud is still Cloudy
#infosec #cloud An interesting statistic raises an even more interesting question.

IBM shared a security-related infographic via Twitter recently, and in looking through the statistics (most of which are attributed to 2011 research, by the way) I happened to catch a statement claiming "The average company is attacked 60,000 times a day." IBM notes that "average" is the average for the study, which consisted mostly of large enterprises, and while I'm certain there are still experts who would dispute this claim (it's higher! it's lower! that's only an average of a subset of a selection of a ...), for me it raised an interesting question with respect to attacks and cloud-based applications: if your application is deployed in the cloud, how do you know if or when it's being attacked?

Perhaps more importantly, though, is whether or not you should know. After all, "the cloud" is taking care of all that infrastructure and networky stuff under the covers for you, right? And part of that "stuff" is addressing attacks.

NETWORK versus APPLICATION

Certainly there are many instances of organizations pointing out that their cloud provider is, in fact, dealing with network layer attacks. We could spend an entire post detailing attacks (there've been many) but that's not the point. In general, the attacks that cloud providers are able to detect and thus address (even reactively, if not proactively) are network layer attacks, not application layer attacks. You know, the kind of attacks that target vulnerabilities in your application or its platform (web / application server) and sometimes in its network stack (think SYN floods and such).

Cloud providers would (logically and correctly) point out that they have no control over the application or its platform (unless it's a PaaS or SaaS provider) and thus it is not their responsibility to monitor for attacks against those components. And they'd be (mostly) right. Even in the cloud, the security of your application is still your responsibility, because it's your application. Potential vulnerabilities in the application layer – whether data, logic, behavior, or code based – are yours to address, not the provider's.

Where things get murky is at the protocol (HTTP and TCP) layers [pdf], where exploitation requiring relatively few resources can successfully execute a DDoS attack against your application, and yet there is virtually no way for the application instance to recognize such an attack. That's because detecting some types of application layer attacks requires visibility into both client characteristics and client behavior. A client connecting from a 100 Mbps capable LAN that purposefully uses a tiny TCP window size, or that seems to be receiving data at a rate far below what it is capable of, just might be part of an attempt to consume and tie up application resources – and with enough such clients, to successfully deny service to legitimate clients. This kind of attack (which is often part of a larger, coordinated attack designed to distract organizations from the true intent) is not detectable by an application instance unless it has end-to-end visibility. In the cloud, that's just not possible. Visibility into client characteristics is typically no deeper than a client IP address stuffed into a custom HTTP header. You can't infer an attack from an IP address, and transfer speed means little without other variables against which to measure the normality (or abnormality) of the behavior.
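To make that visibility gap concrete, here is a minimal, hypothetical sketch (the names and thresholds are invented for illustration) of the best an application instance can do on its own: track how quickly each client drains responses and flag outliers. Note what it cannot see, namely the client's advertised TCP window and actual link capacity, which is exactly why it cannot tell a slow mobile user from a deliberate slow-read attacker.

```python
import time
from collections import defaultdict

SLOW_RATE_BPS = 10_000     # assumed threshold: under ~10 KB/s sustained
MIN_OBSERVATION_SECS = 30  # don't judge a connection too early

class ConnStats:
    def __init__(self):
        self.started = time.monotonic()
        self.bytes_sent = 0

stats = defaultdict(ConnStats)

def record_send(client_ip, nbytes):
    """Call after each successful send() to a client."""
    stats[client_ip].bytes_sent += nbytes

def suspiciously_slow_clients():
    """Return clients draining data far below the assumed floor."""
    now = time.monotonic()
    flagged = []
    for ip, s in stats.items():
        elapsed = now - s.started
        if elapsed >= MIN_OBSERVATION_SECS and s.bytes_sent / elapsed < SLOW_RATE_BPS:
            flagged.append(ip)
    return flagged
```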
Even deploying traditional security services (a web application firewall, an application delivery firewall) may not provide the visibility required, because those services are being deployed atop the abstracted, service-based infrastructure and are treated more like application instances than like infrastructure services in need of visibility into the network. Certainly these services will assist in mitigating many application layer attacks that focus on logic, platform, or data exploitation, but they may not be able to fully analyze the protocol layer because of the many layers of abstraction between them and the variables they need.

Thus, while aspects of application security in the cloud are certainly the responsibility of the developer (or organization), there are other aspects that require the assistance of the provider – either in the form of services that have access to the necessary information, or a means of sharing that same information with services deployed atop the infrastructure. Which leaves us where we are today: application security in the cloud is still cloudy. It's definitely an area in which there is room for new services and solutions.
The Future of Hybrid Cloud

You keep using that word. I do not think it means what you think it means...

An interesting and almost ancillary point was made during a recent #cloudtalk hosted by VMware vCloud with respect to the definition of "hybrid" cloud. Sure, it implies some level of integration, but how much integration is required to be considered a hybrid cloud?

The way I see it, there has to be some level of integration that supports the ability to automate something – either resources or a process – in order for an architecture to be considered a hybrid cloud. A "hybrid" anything, after all, is based on the premise of joining two things together to form something new. Simply using Salesforce.com and Ceridian for specific business functions doesn't seem to qualify. They aren't necessarily integrated (joined) in any way to the corporate systems or even to processes that execute within the corporate environs. Thus, it seems to me that in order to truly be a "hybrid" cloud, there must be some level of integration. Perhaps that's simply at the process level, as is the case with SaaS providers when identity is federated as a means to reassert control over access as well as potentially provide single sign-on services.

Similarly, merely launching a development or test application in a public IaaS environment doesn't really "join" anything, does it? To be classified as "hybrid" one would expect there to be network or resource integration, via such emerging technologies as cloud bridges and gateways. The same is true internally with SDN and existing network technologies. Integration must be more than "able to run in the environment". There must be some level of orchestration and collaboration between the networking models in order to consider it "hybrid".

From that perspective, the future of hybrid cloud seems to rely upon the existence of a number of different technological solutions:

- Cloud bridges
- Cloud gateways
- Cloud brokers
- SDN (both application layer and network layer)
- APIs (to promote the integration of networks and resources)
- Standards (such as SAML, to enable orchestration and collaboration at the application layer)

Put these technologies all together and you get what seems to be the "future" of a hybrid cloud: SaaS, IaaS, SDN, and traditional technology integrated at some layer that enables both the business and operations to choose the right environment for the task at hand at the time they need it. In other words, our "network" diagrams of the future will necessarily need to extend beyond the traditional data center perimeter and encompass both SaaS and IaaS environments. That means as we move forward, IT and operations need to consider how such environments will fit into and with existing solutions, as well as how emerging solutions will enable this type of hybrid architecture to come to fruition.

Yes, you did notice I left out PaaS. Isn't that interesting?
Does Cloud Solve or Increase the 'Four Pillars' Problem?

It has long been said – often by this author – that there are four pillars to application performance:

- Memory
- CPU
- Network
- Storage

As soon as you resolve one in response to application response times, another becomes the bottleneck, even if you are not hitting that bottleneck yet. For a bit more detail: "memory consumption," because it impacts swapping in modern operating systems; "CPU utilization," because regardless of OS there is a magic line after which performance degrades radically; "network throughput," because applications have to communicate over the network, and blocking or not (almost all coding for networks today is blocking), the information requested over the network is necessary and will eventually block code from continuing to execute; and "storage," because IOPS matter when writing to or reading from disk (or when the OS swaps memory out and back in).

These four have long been relatively easy to track, and the relationship is pretty easy to spot: when you resolve one problem, one of the others becomes the "most dangerous" to application performance. But historically, you've always had access to the hardware. Even in highly virtualized environments, these items could be considered at both the host and the guest level, because both individual VMs and the entire system matter.

When moving to the cloud, the four pillars become much less manageable. How much less depends a lot upon your cloud provider, and on how you define "cloud". Put in simple terms, if you are suddenly struck blind, that does not change what's in front of you, only your ability to perceive it. In the PaaS world, you have only the tools the provider offers to measure these things, and you are urged not to think about the impact that host machines may have on your app. But they do have an impact. In an IaaS world you have somewhat more insight, but as others have pointed out, less control than in your datacenter.

[Picture courtesy of Stanley Rabinowitz, Math Pro Press.]

In the SaaS world, assuming you include that in "cloud", you have zero control and very little insight. If your app is not performing, you'll have to talk to the vendor's staff to (hopefully) get them to resolve issues.

But is the problem any worse in the cloud than in the datacenter? I would have to argue no. Your ability to touch and feel the bits is reduced, but the actual problems are not. In a pure-play public cloud deployment, the performance of an application is heavily dependent upon your vendor, but the top-tier vendors (Amazon springs to mind) can spin up copies as needed to reduce workload. This is not a far cry from one common performance trick used in highly virtualized environments: bring up another VM on another server and add it to load balancing. If the app is poorly designed, the net result is not that you're buying servers to host instances; it is instead that you're buying instances directly. This has implications for IT. The reduced up-front cost of using an inefficient app – no matter which of the four pillars it is inefficient in – means that IT shops are more likely to tolerate inefficiency, even though in the long run the cost of paying monthly may be far more than the cost of purchasing a new server, simply because the budget pain is reduced.

There are a lot of companies out there offering information about cloud deployments that can help you see if you feel blind. Fair disclosure: F5 is one of them, and I work for F5. That's all you're going to hear on that topic in this blog.
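As a rough illustration of how visible these pillars are when you can actually sample them, here's a minimal sketch using Python's psutil library. The final comparison is naive on purpose; in a PaaS or SaaS world you can't even collect these numbers, and in IaaS they describe the guest, not the physical host underneath it.

```python
import psutil

def pillar_snapshot():
    """Sample the four pillars on a machine we can actually see."""
    net = psutil.net_io_counters()
    disk = psutil.disk_io_counters()
    return {
        "memory_pct": psutil.virtual_memory().percent,
        "cpu_pct": psutil.cpu_percent(interval=1),  # averaged over 1 second
        "net_bytes": net.bytes_sent + net.bytes_recv,
        "disk_ops": disk.read_count + disk.write_count,
    }

snap = pillar_snapshot()
# Naive first guess at the current bottleneck: the busier of the two
# utilization-style metrics (network and disk need a time window to judge).
print(max(("memory_pct", "cpu_pct"), key=snap.get), "looks most constrained")
```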
While knowing does not always directly correlate to taking action, and there is some information that only the cloud provider could offer you, knowing where performance bottlenecks are does at least give some level of decision-making back to IT staff. If an application is performing poorly, looking into what appears to be happening (you can tell network bandwidth, VM CPU usage, VM IOPS, etc., but not what's happening on the physical hardware) can inform decision-making about how to contain the OpEx costs of cloud.

Internal cloud is a much easier play: you still have access to all the information you had before cloud came along, and generally the investigation is similar to that used in a highly virtualized environment. From a troubleshooting perspective, it's much the same. The key with both virtualization and internal (private) clouds is that you're aiming for maximum utilization of resources, so you will have to watch for the bottlenecks more closely – you're "closer to the edge" of performance problems, because you designed it that way. A comprehensive logging and monitoring environment can go a long way in all cloud and virtualization environments toward keeping on top of issues that crop up, particularly in a large datacenter with many apps running. And developer education on how not to be a resource hog is helpful for internally developed apps. For externally developed apps, the best you can do is ask for sizing information and then test those assumptions before buying.

Sometimes, cloud simply is the right choice. If network bandwidth is the prime limiting factor, and your organization can accept the perceived security/compliance risks, for example, the cloud is an easy solution – bandwidth in the cloud is either not limited, or limited only by your willingness to write a monthly check to cover usage. Either way, it's not an Internet connection upgrade, which can be dastardly expensive not just at install, but month after month.

Keep rocking it. Get the visibility you need; don't worry about what you don't need.
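On "test those assumptions before buying": even a crude smoke test will show whether a vendor's sizing claims survive contact with concurrency. A minimal sketch using only the standard library (the URL and load figures are placeholders; a real evaluation would use a proper load testing tool and production-shaped traffic):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "http://staging.example.com/app"  # placeholder endpoint
CONCURRENCY, REQUESTS = 20, 200            # assumed, not vendor-supplied

def timed_get(_):
    start = time.monotonic()
    with urlopen(TARGET) as resp:
        resp.read()
    return time.monotonic() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_get, range(REQUESTS)))

print(f"median={latencies[len(latencies) // 2]:.3f}s "
      f"p95={latencies[int(len(latencies) * 0.95)]:.3f}s")
```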
Related Articles and Blogs:
- Don MacVittie - Load Balancing For Developers
- Advanced Load Balancing For Developers. The Network Dev Tool
- Load Balancers for Developers – ADCs Wan Optimization ...
- Intro to Load Balancing for Developers – How they work
- Intro to Load Balancing for Developers – The Gotchas
- Intro to Load Balancing for Developers – The Algorithms
- Load Balancing For Developers: Security and TCP Optimizations
- Advanced Load Balancers for Developers: ADCs - The Code
- Advanced Load Balancing For Developers: Virtual Benefits
- Don MacVittie - ADCs for Developers
- Devops Proverb: Process Practice Makes Perfect
- Devops is Not All About Automation
- 1024 Words: Why Devops is Hard
- Will DevOps Fork?
- DevOps. It's in the Culture, Not Tech.
- Lori MacVittie - Development and General
- Devops: Controlling Application Release Cycles to Avoid the ...
- An Aristotlean Approach to Devops and Infrastructure Integration
- How to Build a Silo Faster: Not Enough Ops in your Devops

The Challenges of Cloud: Infrastructure Diaspora

#webperf #cloud With performance rising as a concern for cloud computing adoption, the disparity between services in the data center and the cloud needs to be addressed.

One of the negatives of cloud computing is its one-size-fits-all approach to infrastructure. A single load balancing system (and subsequently configuration) is considered acceptable for all applications. After all, it's just about distributing requests, isn't it? Except it isn't, and neither are the myriad other infrastructure services that provide not only customized services for applications but additional benefits not currently offered by what are commoditized versions of functionality.

Even assuming an organization is using a fairly non-customized load balancer, there is a disparity between the algorithms supported by the industry and those supported today by cloud computing providers. If you don't think something as simple as the choice of a load balancing algorithm has an impact on availability and performance, think again. The reason there's a list of more than six "industry standard" algorithms is the maturation of distribution algorithms over time. Different methods are better suited to specific types of applications and usage patterns, while those same algorithms are wholly unsuited for others. Determining the best algorithm is part of the process of deploying such solutions, and one that's completely ignored by providers of cloud computing load balancing services.

Similarly, organizations that have deployed web application firewall or web filtering (web secure gateway in today's vernacular) solutions recognize that the policies created and enforced by such solutions are not just application-specific but URI-specific, making shared, generic configurations almost completely useless. Such solutions must be deployed and configured on a per-application basis at a minimum, and the time and effort involved in doing so is generally non-trivial (though collaborative efforts around Persistent Threat Management offer a potential solution for drastically reducing the time required to configure WAF solutions for the most common threats).

NOT JUST COSTS, CAPABILITIES

Thus when organizations look outward to the cloud, it's not just a matter of costs but also of capabilities. Replication of infrastructure services is beginning to be recognized as an imperative. Given the rising importance of performance as a concern for cloud computing deployments, the impact of infrastructure diaspora on application performance should be treated with the seriousness it deserves.

"I don't feel that sticking your servers out there and saying, 'OK, you've got cloud now,' is the way to go," said Tom Hollingsworth, a senior network engineer with United Systems, an Oklahoma City-based value-added reseller (VAR). "I want to replicate [in the cloud with] as much functionality [customers] have for load balancers, firewalls and things like that." Hollingsworth described a hypothetical situation where an enterprise has a mail server that has been tuned to a specific in-house load balancer and then wants to move that mail server to an IaaS provider that offers fundamentally different load balancing capabilities. Attempting to recreate those Layer 4-7 services from a data center to the cloud is complex, time-consuming, and difficult to manage once you've got it up and running.
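To ground the earlier point about algorithm choice, here is a minimal, hypothetical sketch of two of the standard methods. When request costs vary (long-lived mail or database connections, say), round robin happily piles new work onto an already busy server, which is precisely why a provider offering a single fixed algorithm is a real constraint:

```python
import itertools

servers = ["app1", "app2", "app3"]
active = {s: 0 for s in servers}   # current connection count per server
_rr = itertools.cycle(servers)

def round_robin():
    """Rotate through servers regardless of how busy each one is."""
    return next(_rr)

def least_connections():
    """Prefer the server handling the fewest connections right now:
    better when request costs vary widely (e.g., long-lived sessions)."""
    return min(active, key=active.get)

# Simulate app1 getting stuck with several slow requests...
active["app1"] = 5
print(round_robin())        # first rotation still hands app1 more work
print(least_connections())  # routes around the busy server (app2 or app3)
```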
"Many IaaS providers sell Layer 4-7 cloud networking services (firewalls, load balancers, application accelerators) to customers, but these services tend to be monolithic, feature-limited and in some cases proprietary." -- Layer 4-7 cloud networking still scarce in IaaS market

There are myriad options in the TCP RFCs that enable organizations to tune networking stacks to improve performance for a given application and its unique usage patterns. TCP window sizes, turning Nagle's algorithm on or off, and controlling time-out values have a significant impact on not only the performance but also the capacity of web applications. Eliminating the ability to tweak and tune these settings in a cloud computing environment removes a very important set of tools upon which the enterprise relies to address performance issues in the data center.

This infrastructure diaspora has other consequences as well, including the introduction of a separate set of operational processes that must be managed alongside existing procedures. This burdens operations with more management and monitoring duties, and introduces additional risk in the form of misconfiguration or missteps in deployment processes.

While some application delivery vendors have addressed this disparity with cloud-enabled ADN offerings, these are still not universally available or supported across all cloud computing offerings. Similarly, some customers will have no complementary offerings in their own data center (if they have a data center) but will still experience the same performance-degrading scenarios, which could be addressed by more robust Layer 4-7 services in cloud computing environments.

The challenge for providers is balancing the costs of their services against the costs to organizations that lose revenue when applications exhibit poor performance in their environment. The cost-benefit analysis for enterprises will certainly include this value, and thus providers who move to address the use of more robust application delivery services as a means to redress potential performance problems will be better positioned to vie for enterprise customers for whom performance is as important – or more so – than other inhibiting concerns.
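To illustrate the TCP tuning point above, these are the kinds of per-application socket adjustments an enterprise makes in its own data center and loses behind a one-size-fits-all cloud service. A minimal sketch; the values are illustrative assumptions, not recommendations, and kernel-level knobs such as congestion control reach deeper still:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm: trade bandwidth efficiency for lower latency
# on small, chatty writes (e.g., many short responses).
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Grow the receive buffer, which bounds the advertised TCP window;
# this matters on high-bandwidth, high-latency paths. Illustrative value.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 262144)

# Bound how long connects and reads may block, instead of the OS default.
sock.settimeout(5.0)
```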
Referenced blogs & articles:
- It's 2am: Do You Know What Algorithm Your Load Balancer is Using?
- Persistent Threat Management
- Layer 4-7 cloud networking still scarce in IaaS market
- F5 Friday: Avoiding the Operational Debt of Cloud
- The Conspecific Hybrid Cloud
- Complexity Drives Consolidation
- Cloud Computing and the Truth About SLAs
- Curing the Cloud Performance Arrhythmia

F5 Friday: Avoiding the Operational Debt of Cloud

#F5CLP F5 Cloud Licensing Program enables #cloud providers to differentiate and accelerate advanced infrastructure service offerings while reducing operational debt for the enterprise.

If you ask three different people why they are adopting cloud, it's likely you'll get three different reasons. The rationale for adopting cloud – whether private or public – depends entirely on the strategy IT has in place to address the unique combination of operational and business requirements for their organizations. But one thing seems clear through all these surveys: cloud is here to stay, in one form or another. Those who are "going private" today may "go hybrid" tomorrow. Those who are "in the cloud" today may reverse direction and decide to, as Alan Leinwand puts it so well, "own the base and rent the spike" by going "hybrid." What the future seems to hold is hybrid architectures, with use of public and private cloud mixed together to provide the best of both worlds.

This state of possibility certainly leaves both enterprises and service providers somewhat on edge. How can service providers entice the enterprise? How do they prove their services are above and beyond the other thousand-or-so offerings out there? How does the enterprise go about choosing an IaaS partner (and have no doubts, enterprises want partners, not providers, when it comes to managing their data and applications)? How do they ensure the operational efficiency gained through their private cloud implementation isn't lost to disjointed processes imposed by the differences in core application delivery services in public offerings? How do organizations avoid going into operational debt from managing two environments with two different sets of management tools and solutions? Architectural consistency is key to the answer, achieved through a fully cloud-enabled application delivery network.

The F5 Cloud Licensing Program

Whether the goal is scalability, security, better performance, availability, consolidation, or reducing costs, F5 enterprise customers have achieved these goals using F5 solutions. The next step is ensuring these same goals can be achieved in a public cloud, whether the implementation is pure public or hybrid cloud. To do that requires enabling cloud service providers to offer a complete application delivery network (ADN) in the cloud, with a cost structure appropriate to a utility service model. Given that 43% of respondents in a Cloud Computing Outlook 2011 survey indicated "lack of training" was inhibiting their cloud adoption, being able to offer services that customers are already familiar with is important. That's the impetus behind the creation of the F5 Cloud Licensing Program, a new service-provider-focused licensing model for the industry's only complete cloud-enabled ADN. With services encompassing the entire application delivery chain – from security to acceleration to access control – this offering brings to the table the ability to maintain operational consistency from the data center into the cloud, without compromising on the infrastructure services needed by enterprises to take advantage of public cloud models.

- The Conspecific Hybrid Cloud
- Complexity Drives Consolidation
- Cloud Bursting: Gateway Drug for Hybrid Cloud
- Ecosystems are Always in Flux
- The Pythagorean Theorem of Operational Risk
- At the Intersection of Cloud and Control…
- Cloud Computing and the Truth About SLAs
The Future of Cloud: Infrastructure as a Platform

Cloud needs to become a platform, and that means its constituent infrastructure must also embrace the platform paradigm.

There's been a spate of articles, blogs, and mentions of OpenFlow in the past few months. IBM was the latest entry into the OpenFlow game, releasing an enabling RackSwitch G8264, an update of a 64-port, 10 Gigabit Ethernet switch IBM put out a year ago. Interest in the specification appears to be growing, and not just because it's got the prefix-du-jour as part of its name, implying everything to everyone: free, extensible, interoperable, etc. While all those modifiers are indeed interesting and, to some, a highly important facet of the would-be standard, there's something else about it that is driving its popularity. That something else can be summed up with the statement: "infrastructure as a platform."

THE WEB 2.0 LESSON. AGAIN.

The importance of turning infrastructure into a platform can be evidenced by noting commentary on Web 2.0 (a.k.a. social networking) applications and their success or failure to garner mind-share. Recently, a high-profile engineer at Google mistakenly posted a lengthy and refreshingly blunt commentary on what he views as Google's failure to recognize the importance of platform to successful offerings in today's demanding marketplace. To Google's credit, once the erroneous posting was discovered, it decided to let it stand, and thus we are able to glean some insight about the importance of platform to today's successful offerings:

"While Yegge doesn't have a lot of good things to say about Amazon and its founder Jeff Bezos, he does note that Bezos – unlike Google – understands that it's not just about developing interesting products, but that it takes a platform to create a great product." -- SiliconFilter, "Google Engineer: 'Google+ is a Prime Example of Our Complete Failure to Understand Platforms'"

This insight is not restricted to software developers and engineers at all; the rising interest in PaaS (Platform as a Service) and the continued siren's song that it will dominate the cloud landscape in the future are tied to the same premise: it is the availability of a robust platform that makes or breaks solutions today, not features or functions or price. It is the ability to be successful by building, as Yegge says in his post, "an entire constellation of products by allowing other people to do the work."

Lest you think this concept applicable only to software, let me remind you of Nokia CEO Stephen Elop's somewhat blunt assessment of his company's failure to recognize this truth:

"The battle of devices has now become a war of ecosystems, where ecosystems include not only the hardware and software of the device, but developers, applications, ecommerce, advertising, search, social applications, location-based services, unified communications and many other things. Our competitors aren't taking our market share with devices; they are taking our market share with an entire ecosystem. This means we're going to have to decide how we either build, catalyse or join an ecosystem." -- DevCentral F5 Friday, "A War of Ecosystems"

Interestingly, 47% of respondents surveyed by Zenoss/Cloud.com for its Cloud Computing Outlook 2011 indicated use of PaaS in 2011. Like SaaS, PaaS has some wiggle room in its definition, but its general popularity seems to indicate that yes, indeed, platform is an important factor.
OpenFlow essentially provides this capability, turning infrastructure into a platform and enabling extensibility and customization that could not be achieved otherwise. It basically turns a piece of infrastructure into a giant backplane for new functions, features, and services. It introduces, allegedly, dynamism into what is typically a static network. It is what IaaS had the promise to be, but as of yet has failed to achieve.

CLOUD as a PLATFORM

The takeaway for cloud and infrastructure providers is that organizations want platforms. Developers want platforms. Operations wants platforms (see Puppet and Chef as examples of operational platforms). It's about enabling an ecosystem that encourages innovation – new features, functions, and services – without requiring the wheel to be reinvented. It's about drag and drop, figuratively speaking, in the realm of infrastructure: bringing the ability to deploy new services atop a platform that provides the basics. OpenFlow promises just such capabilities for infrastructure, much in the same way Facebook provides these basics for game and application developers. Mobile platforms offer the same for devices and operating systems. It's about enabling an ecosystem in which organizations can focus not on the core infrastructure, but on custom functionality and process automation that delivers efficiency to IT across operations and development alike.

"The beauty of this is it gives more flexibility and control to the network," said Shaughnessy [marketing manager for system networking at IBM], "so you could actually adjust the way the traffic flows go through your network dynamically based on what's going on with your applications." -- IBM releases OpenFlow-enabled switch

It enables flexibility in the network, the means to deploy more dynamism in traffic policy enforcement and shaping, and it ties back to cloud with its ability to impart multi-tenant capabilities to infrastructure without completely modifying the internal architecture of components – a major obstacle for many network-focused devices. OpenFlow is not a panacea; there are myriad reasons why it may not be appropriate as the basis for architecting the cloud platform foundation required to support future initiatives. But it is a prime example of the kind of platform-focused capabilities organizations desire as they move ahead in their journey to IT as a Service.

The cloud on which organizations will be able to build their future data center architecture will be a platform, and that means from the bottom (infrastructure) to the middle (development) to the top (operations). What cloud and infrastructure providers must do is simulate the Facebook experience at the infrastructure layer. Infrastructure as a platform is the next step in the evolution of cloud computing.
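To make "adjust the way the traffic flows go through your network dynamically" concrete, here is a minimal sketch of pushing a flow rule from a controller, written against the open-source Ryu framework as one possible OpenFlow option (the match fields and output port are invented for illustration, not tied to any particular vendor's switch):

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class SteerHttp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Programmatically steer HTTP (TCP/80) traffic out port 2;
        # the forwarding behavior is now software, not fixed function.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=80)
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```

The point is less this specific rule than the model: forwarding becomes something you program, which is what turns the switch into a platform rather than a fixed-function box.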
- IT Services: Creating Commodities out of Complexity
- IBM releases OpenFlow-enabled switch
- The Cloud Configuration Management Conundrum
- IT as a Service: A Stateless Infrastructure Architecture Model
- If a Network Can't Go Virtual Then Virtual Must Come to the Network
- You Can't Have IT as a Service Until IT Has Infrastructure as a Service
- This is Why We Can't Have Nice Things
- WILS: Automation versus Orchestration
- The Infrastructure Turk: Lessons in Services
- Putting the Cloud Before the Horse

Cloud Computing: Architectural Limbo

When abstraction becomes a distraction, cloud computing becomes a realm of architectural limbo…

Cloud. It sounds so grand in NIST's description, full of promises with respect to the ability to provision and manage resources without having to muck around in the trenches. Compute! Network! Storage! Cheap, efficiently provisioned resources in minutes, not months! The siren call of cloud continues to lure many a curious folk, only to trap them in what is rapidly becoming architectural limbo.

"Differing slightly from the original meaning, in colloquial speech, 'limbo' is any status where a person or project is held up, and nothing can be done until another action happens." -- Wikipedia

The problem is, unfortunately, at the root of all architectures: the network.

ARCHITECTURE and the NETWORK

Architecturally, from a "stack" point of view, the network always resides at the bottom. Like other forms of architecture, that shouldn't be taken to mean it's of less importance than the upper layers of the stack, but rather that it is the foundation upon which all other layers are ultimately laid. A strong foundation is critical to the resilience of the rest of the architecture. That is not to say that cloud computing environments have weak foundations. On the contrary, they have very firm network foundations that make the rest of the stack possible. The problem is that cloud promises us provisioning and management of resources – and that includes the network – yet many cloud providers seem to stop short of offering this capability in a way that meets the needs of enterprise-class architectures. Instead, providers encourage (read: require) a change in the way network resources are architected and ultimately managed.

Consider, for a moment, the stark reality of a realm with no real network boundaries, offered by AWS in "Building three-tier architectures with security groups":

"Unlike with traditional on-premise physical deployments, AWS's virtualization of compute, storage, and network elements requires that you think differently about how to build network segregation into your projects. There are no distinct physical networks, no VLANs, and no DMZs."

The post goes on to describe the means by which a secure, traditional three-tiered application architecture can be deployed using AWS security groups. This architecture is a fine approximation of the traditional, data-center-deployed architecture, based on the available abstractions offered by AWS. Note the use of the term "approximation". That's important, because it's indicative of one of the core issues with cloud today: the inability to replicate architecture. You might be thinking that's okay as long as you can replicate it using available services. No, actually, it's not necessarily okay – especially when you consider the close relationship between architecture and operational process, and the implications of radically changing either one.

ARCHITECTURE and OPERATIONAL PROCESS

The problem is that in order to fully deploy in the cloud, you have to deploy an architecture that will be different from the one you currently maintain in the data center. What that ultimately entails is a separate, environment-specific set of processes as well, which could quickly become operationally expensive. This is especially true when compliance enters the picture, and even more so when the regulations in question are those that focus on process (think SOX) and not just technological implementation.
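To see why "approximation" is the right word, here is roughly what the security-group version of tier segregation looks like, sketched with the modern boto3 SDK (which postdates this post; group names, ports, and the default-VPC assumption are all illustrative):

```python
import boto3

ec2 = boto3.client("ec2")

# One group per tier; group membership, not network topology, is the "segment".
tiers = {}
for name in ("web", "app", "db"):
    resp = ec2.create_security_group(GroupName=name, Description=f"{name} tier")
    tiers[name] = resp["GroupId"]

def allow_from(dst, src, port):
    """Permit TCP traffic to tier `dst` only from members of tier `src`."""
    ec2.authorize_security_group_ingress(
        GroupId=tiers[dst],
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": port, "ToPort": port,
            "UserIdGroupPairs": [{"GroupId": tiers[src]}],
        }])

allow_from("app", "web", 8080)  # only the web tier may reach the app tier
allow_from("db", "app", 3306)   # only the app tier may reach the database
```

It reproduces the effect of a DMZ (each tier reachable only from the tier above) without any distinct network to point at, and the operational processes built around VLANs and firewalls do not transfer unchanged.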
By encouraging (read: requiring) changes in the core architecture of applications, cloud computing introduces another set of processes and challenges, many of which have already been faced and overcome in the data center through careful application of infrastructure architecture principles. Those processes must be managed, they must adhere to regulations and comply with requirements, and they must be integrated into and with existing data center operational processes. Because the tools and mechanisms by which those processes are managed are likely very different from those used to manage the data center, IT organizations run the risk of needing two separate but equally important operations teams, each focused on its own area of responsibility. Operational silos, by necessity.

Architectural limbo. Neither fully here (the data center) nor there (the cloud), such approaches are fraught with the potential to minimize the benefits associated with cloud computing by wrapping it in another set of mostly manual IT processes. A more operationally consistent approach is necessary, one that does not require new architectures (and the operational processes to manage them) but instead incorporates cloud computing resources as part of an existing data center model. A hybrid cloud model based on the premise of operational consistency.

If we treat cloud computing as it was originally described – as utility computing – and view it as a vast pool of readily available compute and storage resources to be integrated into existing architecture and processes, the biggest benefits are likely to be realized. Cloud computing is unlikely to mature fast enough to provide the full range of infrastructure services required to properly transplant enterprise-class applications and systems into the public cloud, but it can be a great asset in implementing what are already proven, trusted, and controllable architectures that exist inside the data center today. The cloud can be a vital asset if it's viewed with the proper perspective of what it is, not what it could be or might be one day. Avoid architectural limbo. Leverage – but don't live in – the cloud.

Related blogs & articles:
- Resolution to the Case (For & Against) X-Driven Scalability in Cloud Computing Environments
- Intercloud: Are You Moving Applications or Architectures?
- The Cloud Configuration Management Conundrum
- IT as a Service: A Stateless Infrastructure Architecture Model
- You Can't Have IT as a Service Until IT Has Infrastructure as a Service
- This is Why We Can't Have Nice Things
- The Consumerization of IT: The OpsStore
- An Aristotlean Approach to Devops and Infrastructure Integration
- The Impact of Security on Infrastructure Integration
- Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
- The Infrastructure Turk: Lessons in Services
Cloud Computing Goes Back to College

The University of Washington adds a cloud computing certificate program to its curriculum.

It's not unusual to find cloud computing in a college environment. My oldest son was writing papers on cloud computing years ago in college, before "cloud" was a marketing term thrown about by any and everyone pushing solutions and products hosted on the Internet. But what isn't often seen is a focus on cloud computing on its own, as its own area of study within the larger context of computer science. That could be because, when you get down to it, cloud computing is merely an amalgamation of computer science topics, and is more about applying processes and technology to modern data center problems than it is a specific technology itself. But it is a topic of interest, and it is a complex subject (from the perspective of someone building out a cloud or even architecting solutions that take advantage of the cloud), so a program of study may in fact be appropriate to provide a firmer foundation in the concepts and technologies underpinning the nebulous "cloud" umbrella.

The University of Washington recently announced the addition of a cloud computing certificate program to its curriculum. This three-course program of study is intended to explore cloud computing across a broad spectrum of concerns, from IaaS to PaaS to SaaS, with what appears to be a focus on IaaS in the later courses. The courses and instructors are approved by the UW Department of Computer Science, and are designed for college-level students and career professionals. They are non-credit courses that will set you back approximately $859 per course. Those of us not in close proximity may want to explore the online option, if you're interested in such a certificate to hang upon your wall. This is one of the first certificates available, so it will be interesting to see whether it's something the market is seeking or whether it's just a novelty.

In general, the winter course appears to really get into the meat and serves up a filling course of study. While I'm not dismissing the first course offered in the fall, it does appear light on the computer science and heavy on the market, which, in general, seems more appropriate for an MBA-style program than one tied to computer science. The spring selection looks fascinating, but may be crossing too many IT concerns at one time. There are very few folks who are as comfortable on a switch command line as they are with the programmatic intricacies of data-related topics like Hadoop, HIVE, MapReduce, and NoSQL. My guess is that the network and storage network topics will be a light touch, given the requirement for programming experience and the implicit focus on developer-related topics. The focus on databases and the lack of a topic specifically addressing scalability models for applications is also interesting, though given the inherent difficulties and limitations of scaling "big data" in general, it may be necessary to focus more on the data tier and less on the application tiers. Of course, I'm also delighted beyond words to see the load testing component in the winter session, as it cannot be stressed enough that load testing is imperative when building any highly scalable system, and it's rarely a topic discussed in computer science degree programs.

The program is broken down into a trimester-style course of study, with offerings in the fall, winter, and spring.
Fall: Introduction to Cloud Computing
- Overview of cloud (IaaS/PaaS/SaaS, major vendors, market overview)
- Cloud Misconceptions
- Cloud Economics
- Fundamentals of distributed systems
- Data center design
- Cloud at startup
- Cloud in the Enterprise
- Future Trends

Winter: Cloud Computing in Action
- Basic Cloud Application Building
- Instances
- Flexible persistent storage (aka EBS)
- Hosted SQL
- Load testing
- Operations (Monitoring, version control, deployment, backup)
- Small Scaling
- Autoscaling
- Continued Operations
- Advanced Topics (Query optimization, NoSQL solutions, memory caching, fault tolerance, disaster recovery)

Spring: Scalable and Data-Intensive Computing in the Cloud
- Components of scalable computing
- Cloud building topics (VLAN, NAS, SAN, Network switches, VMotion)
- Consistency models for large-scale distributed systems
- MapReduce/Big Data/NoSQL Systems
- Programming Big Data (Hadoop, HIVE, Pig, etc.)
- Database-as-a-Service (SQL Azure, RDS, Database.com)

Apposite to the view that cloud computing is a computer-science-related topic, not necessarily a business-focused technology, are the requirements for the course: programming experience, a fundamental understanding of protocols and networking, and the ability to remotely connect to Linux instances via SSH are expected to be among the skills of applicants. The requirement for programming experience is an interesting one, as it seems to assume the intended students are or will be developers, not operators. The question becomes: is the scripting often leveraged by operators and admins to manage infrastructure considered "programming experience"? Looking deeper into the courses, the program later appears to focus on operations and networking, diving into NAS, SAN, VLAN, and switching concerns – a focus in IT which is unusual for developers. That's interesting because, in general, computer science as a field of study tends to be highly focused on system design and programming, with some degree programs across the country offering more tightly focused areas of expertise in security or networking. But primarily, "computer science" degrees focus more on programmatic concerns and less on protocols, networking, and storage.

Cloud computing, however, appears poised to change that, with developers needing more operational and networking fu, and vice versa. A focus of devops has been on adopting programmatic methodologies such as agile and applying them to operations as a means to create repeatable deployment patterns within production environments. Thus, a broad overview of all the relevant technologies required for "cloud computing" seems appropriate, though it remains to be seen whether such an approach will provide the fundamentals really necessary for its attendees to successfully take advantage of cloud computing in the Real World™. Regardless, it's a step forward for cloud computing to be recognized as valuable enough to warrant a year of study, let alone a certificate, and it will be interesting to hear what students of the course think of it after earning a certificate. You can learn more about the certificate program at the University of Washington's web site.

- Cloud is not Rocket Science but it is Computer Science
- The Database Tier is Not Elastic
- Certificate in Cloud Computing UW
- The Impossibility of CAP and Cloud
- Brewer's CAP Theorem
- Joe Weinman – Cloud Computing is NP-Complete Proof
- Greedy (IT) Algorithms
- Not all application requests are created equal
They're Called Black Boxes Not Invisible Boxes

Infrastructure can be a black box only if its knobs and buttons are accessible.

I spent hours at Interop yesterday listening to folks talk about "infrastructure." It's a hot topic, to be sure, especially as it relates to cloud computing. After all, it's a keyword in "Infrastructure as a Service." The problem is that when most people say "infrastructure" it appears what they really mean is "server," and that just isn't accurate. If you haven't been in a data center lately, there is a whole lot of other "stuff" that falls under the infrastructure moniker that isn't a server. You might also have a firewall, anti-virus scanning solutions, a web application firewall, a load balancer, WAN optimization solutions, identity management stores, routers, switches, storage arrays, a storage network, an application delivery network, and other networky-type devices. Oh, there's more than that, but I can't very well list every possible solution that falls under the "infrastructure" umbrella or we'd never get to the point.

"In information technology and on the Internet, infrastructure is the physical hardware used to interconnect computers and users. Infrastructure includes the transmission media, including telephone lines, cable television lines, and satellites and antennas, and also the routers, aggregators, repeaters, and other devices that control transmission paths. Infrastructure also includes the software used to send, receive, and manage the signals that are transmitted. In some usages, infrastructure refers to interconnecting hardware and software and not to computers and other devices that are interconnected. However, to some information technology users, infrastructure is viewed as everything that supports the flow and processing of information." -- TechTarget definition of "infrastructure"

The reason this is important to remember is that people continue to put forth the notion that cloud should be a "black box" with regard to infrastructure. Now, in a general sense I agree with that sentiment, but if – and only if – there is a mechanism to manage the resources and services provided by that black-boxed infrastructure. For example, servers are infrastructure and today are very much black boxes, but every IaaS (Infrastructure as a Service) provider offers the means by which those resources can be managed and controlled by the customer. The hardware is the black box, not the software. The hardware becomes little more than a service. That needs to – nay, must – extend to the rest of the infrastructure. You know, the network infrastructure that is ultimately responsible for delivering the applications being deployed on that black-box server infrastructure. The devices and services that interconnect users and applications. It simply isn't enough to wave a hand at the network infrastructure and say "it doesn't matter," because as a matter of fact it certainly does matter.
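The server half of that argument is easy to see in practice: the hardware is invisible, but the knobs are all in the provider's API. A minimal sketch with boto3 (a modern SDK used here purely for illustration; the AMI ID is a placeholder). The complaint above is that no equivalently rich, standard set of knobs exists for the load balancers, firewalls, and other network boxes sitting in front of such an instance:

```python
import boto3

ec2 = boto3.resource("ec2")

# The physical host is a black box, but the *resource* is fully controllable:
# size it, start it, stop it, terminate it, all through the API.
instance = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1, MaxCount=1,
)[0]
instance.wait_until_running()
print(instance.id, instance.state["Name"])
instance.stop()  # the kind of knob the invisible network boxes never expose
```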
Multi-Tenancy Requires More Than Just Isolating Customers

Multi-tenancy encompasses the management of heterogeneous business, technical, delivery, and security models.

Last week, during what was certainly an invigorating if not agonizingly redundant debate regarding the value of public versus private cloud computing, it was suggested that perhaps if we'd just refer to "private cloud" computing as "single-tenant cloud," all would be well. I could point out that we've been over this before, and that the value proposition of shared infrastructure internal to an "organization" is the sharing of resources across projects, departments, and lines of business, all of which are endowed with their very own budgets. There are "customer"-level distinctions to be made internal to an organization, particularly a large one, that may perhaps be lost on those who've never been (un)fortunate enough to work within the trenches of an actual enterprise IT organization. The problem is larger than that, however, and goes far beyond the simplistic equating of "line of business" with "company." Both still assume that tenant is analogous to business (customer, in the eyes of a public cloud provider), and that's simply not always the case.

THE TYPE of CLOUD DETERMINES the NATURE of the TENANT

Certainly in certain types of clouds, specifically a SaaS (Software as a Service) offering, the heterogeneity of the tenancy is at the customer level. But as you dive down the cloud "stack" from SaaS to PaaS to IaaS, you'll find that the "tenant" being managed changes. In SaaS, of course, the analogy holds true – to an extent. It is the business unit and financial obligation that define a "tenant," but primarily because SaaS focuses on delivering one application, and "customer" at that point becomes the only real way to distinguish one tenant from another. An organization deploying a similar on-premise SaaS may in fact be multi-tenant simply by virtue of supporting multiple lines of business, all of whom have individual financial responsibility and in many cases may be financially independent from the "mothership."

Tenancy becomes more granular as you descend, and at the very bottom layer, at IaaS, you'll find that the tenant is actually an application, and that each one has its own unique set of operational and infrastructure needs. Two applications, even though deployed by the same organization, may have completely different – and sometimes conflicting – sets of parameters under which they must be deployed, secured, delivered, and managed. A truly multi-tenant cloud (or any other multi-tenant architecture) recognizes this. Any such implementation must be able to differentiate between applications, either by applying the appropriate policy or by routing through the appropriate infrastructure such that the appropriate policies are automatically applied by virtue of having traversed the component. The underlying implementation is not what defines an architecture as multi-tenant or not; it's how it behaves. When you consider a high-level architectural view of a public cloud versus an on-premise cloud, it should be fairly clear that the only thing that really changes between the two is who gets billed. The same requirements regarding isolation, services, and delivery on a per-application basis remain.

THE FUTURE VALUE of CLOUD is in RECOGNIZING APPLICATIONS as INDIVIDUAL ENTITIES

This will become infinitely more important as infrastructure services begin to provide differentiation for cloud providers.
As different services become available in a public cloud computing environment, each application will become more and more its own entity, with its own infrastructure and thus its own metering and, ultimately, billing. This is ultimately the way cloud providers will be able to grow their offerings and differentiate from their competitors: through value-added services in the infrastructure that delivers applications powered by on-demand compute capacity. The tenants are the applications, not necessarily the organization, because the infrastructure itself must support the ability to isolate each application from every other application. Certainly a centralized management and billing framework may allow customers to manage all their applications from one console, but in execution the infrastructure – from the servers to the network to the application delivery network – must be able to differentiate and treat each individual application as its own, unique "customer." And there's no reason an organization with multiple internal "customers" can't – or won't – build out an infrastructure that is ultimately a smaller version of a public cloud computing environment supporting such a business model. In fact, they will, and they'll likely be able to travel the path to maturity faster because they have a smaller set of "customers" for which they are responsible.

And this, ultimately, is why the application of the term "single-tenant" to an enterprise-deployed cloud computing environment is simply wrong. It ignores the fact that differentiation in a public IaaS cloud is (or should be) at the same level of the hierarchy as in an internal IaaS cloud.

CLOUD COMPUTING is ULTIMATELY a DEPLOYMENT and DELIVERY MODEL

Dismissing on-premise cloud as somehow less sophisticated because its customers (who are billed in most organizations) are more granular is naive or ignorant, perhaps both. It misses the fact that public cloud only bills by customer; its actual delivery model is per-application, just as it would be in the enterprise. And it is certainly insulting to presume that organizations building out their own on-premise cloud don't face the same challenges and obstacles as cloud providers. In most cases the challenges are the same, simply on a smaller scale. For the largest of enterprises – the Fortune 50, for example – the challenges are actually more demanding because they, unlike public cloud providers, have myriad regulations with which they must comply while simultaneously building out essentially the same architecture. Anyone who has worked inside a large enterprise IT shop knows that most inter-organizational challenges are also intra-organizational challenges. IT even talks in terms of customers; their customers may be internal to the organization, but they are treated much the same as in any provider-customer relationship. And when it comes to technology, if you think IT doesn't have the same supply-chain management issues, the same integration challenges, the same management and reporting issues as a provider, then you haven't been paying attention.

Dividing up a cloud by people makes little sense, because the reality is that the architectural model divides resources up by application. Ultimately, that's because cloud computing is used by applications, not people or businesses.
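As a closing illustration of "the tenant is the application," consider a toy sketch (all names invented) in which the infrastructure keys policy to the application rather than to the paying customer, so two applications from the same organization can receive entirely different, even conflicting, treatment:

```python
from dataclasses import dataclass

@dataclass
class DeliveryPolicy:
    lb_algorithm: str   # e.g. "least_connections"
    waf_profile: str    # per-application, even per-URI in practice
    tls_required: bool

# One organization, one bill; but each application is its own tenant.
tenants = {
    ("acme-corp", "storefront"):
        DeliveryPolicy("least_connections", "ecommerce-strict", True),
    ("acme-corp", "intranet-wiki"):
        DeliveryPolicy("round_robin", "permissive", False),
}

def policy_for(org: str, app: str) -> DeliveryPolicy:
    """The infrastructure resolves policy per application, not per customer."""
    return tenants[(org, app)]

print(policy_for("acme-corp", "storefront"))
```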