Hybrid – The New Normal
From Cars to Clouds, The Hybrids are Here

Most of us are hybrids. I'm Hawaiian and Portuguese with a bit of English and old-time Shogun. The mix is me. I bet you have some mix from your parents too, which makes you a hybrid. The U.S. has been called the melting pot due to all the different ethnicities that live here. I've got hybrid seeds for planting – my grass is a hybrid of 90% fescue and 10% bluegrass so bare spots grow back, and I've also got some hybrid corn growing. With the drought this year, some farmers are using more drought-resistant hybrid crops. There are hybrid cats, hybrid bicycles and, of course, hybrid cars, which hold a 3% market share according to hybridcars.com. My favorite has always been SNL's Shimmer Floor Wax – a floor wax and a dessert topping! Hybrid is the new normal.

Hybrid has even made its way into our IT terminology with hybrid cloud and hybrid infrastructures. There are Public Clouds, cloud services that are available to the general public over the internet; Private (Internal or Corporate) Clouds, which provide cloud-hosted services to an authorized group of people in a secure environment; Hybrid Clouds, which combine at least one public cloud and one private cloud; and, what I think will become the norm, a Hybrid Infrastructure or Hybrid IT, where there is a full mix of in-house corporate resources, dedicated servers, virtual servers, cloud services and possibly leased raised floor – resources are located anywhere data can live, but not necessarily all-cloud.

This past June, North Bridge Venture Partners announced the results of its second annual Future of Cloud Computing Survey, which noted that companies are growing their trust in cloud solutions, with 50% of respondents confident that cloud solutions are viable for mission-critical business applications. At the same time, scalability remains the top reason for adopting the cloud, with 57% of companies identifying it as the most important driver for cloud adoption. Business agility ranked second, with 54% of respondents focused on agility. The survey also noted that cloud users are changing their view of public vs. hybrid cloud platforms: today, 40% of respondents are deploying public cloud strategies and 36% emphasize a hybrid approach, but within five years hybrid clouds will be the emphasis of 52% of respondents' cloud strategies. Most respondents (53%) believe that cloud computing delivers a lower TCO and a less complex IT.

Earlier this year, CIO.com ran a story called Forget Public Cloud or Private Cloud, It's All About Hyper-Hybrid, which discussed how, as more organizations adopt cloud services, both public and private, for mission-critical business operations, connecting, integrating and orchestrating the data back to the core of the business is critical but a challenge. It's no longer about the cloud, it's about clouds – multiple cloud services that must link back to the core and to each other. Even in organizations that are cloud heavy, IT shops need to keep up the on-premises side as well, since it's not likely to go anywhere soon. The story offers five attributes that, if relevant to a business problem, suggest the cloud is a potential fit: predictable pricing, ubiquitous network access, resource pooling and location independence, self-service, and elasticity of supply.
If you are heading in the hybrid direction, then take a look at BCW's article from April of this year called Hybrid Cloud Adoption Issues Are A Case In Point For The Need For Industry Regulation Of Cloud Computing. It argues that the single most pressing issue with hybrid cloud is that it is never really yours, which obviously leads to security concerns. Even when a 'private cloud' is hosted by a third party, 100% control is still impossible, since an organization is still relying on 'others' for certain logistics. Plus, interoperability is not guaranteed. So a true hybrid is actually hard to achieve, with security and interoperability issues still a concern. The fix? Vladimir Getov suggests a regulatory framework that would allow cloud subscribers to undergo a risk assessment prior to data migration, helping to make service providers accountable and providing transparency and assurance. He also mentions the IEEE's Cloud Computing Initiative, with its goal of creating some cloud standards. He states that a global consensus on regulation and standards will increase trust and lower the risk to organizations when precious data is in someone else's hands. The true benefits of the cloud will then be realized.

ps

References:
Forget Public Cloud or Private Cloud, It's All About Hyper-Hybrid
Hybrid Cloud Adoption Issues Are A Case In Point For The Need For Industry Regulation Of Cloud Computing
2012 Future of Cloud Computing Survey Exposes Hottest Trends in Cloud Adoption
Cloud Computing Both More Agile and Less Expensive
How to Protect Your Intellectual Property in the Cloud
The IEEE's Cloud Computing Initiative
IEEE Cloud Computing Web Portal
Charting a course for the cloud: The role of the IEEE
The Venerable Vulnerable Cloud
Cloud vs Cloud
FedRAMP Ramps Up
The Three Reasons Hybrid Clouds Will Dominate
F5 Cloud Computing Solutions

CloudFucius Shares: Cloud Research and Stats
Sharing is caring, according to some, and with the shortened week, CloudFucius decided to share some resources he's come across during his cloud exploration in this abbreviated post. A few are aged, just to give a perspective of what was predicted and written about over time.

Some Interesting Cloud Computing Statistics (2008)
Mobile Cloud Computing Subscribers to Total Nearly One Billion by 2014 (2009)
Server, Desktop Virtualization To Skyrocket By 2013: Report (2009)
Gartner: Brace yourself for cloud computing (2009)
A Berkeley View of Cloud Computing (2009)
Cloud computing belongs on your three-year roadmap (2009)
Twenty-One Experts Define Cloud Computing (2009)
5 cool cloud computing research projects (2009)
Research Clouds (2010)
Cloud Computing Growth Forecast (2010)
Cloud Computing and Security - Statistics Center (2010)
Cloud Computing Experts Reveal Top 5 Applications for 2010 (2010)
List of Cloud Platforms, Providers, and Enablers 2010 (2010)
The Cloud Computing Opportunity by the Numbers (2010)
Governance grows more integral to managing cloud computing security risks, says survey (2010)
The Cloud Market EC2 Statistics (2010)
Experts believe cloud computing will enhance disaster management (2010)
Cloud Computing Podcast (2010)
Security experts ponder the cost of cloud computing (2010)
Cloud Computing Research from Business Exchange (2010)
Just how green is cloud computing? (2010)
Senior Analyst Guides Investors Through Cloud Computing Sector And Gives His Top Stock Winners (2010)
Towards Understanding Cloud Performance Tradeoffs Using Statistical Workload Analysis and Replay (2010)

…along with F5's own Lori MacVittie, who writes about this stuff daily.

And one from Confucius: Study the past if you would define the future.

ps

The CloudFucius Series: Intro, 1, 2, 3, 4, 5, 6, 7, 8

The Inter-Cloud: Will MAE become a MAC?
If public, private, hybrid, cumulus and stratus weren't enough, the 'Inter-Cloud' concept came up again at the Cloud Connect gathering in San Jose last week. According to the Wikipedia entry, the term was first introduced in 2007 by Kevin Kelly; both Lori MacVittie and Greg Ness wrote about the Intercloud last June, and many credit James Urquhart with bringing it to everyone's attention. Since there is no real interoperability between clouds, what happens when one cloud instance wants to reference a service in another cloud? Enter the Inter-Cloud.

As with most things related to cloud computing, there has been plenty of debate about exactly what it is, what it's supposed to do and when its time will come. In the 'Infrastructure Interoperability in a Cloudy World' session at Cloud Connect, the Inter-Cloud was referenced as the 'transition point' when applications in a particular cloud need to move. Application mobility comes into play with cloud balancing, cloud bursting, disaster recovery, sensitive data in private/application in public, and any other scenario where application fluidity is desired and/or required. An Inter-Cloud is, in essence, a mesh of different cloud infrastructures governed by standards that allow them to interoperate.

As ISPs were building out their own private backbones in the 1990s, the Internet needed a way to connect all the autonomous systems to exchange traffic. The Network Access Points (NAPs) and Metropolitan Area Ethernets (now Exchanges – MAE East, MAE West, etc.) became today's Internet Exchange Points (IXPs). Granted, the agreed standard for interoperability, TCP/IP and specifically BGP, made that possible, and we're still waiting on something like that for the cloud; plus we're now dealing with huge chunks of data (images, systems, etc.) rather than simple email or light web browsing. I would imagine that the major cloud providers already have connections to the major peering points, and someday there just might be Metro Area Clouds (MAC West, MAC East, MAC Central) and other cloud peering locations for application mobility. Maybe cloud providers with similar infrastructures (running a particular hypervisor on certain hardware with specific services) will start with private peering, like the ISPs of yore. The reality is that it probably won't happen that way, since clouds are already part of the internet, the needs of the cloud are different, and an agreed method is far from completion. It is still interesting to envision, though.

I also must admit, I had completely forgotten about the Inter-Cloud, and you can hear me calling it the 'Intra-Cloud' in this interview with Lori at Cloud Connect. Incidentally, it's fun to read articles from 1999 talking about the Internet's 'early days' of ISP peering and those from today on how it has changed over the years.

ps

CloudFucius Asks: Will Open Source Open Doors for Cloud Computing?
There has been a lot of press already about OpenStack's announcement yesterday of their new open source cloud computing software. OpenStack says the goal is 'to allow any organization to create and offer cloud computing capabilities using open source software running on standard hardware.' The software is intended to allow companies to automatically create and manage large deployments of virtual private servers and to remove the concern of vendor lock-in, since it will allow customers to span multiple cloud providers. Customers and service providers alike can use their own physical hardware to create large cloud environments, public or private, across the globe. It is also positioned to give customers more choice in how they want their specific cloud environment designed and deployed. Almost 30 companies are participating, with the folks at Rackspace and NASA (Nebula cloud computing platform) leading the charge.

Certainly, there are several attractive pieces to this, including the notion of cloud standards, but will it finally open the flood gates for mass adoption of cloud deployments? Maybe not for the enterprise, at least initially. OpenStack honestly admits, 'OpenStack is probably not something that the average business would consider deploying themselves yet. The big news for end customers is the potential for a halo effect of providers adopting an open and standard cloud: easy migration, cloud-bursting, better security audits, and a large ecosystem of compatible tools and services that work across cloud providers.' This means that OpenStack is really aimed at *very* technical enterprises (very large, with lots of resources) and service providers. Thus, the play for the enterprise does not exist (yet) here, *except* for management-layer players who could leverage it to build something they could sell to enterprises to "make it easy" for them. (thanks Lori!)

In addition, as Ted Julian of the Yankee Group points out in this story, security is still the great unknown, since there doesn't seem to be a security vendor on the list of OpenStack participants. I'm sure that list will grow over time, especially with the press the project is getting, and the ever-present cloud security concerns will eventually be addressed. This project is in the very early stages and will continue to evolve as folks pick up the code, test it and decide how it might work for them. Maybe it'll also help push along and enable the whole Inter-Cloud notion.

And one from Confucius: The cautious seldom err.

ps

The CloudFucius Series: Intro, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13

Resources:
The Recipe for Clouds Goes Open-Source
Open Stack Launches
NASA and Rackspace part the clouds with open source project
Rackspace's Risky Open Cloud Bet
Rackspace Launches Open-Source Cloud Platform Called OpenStack
NASA drops Ubuntu's Koala food for (real) open source
NASA and Rackspace open source cloud fluffer
All for One Cloud and One Cloud for All
Rackspace, NASA launch OpenStack: Can it prevent cloud lock-in?
OpenStack is great, but Clouds need security. Meet the Clean Cloud.
OpenStack Wiki
OpenStack Code

CloudFucius Inspects: Hosts in the Cloud
So much has been written about all the systems, infrastructure, applications, content and everything else IT-related that's making its way to the cloud, yet I haven't seen much discussion (or maybe I just missed it) about all the clients connecting to the cloud to access those systems. Securing those systems has made some organizations hesitate to deploy IT resources in the cloud, whether due to compliance, the sensitivity of the data, the shared infrastructure or simply survey results. Once a system is 'relatively' secure, how do you keep it that way when the slew of potentially dangerous, infected clients connects?

With so many different types of users connecting from various devices, and with a need to access vastly different cloud resources, it's important to inspect every requesting host to ensure both the user and the device can be trusted. Companies have done this for years with remote/SSL VPN users who request access to internal systems – is antivirus installed and up to date, is a firewall enabled, is the device free of malware, and so forth. Ultimately, the hosts are connecting to servers housed in some data center, and all the same precautions you take with your own space should be enforced in the cloud.

Since cloud computing has opened application deployment to the masses, and all that's required for access is *potentially* just a browser, you must be able to detect not only the type of computer (laptop, mobile device, kiosk, etc.) but also its security posture. IDC predicts that 'The world's mobile worker population will pass the one billion mark this year and grow to nearly 1.2 billion people – more than a third of the world's workforce – by 2013.' With so many Internet-enabled devices available – a Windows computer, a Linux box, an Apple iteration, a mobile device and anything else with an IP address – they could all be trying to gain access to your cloud environment at any given moment. It might be necessary to inspect each of these before granting users access, to make sure it's something you want to allow. If the inspection fails, how should you fix the problem so that the user can have some level of access? If the requesting host is admissible, how do you determine what they are authorized to access? And, if you allow a user and their device, what is the guarantee that nothing proprietary gets taken or left behind?

The key is to make sure that only "safe" systems are allowed to access your cloud infrastructure, especially if it contains highly sensitive information, and context helps with that. One of the first steps is to chart usage scenarios. Working in conjunction with the security policy, it is essential to uncover the usage scenarios and access modes for the various types of users and the many devices they might be using. The chart will probably vary based on your company's and/or website's Acceptable Use Policy, but this exercise gets administrators started in determining the endpoint plan. Sounds a lot like a remote access policy, huh, with one exception: remote access usually carries a notion of 'trusted' and 'un-trusted'. If a user requests access from a corporate-issued laptop, that's often considered a trusted device, since there is something identifiable classifying it as an IT asset. These days, with so many personal devices entering the cloud, all hosts should be considered un-trusted until they prove otherwise.
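To make that inspect-before-you-trust idea concrete, here is a minimal sketch of a pre-access posture check. The DevicePosture fields, the signature-age threshold and the remediation URL are illustrative assumptions, not any particular vendor's checks or API.

```python
# Minimal sketch of a pre-access posture check for a requesting host.
# Field names, thresholds and the remediation URL are assumptions.
from dataclasses import dataclass

@dataclass
class DevicePosture:
    antivirus_installed: bool
    antivirus_running: bool
    signature_age_days: int        # age of AV definitions, in days
    firewall_enabled: bool
    is_corporate_asset: bool       # e.g., a known registry key or image file was found

REMEDIATION_URL = "https://remediation.example.com/fix"   # hypothetical address

def evaluate_posture(p: DevicePosture) -> tuple:
    """Return a (decision, detail) pair before any logon page is shown."""
    if not p.antivirus_installed:
        return ("deny", "Antivirus is required for access.")
    if not p.antivirus_running or p.signature_age_days > 7:
        # Fixable problem: explain it, or hand the client to a remediation site.
        return ("remediate", REMEDIATION_URL)
    if not p.firewall_enabled:
        return ("remediate", REMEDIATION_URL)
    # Corporate assets get full access; unknown personal devices get a
    # restricted, protected-workspace style session instead.
    if p.is_corporate_asset:
        return ("allow_full", "posture checks passed")
    return ("allow_restricted", "un-trusted device, limited resources")

guest = DevicePosture(True, False, 2, True, False)
print(evaluate_posture(guest))    # -> ('remediate', 'https://remediation.example.com/fix')
```

The point of the sketch is the ordering: the device is judged before any credentials are collected, and a failed check routes to remediation rather than a flat denial.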
And as inter-clouds become reality, you'll need to make sure that a client coming from someone else's infrastructure abides by your requirements. Allowing an infected device access to your cloud infrastructure can be just as bad as allowing an invalid user access to proprietary internal information. This is where endpoint security checks can take over. Endpoint security prevents infected PCs, hosts, or users from connecting to your cloud environment. Automatic re-routing for infected PCs reduces Help Desk calls and prevents sensitive data from being snooped by keystroke loggers and malicious programs.

Simply validating a user is no longer the starting point for determining access to cloud systems; the requesting device should get the first review. Pre-access checks can run prior to the actual logon page appearing (if there is one), so if the client is not in compliance, they won't even get the chance to enter credentials. These checks can determine whether antivirus or a firewall is running, whether it is up to date, and more. Systems can direct the user to a remediation page with further instructions for gaining access. It's easy to educate the user as to why the failure occurred and relay the possible steps to resolve the problem. For example: "We noticed you have antivirus installed but not running. Please enable your antivirus software for access." Or, rather than deny logon and communicate a detailed remedy, you could automatically send them to a remediation website designed to correct or update the client's software environment, assuring the policies required for access are satisfied without any user interaction. Inspectors can look for certain registry keys or files that are part of your corporate computer build/image to determine whether this is a corporate asset and, thus, which system resources are allowed. Pre-access checks can also retrieve extended Windows and Internet Explorer information to ensure certain patches are in place. If, based on those checks, the system finds a non-compliant client but an authorized user, you might be able to initiate a secure, protected, virtual workspace for that session.

As the ever-expanding cloud network grows, the internal corporate resources require the most protection, as they always have. Most organizations don't necessarily want all users' devices to have access to all resources all the time. Working in conjunction with the pre-access sequence, controllers can gather device information (like IP address or time of day) and determine whether a resource should be offered. A protected configuration measures risk factors using information collected by the pre-access check; the two work in conjunction. For example, Fake Company, Inc. (FCI) has some contractors who need access to FCI's corporate cloud. While this is not an issue during work hours, FCI does not want them accessing the system after business hours. If a contractor tries to log on at 2 AM, the controller can check the time; it knows the contractor's access is only available during FCI's regular business hours and can deny access.

Post-access actions can protect against sensitive information being "left" on the client. The controller can impose a cache cleaner to eliminate any user residue such as browser history, forms, cookies, auto-complete information, and more. For systems unable to install a cleanup control, you can block all file downloads to avoid the possibility of an inadvertently left-behind temporary file – yet still allow access to needed cloud applications.
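As a rough illustration of that kind of contextual decision, the sketch below combines role and time of day, using the fictional FCI contractor scenario from above. The business-hours window, group names and session flags are assumptions for illustration only.

```python
# Hedged sketch of a contextual (role + time-of-day) access decision,
# modeled on the fictional FCI contractor example. Hours, group names
# and session flags are assumptions, not a specific product's policy.
from datetime import datetime, time
from typing import Optional

BUSINESS_HOURS = (time(8, 0), time(18, 0))    # assumed 8 AM - 6 PM window

def resource_decision(user_group: str, now: Optional[datetime] = None) -> dict:
    """Decide whether, and with what resources, to offer a session."""
    now = now or datetime.now()
    in_hours = BUSINESS_HOURS[0] <= now.time() <= BUSINESS_HOURS[1]

    if user_group == "contractor" and not in_hours:
        # The 2 AM contractor logon is refused outright.
        return {"allow": False, "reason": "outside business hours"}

    session = {"allow": True, "resources": ["webmail", "intranet"]}
    if user_group == "employee":
        session["resources"].append("file-shares")
    # Post-access protection: clean the client cache at session end, or
    # block downloads where a cleanup control cannot be installed.
    session["cache_cleaner"] = True
    session["block_downloads"] = (user_group == "contractor")
    return session

print(resource_decision("contractor", datetime(2012, 7, 20, 2, 0)))
# -> {'allow': False, 'reason': 'outside business hours'}
```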
Post-access actions like these are especially important when allowing non-recognized machines access without wanting them to take any data with them after the session. In summary: first, inspect the requesting device; second, protect resources based on the data gathered during the check; third, make sure no session residue is left behind.

Security is typically a question of trust. Is there sufficient trust to allow a particular user and a particular device full access to enterprise cloud resources? Endpoint security gives the enterprise the ability to verify how much trust exists and to determine whether the client gets all the cloud resources, some of the cloud resources, or is just left out in the rain.

And one from Confucius: When you know a thing, to hold that you know it; and when you do not know a thing, to allow that you do not know it – this is knowledge.

ps

The CloudFucius Series: Intro, 1, 2, 3, 4, 5

CloudFucius Ponders: High-Availability in the Cloud
According to Gartner, "By 2012, 20 percent of businesses will own no IT assets." While the need for hardware will not disappear completely, hardware ownership is going through a transition: virtualization, total cost of ownership (TCO) benefits, an openness to letting users run their personal machines on corporate networks, and the advent of cloud computing are all driving the movement to reduce hardware assets.

Cloud computing offers the ability to deliver critical business applications, systems, and services around the world with a high degree of availability, which enables a more productive workforce. No matter which cloud service – IaaS, PaaS, or SaaS (or a combination thereof) – a customer or service provider chooses, the availability of that service to users is paramount, especially if service level agreements (SLAs) are part of the contract. Even with a huge cost savings, there is no benefit for either the user or the business if an application or infrastructure component is unavailable or slow. As hype about the cloud has turned into the opportunity for cost savings, operational efficiency, and IT agility, organizations are discussing, testing, and deploying some form of cloud computing. Many IT departments initially moved to the cloud with non-critical applications and, after experiencing positive results and watching cloud computing quickly mature, are starting to move their business-critical applications, enabling business units and IT departments to focus on the services and workflows that best serve the business.

Since the driver for any cloud deployment, regardless of model or location, is to deliver applications in the most efficient, agile, and secure way possible, the dynamic control plane of a cloud architecture must be able to intercept, interpret, and instruct where the data must go, and it must have the necessary infrastructure, at strategic points of control, to enable quick, intelligent decisions and ensure consistent availability. The on-demand, elastic, scalable, and customizable nature of the cloud must be considered when deploying cloud architectures. Many different customers might be accessing the same back-end applications, but each customer expects that only their application will be properly delivered to users. Making sure that multiple instances of the same application are delivered in a scalable manner requires both load balancing and some form of server virtualization.

An Application Delivery Controller (ADC) can virtualize back-end systems and can integrate deeply with the network and application servers to ensure the highest availability of a requested resource. Each request is inspected using any number of metrics and then routed to the best available server. Knowing how an ADC can enhance your application delivery architecture is essential prior to deployment. Many applications have stellar performance during the testing phase, only to fall apart when they are live. By adding a virtual ADC to your development infrastructure, you can build, test and deploy your code with ADC enhancements from the start. With an ADC, load balancing is just the foundation of what can be accomplished. In application delivery architectures, additional elements such as caching, compression, rate shaping, authentication, and other customizable functionality can be combined to provide a rich, agile, secure and highly available cloud infrastructure.
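To make "routed to the best available server" concrete, here is a small, vendor-neutral sketch of health-aware, least-connections selection. The Server fields and the tie-breaking rule are assumptions rather than any specific ADC's algorithm.

```python
# Vendor-neutral sketch of picking the "best available server" from a pool:
# only healthy members are considered, preferring the fewest connections.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Server:
    name: str
    healthy: bool                 # result of the most recent health-monitor probe
    active_connections: int
    response_ms: float            # recent average response time

def pick_server(pool: List[Server]) -> Optional[Server]:
    """Send the request to the healthy member with the fewest active
    connections, breaking ties on observed response time."""
    candidates = [s for s in pool if s.healthy]
    if not candidates:
        return None               # no healthy members: fail over or spill elsewhere
    return min(candidates, key=lambda s: (s.active_connections, s.response_ms))

pool = [Server("app-1", True, 12, 40.0),
        Server("app-2", True, 7, 55.0),
        Server("app-3", False, 0, 0.0)]    # failed its health check
chosen = pick_server(pool)
print(chosen.name if chosen else "no healthy members")    # -> app-2
```

Real ADCs layer far more on top of this (caching, compression, rate shaping, authentication), but the per-request health-and-metrics decision is the foundation the rest builds on.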
Scalability is also important in the cloud, and being able to bring up or take down application instances seamlessly – as needed and without IT intervention – helps prevent unnecessary costs if you've contracted a "pay as you go" cloud model. An ADC can also isolate management and configuration functions to control cloud infrastructure access, and it can keep network traffic separate to ensure segregation of customer environments and the security of the information. The ability of an ADC to recognize network and application conditions contextually, in real time, and to determine the best resource to deliver the request ensures the availability of applications delivered from the cloud.

Availability is crucial; however, unless applications in the cloud are delivered without delay, especially when traveling over latency-sensitive connections, users will be frustrated waiting for "available" resources. Additional cloud deployment scenarios, like disaster recovery or seasonal web traffic surges, might require a global server load balancer added to the architecture. A global ADC uses application awareness, geolocation, and network condition information to route requests to the cloud infrastructure that will respond best, and by using the geolocation of users based on IP address, you can route each user to the closest cloud or data center. In extreme situations, such as a data center outage, a global ADC will already know if a user's primary location is unavailable and will automatically route the user to a responding location.

Cloud computing, while still evolving in all its iterations, can offer IT a powerful alternative for efficient application, infrastructure, and platform delivery. As businesses continue to embrace the cloud as an advantageous application delivery option, the basics are still the same: scalability, flexibility, and availability to enable a more agile infrastructure, faster time-to-market, a more productive workforce, and a lower TCO, along with happier users.

And one from Confucius: The man of virtue makes the difficulty to be overcome his first business, and success only a subsequent consideration.

ps

The CloudFucius Series: Intro, 1, 2, 3

F5 Long Distance VMotion Solution Demo
Watch how F5's WAN optimization enables long-distance VMotion migration between data centers over the WAN. This solution can be automated and orchestrated, and it preserves user sessions and active user connections, allowing seamless migration. Erick Hammersmark, Product Management Engineer, hosts this cool demonstration.

ps