data center
The Cloud is Still a Datacenter Somewhere
Application delivery is always evolving. Initially, applications were delivered out of a physical data center: either dedicated raised floor at the corporate headquarters, leased space rented from one of the web hosting vendors of the late 1990s to early 2000s, or some combination of both. Soon global organizations and ecommerce sites alike started to distribute their applications across multiple physical data centers to address geo-location, redundancy and disaster recovery challenges. This was an expensive endeavor back then, even before adding the networking, bandwidth and leased-line costs.

When server virtualization emerged and organizations gained the ability to divide resources among different applications, content delivery was no longer tethered 1:1 to a physical device. It could live anywhere. With virtualization technology as the driving force, the cloud computing industry was formed and offered yet another avenue to deliver applications. Application delivery evolved again.

As cloud adoption grew, along with the Software, Platform and Infrastructure services (SaaS, PaaS, IaaS) enabling it, organizations were able to quickly, easily and cost-effectively distribute their resources around the globe. This allows organizations to place content closer to the user depending on location, and provides some fault tolerance in case of a data center outage.

Today, there is a mixture of options available to deliver critical applications. Many organizations have on-premises, company-owned data center facilities, some leased resources at a dedicated location, and maybe even some cloud services. To achieve or even maintain continuous application availability and keep up with the pace of new application rollouts, many organizations are looking to expand their data center options, including cloud. This is important since, according to IDC, 84% of data centers had issues with power, space and cooling capacity, assets, and uptime that negatively impacted business operations. Those issues lead to delays in application rollouts, disrupted customer service or even unplanned expenses to remedy the situation.

Operating in multiple data centers is no easy task, however, and new data center deployments, or even integrating existing data centers, can cause havoc for visitors, employees and IT staff alike. Critical areas of attention include public web properties, employee access to corporate resources, and communication tools like email, along with the security and back-end data replication required for content consistency. On top of that, maintaining control over critical systems spread around the globe is always a major concern.

A combination of BIG-IP technologies provides organizations with the global application services (DNS, federated identity, security, SSL offload, optimization, and application health/availability monitoring) needed to create an intelligent, cost-effective, resilient global application delivery infrastructure across a hybrid mix of data centers. Organizations can minimize downtime, ensure continuous availability and scale on demand when needed. Simplify, secure and consolidate across multiple data centers while mitigating impact to users or applications.
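The availability piece of that story reduces to a simple control loop: probe the application in each data center and hand out addresses only for the sites that answer. The sketch below illustrates that idea in plain Python with hypothetical addresses and probe URLs; a real deployment would rely on BIG-IP DNS/GTM monitors and wide IPs rather than hand-rolled code like this.

```python
# Minimal sketch: steer clients only to data centers that pass a health probe.
# Addresses and probe URLs are hypothetical placeholders.
import urllib.request

DATA_CENTERS = {
    "us-east": {"vip": "203.0.113.10",  "probe": "http://203.0.113.10/health"},
    "eu-west": {"vip": "198.51.100.20", "probe": "http://198.51.100.20/health"},
    "apac":    {"vip": "192.0.2.30",    "probe": "http://192.0.2.30/health"},
}

def healthy_vips(timeout: float = 2.0) -> list[str]:
    """Return the virtual-server addresses of data centers answering their probe."""
    alive = []
    for name, dc in DATA_CENTERS.items():
        try:
            with urllib.request.urlopen(dc["probe"], timeout=timeout) as resp:
                if resp.status == 200:
                    alive.append(dc["vip"])
        except OSError:
            pass  # probe failed: leave this data center out of the DNS answers
    return alive

if __name__ == "__main__":
    answers = healthy_vips()
    # A GSLB-style answer set: only healthy sites get handed back to resolvers.
    print("A records to serve:", answers or ["<fallback / sorry page>"])
```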
ps

Related:
Datacenter Transformation and Cloud
The Event-Driven Data Center
Hybrid Architectures Do Not Require Private Cloud
The Dynamic Data Center: Cloud's Overlooked Little Brother
Decade old Data Centers

Application Availability Between Hybrid Data Centers
Reliable access to mission-critical applications is a key success factor for enterprises. For many organizations, moving applications from physical data centers to the cloud can increase resource capacity and ensure availability while reducing system management and IT infrastructure costs. Achieving this hybrid data center model the right way requires healthy resource pools and the means to distribute them.

The F5 Application Availability Between Hybrid Data Centers solution provides core load-balancing, DNS and acceleration services that result in non-disruptive, seamless migration between private and public cloud environments. Check out the new Reference Architecture today along with a new video below!

ps

Related:
Application Availability Between Hybrid Data Centers Reference Architecture
Hybrid Data Center Infographic
Hybrid Data Center Solution Diagram

Hardware Acceleration Critical Component for Cost-Conscious Data Centers
Better performance, reduced costs and a smaller data center footprint are not niche-market interests. The fast-paced world of finance is taking a hard look at the benefits of hardware acceleration for performance and finding additional benefits, such as a reduction in rack space via consolidation of server hardware. Rich Miller over at Data Center Knowledge writes:

Hardware acceleration addresses computationally-intensive software processes that task the CPU, incorporating special-purpose hardware such as a graphics processing unit (GPU) or field programmable gate array (FPGA) to shift parallel software functions to the hardware level. … “The value proposition is not just to sustain speed at peak but also a reduction in rack space at the data center,” Adam Honore, senior analyst at Aite Group, told WS&T. Depending on the specific application, Honore said a hardware appliance can reduce the amount of rack space by 10-to-1 or 20-to-1 in certain market data and some options events. Thus, a trend that bears watching for data center providers.

But confining the benefits associated with hardware acceleration to data center providers or the financial industry is short-sighted, because similar benefits can be achieved by any data center in any industry looking for cost-cutting technologies. And today, that’s just about … everyone.

USING SSL? YOU CAN BENEFIT FROM HARDWARE ACCELERATION

Now maybe I’m just too into application delivery and hardware and all its associated benefits, but the idea of hardware acceleration and offloading of computationally expensive tasks like encryption, decryption, and TCP session management seems pretty straightforward, and it is not exclusive to financial markets. Any organization using SSL, for example, can see benefits in both performance and a reduction in costs through consolidation by offloading the responsibility for SSL to an external device that employs some sort of hardware-based acceleration of the specific computationally expensive functions. This is the same concept used by routers and switches, and why they employ FPGAs and ASICs for network processing: they’re faster and capable of much greater speeds than their software predecessors. Unlike routers and switches, however, solutions capable of hardware-based acceleration provide the added benefit of reducing utilization on servers while improving the speed at which such computations execute.

Reducing the utilization on servers means increased capacity on each server, which results in either the ability to eliminate a number of servers or the ability to avoid investing in even more. Both strategies reduce the costs associated with the offloaded functionality. Combine hardware-based acceleration of SSL operations with hardware-based acceleration of compression and you can offload yet another computationally expensive piece of functionality to an external device, which again saves resources on the server, increases its capacity, and improves the overall response time for transfers requiring compression. Now put that functionality onto your load balancer, a fairly logical place in your architecture to apply such functionality both ingress and egress, and what you’ve got is an application delivery controller.
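That consolidation argument is ultimately back-of-the-envelope arithmetic: whatever share of CPU the crypto and compression work consumes in software becomes reclaimed capacity once it moves to the ADC. A rough sketch, using assumed utilization and traffic figures rather than measurements:

```python
# Rough sketch of the consolidation math. The capacity and utilization
# figures below are illustrative assumptions, not benchmarks.
import math

def servers_needed(peak_rps: float, usable_rps_per_server: float) -> int:
    """Servers required to carry peak_rps at a given usable per-server capacity."""
    return math.ceil(peak_rps / usable_rps_per_server)

RAW_RPS_PER_SERVER = 1000      # what a server could do with no crypto burden
SSL_CPU_SHARE      = 0.40      # assumed CPU share eaten by software SSL/compression
PEAK_RPS           = 18_000    # assumed peak load across the application

before = servers_needed(PEAK_RPS, RAW_RPS_PER_SERVER * (1 - SSL_CPU_SHARE))
after  = servers_needed(PEAK_RPS, RAW_RPS_PER_SERVER)  # crypto offloaded to the ADC

print(f"servers with SSL in software: {before}")   # 30
print(f"servers with SSL offloaded:   {after}")    # 18
```

Swap in your own measurements and the same arithmetic tells you whether consolidation, or simply deferred hardware purchases, is the bigger win.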
Add to the hardware-based acceleration of SSL and compression an optimized TCP stack that reuses TCP connections, and you not only increase performance but decrease utilization on the server yet again, because it is handling fewer connections and not going through the tedium of opening and closing connections at a fairly regular rate.

NOT JUST FOR ADMINS and NETWORK ARCHITECTS

Developers and architects, too, can apply the benefits of hardware-accelerated services to their applications and frameworks. Cookie encryption, for example, is a fairly standard method of protecting web applications against cookie-based attacks such as cookie tampering and poisoning. Encrypting cookies mitigates that risk by ensuring that cookies stored on clients are not human-readable. But encryption and decryption of cookies can be expensive; it often comes at the cost of application performance and, if not implemented as part of the original design, costs time and money to add to the application after the fact. Leveraging the network-side scripting capabilities of application delivery controllers removes the need to rewrite the application by allowing cookies to be encrypted and decrypted on the application delivery controller (a sketch of this idea appears at the end of this post). By moving the task of (de|en)cryption to the application delivery controller, the expensive computations required by the process are accelerated in hardware and will not negatively impact the performance of the application. If the functionality is moved from within the application to an application delivery controller, the resulting shift in computational burden can reduce utilization on the server – particularly for heavily used applications or those with a larger set of cookies – which, like other reductions in server utilization, can lead to the ability to consolidate or retire servers in the data center.

HARDWARE ACCELERATION REDUCES COSTS, INCREASES EFFICIENCY

By the time you get finished, the case for consolidating servers seems fairly obvious: you’ve offloaded so much intense functionality that you can cut the number of servers you need by a considerable amount, and either retire them (decreasing power, cooling, and rack space in the process) or re-provision them for use on other projects (decreasing investment and acquisition costs for those projects and holding current operating expenses steady rather than increasing them). Basically, if you need load balancing you’ll benefit both technically and financially from investing in an application delivery controller rather than a traditional simple load balancer. And if you don’t need load balancing, you can still benefit simply by employing the offloading capabilities inherent in platforms endowed with hardware-assisted acceleration technologies. The increased efficiency of servers resulting from the use of hardware-assisted offload of computationally expensive operations can be applied to any data center and any application in any industry.
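To make the cookie-offload idea concrete, here is a minimal sketch of (de|en)cryption happening at an intermediary instead of inside the application. On a BIG-IP this would be handled by network-side scripting (an iRule) backed by hardware-assisted crypto; the plain Python below, which assumes the third-party cryptography package, exists only to illustrate the flow, and the names are illustrative.

```python
# Sketch: cookies are encrypted on the way out and decrypted on the way in by
# an intermediary, so the application never changes and clients only ever see
# ciphertext. Assumes the 'cryptography' package; key handling is simplified.
from cryptography.fernet import Fernet, InvalidToken

SECRET_KEY = Fernet.generate_key()   # in practice: provisioned and shared, not per-run
cipher = Fernet(SECRET_KEY)

def encrypt_cookie_egress(cookie_value: str) -> str:
    """Applied to Set-Cookie values leaving the data center."""
    return cipher.encrypt(cookie_value.encode()).decode()

def decrypt_cookie_ingress(cookie_value: str) -> str | None:
    """Applied to Cookie values arriving from clients; None means invalid or tampered."""
    try:
        return cipher.decrypt(cookie_value.encode()).decode()
    except InvalidToken:
        return None   # strip or reject the cookie instead of passing it to the app

# The application sets a plain cookie; the wire only ever carries ciphertext.
wire_value = encrypt_cookie_egress("role=user; cart=12345")
print(decrypt_cookie_ingress(wire_value))        # -> "role=user; cart=12345"
print(decrypt_cookie_ingress("tampered-value"))  # -> None
```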
Challenging the Firewall Data Center Dogma

Do you really need a firewall to secure web and application services? Some organizations would say no, based on their experiences, while others are sure to quail at the very thought of such an unnatural suggestion.

Firewalls are, in most organizations, the first line of defense for web and application services. This is true whether those services are offered to the public or only to off-site employees via secure remote access. The firewall is, and has been, the primary foundation around which most network security architectures are built. We’ve spent years designing highly available, redundant architectures that include the firewall. We’ve deployed them not only at “the edge” but have moved them further and further into the data center in architectures that have commonly become known as “firewall sandwiches”. The reasons for this are simple – we want to protect the services that are critical to the business, and the primary means by which we accomplish that task is by controlling access to them via often simple but powerful access control. In later years we’ve come to rely upon additional intrusion detection systems such as IPS (Intrusion Prevention Systems) that are focused on sniffing out (sometimes literally) malicious attacks and attempts to circumvent security policies, and stopping them. One of the core attacks against which such solutions protect services is denial of service.

Unfortunately, it is increasingly the reality that the firewall is able neither to detect nor withstand such attacks, and ultimately such devices fail – often at a critical moment. The question then is what to do about it. The answer may be to simply remove the firewall from the critical data path for web services.

THAT’S UNNATURAL!

Just about anything is unnatural the first time you try it, but that doesn’t mean it isn’t going to work or that it’s necessarily wrong. One of my favorite fantasy series – David Eddings’ Belgariad – illustrates this concept quite nicely. A couple of armies need to move their ships up an escarpment to cross a particular piece of land to get where they need to be. Now usually fording – historically – involves manhandling ships across land. This is hard and takes a lot of time. No one looked forward to the process. In the story, someone is wise enough to put these extremely large ships on wheels and then leverage the power of entire herds of horses to move them over the land, thus improving the performance of the process and saving a whole lot of resources. One of the kings is not all that sure he likes violating a precept that has always been akin to dogma – you ford ships by hand.

King Rhodar put on a perfectly straight face. “I’ll be the first to admit that it’s probably not nearly as good as moving them by hand, Anheg. I’m sure there are some rather profound philosophical reasons for all that sweating and grunting and cursing, but it is faster, wouldn’t you say? And we really ought to move right along with this.”

“It’s unnatural,” Anheg growled, still glaring at the two ships, which were already several hundred yards away.

Rhodar shrugged. “Anything’s unnatural the first time you try it.”

-- “Enchanter’s End Game”, David Eddings (p 147)

Needless to say, King Anheg eventually gave in and allowed his ships to be moved in this new, unnatural way, finding it to be more efficient and faster, and ultimately it kept his men from rebelling against him for making them work so hard. This same lesson can be applied to removing the firewall from the critical inbound data path of services.
Sure, it sounds unnatural, and perhaps it is if it’s the first time you’re trying it, but necessity is the mother of invention and seems also to help overcome the feeling that something shouldn’t be done because it hasn’t been done before. If you need convincing as to why you might consider such a tactic, consider a recent survey conducted by Arbor Networks showing an increasing failure rate of firewalls and IPS solutions due to attacks.

“Eighty-six percent of respondents indicated that they or their customers have placed stateful firewall and/or IPS devices in their IDCs. Nearly half of all respondents—a solid majority of those who actually have deployed these devices within their IDCs—experienced stateful firewall and/or IPS failure as a direct result of DDoS attacks during the survey period. Only 14 percent indicated that they follow the IDC BCP of enforcing access policy via stateless ACLs deployed on hardware-based routers/Layer 3 switches capable of handling millions of packets per second.” [emphasis added]

-- Network Infrastructure Security Report Volume VI, Arbor Networks, Feb 1 2011

That is a lot of failures, especially given that firewalls are a critical data center component and are almost certainly in the path of a business-critical web or application service. But it’s dogma; you simply must have a firewall in front of these services. Or do you?

BASIC FIREWALLING ISN’T ENOUGH

The reality is that you need firewall functionality – services – but you also need a lot more. You need to control access to services at the network layers, but you also need to mitigate access and attacks occurring at the application layers. That means packet-based firewalls – even with their “deep packet inspection” capabilities – are not necessarily up to the task of protecting the services they’re supposed to protect. The Anonymous attacks taught us that attacks are now not only distributed from a client perspective, they’re also distributed from a service perspective, attacking not only the network but the application layers. That means every device between clients and servers must be capable of handling not only the increase in traffic but also of detecting and preventing those attacks from achieving their goal: denial of service.

During the Anonymous attacks, discussions regarding what to do about traffic overwhelming firewalls resulted in what might be considered an “unnatural” solution: removal of the firewall. That’s because the firewall was actually part of the problem, not the solution, and removing it from the inbound data path resulted in a more streamlined (and efficient) route that enabled continuous availability of services despite ongoing attacks – without compromising security. Yes, you heard that right. Some organizations are running sans firewall and finding that, for inbound web services at least, the streamlined path maintains a positive security posture while ensuring availability and performance. That doesn’t mean they are operating without those security services in place; it means they’ve found that other components in the inbound data path are capable of providing basic firewalling services without negatively impacting availability.

ATTACKS AREN’T the ONLY PROBLEM

It isn’t just attacks that are going to pose problems in the near future for firewalls and IPS components.
The increase in attacks and attack surfaces is alarming, yes, but it’s that combined with an increase in traffic in general that’s pushing load on all data center components off the charts. Cisco recently shared the results of its latest Visual Networking Index Forecast:

“By 2015, Cisco says that mobile data traffic will grow to 6.3 exabytes of data or about 1 billion gigabytes of data per month. The report indicates that two-thirds of the mobile data traffic on carrier networks in 2015 will come from video services. This trend follows a similar trend in traditional broadband traffic growth.”

Read more: http://news.cnet.com/8301-30686_3-20030291-266.html#ixzz1CtYWZPAk

Cisco’s report is obviously focused on service providers, as they will bear the brunt of the increase in traffic (and in many cases they bear the majority of the impact from denial of service attacks), but that traffic is going somewhere, and somewhere is often your data center, accessing your services, increasing load on your data center infrastructure.

Load testing of an active architecture is important, to be sure. It’s the only way to really determine what the real capacity of your data center will be and how it will respond under heavy load – and that includes the additional strain resulting from an attack. Cloud-based load testing services are available and can certainly be of assistance in performing such testing on live infrastructure. And yes, it has to be live or it won’t find all the cracks and fissures in your architecture. It isn’t your lab environment, after all, that’s going to be under attack or stressed out by sudden surges in traffic. Perhaps no problems exist, but you really don’t want to find out that they do when the pressure’s on and you have to make the decision in the heat of the moment. Try testing with your firewall, and without (assuming you have solutions capable of providing the security services required in the inbound data path). See if there is an impact (positive or negative) and then you’ll be better able to make a decision in the event it becomes necessary.

Putting firewalls in front of your Internet services has been dogma for a long, long time. But are they up to the task? It would appear that in many cases they aren’t. When a solid majority of folks have found their sites down due to firewall failure, we may need to rethink the role of a firewall in securing services. That doesn’t mean we’ll come to a different conclusion, especially as only part of the architectural decisions made regarding data center security depend on technological considerations; other factors, such as the risk tolerance of the business, are often the driving factor and play a much larger role in such decisions whether IT likes it or not. But it does mean that we should occasionally re-evaluate our data center strategies and consider whether traditional architectural dogma is still appropriate in today’s environment. Especially when that architectural dogma may be part of the problem.
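"Test with the firewall, and without" can be approximated even with a crude harness while a proper cloud-based load test is being arranged. A minimal sketch, assuming a hypothetical health URL; it is not a substitute for the kind of live, large-scale testing described above.

```python
# Trivial harness for comparing the same virtual server through two paths
# (e.g., with and without the firewall in line). The URL, request count and
# concurrency are placeholders; a purpose-built load or DDoS-simulation tool
# is the right instrument for anything beyond a sanity check.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def one_request(url: str) -> float | None:
    """Return the response time in seconds, or None if the request failed."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            resp.read()
        return time.perf_counter() - start
    except OSError:
        return None

def hammer(url: str, total: int = 500, concurrency: int = 50) -> None:
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(one_request, [url] * total))
    ok = [r for r in results if r is not None]
    if ok:
        print(f"{url}: {total - len(ok)} errors, "
              f"median {statistics.median(ok) * 1000:.1f} ms")
    else:
        print(f"{url}: all {total} requests failed")

# Run once against the path that traverses the firewall and once against the
# path that terminates directly on the ADC, then compare the two reports.
hammer("https://app.example.com/health")
```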
Infrastructure Matters: Challenges of Cloud-based Testing
The Strategy Not Taken: Broken Doesn’t Mean What You Think It Means
Cloud Testing: The Next Generation [Network World]
Network Infrastructure Security Report Volume VI [arbornetworks.com]
Load Testing as a Service: A Look at Load Impact (beta)
Cloud Testing: The Next Generation
To Boldly Go Where No Production Application Has Gone Before
It’s 2am: Do You Know What Algorithm Your Load Balancer is Using?
Data Center Feng Shui: Process Equally Important as Preparation
Don’t Conflate Virtual with Dynamic
Data Center Feng Shui
What We Learned from Anonymous: DDoS is now 3DoS
The Many Faces of DDoS: Variations on a Theme or Two

Highly Available Hybrid
Achieving the ultimate ‘Five Nines’ of web site availability (around 5 minutes of downtime a year) has been a goal of many organizations since the beginning of the internet era. There are several ways to accomplish this, but essentially a few principles apply. Eliminate single points of failure by adding redundancy, so that if one component fails, the entire system still works. Have reliable crossover to the duplicate systems so they are ready when needed. And have the ability to detect failures as they occur so proper action can be taken. If the first two are in place, hopefully you never see a failure, but maintenance is a must.

BIG-IP high availability (HA) functionality, such as connection mirroring, configuration synchronization, and network failover, allows core system services to remain available for BIG-IP to manage in the event that a particular application instance becomes unavailable. Organizations can synchronize BIG-IP configurations across data centers to ensure the most up-to-date policy is being enforced throughout the entire infrastructure. In addition, BIG-IP itself can be deployed as a redundant system, in either active/standby or active/active mode.

Web applications come in all shapes and sizes: static and dynamic, simple and complex, specific and general. No matter the size, availability is important to support the customers and the business. The most basic high-availability architecture is the typical 3-tier design. A pair of ADCs in the DMZ terminates the connection; they in turn intelligently distribute the client request to a pool of (multiple) application servers, which then query the database servers for the appropriate content. Each tier has redundant servers, so in the event of a server outage, the others take the load and the system stays available. This is a tried and true design for most operations and provides resilient application availability within a typical data center.

But fault tolerance between two data centers is even more reliable than multiple servers in a single location, simply because that one data center is a single point of failure. A hybrid data center approach allows organizations not only to distribute their applications when it makes sense but also to provide global fault tolerance for the system overall. Depending on how an organization’s disaster recovery infrastructure is designed, this can be an active site, a hot standby, some leased hosting space, a cloud provider or some other contained compute location. As soon as that server, application, or even location starts to have trouble, organizations can seamlessly maneuver around the issue and continue to deliver their applications.
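Both the ‘Five Nines’ figure and the value of a second site are straightforward arithmetic. A quick sketch (the availability numbers are illustrative, and the two-site calculation assumes the sites fail independently):

```python
# The arithmetic behind "five nines" and behind site redundancy.
# Availability figures are illustrative assumptions, not vendor claims.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes_per_year(availability: float) -> float:
    return (1 - availability) * MINUTES_PER_YEAR

print(downtime_minutes_per_year(0.99999))   # ~5.3 min/year: "five nines"
print(downtime_minutes_per_year(0.999))     # ~526 min/year: a single decent site

# Two independent sites, each only 99.9% available, rarely fail together:
single_site = 0.999
either_site_up = 1 - (1 - single_site) ** 2
print(either_site_up)                              # 0.999999
print(downtime_minutes_per_year(either_site_up))   # ~0.5 min/year
```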
Driven by applications and workloads, a hybrid data center is really a technology strategy for the entire infrastructure mix of on-premises and off-premises compute resources. IT workloads reside in conventional enterprise IT (legacy systems), in an on-premises private cloud (mission-critical apps), at a third-party off-premises location (managed, hosting or cloud provider), or in a combination of all three. The combinations of hybrid data center types can be as diverse as the industries that use them. Enterprises probably already have some level of hybrid, even if it is just a mix of owned space plus SaaS. Enterprises typically like to keep sensitive assets in house but have started to migrate workloads to hybrid data centers. Financial industries might have different requirements than retail. Startups might start completely with a cloud-based service and then begin to build their own facility if needed. Mobile app developers, particularly game developers, often use the cloud for development and then bring it in-house once it is released. Enterprises, on the other hand, have (historically) developed in house and then pushed out to a data center when ready. The variety of industries, situations and challenges the hybrid approach can address is vast.

Manage services rather than boxes.

ps

Related:
Hybrid DDoS Needs Hybrid Defense
The Conspecific Hybrid Cloud
The future of cloud is hybrid ... and seamless
Hybrid–The New Normal
Hybrid Architectures Do Not Require Private Cloud

Quarantine First to Mitigate Risk of VM App Stores
Internal processes may be the best answer to mitigating the risks associated with third-party virtual appliances.

The enterprise data center is, in most cases, what aquarists would call a “closed system.” This is to say that from a systems and application perspective, the enterprise has control over what goes in. The problem is, of course, those pesky parasites (viruses, trojans, worms) that find their way in. This is the result of allowing external data or systems to enter the data center without proper security measures. For web applications we talk about things like data scrubbing and web application firewalls, about proper input validation codified by developers, and even anti-virus scans of incoming e-mail.

But when we start looking at virtual appliances, at virtual machines, being hosted in “VM stores” much in the same manner as mobile applications are hosted in “app stores” today, the process becomes a little more complicated. Consider Stuxnet as a good example of the difficulty in completely removing some of these nasty contagions. Now imagine public AMIs or other virtual appliances downloaded from a “virtual appliance store”. Hoff first raised this as a potential threat vector a while back, and reintroduced it when it was tangentially raised by Google’s announcement that it had “pulled 21 popular free apps from the Android Market” because “the apps are malware aimed at getting root access to the user’s device.” Hoff continues to say:

This is going to be a big problem in the mobile space and potentially just as impacting in cloud/virtual datacenters as people routinely download and put into production virtual machines/virtual appliances, the provenance and integrity of which are questionable. Who’s going to police these stores?

-- Christofer Hoff, “App Stores: From Mobile Platforms To VMs – Ripe For Abuse”

Even if someone polices these stores, are you going to run the risk, ever so slight as it may be, that a dangerous pathogen may be lurking in that appliance? We had some similar scares back in the early days of open source, when a miscreant introduced a trojan into a popular open source daemon that was subsequently downloaded, compiled, and installed by a lot of people. It’s not a concept with which the enterprise is unfamiliar.

THE DATA CENTER QUARANTINE (TANK)

I cannot count the number of desperate pleas for professional advice and help with regards to “sick fish” that start with: I did not use a quarantine tank. A quarantine tank (QT) in the fish-keeping hobby is a completely separate (isolated) tank maintained with the same water parameters as the display tank (DT). The QT provides a transitory stop for fish destined for the display tank, offering a chance for the fish to become acclimated to the water and light parameters of the system while simultaneously allowing the hobbyist to observe the fish for possible signs of infection. Interestingly, the QT is used before an infection is discovered, not just afterwards as is the case with people infected with highly contagious diseases. The reason fish are placed into quarantine even though they may be free of disease or parasites is that they will ultimately be placed into a closed system, and it is nearly impossible to eradicate disease and parasites in a closed system without shutting it all down first. To avoid that catastrophic event, fish go into QT first and then, when it’s clear they are healthy, they can join their new friends in the display tank. Now, the data center is very similar to a closed system.
Once a contagion gets into its systems, it can be very difficult to eradicate. While there are many solutions for preventing contagion, one of the best is to use a quarantine “tank” to verify the health of any virtual appliance prior to deployment. Virtualization affords organizations the ability to create a walled garden, an isolated network environment, that is suitable for a variety of uses. Replicating production environments for testing and validation of topology and architecture is often proposed as the driver for such environments, but use as a quarantine facility is also an option.

Quarantine is vital to evaluating the “health” of any virtual network appliance because you aren’t looking just for the obvious – worms and trojans detectable by vulnerability scans – you’re also looking for the stealth infection: the one that only shows itself at certain times of the day or week, and which isn’t necessarily interested in propagating itself throughout your network but is instead focused on “phoning home” in preparation for a future attack. It’s necessary to fire up that appliance in a constrained environment and then watch it. Monitor its network and application activity over time to determine whether or not it’s been infected with some piece of malware that only rears its ugly head when it thinks you aren’t looking. Within the confines of a quarantined environment, within the ‘turn it off and start it over clean’ architecture composed of virtual machines, you have the luxury of being able to evaluate the health of any third-party virtual machine (or application, for that matter) before turning it loose in your data center.

QUARANTINE in the DATA CENTER is not NEW

The idea of quarantine in the data center is not new. We’ve used it for some time as an assist in dealing with similar situations, particularly end users infected with some malware detectable by end-user inspection solutions. Generally we’ve used that information to quarantine the end user on a specific network with limited access to data center resources – usually just enough to clean their environment or install the proper software necessary to protect them. We’ve also used a style of quarantine to aid the application lifecycle’s progression from development to deployment: the QA or ‘test’ phase, wherein applications are deployed into an environment closely resembling production as a means to ensure that configurations, dependencies and integrations are properly implemented and the application works as expected.

So the concept is not new; what is new is the need to recognize the benefits of a ‘quarantine first’ policy and subsequently implement such a process in the data center to support the use of third-party virtual network appliances. As with many cloud- and virtualization-related challenges, part of the solution almost always involves process. It lies in recognizing the challenges and applying the right mix of process, product and people to mitigate the operational risks associated with the deployment of new technology and architectures.
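Part of the ‘watch it’ step can be automated by comparing the appliance’s observed outbound flows against the destinations its vendor documents. A minimal sketch, assuming a CSV flow log (timestamp,src,dst,dst_port) exported from the quarantine segment and an illustrative allow-list; a real environment would feed this from NetFlow/IPFIX or virtual-switch telemetry.

```python
# Sketch: flag "phone home" behavior from a quarantined appliance by checking
# observed outbound flows against what the vendor documented. The flow-record
# format and the allow-list below are assumptions for illustration only.
import csv
import ipaddress

EXPECTED_DESTINATIONS = {
    ipaddress.ip_network("10.10.0.0/24"),   # quarantine-segment services (DNS, NTP)
    ipaddress.ip_network("192.0.2.0/24"),   # vendor's documented update servers
}

def unexpected_flows(flow_log_path: str):
    """Yield flow records whose destination is not on the expected list."""
    with open(flow_log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            dst = ipaddress.ip_address(row["dst"])
            if not any(dst in net for net in EXPECTED_DESTINATIONS):
                yield row   # candidate phone-home or lateral movement

if __name__ == "__main__":
    for flow in unexpected_flows("quarantine_flows.csv"):
        print(f"{flow['timestamp']}  {flow['src']} -> {flow['dst']}:{flow['dst_port']}")
```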
Cloud Control Does Not Always Mean ‘Do it yourself’
App Stores: From Mobile Platforms To VMs – Ripe For Abuse
Operational Risk Comprises More Than Just Security
The Strategy Not Taken: Broken Doesn’t Mean What You Think It Means
Cloud Chemistry 101
More Users, More Access, More Clients, Less Control
Get Your Money for Nothing and Your Bots for Free
Control, choice, and cost: The Conflict in the Cloud
The Corollary to Hoff’s Law

The STAR of Cloud Security
The Cloud Security Alliance (CSA), a not-for-profit organization with a mission to promote the use of best practices for providing security assurance within cloud computing, recently announced that it is launching (in Q4 of 2011) a publicly accessible registry that will document the security controls provided by various cloud computing offerings. The idea is to encourage transparency of security practices among cloud providers and help users evaluate and determine the security of their current cloud provider or a provider they are considering. The service will be free.

CSA STAR (Security, Trust and Assurance Registry) is open to all cloud providers, whether they offer SaaS, PaaS or IaaS, and allows them to submit self-assessment reports that document compliance with the CSA’s published best practices. The CSA says that the searchable registry will allow potential cloud customers to review the security practices of providers, accelerating their due diligence and leading to higher-quality procurement experiences. There are two different types of reports that a cloud provider can submit to indicate their compliance with CSA best practices: the Consensus Assessments Initiative Questionnaire (CAIQ), a 140-question document that provides industry-accepted ways to document what security controls exist in IaaS, PaaS, and SaaS offerings, and the Cloud Control Matrix (CCM), a controls framework that gives a detailed understanding of security concepts and principles aligned to CSA guidance in areas like ISACA COBIT, PCI, and NIST. Providers who choose to take part and submit the documents are on the ‘honor system’, since this is a self-assessment, and users will need to trust that the information is accurate. CSA is encouraging providers to participate, saying that in doing so they will address some of the most urgent and important security questions buyers are asking and can dramatically speed up the purchasing process for their services. In addition to self-assessments, CSA will provide a list of providers who have integrated CAIQ, CCM and other components from CSA’s Governance, Risk Management and Compliance (GRC) stack into their compliance management tools.

This should help those who are still a bit hesitant about cloud services. The percentage of respondents citing ‘security issues’ as a deterrent to cloud deployments has steadily dropped over the last year. Around this time last year, on any given survey, anywhere from 42% to 73% of respondents said cloud technology does not provide adequate security safeguards and that security concerns had prevented their adoption of cloud computing. In a recent cloud computing study from TheInfoPro, only 13% cited security worries as a cloud roadblock, behind up-front costs at 15%. A big difference from a year ago. In this most recent survey, they found ‘fear of change’ to be the biggest hurdle to cloud adoption. Ahhhh, change. One of the things most difficult for humans. Change is constant, yet the basics are still the same: education, preparation, and anticipation of what cloud is about and what it can offer are necessities for success.
ps

References:
CSA focuses best-practice lens on cloud security
Assessing the security of cloud providers
CSA Registry Strives for Security Transparency of Providers
Cloud Security Alliance Introduces Provider Trust and Assurance Registry
Transparency Key To Cloud Security
Cloud Security Alliance launches registry: not a moment too soon
Fear of Change Impedes Cloud Adoption for Many Companies
F5 Cloud Computing Solutions

Cloud Computing: Location is important, but not the way you think
The debate this week is on location; specifically, we’re back to arguing over whether there exist such things as “private” clouds. Data Center Knowledge has a good recap of some of the opinions out there on the subject, and of course I have my own opinion. Location is, in fact, important to cloud computing, but probably not in the way most people are thinking right now. While everyone is concentrating on defining cloud computing based on whether it’s local or remote, folks have lost sight of the fact that location is important for other reasons.

It is the location of data centers that is important to cloud computing. After all, a poor choice in physical location can incur additional risk for enterprises trusting their applications to a cloud computing provider. Enterprises residing physically in high-risk areas - those prone to natural disasters, primarily - understand this and often try to mitigate that risk by building out a secondary data center in a less risky location, just in case. But it’s not only the physical and natural risk factors that need to be considered. The location of a data center can have a significant impact on the performance of applications delivered out of a cloud computing environment. If a cloud computing provider’s primary data center is in India, or Russia, for example, and most of your users are in the U.S., the performance of that application will be adversely affected by the speed-of-light problem - the one that says packets can only travel so fast, and no faster, due to the laws of physics. While there are certainly ways to ameliorate the effects of the speed-of-light problem - acceleration and optimization techniques, for example - they are not a cure-all. The recent loss of 3 of 4 undersea cables that transport most of the Internet data between continents proves that accidents are not only naturally occurring, but man-made as well, and the effects can be devastating on applications and their users.

If you’re using a cloud computing provider such as Blue Lock as a secondary or tertiary data center for disaster recovery, but their primary data center is merely a few miles from your primary data center, you aren’t gaining much protection against a natural disaster, are you? Location is, in fact, important in the choice of a cloud computing provider. You need to understand where their primary and secondary data centers are located in order to ensure that the business justification for using a cloud computing provider is actually valid. If your business case is built on reducing the CapEx and OpEx of maintaining a disaster recovery site, you should make certain that, in the event of a local disaster, the cloud computing provider’s data center is unlikely to be affected as well, or you risk wasting your investment in that disaster recovery plan. Waste of budget, whether large or small, is not looked upon favorably by those running your business.

Given that portability across cloud computing providers today is limited, despite the claims of providers, it is difficult to simply move your applications from one cloud to another quickly. So choose your provider carefully, based not only on matching your business and technological needs to the model they support but also on the physical location and distribution of their data centers. Location is important; not to the definition of cloud computing, but in its usage.
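The speed-of-light problem is easy to put numbers on: light in fiber covers roughly 200,000 km/s, and real cable routes run noticeably longer than the great-circle distance. A quick sketch with rough, illustrative figures:

```python
# Floor on round-trip time imposed by physics: light in fiber travels at
# roughly 2/3 of c, and real paths are longer than the great-circle distance.
# Distances and the route factor below are rough, illustrative assumptions.
SPEED_IN_FIBER_KM_S = 200_000
ROUTE_FACTOR = 1.4            # cables rarely follow the shortest possible path

def min_rtt_ms(distance_km: float) -> float:
    one_way_s = (distance_km * ROUTE_FACTOR) / SPEED_IN_FIBER_KM_S
    return 2 * one_way_s * 1000

for label, km in [("New York <-> London", 5_570),
                  ("San Francisco <-> Mumbai", 13_600),
                  ("Same metro area", 80)]:
    print(f"{label}: >= {min_rtt_ms(km):.1f} ms RTT before any processing")
```

No amount of optimization buys back that floor; it can only trim what sits on top of it, which is why the physical placement of a provider's data centers matters.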
Related articles by Zemanta
The Case for "Private Clouds"
Economic and Environmental advantages of Cloud Computing
Cloud Costing: Fixed Costs vs Variable Costs & CAPEX Vs OPEX
Sun feeds data center pods to credit crunched
Microsoft 2.0 feels data center pinch
3 steps to a fast, secure, and reliable application infrastructure
Cloud API Propagation and the Race to Zero (Cloud Interoperability)
Data Center Knowledge: Amazon's Cloud Computing Data Center Locations

F5 Long Distance VMotion Solution Demo
Watch how F5's WAN Optimization enables long distance VMotion migration between data centers over the WAN. This solution can be automated and orchestrated, and it preserves user sessions/active user connections, allowing seamless migration. Erick Hammersmark, Product Management Engineer, hosts this cool demonstration.

ps

Cloud vs Cloud
The Battle of the Clouds

Aloha! Welcome, ladies and gentlemen, to the face-off of the decade: The Battle of the Clouds. In this corner, the up-and-comer, the phenom that has changed the way IT works, wearing the light shorts - The Cloud! And in this corner, your reigning champ, born and bred of Mother Nature with unstoppable power, wearing the dark trunks - Storm Clouds!

You’ve either read about or lived through the massive storm that hit the Mid-Atlantic coast last week. And, by the way, if you are going through a loss, damage or worse, I do hope you can recover quickly and wish you the best. The weather took out power for millions, including a Virginia ‘cloud’ data center which hosts a number of entertainment and social media sites. Many folks looking to get through the candle-lit evenings were without their fix.

While there has been confusion and growing pains over the years as to just what ‘cloud computing’ is, this instance highlights the fact that even The Cloud is still housed in a data center, with four walls, with power pulls, air conditioning, generators and many of the features we’ve become familiar with ever since the early days of the dot-com boom (and bubble). They are physical structures, like our homes, that are susceptible to natural disasters, among other things. Data centers have outages all the time, but a single traditional data center outage might not get attention since it may only involve a couple of companies. When a ‘cloud’ data center crashes, it can impact many companies and, like last week, it grabs headlines.

Business continuity and disaster recovery are among the main concerns for organizations, since they rely on their systems’ information to run their operations. Many companies use multiple data centers for DR, and most cloud providers offer multiple cloud ‘locations’ as a service to protect against the occasional failure. But it is still a data center, and most IT professionals have come to accept that a data center will have an outage – it’s just a question of how long and what impact or risk is introduced. In addition, you need the technology in place to be able to swing users to other resources when an outage occurs. A good number of companies don’t have a disaster recovery plan, however, especially when it comes to backing up their virtual infrastructure in multiple locations. This can be understandable for smaller startups if backing up data means doubling their infrastructure (storage) costs, but it can be doubly disastrous for a large multi-national corporation.

While most of the data center services have been restored and the various organizations are sifting through the ‘what went wrong’ documents, it is an important lesson in redundancy… or the risk of the lack of it. It might be an acceptable risk and a conscious decision, since redundancy comes with a cost – dollars and complexity. A good read about this situation is Ben Coe’s My Friday Night With AWS.

The Cloud has been promoting (and has proven to some extent) its resilience, DR capabilities and ability to recover quickly, yet Storm Clouds have proven time and again that their power is unmatched… especially when you need power to turn on a data center.
ps

Resources:
Virginia Storm Knocks Out Popular Websites
Millions without power as heat wave hammers eastern US
Amazon Power Outage Exposes Risks Of Cloud Computing
My Friday Night With AWS
Modern life halted as Netflix, Pinterest, Instagram go down
Storm Blamed for Instagram, Netflix, and Foursquare Outages
(Real) Storm Crushes Amazon Cloud, Knocks out Netflix, Pinterest, Instagram