data center
Application Availability Between Hybrid Data Centers
Reliable access to mission-critical applications is a key success factor for enterprises. For many organizations, moving applications from physical data centers to the cloud can increase resource capacity and ensure availability while reducing system management and IT infrastructure costs. Achieving this hybrid data center model the right way requires healthy resource pools and the means to distribute them. The F5 Application Availability Between Hybrid Data Centers solution provides core load-balancing, DNS and acceleration services that result in non-disruptive, seamless migration between private and public cloud environments. Check out the new Reference Architecture today along with a new video below!

ps

Related:
Application Availability Between Hybrid Data Centers Reference Architecture
Hybrid Data Center Infographic
Hybrid Data Center Solution Diagram

Technorati Tags: hybrid,data center,cloud,application delivery,office 365,goldengate,reference_architecture,f5,big-ip,silva

Highly Available Hybrid
Achieving the ultimate ‘Five Nines’ of web site availability (around 5 minutes of downtime a year) has been a goal of many organizations since the beginning of the internet era. There are several ways to accomplish this, but essentially a few principles apply. Eliminate single points of failure by adding redundancy so if one component fails, the entire system still works. Have reliable crossover to the duplicate systems so they are ready when needed. And have the ability to detect failures as they occur so proper action can be taken. If the first two are in place, hopefully you never see a failure, but maintenance is a must.

BIG-IP high availability (HA) functionality, such as connection mirroring, configuration synchronization, and network failover, allows core system services to remain available for BIG-IP to manage in the event that a particular application instance becomes unavailable. Organizations can synchronize BIG-IP configurations across data centers to ensure the most up-to-date policy is being enforced throughout the entire infrastructure. In addition, BIG-IP itself can be deployed as a redundant system in either active/standby or active/active mode.

Web applications come in all shapes and sizes, from static to dynamic, from simple to complex, from specific to general. No matter the size, availability is important to support the customers and the business. The most basic high-availability architecture is the typical 3-tier design. A pair of ADCs in the DMZ terminates the connection; they in turn intelligently distribute the client request to a pool (multiple) of application servers, which then query the database servers for the appropriate content. Each tier has redundant servers so in the event of a server outage, the others take the load and the system stays available. This is a tried and true design for most operations and provides resilient application availability within a typical data center. But fault tolerance between two data centers is even more reliable than multiple servers in a single location, simply because that one data center is a single point of failure.

A hybrid data center approach allows organizations not only to distribute their applications when it makes sense but also to provide global fault tolerance to the system overall. Depending on how an organization’s disaster recovery infrastructure is designed, this can be an active site, a hot standby, some leased hosting space, a cloud provider or some other contained compute location. As soon as that server, application, or even location starts to have trouble, organizations can seamlessly maneuver around the issue and continue to deliver their applications.

Driven by applications and workloads, a hybrid data center is really a technology strategy for the entire infrastructure mix of on-premises and off-premises compute resources. IT workloads reside in conventional enterprise IT (legacy systems), an on-premises private cloud (mission-critical apps), at a third-party off-premises location (managed, hosting or cloud provider) or a combination of all three. The various combinations of hybrid data center types can be as diverse as the industries that use them. Enterprises probably already have some level of hybrid, even if it is a mix of owned space plus SaaS. Enterprises typically like to keep sensitive assets in house but have started to migrate workloads to hybrid data centers. Financial industries might have different requirements than retail.
Startups might start completely with a cloud-based service and then begin to build their own facility if needed. Mobile app developers, particularly games, often use the cloud for development and then bring it in-house once the app is released. Enterprises, on the other hand, have historically developed in house and then pushed out to a data center when ready. The variants of industries, situations and challenges the hybrid approach can address are vast. Manage services rather than boxes. (A minimal sketch of the failure-detection-and-failover idea described above appears at the end of this post.)

ps

Related
Hybrid DDoS Needs Hybrid Defense
The Conspecific Hybrid Cloud
The future of cloud is hybrid ... and seamless
Hybrid–The New Normal
Hybrid Architectures Do Not Require Private Cloud

Technorati Tags: f5,hybrid,cloud,datacenter,applications,availability,silva
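As an illustrative footnote to the high-availability principles above (eliminate single points of failure, have a ready crossover target, and detect failures as they occur), here is a minimal Python sketch of health-check-driven active/standby site selection. The endpoints are hypothetical placeholders, and the logic is a conceptual sketch, not an F5 or BIG-IP API:

```python
import urllib.request
import urllib.error

# Hypothetical health-check endpoints for a primary data center and a standby site;
# a real deployment would rely on the HA and health-monitoring services described above.
SITES = [
    ("primary-dc", "https://app.dc1.example.com/health"),
    ("standby-dc", "https://app.dc2.example.com/health"),
]

def is_healthy(url, timeout=2.0):
    """A site counts as healthy if its health endpoint answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def pick_active_site():
    """Return the first healthy site in priority order (active/standby semantics)."""
    for name, url in SITES:
        if is_healthy(url):
            return name
    return None  # no healthy site: time to page someone

if __name__ == "__main__":
    active = pick_active_site()
    print(f"directing traffic to: {active or 'NO HEALTHY SITE'}")
```

The point of the sketch is the principle, not the code: a crossover target is only useful if something is continuously checking health and able to redirect traffic the moment the primary misbehaves.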
The Cloud is Still a Datacenter Somewhere

Application delivery is always evolving. Initially, applications were delivered out of a physical data center: either dedicated raised floor at the corporate headquarters, some leased space rented from one of the web hosting vendors of the late 1990s to early 2000s, or some combination of both. Soon global organizations and ecommerce sites alike started to distribute their applications and deploy them at multiple physical data centers to address geo-location, redundancy and disaster recovery challenges. This was an expensive endeavor back then, even without adding the networking, bandwidth and leased line costs.

When server virtualization emerged and organizations had the ability to divide resources for different applications, content delivery was no longer tethered 1:1 with a physical device. It could live anywhere. With virtualization technology as the driving force, the cloud computing industry was formed and offered yet another avenue to deliver applications. Application delivery evolved again. As cloud adoption grew, along with the Software, Platform and Infrastructure services (SaaS, PaaS, IaaS) enabling it, organizations were able to quickly, easily and cost-effectively distribute their resources around the globe. This allows organizations to place content closer to the user depending on location, and provides some fault tolerance in case of a data center outage.

Today, there is a mixture of options available to deliver critical applications. Many organizations have on-premises private, owned data center facilities, some leased resources at a dedicated location and maybe even some cloud services. In order to achieve or even maintain continuous application availability and keep up with the pace of new application rollouts, many organizations are looking to expand their data center options, including cloud, to ensure application availability. This is important since, according to IDC, 84% of data centers had issues with power, space and cooling capacity, assets, and uptime that negatively impacted business operations. This leads to delays in application rollouts, disrupted customer service or even unplanned expenses to remedy the situation.

Operating in multiple data centers is no easy task, however, and new data center deployments or even integrating existing data centers can cause havoc for visitors, employees and IT staff alike. Critical areas of attention include public web properties, employee access to corporate resources and communication tools like email, along with the security and required back-end data replication for content consistency. On top of that, maintaining control over critical systems spread around the globe is always a major concern.

A combination of BIG-IP technologies provides organizations the global application services for DNS, federated identity, security, SSL offload, optimization and application health/availability to create an intelligent, cost-effective, resilient global application delivery infrastructure across a hybrid mix of data centers. Organizations can minimize downtime, ensure continuous availability and have on-demand scalability when needed. Simplify, secure and consolidate across multiple data centers while mitigating impact to users or applications.
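To make the multi-data-center idea above a bit more concrete, here is a small Python sketch of GSLB-style site selection: answer each request with a healthy data center, weighted by capacity. The site names, addresses and weights are invented for illustration (documentation IP ranges), and a real deployment would rely on the DNS and global availability services described above rather than application code like this:

```python
import random

# Hypothetical data centers with static capacity weights and health flags that a
# monitor would keep updated; the chosen site's address is what a DNS answer
# would hand back to the client.
DATACENTERS = {
    "us-east":  {"address": "203.0.113.10", "weight": 3, "healthy": True},
    "eu-west":  {"address": "198.51.100.7", "weight": 2, "healthy": True},
    "cloud-az": {"address": "192.0.2.21",   "weight": 1, "healthy": False},
}

def resolve(name_record="app.example.com"):
    """Pick a healthy data center, weighted by capacity, to answer a query for name_record."""
    healthy = {name: dc for name, dc in DATACENTERS.items() if dc["healthy"]}
    if not healthy:
        raise RuntimeError(f"no healthy site available for {name_record}")
    names = list(healthy)
    weights = [healthy[name]["weight"] for name in names]
    chosen = random.choices(names, weights=weights, k=1)[0]
    return chosen, healthy[chosen]["address"]

if __name__ == "__main__":
    site, addr = resolve()
    print(f"answering with {site} -> {addr}")
```

Unhealthy sites simply drop out of the candidate set, which is the behavior that lets a multi-data-center deployment ride through a single-site outage without user-visible disruption.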
ps

Related:
Datacenter Transformation and Cloud
The Event-Driven Data Center
Hybrid Architectures Do Not Require Private Cloud
The Dynamic Data Center: Cloud's Overlooked Little Brother
Decade old Data Centers

Technorati Tags: datacenter,f5,big-ip,hybrid,cloud,private,multi,applications,silva

F5 Friday: Secure Data in Flight
#bigdata #infosec Fat apps combined with SSL Everywhere strategies suggest a need for more powerful processing in the application delivery tier.

According to Netcraft, who tracks these kinds of things, SSL usage doubled between 2008 and 2011. That's a good thing, as it indicates an upswing in adherence to security best practices that say "SSL Everywhere" just makes good sense. The downside is overhead, which, despite improvements in processing power and support for specific cryptographic processing in hardware, still exists. How much overhead depends largely on the size of the data and the specific cryptographic algorithms chosen.

SSL is one of those protocols that has different overhead and performance impacts based on the size of the data. With data less than 32KB, overhead is primarily incurred during session negotiation. After 32KB, bulk encryption becomes the issue. The problem is that a server is likely going to feel both, because it has to negotiate the session and the average response size for web applications today is well above the 32KB threshold, with most pages serving up 41KB in HTML alone – and that's not counting scripts, images, and other objects.

It turns out that about 70% of the total processing time of an HTTPS transaction is spent in SSL processing. As a result, a more detailed understanding of the key overheads within SSL processing was required. By presenting a detailed description of the anatomy of SSL processing, we showed that the major overhead incurred during SSL processing lies in the session negotiation phase when small amount of data are transferred (as in banking transactions). On the other hand, when the data exchanged in the session crosses over 32K bytes, the bulk data encryption phase becomes important.
-- Anatomy and Performance of SSL Processing [pdf]

An often overlooked benefit of improvements in processing power is that just as they improve SSL processing on servers, they also boost SSL processing on intermediate devices such as application delivery controllers. On such devices, where complete control over the network and operating system stacks is possible, even greater performance benefits are derived from advances in processing power. Those benefits are also seen in other processing on such devices, such as compression and intelligent traffic management. Another benefit of more processing power and improvements in core bus architectures is the ability to do more with less, which enables consolidation of application delivery services onto a shared infrastructure platform like BIG-IP. From traffic management to acceleration, from network to application firewall services, from DNS to secure remote access – hardware improvements from the processor to the NIC to the switching backplane offer increased performance as well as increased utilization across multiple functions, which in and of itself improves performance by eliminating multiple hops in the application delivery chain. Each hop removed improves performance because the latency associated with managing flows and connections is eliminated.

Introducing BIG-IP 4200v

The BIG-IP 4200v hardware platform takes advantage of this, and the result is better performance with a lower power footprint (80+ Gold Certified power supplies) that improves security across all managed applications.
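As an aside, the session-negotiation-versus-bulk-encryption split described above can be observed with a rough measurement. The following is an illustrative Python sketch using only the standard library; the host is a placeholder, the numbers it prints are not a benchmark of any product, and it simply times a full TLS handshake separately from reading an encrypted response body:

```python
import socket
import ssl
import time

HOST = "www.example.com"   # placeholder host; point this at your own service
PORT = 443

def timed_handshake_and_read(max_bytes=64 * 1024):
    """Roughly separate TLS session negotiation time from encrypted bulk read time."""
    ctx = ssl.create_default_context()

    t0 = time.perf_counter()
    raw = socket.create_connection((HOST, PORT), timeout=10)
    tls = ctx.wrap_socket(raw, server_hostname=HOST)   # full handshake happens here
    handshake = time.perf_counter() - t0

    request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    tls.sendall(request.encode("ascii"))

    t1 = time.perf_counter()
    received = 0
    while received < max_bytes:
        chunk = tls.recv(16384)
        if not chunk:
            break
        received += len(chunk)
    bulk = time.perf_counter() - t1
    tls.close()
    return handshake, bulk, received

if __name__ == "__main__":
    hs, bulk, nbytes = timed_handshake_and_read()
    print(f"handshake: {hs * 1000:.1f} ms, reading {nbytes} bytes: {bulk * 1000:.1f} ms")
```

Against a small page the handshake dominates; against responses well past the 32KB threshold the bulk read starts to matter, which is the pattern the excerpt above describes.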
Consolidation further reduces power consumption by eliminating redundant services and establishes a strategic point of control through which multiple initiatives can be realized, including unified secure remote access, an enhanced security posture, and increased server utilization by leveraging offload services at the application delivery tier. A single, unified application delivery platform offers many benefits, not the least of which is visibility into all operational components: security, performance, and availability. BIG-IP 4200v supports provisioning of BIG-IP Analytics (AVR) in conjunction with other BIG-IP service modules, enabling breadth and depth of traffic management analytics across all shared services. This latest hardware platform provides mid-size enterprises and service providers with the performance and capacity required to implement more comprehensive application delivery services that address operational risk.

BIG-IP Hardware – Product Information Page
BIG-IP Hardware – Datasheet
Hardware Acceleration Critical Component for Cost-Conscious Data Centers
When Did Specialized Hardware Become a Dirty Word?
Data Center Feng Shui: SSL
Why should I care about the hardware?
F5 Friday: What's Inside an F5?
F5 Friday: Have You Ever Played WoW without a Good Graphics Card?

Cloud isn't Social, it's Business.
Adopting a cloud-oriented business model for IT is imperative to successfully transforming the data center to realize ITaaS. Much like devops is more about a culture shift than the technology enabling it, cloud is as much or more about shifts in business models as it is about technology. Even as service providers (and that includes cloud providers) need to look toward a business model based on revenue per application (as opposed to revenue per user), enterprise organizations need to look hard at their own business model as they begin to move toward a more cloud-oriented deployment model.

While many IT organizations have long since adopted a "service oriented" approach, this approach has focused on the customer, i.e. a department, a business unit, a project. This approach is not wholly compatible with a cloud-based approach, as the "tenant" of most enterprise (private) cloud implementations is an application, not a business entity. As a "provider of services", IT should consider adopting a more service provider business model view, with subscribers mapping to applications and services mapping to infrastructure services such as rate shaping, caching, access control, and optimization. By segmenting IT into services, IT can not only more effectively transition toward the goal of ITaaS, but realize additional benefits for both business and operations.

A service subscription business model:

Makes it easier to project costs across the entire infrastructure
Because functionality is provisioned as services, it can more easily be charged for on a pay-per-use model. Business stakeholders can clearly estimate the costs based on usage for not just application infrastructure but network infrastructure as well, providing management and executives with a clearer view of what actual operating costs are for given projects, and enabling them to essentially line-item veto services based on the projected value added to the business by the project.

Easier to justify cost of infrastructure
Having a detailed set of usage metrics over time makes it easier to justify investment in upgrades or new infrastructure, as it clearly shows how cost is shared across operations and the business. Being able to project usage by application means being able to tie services to projects in earlier phases and clearly show the value added to management. Such metrics also make it easier to calculate the cost per transaction (the overhead, which ultimately reduces profit margins) so that the business can understand what's working and what's not. (A rough sketch of this pay-per-use arithmetic appears at the end of this post.)

Enables business to manage costs over time
Instituting a "fee per hour" gives business customers greater flexibility in costing, as some applications may only use services during business hours and only require them to be active during that time. IT that adopts such a business model will not only encourage business stakeholders to take advantage of such functionality, but will offer more awareness of the costs associated with infrastructure services and enable stakeholders to be more critical of what's really needed versus what's not.

Easier to start up a project/application and ramp up over time as associated revenue increases
Projects assigned limited budgets that project revenue gains over time can ramp up services that enhance performance or delivery options as revenue increases, more in line with how green-field start-up projects manage growth.
If IT operations is service-based, then projects can rely on IT for service deployment in an agile fashion, adding new services rapidly to keep up with demand or, if predictions fail to come to fruition, removing services to keep the project in line with budgets.

Enables consistent comparison with off-premise cloud computing
A service-subscription model also provides a more compatible business model for migrating workloads to off-premise cloud environments – and vice versa. By tying applications to services – not solutions – the end result is a better view of the financial costs (or savings) of migrating outward or inward, as costs can be more accurately determined based on the services required.

The concept remains the same as it did in 2009: infrastructure as a service gives business and application stakeholders the ability to provision and eliminate services rapidly in response to budgetary constraints as well as demand. That's cloud, in a nutshell, from a technological point of view. While IT has grasped the advantages of such technology and its promised benefits in terms of efficiency, it hasn't necessarily taken the next step and realized the business model has a great deal to offer IT as well. One of the more common complaints about IT is its inability to prove its value to the business. Taking a service-oriented approach to the business and tying those services to applications allows IT to prove its value and costs very clearly through usage metrics. Whether actual charges are incurred or not is not necessarily the point; it's the ability to clearly associate specific costs with delivering specific applications that makes the model a boon for IT.

Curing the Cloud Performance Arrhythmia
The Cloud Integration Stack
Devops is Not All About Automation
1024 Words: The Devops Butterfly Effect
Cloud Delivery Model is about Ops, not Apps
Cloud Bursting: Gateway Drug for Hybrid Cloud
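To illustrate the pay-per-use arithmetic referenced in the list above, here is a small Python sketch of a monthly chargeback report built from per-service hourly rates and metered service-hours. All rates, applications and usage figures are hypothetical, and this is an illustration of the model, not a billing implementation:

```python
# Hypothetical hourly rates per infrastructure service and metered service-hours per
# application for one month; real figures would come from the usage metrics discussed above.
RATES = {                      # dollars per service-hour
    "load_balancing": 0.12,
    "caching":        0.05,
    "access_control": 0.08,
    "optimization":   0.10,
}

USAGE = {                      # service-hours consumed this month, keyed by application
    "storefront": {"load_balancing": 720, "caching": 720, "access_control": 720},
    "reporting":  {"load_balancing": 200, "optimization": 200},
}

def monthly_bill(app, transactions=None):
    """Total the service charges for one application; optionally compute cost per transaction."""
    charges = {svc: hours * RATES[svc] for svc, hours in USAGE[app].items()}
    total = sum(charges.values())
    per_txn = total / transactions if transactions else None
    return charges, total, per_txn

if __name__ == "__main__":
    for app, txns in (("storefront", 1_500_000), ("reporting", None)):
        charges, total, per_txn = monthly_bill(app, txns)
        detail = ", ".join(f"{svc}=${cost:.2f}" for svc, cost in charges.items())
        line = f"{app}: ${total:,.2f}/month ({detail})"
        if per_txn is not None:
            line += f", ~${per_txn:.5f} per transaction"
        print(line)
```

Whether or not real charges are ever levied, this is the kind of per-application, per-service view that lets IT show its value and lets the business line-item veto services that don't earn their keep.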
The Venerable Vulnerable Cloud

Ever since cloud computing burst onto the technology scene a few short years ago, security has always been a top concern. It was cited as the biggest hurdle in many surveys over the years, and in 2010 I covered a lot of those in my CloudFucius blog series. A recent InformationWeek 2012 Cloud Security and Risk Survey says that 27% of respondents have no plans to use public cloud services, and 48% of those respondents say their primary reason for not doing so is related to security – fears of leaks of customer and proprietary data. Certainly, a lot has been done to bolster cloud security and reduce the perceived risks associated with cloud deployments, and even with security concerns, organizations are moving to the cloud for business reasons.

A new survey from Everest Group and Cloud Connect finds cloud adoption is widespread. The majority of the 346 executive respondents, 57%, say they are already using Software as a Service (SaaS) applications, with another 38% adopting Platform as a Service (PaaS) solutions. The most common applications already in the cloud or in the process of being migrated to the cloud include application development/test environments (54%), disaster recovery and storage (45%), email/collaboration (41%), and business intelligence/analytics (35%). The survey also found that the two top benefits cloud buyers anticipate the most are more flexible infrastructure capacity and reduced time for provisioning, and 61% say they are already meeting their goals for achieving more flexibility in their infrastructures.

There's an interesting article by Dino Londis on InformationWeek.com called How Consumerization is Lowering Security Standards, where he talks about how mob rule, or a democratization of technology where employees can pick the best products and services from the market, is potentially downgrading security in favor of convenience. We all may forgo privacy and security in the name of convenience – just look at loyalty rewards cards. You'd never give up so much personal info to a stranger, yet when a store offers a 5% discount and targeted coupons, we just might spill our info. He also includes a list of some of the larger cloud breaches so far in 2012.

Also this week, the Cloud Security Alliance (CSA) announced more details of its Open Certification Framework and its partnership with BSI (British Standards Institution). The BSI partnership ensures the Open Certification Framework is in line with international standards. The CSA Open Certification Framework is an industry push that offers cloud providers a trusted global certification scheme. This flexible three-stage scheme will be created in line with the CSA's security guidance and control objectives. The Open Certification Framework is composed of three levels, each one providing an incremental level of trust and transparency to the operations of cloud service providers and a higher level of assurance to the cloud consumer. Additional details can be found at: http://cloudsecurityalliance.org/research/ocf/

The levels are:

CSA STAR Self Assessment: The first level of certification allows cloud providers to submit reports to the CSA STAR Registry to indicate their compliance with CSA best practices. This is available now.

CSA STAR Certification: At the second level, cloud providers require a third-party independent assessment. The certification leverages the requirements of the ISO/IEC 27001:2005 management systems standard together with the CSA Cloud Controls Matrix (CCM).
These assessments will be conducted by approved certification bodies only. This will be available sometime in the first half of 2013. The STAR Certification will be enhanced in the future by a continuous monitoring-based certification; that level is still in development.

Clearly the cloud has come a long way since we were all trying to define it a couple of years ago, yet just as clearly, there is still much to be accomplished. It is imperative that organizations take the time to understand their provider's security controls and make sure that the provider protects their data as well as, or better than, they do. Also, stop by Booth 1101 at VMworld next week to learn how F5 can help with cloud deployments.

ps

Cloud vs Cloud
The Battle of the Clouds

Aloha! Welcome, ladies and gentlemen, to the face-off of the decade: The Battle of the Clouds. In this corner, the up-and-comer, the phenom that has changed the way IT works, wearing the light shorts – The Cloud! And in this corner, your reigning champ, born and bred of Mother Nature with unstoppable power, wearing the dark trunks – Storm Clouds!

You've either read about or lived through the massive storm that hit the Mid-Atlantic coast last week. And, by the way, if you are going through a loss, damage or worse, I do hope you can recover quickly and wish you the best. The weather took out power for millions, including a Virginia 'cloud' data center which hosts a number of entertainment and social media sites. Many folks looking to get through the candle-lit evenings were without their fix.

While there has been confusion and growing pains over the years as to just what 'cloud computing' is, this instance highlights the fact that even The Cloud is still housed in a data center, with four walls, with power pulls, air conditioning, generators and many of the features we've become familiar with ever since the early days of the dot com boom (and bubble). They are physical structures, like our homes, that are susceptible to natural disasters among other things. Data centers have outages all the time, but a single traditional data center outage might not get attention since it may only involve a couple of companies – when a 'cloud' data center crashes, it can impact many companies, and like last week, it grabbed headlines.

Business continuity and disaster recovery are among the main concerns for organizations since they rely on their systems' information to run their operations. Many companies use multiple data centers for DR, and most cloud providers offer multiple cloud 'locations' as a service to protect against the occasional failure. But it is still a data center, and most IT professionals have come to accept that a data center will have an outage – it's just a question of how long and what impact or risk is introduced. In addition, you need the technology in place to be able to swing users to other resources when an outage occurs.

A good number of companies don't have a disaster recovery plan, however, especially when backing up their virtual infrastructure in multiple locations. This can be understandable for smaller start-ups if backing up data means doubling their infrastructure (storage) costs, but can be doubly disastrous for a large multi-national corporation. While most of the data center services have been restored and the various organizations are sifting through the 'what went wrong' documents, it is an important lesson in redundancy… or the risk of the lack of it. It might be an acceptable risk and a conscious decision, since redundancy comes with a cost – dollars and complexity. A good read about this situation is Ben Coe's My Friday Night With AWS.

The Cloud has been promoting (and proven to some extent) its resilience, DR capabilities and its ability to technologically recover quickly, yet Storm Clouds have proven time and again that their power is unmatched… especially when you need power to turn on a data center.
ps

Resources
Virginia Storm Knocks Out Popular Websites
Millions without power as heat wave hammers eastern US
Amazon Power Outage Exposes Risks Of Cloud Computing
My Friday Night With AWS
Modern life halted as Netflix, Pinterest, Instagram go down
Storm Blamed for Instagram, Netflix, and Foursquare Outages
(Real) Storm Crushes Amazon Cloud, Knocks out Netflix, Pinterest, Instagram

IPExpo London Presentations
A few months back I attended and spoke at IPExpo 2011 at Earl's Court Two in London. I gave three presentations, which were recorded, and two of them are available online from the IPExpo website. I haven't figured out a way to download or embed the videos but did want to send the video links. The slides for each are also available. Sign-up (free) may be required to view the content, but it's pretty good, if I do say so myself.

A Cloud To Call Your Own – I was late for this one due to some time confusion, but I run in, get mic'd and pull it all together. I run through various areas of focus/concern/challenges of deploying applications in the cloud – many of them no different than a typical application in a typical data center. The Encryption Dance gets its first international performance and the UK crowd wasn't quite sure what to do. It is the home of Monty Python, isn't it?

Catching up to the Cloud: Roadmap to the Dynamic Services Model – This was fun since it was later in the afternoon and there were only a few folks in the audience. I talk about the need to enable enterprises to add, remove, grow and shrink services on demand, regardless of location.

ps

Related:
F5 EMEA
London IPEXPO 2011
London IPEXPO 2011 - The Wrap Up
F5 EMEA Video
F5 Youtube Channel
F5 UK Web Site

Technorati Tags: F5, ipexpo, integration, Pete Silva, security, business, emea, technology, trade show, big-ip, video, education

2011 Telly Award Winner - The F5 Dynamic Data Center
Founded in 1978 to honor excellence in local, regional and cable TV commercials along with non-broadcast video and TV programs, The Telly Awards is the premier award honoring the finest film and video productions, groundbreaking web commercials, videos and films, and outstanding local, regional, and cable TV commercials and programs. Produced in conjunction with Connect Marketing, F5's video, The Dynamic Data Center, is a Silver Winner in the 32nd Annual Telly Awards, and we are proud to share it. This video sets the stage for IT having to manage multiple networking challenges when faced with a natural disaster that shuts down their data center. With careful planning, the evolution of the network and application delivery allows a single point of control to automate, provision and secure their virtual and cloud environments.

The F5 Dynamic Data Center (video)

ps

Resources:
32nd Annual Telly Awards - 2011 Silver Winners
Telly Awards
F5 Security Vignette: Proactive Security
F5 Security Vignette: DNSSEC Wrapping
F5 Security Vignette: Hacktivism Attack
F5 Security Vignette: SSL Renegotiation
F5 Security Vignette: Credit Card iRule
F5 Security Vignette: Apache HTTP RANGE Vulnerability
F5 Security Vignette: iHealth
Security is our Job
F5 YouTube Feed

Technorati Tags: F5, F5 News, dynamic data center, security, performance, availability, video, Telly Award, youtube

Challenging the Firewall Data Center Dogma
Do you really need a firewall to secure web and application services? Some organizations would say no, based on their experiences, while others are sure to quail at the very thought of such an unnatural suggestion.

Firewalls are, in most organizations, the first line of defense for web and application services. This is true whether those services are offered to the public or only to off-site employees via secure remote access. The firewall is, and has been, the primary foundation around which most network security architectures are built. We've spent years designing highly available, redundant architectures that include the firewall. We've deployed them not only at "the edge" but have moved them further and further into the data center, in architectures that have commonly become known as "firewall sandwiches". The reasons for this are simple – we want to protect the services that are critical to the business, and the primary means by which we accomplish that task is by controlling access to them via often simple but powerful access control. In later years we've come to rely upon additional intrusion detection systems such as IPS (Intrusion Prevention Systems) that are focused on sniffing out (sometimes literally) malicious attacks and attempts to circumvent security policies, and stopping them. One of the core attacks against which such solutions protect services is a denial of service. Unfortunately, it is increasingly the reality that the firewall is able neither to detect nor withstand such attacks, and ultimately such devices fail – often at a critical moment. The question then is what to do about it. The answer may be to simply remove the firewall from the critical data path for web services.

THAT'S UNNATURAL!

Just about anything is unnatural the first time you try it, but that doesn't mean it isn't going to work or that it's necessarily wrong. One of my favorite fantasy series – David Eddings' Belgariad – illustrates this concept quite nicely. A couple of armies need to move their ships up an escarpment to cross a particular piece of land to get where they need to be. Now usually fording – historically – involves manhandling ships across land. This is hard and takes a lot of time. No one looked forward to this process. In the story, someone is wise enough to put these extremely large ships on wheels and then leverage the power of entire herds of horses to move them over the land, thus improving the performance of the process and saving a whole lot of resources. One of the kings is not all that sure he likes violating a precept that has always been akin to dogma – you ford ships by hand.

King Rhodar put on a perfectly straight face. "I'll be the first to admit that it's probably not nearly as good as moving them by hand, Anheg. I'm sure there are some rather profound philosophical reasons for all that sweating and grunting and cursing, but it is faster, wouldn't you say? And we really ought to move right along with this."
"It's unnatural," Anheg growled, still glaring at the two ships, which were already several hundred yards away.
Rhodar shrugged. "Anything's unnatural the first time you try it."
-- "Enchanter's End Game", David Eddings (p 147)

Needless to say, King Anheg eventually gave in and allowed his ships to be moved in this new, unnatural way, finding it to be more efficient and faster; ultimately it kept his men from rebelling against him for making them work so hard. This same lesson can be applied to removing the firewall from the critical inbound data path of services.
Sure, it sounds unnatural, and perhaps it is if it's the first time you're trying it, but necessity is the mother of invention and seems to also help overcome the feeling that something shouldn't be done because it hasn't been done before. If you need convincing as to why you might consider such a tactic, consider a recent survey conducted by Arbor Networks showing an increasing failure rate of firewalls and IPS solutions due to attacks.

"Eighty-six percent of respondents indicated that they or their customers have placed stateful firewall and/or IPS devices in their IDCs. Nearly half of all respondents—a solid majority of those who actually have deployed these devices within their IDCs—experienced stateful firewall and/or IPS failure as a direct result of DDoS attacks during the survey period. Only 14 percent indicated that they follow the IDC BCP of enforcing access policy via stateless ACLs deployed on hardware-based routers/Layer 3 switches capable of handling millions of packets per second." [emphasis added]
-- Network Infrastructure Security Report Volume VI, Arbor Networks, Feb 1 2011

That is a lot of failures, especially given that firewalls are a critical data center component and are almost certainly in the path of a business-critical web or application service. But it's dogma; you simply must have a firewall in front of these services. Or do you?

BASIC FIREWALLING ISN'T ENOUGH

The reality is that you need firewall functionality – services – but you also need a lot more. You need to control access to services at the network layers, but you also need to mitigate access and attacks occurring at the application layers. That means packet-based firewalls – even with their "deep packet inspection" capabilities – are not necessarily up to the task of protecting the services they're supposed to be protecting. The Anonymous attacks taught us that attacks are now not only distributed from a client perspective, they're also distributed from a service perspective, attacking not only the network but the application layers. That means every device between clients and servers must be capable of handling not only the increase in traffic but somehow detecting and preventing those attacks from successfully achieving their goal: denial of service. During the Anonymous attacks, discussions regarding what to do about traffic overwhelming firewalls resulted in what might be considered an "unnatural" solution: removal of the firewall. That's because the firewall was actually part of the problem, not the solution, and removing it from the inbound data path resulted in a more streamlined (and efficient) route that enabled continuous availability of services despite ongoing attacks – without compromising security. Yes, you heard that right. Some organizations are running sans firewall and finding that for inbound web services, at least, the streamlined path maintains a positive security posture while ensuring availability and performance. That doesn't mean they are operating without those security services in place; it just means they've found that other components in the inbound data path are capable of providing those basic firewalling services without negatively impacting availability.
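For readers unfamiliar with the "stateless ACLs" best practice cited in the Arbor excerpt above, the following Python sketch illustrates the idea: each packet is judged against an ordered rule list on its own, with no session table to exhaust. The networks, ports and rules are invented for illustration (documentation address ranges), and this is not router or BIG-IP configuration syntax:

```python
import ipaddress

# Illustrative stateless ACL in the spirit of the Layer 3 best practice quoted above:
# every packet is evaluated against the rule list on its own, with no connection state kept.
ACL = [
    # (action, source network, destination network, protocol, destination port or None)
    ("permit", "0.0.0.0/0", "203.0.113.0/24", "tcp", 443),
    ("permit", "0.0.0.0/0", "203.0.113.0/24", "tcp", 80),
    ("deny",   "0.0.0.0/0", "0.0.0.0/0",      "any", None),  # implicit deny, made explicit
]

def evaluate(src_ip, dst_ip, proto, dst_port):
    """First matching rule wins; there is no connection table to fill up and fall over."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    for action, src_net, dst_net, rule_proto, rule_port in ACL:
        if (src in ipaddress.ip_network(src_net)
                and dst in ipaddress.ip_network(dst_net)
                and rule_proto in ("any", proto)
                and rule_port in (None, dst_port)):
            return action
    return "deny"

if __name__ == "__main__":
    print(evaluate("198.51.100.9", "203.0.113.10", "tcp", 443))  # permit
    print(evaluate("198.51.100.9", "203.0.113.10", "udp", 53))   # deny
```

The contrast with a stateful device is the whole point: there is no per-flow state here for a flood of bogus connections to exhaust, which is why the BCP favors this kind of filtering in front of Internet data center services.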
ATTACKS AREN'T THE ONLY PROBLEM

It isn't just attacks that are going to pose problems in the near future for firewalls and IPS components. The increase in attacks and attack surfaces is alarming, yes, but it's that combined with an increase in traffic in general that's pushing load on all data center components off the charts. Cisco recently shared the results of its latest Visual Networking Index Forecast:

"By 2015, Cisco says that mobile data traffic will grow to 6.3 exabytes of data or about 1 billion gigabytes of data per month. The report indicates that two-thirds of the mobile data traffic on carrier networks in 2015 will come from video services. This trend follows a similar trend in traditional broadband traffic growth."
Read more: http://news.cnet.com/8301-30686_3-20030291-266.html#ixzz1CtYWZPAk

Cisco's report is obviously focused on service providers, as they will bear the brunt of the increase in traffic (and in many cases they bear the majority of the impact from denial of service attacks), but that traffic is going somewhere, and somewhere is often your data center, accessing your services, increasing load on your data center infrastructure.

Load testing of an active architecture is, to be sure, important. It's the only way to really determine what the real capacity of your data center will be and how it will respond under heavy load – and that includes the additional strain resulting from an attack. Cloud-based load testing services are available and can certainly be of assistance in performing such testing on live infrastructure. And yes, it has to be live or it won't find all the cracks and fissures in your architecture. It isn't your lab environment, after all, that's going to be under attack or stressed out by sudden surges in traffic. Perhaps no problems exist, but you really don't want to find out otherwise when the pressure's on and you have to make the decision in the heat of the moment. Try testing with your firewall, and without (assuming you have solutions capable of providing the security services required in the inbound data path). See if there is an impact (positive or negative), and then you'll be better able to make a decision in the event it becomes necessary. (A rough sketch of such a comparison test appears after the related links at the end of this post.)

Putting firewalls in front of your Internet services has been dogma for a long, long time. But are they up to the task? It would appear that in many cases they aren't. When a solid majority of folks have found their sites down due to firewall failure, we may need to rethink the role of a firewall in securing services. That doesn't mean we'll come to a different conclusion, especially as only part of the architectural decisions made regarding data center security are dependent on technological considerations; other factors such as the business's risk tolerance are often the driving factor and play a much larger role in such decisions, whether IT likes it or not. But it does mean that we should occasionally re-evaluate our data center strategies and consider whether traditional architectural dogma is still appropriate in today's environment – especially when that architectural dogma may be part of the problem.

Infrastructure Matters: Challenges of Cloud-based Testing
The Strategy Not Taken: Broken Doesn't Mean What You Think It Means
Cloud Testing: The Next Generation [Network World]
Network Infrastructure Security Report Volume VI [arbornetworks.com]
Load Testing as a Service: A Look at Load Impact (beta)
Cloud Testing: The Next Generation
To Boldly Go Where No Production Application Has Gone Before
It's 2am: Do You Know What Algorithm Your Load Balancer is Using?
Data Center Feng Shui: Process Equally Important as Preparation
Don't Conflate Virtual with Dynamic
Data Center Feng Shui
What We Learned from Anonymous: DDoS is now 3DoS
The Many Faces of DDoS: Variations on a Theme or Two
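As referenced in the load-testing discussion above, one rough way to compare the inbound path with and without the firewall is to drive identical concurrent request loads at each and compare latency and failure counts. The sketch below is a minimal illustration using Python's standard library against a placeholder URL; it is an assumption-laden stand-in for, not a substitute for, a proper cloud-based load testing service:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Placeholder target: run the same script once against the path that includes the
# firewall and once against the streamlined path, then compare the numbers.
URL = "https://app.example.com/"   # hypothetical; substitute your own test endpoint
REQUESTS = 200
CONCURRENCY = 20

def one_request(_):
    """Issue a single GET and record whether it succeeded and how long it took."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        ok = True
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(one_request, range(REQUESTS)))

    latencies = sorted(t for ok, t in results if ok)
    failures = sum(1 for ok, _ in results if not ok)
    if latencies:
        median = latencies[len(latencies) // 2]
        p95 = latencies[max(0, int(len(latencies) * 0.95) - 1)]
        print(f"{len(latencies)} ok, {failures} failed, "
              f"median {median * 1000:.0f} ms, p95 {p95 * 1000:.0f} ms")
    else:
        print(f"all {failures} requests failed")
```

Run it once against each path and compare medians and p95s; a real test should also ramp concurrency over time and watch for connection-table exhaustion on any stateful devices left in the path, since that is precisely the failure mode the survey data above describes.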