Strategic Points of Control
He Who Defends Everything Defends Nothing… Right?
Much has been made in information technology of the military maxim "He who defends everything defends nothing," originally uttered by Frederick the Great of Prussia. (He has some other great quotes; check them out when you have a moment.) The thing is, he was absolutely correct in a military or political context. You cannot defend every inch of ground, or even the extent of a very long front, with a limited supply of troops. Nor can you refuse to negotiate on all points in the political arena. The nature of modern representative government is such that the important things must be defended and the less important offered up in trade for the other things you want or need. In both situations, demanding that everything be saved results in nothing being saved: militarily because you will be defeated piecemeal with your troops spread out, and politically because your opponent has no reason to negotiate with you if you are not willing to give on any issue at all.

But in high tech, things are a little more complex. That phrase is most often uttered in reference to defense against hacking attempts, and on the surface it seems to fit well. On examination, though, it does not suit the high-tech scenario at all. Defense in depth is still important in datacenter defense, just in case someone penetrates your outer defenses, but we all know that there are one or two key choke points that allow you to stop intruders who do not have inside help: your Internet connections. If those are adequately protected, the chances of your network being infiltrated, your website taken down, or any of a million other ugly outcomes are much smaller.

The problem, in the 21st century, is the definition of "adequate". Recent attacks have taken down firewalls previously assumed to be adequate, and the last several years have seen a couple of spectacular DNS vulnerabilities targeting a primary function that had previously seen little attention from attackers or security folks. In short, the entire face you present to the world is susceptible to attack, and at the application layer, attacks can slip through your outer defenses pretty easily.

That's why the future network defensive point for the datacenter will be a full proxy at the strategic point of control where your network connects to the Internet. Keeping attacks from dropping your network requires a high-speed connection in front of all available resources. The WikiLeaks attacks took out a few "more than adequate" firewalls, while the DNS vulnerabilities attacked DNS through its own protocol. A device in the strategic point of control between the Internet and your valuable resources needs to be able to handle high-volume attacks and be resilient enough to respond to new threats, be they at the protocol or application layer. It needs to be intelligent enough to compare user and device against known access allowances and quarantine the user appropriately if things appear fishy. It also needs to be flexible enough to adapt to new attacks before they overwhelm the network. Zero-day attacks, by definition, almost never have canned fixes available, so waiting for your provider to plug the hole is a delay you might not be able to afford. That requires the ability to work in your own fixes, and an environment that encourages the sharing of fixes – like DevCentral or a similar site – so that you can quickly solve the problem, either by identifying it and creating a fix yourself or by downloading someone else's fix and installing it.
While an "official" solution might follow, and eventually the app will get patched, you are protected in the interim. You can defend everything by placing the correct tool at the correct location. You can manage who has access to what, from which devices, when, and how they authenticate – all while protecting against the DoS attacks that cripple some infrastructures.

That's the direction IT needs to head. We spend far too many resources and far too much brainpower on defending rather than enabling. Time to get off the merry-go-round, or at least slow it down enough that you can return your focus to enabling the business and worry less about security. Don't expect security concerns to ever go away, though, because we can – and by the nature of the threats must – defend everything.
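To make the idea of working in your own fixes concrete, here is a minimal Python sketch of a pluggable filter chain – purely illustrative, not F5 code (on a BIG-IP this role is played by iRules of the sort shared on DevCentral), and every name in it is hypothetical:

```python
# A minimal sketch, not F5 code: a pluggable filter chain in the spirit
# of the "share and install fixes" model described above. All names are
# hypothetical. New filters (your own, or downloaded ones) can be
# registered at runtime, ahead of any official patch.

from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Request:
    user: str
    device: str
    uri: str
    headers: dict = field(default_factory=dict)

# A filter returns None to pass the request along, or an action name.
Filter = Callable[[Request], Optional[str]]

class FilterChain:
    def __init__(self) -> None:
        self.filters: List[Filter] = []

    def register(self, f: Filter) -> None:
        self.filters.append(f)

    def screen(self, req: Request) -> str:
        for f in self.filters:
            action = f(req)
            if action is not None:
                return action
        return "allow"

# A zero-day mitigation dropped in before the vendor patch lands:
def block_traversal(req: Request) -> Optional[str]:
    return "block" if "/..%2f" in req.uri.lower() else None

# Compare user/device against known access allowances; quarantine if fishy:
KNOWN_PAIRS = {("alice", "corp-laptop-17")}
def quarantine_unknown_device(req: Request) -> Optional[str]:
    return None if (req.user, req.device) in KNOWN_PAIRS else "quarantine"

chain = FilterChain()
chain.register(block_traversal)
chain.register(quarantine_unknown_device)

print(chain.screen(Request("alice", "corp-laptop-17", "/app")))  # allow
print(chain.screen(Request("alice", "unknown-ipad", "/app")))    # quarantine
```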
Videos from F5's recent Agility customer / partner conference in London

A week or so ago, F5 in EMEA held our annual customer / partner conference in London. I meant to do a little write-up sooner, but after an incredibly busy conference week I flew to F5's HQ in Seattle and didn't get round to posting there either. So... better late than never?

One of the things we wanted to do at Agility was take advantage of the DevCentral team's presence at the event. They pioneered social media as a community tool, kicking off F5's DevCentral community (now c. 100,000 strong) in something like 2004. They are very experienced and knowledgeable about how to use rich media to get a message across. So we thought we'd ask them to do a few videos with F5's customers and partners about what drives them and how F5 fits in. Some of them are below, and all of them can be found here.
Force Multipliers and Strategic Points of Control Revisited

On occasion I have talked about military force multipliers. These are things like terrain and minefields that can make your force able to do its job much more effectively if utilized correctly. In fact, a study of military history is every bit as much a study of battlefields as it is a study of armies. He who chooses the best terrain generally wins, and he who utilizes tools like minefields effectively often does too. Rommel in the desert often used wadis to hide his dreaded 88mm guns, which at the time could rip through any tank the British fielded. For the last couple of years, we've all been inundated with the story of the 300 Spartans that held off an entire army. Of course it was more than just the 300 Spartans in that pass, but they were still massively outnumbered. Over and over again throughout history, it is the terrain and the technology that give a force the edge. Perhaps the first person to notice this trend, and certainly among the first to write a detailed work on the topic, was von Clausewitz. His writing is some of the oldest surviving Western military theory, and much of it is still relevant today, if you are interested in that type of writing.

For those of us in IT, it is much the same. He who chooses the best architecture and makes the most of available technology wins. In this case, as in a war, winning is temporary and must constantly be revisited, but that is indeed what our job is: keeping the systems in tip-top shape with the resources available. Do you put in the tool that is the absolute best at what it does but requires a zillion man-hours to maintain, or do you put in the tool that covers everything you need and takes almost no time to maintain? The answer to that question is not always as simple as it sounds. By way of example, which solution would you like your bank to put between your account and hackers? Probably a different one than the one you'd like them to put in for employee timekeeping.

[Image: An 88 in the desert, compliments of WW2inColor]

Unlike warfare, though, a lot of companies are in the business of making tools for our architecture needs, so we get plenty of options, and most spaces have a happy medium. Instead of inserting all the bells and whistles, they inserted the bells and made them relatively easy to configure, or they merged products to make your life easier. When the terrain suits a commander's needs in wartime, the need for such force multipliers as barbed wire and minefields is eliminated, because an attacker can be channeled into the desired defenses by terrain features like cliffs and swamps. The same could be said of your network. There are a few places on the network that are strategic points of control, where so much information (incidentally including attackers, though this is not, strictly speaking, a security blog) is funneled through that you can increase your visibility, level of control, and even implement new functionality. We here at F5 like to talk about three of them: between your users and the apps they access, between your systems and the WAN, and between consumers of file services and the providers of those services. These are places where you can gather an enormous amount of information and act upon it without a lot of staff effort – force multipliers, so to speak.
When a user connects to your systems, the strategic point of control at the edge of your network can perform pre-application-access security checks, route the user to a VPN, determine the best of a pool of servers to service the request, encrypt the stream (on the front, the back, or both sides), or redirect the user to a completely different datacenter or to an instance of the requested application that actually resides in the cloud… the possibilities are endless. When a user accesses a file, the strategic point of control between them and the physical storage allows you to direct them to the file no matter where it might be stored, to optimize the file for the pattern of access that is normally present, and to apply security checks before the physical file system is ever touched – again, the list goes on and on. When an application like replication or remote email is accessed over the WAN, the strategic point of control between the app and the actual Internet allows you to encrypt, compress, dedupe, and otherwise optimize the data before putting it out over your bandwidth-limited, publicly exposed WAN connection.

The first strategic point of control listed above gives you control over incoming traffic and early detection of attack attempts. It also gives you force multiplication with load balancing, so your systems are unlikely to get overloaded unless something else is going on; a minimal sketch of that selection logic follows below. Finally, you get the security of SSL termination or full-stream encryption. The second point of control gives you the ability to balance your storage needs by scripting movement of files between NAS devices or tiers without the user ever seeing a change. This means you can do more with less storage, and support for cloud storage providers and cloud storage gateways extends your storage to nearly unlimited space – depending upon your appetite for monthly payments to cloud storage vendors. The third force-multiplies the dollars you are spending on your WAN connection by reducing the traffic going over it, while offloading a ton of work from your servers, because encryption happens on the way out the door, not on each VM.

Taking advantage of these strategic points of control – architectural force multipliers – offers you the opportunity to do more with less daily maintenance. For instance, the point between users and applications can be hooked up to your Active Directory or LDAP server and used to authenticate that a user attempting to access internal resources from… say… an iPad… is indeed an employee before they ever get to the application in question. That limits the attack vectors on software that may be highly attractive to attackers. There are plenty more examples of multiplying your impact without increasing staff size or even growing your architectural footprint beyond the initial investment in tools at the strategic point of control. For F5, we have LTM at the application delivery network strategic point of control. Once that investment is made, a whole raft of options can be tacked on – APM, WOM, WAM, ASM, the list goes on again (tired of that phrase for this blog yet?). Since each resides on LTM, there is only one "bump in the wire", but a ton of functionality that can be brought to bear, including integration with some of the biggest names in applications – Microsoft, Oracle, IBM, etc. – adding business value like remote access for devices while multiplying your IT force.
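As promised above, here is a minimal Python sketch of one common way to pick "the best of a pool of servers": least-connections selection among healthy members. The member addresses and the health-check handling are illustrative assumptions, not a description of LTM internals.

```python
# Minimal sketch of least-connections server selection, the kind of
# decision made at the network strategic point of control. Addresses
# and the health-check scheme are illustrative assumptions.

import random

class Pool:
    def __init__(self, members):
        # member address -> count of in-flight connections
        self.active = {m: 0 for m in members}
        self.healthy = set(members)

    def mark_down(self, member):
        # A failed health check removes the member from rotation.
        self.healthy.discard(member)

    def pick(self):
        # Choose the healthy member with the fewest active connections;
        # break ties randomly so load spreads evenly.
        candidates = [m for m in self.active if m in self.healthy]
        if not candidates:
            raise RuntimeError("no healthy pool members")
        least = min(self.active[m] for m in candidates)
        choice = random.choice([m for m in candidates if self.active[m] == least])
        self.active[choice] += 1
        return choice

    def release(self, member):
        # Called when a connection completes.
        self.active[member] -= 1

pool = Pool(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
pool.mark_down("10.0.0.13")   # failed health check
print(pool.pick())            # one of .11 / .12
```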
I recommend that you check it out if you haven't; there is definitely a lot to be gained, and it costs you nothing but a little bit of your precious time to look into it. No matter what you do, looking closely at these strategic points of control and making certain you are using them effectively to meet the needs of your organization is easy and important. The network is not just a way to hook users to machines anymore, so make certain that's not all you're using it for. Make the most of the terrain.

And yes, if you also read Lori's blog, we were indeed watching the same shows and talking about this concept, so it is no surprise our blogs are on similar wavelengths.

Related Blogs:
- What is a Strategic Point of Control Anyway?
- Is Your Application Infrastructure Architecture Based on the ...
- F5 Tech Field Day – Intro To F5 As A Strategic Point Of Control
- What CIOs Can Learn from the Spartans
- What We Learned from Anonymous: DDoS is now 3DoS
- What is Network-based Application Virtualization and Why Do You ...
- They're Called Black Boxes Not Invisible Boxes
- Service Virtualization Helps Localize Impact of Elastic Scalability
- F5 Friday: It is now safe to enable File Upload
Toll Booths and Dams. And Strategic Points of Control

An interesting thing about toll booths: they provide a point at which all sorts of things can happen. When you are stopped to pay a toll, it smooths the flow of traffic by letting a finite number of vehicles through per minute, reducing congestion by naturally spacing things out. Dams are much the same, holding water back on a river and letting it flow through at a rate determined by the operators of the dam. The really interesting bit is the other things that these two points introduce. When necessary, toll booths have been used to find and stop suspected criminals. They have also been used as advertising and information transmission points. None of the above are things toll booths were created for – they were created to collect tolls – and yet, by the nature of where they sit in the highway system, they can be utilized for much more. The same is true of a dam. Dams today almost always generate electricity. Often they function as bridges over the very water they're controlling. They control the migration of fish and operate as a check on predatory invasive species. Again, none of these things is the primary reason dams were originally invented, but the nature of their location allows them to be utilized effectively in all of these roles.

[Image: Toll booths - Wikipedia]

We've talked a bit about strategic points of control. They're much like toll booths and dams in the sense that their location makes them key to controlling a whole lot of traffic on your LAN. In the case of F5's defined strategic points of control, they all tie in to the history of F5's product lineup, much as a toll booth was originally there to collect tolls. F5 BIG-IP LTM sits at the network strategic point of control. Initially LTM was a load balancer, but by virtue of its location and the needs of customers it has grown into one of the most comprehensive Application Delivery Controllers on the market – everything from security to uptime monitoring is facilitated by LTM. F5 ARX is much the same: being at the file-based storage strategic point of control allows such things as directing some requests to cloud storage and others to storage from vendor A, while still others go to vendor B and the remainder go to a Linux or Windows machine with a ton of free disk space. The WAN strategic point of control is where you can improve performance over the WAN via WOM, but it is also a place where you can extend LTM functionality to remote locations, including the cloud.

Budgets for most organizations are not growing, due to the state of the economy. Whether you're government, public, private, or small business, you've been doing more with less for so long that doing more with the same would be a nice change. If you're lucky, you'll see growth in IT budgeting due to the increasing needs of security and the growth of application footprints. Some others will see essentially flat budgets, and many – including most government IT orgs – will see shrinking budgets. While that is generally bad news, it does give you the opportunity to look around and figure out how to make more effective use of existing technology. Yes, I have said that before, but you're living that reality, so it is worth repeating. Since I work for F5, here are a few examples – something I've not done before. From the network strategic point of control, we can help you with DNSSEC, AAA, application security, encryption, performance on several levels (from TCP optimizations to compression), HA, and even WAN optimization issues if needed.
From the storage strategic point of control, we can help you harness cloud storage, implement tiering, and balance load across existing infrastructure to help stave off expensive new storage purchases. Backups and replication can be massively improved (both in terms of time and data transferred) from this location also; a sketch of the tiering decision appears below. We're not the only vendor that can help you out without you having to build a whole new infrastructure. It might be worthwhile to have a vendor day, where you invite vendors in to give presentations about how they can help – larger companies and the federal government do this regularly, and you can do the same in a scaled-down manner. What salesperson is going to tell you, "no, we don't want to come tell you how we can help and sell you more stuff"? Really? Another option is, as I've said in the past, to make sure you know not just the functionality you are using, but the capabilities of the IT gear, software, and services that you already have in-house. Chances are there are cost savings in using existing functionality of an existing product, with time being your only expense. That's not free, but it's about as close as IT gets.

[Image: Hoover Dam from the air - Wikipedia]

So far we in IT have been lucky: the global recession hasn't hit our industry as hard as it has hit most, but it has constricted our ability to spend big, so little things like those above can make a huge difference. Since I am on a computer or PlayBook for the better part of 16 hours a day, hitting websites maintained by people like you, I can happily say that you all rock. A highly complex, difficult-to-manage set of variables rarely produces a stable ecosystem like we have. No matter how good the technology, in the end it is people who did that, and who keep it that way. You all rock. And you never know, but you might just find the AllSpark hidden in the basement ;-).
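As promised above, here is a minimal Python sketch of the storage tiering decision: an age-based rule that lets cold files drift to cheaper storage while the virtualization layer keeps the user-visible path constant. This is not ARX code; the tier names and the 90-day threshold are assumptions for illustration.

```python
# Minimal sketch, not ARX code: decide a file's tier by age of last
# access, so cold files drift to cheaper storage. Tier names and the
# 90-day threshold are illustrative assumptions.

import os
import time

COLD_AFTER_DAYS = 90

def pick_tier(path: str) -> str:
    age_days = (time.time() - os.stat(path).st_atime) / 86400
    if age_days > COLD_AFTER_DAYS:
        return "cloud-archive"   # cheap, slow, pay-per-month
    return "fast-nas"            # expensive, fast

# The virtualization layer keeps the user-visible path constant while
# the physical location changes, so a move never breaks a client.
for name in os.listdir("."):
    if os.path.isfile(name):
        print(f"{name}: {pick_tier(name)}")
```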
What Is Your Reason for Virtualization and Cloud, Anyway?

Gear shifting in a modern car is a highly virtualized application nowadays. Whether you're driving a stick or an automatic, it is certainly not the same as your great-granddaddy's shifting (assuming he owned a car). The huge difference between a stick and an automatic is how much work the operator has to perform to get the job done. In the case of an automatic, the driver sets the car up correctly (putting it into drive as opposed to one of the other gears) and then forgets about it, other than depressing and releasing the gas and brake pedals. A small amount of up-front effort followed by blissful ignorance – until the transmission starts slipping, anyway. With a stick, the driver has much more granular control of the shifting mechanism, but is required to pay attention to dials and the feel of the car while operating both pedals and the shifting mechanism. Two different solutions with two different strengths and weaknesses. Manual transmissions are much more heavily influenced by the driver, both in terms of operating efficiency (gas mileage, responsiveness, etc.) and longevity (a careful driver can keep the clutch from going bad for a very long time; a clutch-popping driver can destroy those pads in near-zero time). Automatic transmissions are less overhead day-to-day, but don't offer the advantages of a stick.

This is the same type of trade-off you face when setting the goals of your next-generation architecture. I've touched on this before, and no doubt others have too, but it is worth calling out as its own blog. Are you implementing virtualization and/or cloud technologies to make IT more responsive to the needs of the user, or are you implementing them to give users "put it in drive and don't worry about it" control over their own application infrastructure? The difference is huge, and while the two may have some synergies, they're certainly not perfectly complementary. To make IT more responsive, you want to give your operators a ton of dials and whistles to control the day-to-day operations of applications and make certain that load is distributed well and all applications are responsive in a manner in keeping with business requirements. For push-button business provisioning, you want to make the process bullet-proof and not require user interaction. It is a different world to say "It is easy for businesses to provision new applications" (yes, I do know the questions that statement spawns, but there are people doing it anyway – more in a moment) than to say "Our monitoring and virtual environment give us the ability to guarantee uptime and shift load to the servers/locales/geographies that make sense." While you can do the second as part of the first, they do not require each other, and unless you know where you're going, you won't ever get there.

Some of you have been laughing since I first mentioned giving business the ability to provision its own applications. Don't. There are some very valid cases where this is actually the answer that makes the most sense. Anyone reading this who works at a university knows that this is the emerging standard model for student virtualization efforts: let students provision a gazillion servers, because they know what they need and university IT could never service all of the requests, then between semesters wipe the virtual arrays clean and start over. The early results show that, for the university model, this is a near-perfect solution.
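Push-button provisioning of the sort just described can be as simple as a fixed template catalog with IT's policy baked in. A minimal Python sketch, with entirely hypothetical template names and sizes:

```python
# Sketch of push-button provisioning from a fixed template catalog:
# the user picks a template, IT's policy fills in everything else.
# Template names and sizes are illustrative assumptions.

import uuid

CATALOG = {
    "lamp-stack":  {"cpus": 2, "ram_gb": 4,  "disk_gb": 40},
    "cms-server":  {"cpus": 2, "ram_gb": 8,  "disk_gb": 80},
    "blank-linux": {"cpus": 1, "ram_gb": 2,  "disk_gb": 20},
}

def provision(owner: str, template: str) -> dict:
    if template not in CATALOG:
        raise ValueError(f"unknown template: {template}")
    vm = {"id": str(uuid.uuid4()), "owner": owner, **CATALOG[template]}
    # A real system would now call the hypervisor API and register the
    # instance for the end-of-semester wipe described above.
    return vm

print(provision("student42", "lamp-stack"))
```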
For everyone not at a university: there are groups within your organization capable of putting up applications – a content management server, for example – without IT involvement… except that IT controls the hardware. If you gave them single-button ability to provision a standard image, they might well be willing to throw up their own application. There are still a ton of issues – security and DB access come to mind – but I'm pointing out that there are groups with the desire, who believe they have the ability, if IT gets out of their way. Are you aiming to serve them? If so, what do you do for less savvy groups within the organization, or those with complex application requirements who don't know how much disk space or how many instances they'll need?

For increasing IT agility, we're ready to start that move today. Indeed, virtualization was the start of increasing IT's responsiveness to business needs, and we're getting more and more technology on board to cover the missing pieces of agile infrastructure. By making your infrastructure as adaptable as your VM environment, you can leverage the strategic points of control built into your network to handle ADC functionality, security, storage virtualization, and WAN optimization, to make sure that traffic keeps flowing and your network doesn't become the bottleneck. You can also leverage the advanced reporting that comes from sitting in one of those strategic points of control to foresee problem areas, or to catch them as they occur rather than waiting for user complaints.

Most of us are going for IT agility in the short term, but it is worth considering whether, for some users, one-click provisioning wouldn't reduce IT overhead and let you focus on new strategic projects. Giving user groups access to application templates and raw VM images configured for some common applications they might need is not a 100% terrible idea if they can use them with less involvement from IT than is currently the case. Meanwhile, watch this space: F5 is one of the vendors driving the next generation of network automation, and I'll mention it when cool things are going on here. Or if I see something cool someone else is doing, I occasionally plug it here, as I did for Cirtas when they first came out, or for Oracle GoldenGate.

Make a plan. Execute on it. Stand ready to serve the business in the way that makes the most sense with the least time investment from your already busy staff. And listen to a lot of loud music; it lightens the stress level. I was listening to ZZ Top and Buckcherry while writing this. Maybe that says something, I don't quite know.
What CIOs Can Learn from the Spartans

When your data center is constantly under pressure to address operational risks, try leveraging some ancient wisdom from King Leonidas and William Wallace.

The Battle of Thermopylae is most often remembered for the valiant stand of the "300". In case you aren't familiar: three hundred Spartans (and a supporting cast of city-state nations) held off the much more impressively numbered armies of the Persian king Xerxes for a total of seven days before being annihilated.

A Greek force of approximately 7,000 men marched north to block the pass in the summer of 480 BC. The Persian army, alleged by the ancient sources to have numbered in the millions but today considered to have been much smaller (various figures are given by scholars, ranging between about 100,000 and 300,000), arrived at the pass in late August or early September. Vastly outnumbered, the Greeks held off the Persians for seven days in total (including three of battle) before the rear-guard was annihilated in one of history's most famous last stands. During two full days of battle, the small force led by King Leonidas I of Sparta blocked the only road by which the massive Persian army could pass. After the second day of battle, a local resident named Ephialtes betrayed the Greeks by revealing a small path that led behind the Greek lines. Aware that his force was being outflanked, Leonidas dismissed the bulk of the Greek army and remained to guard the rear with 300 Spartans, 700 Thespians, 400 Thebans and perhaps a few hundred others, the vast majority of whom were killed. -- Wikipedia, The Battle of Thermopylae [emphasis added]

Compare that to the Battle of Stirling Bridge, where William Wallace and his much smaller force of Scots prepared to make a stand against Edward I and his English forces. He chose a battleground that afforded him a view of the surrounding area for twenty miles, enabling him not only to see exactly what challenges he faced, but to make his plans accordingly. Leveraging the very narrow bridge at Stirling and some tactics that were unconventional at the time, he managed to direct his resources in a way that allowed him to control the flow of opponents and ensure victory for the Scottish forces.

What CIOs should take away from even a cursory study of these battles is this: strategic control can enable you to meet your goals with far fewer resources than expected. The choice of terrain and tools is commonly accepted as a force multiplier in military tactics. The difference between the two battles was visibility; ultimately it was a lack of visibility that caused Leonidas' strategy to fail where Wallace's succeeded. Leonidas, unable to see soon enough that he was being outflanked, could not provision resources or apply tactics in a way that would have enabled him to defeat the Persians. Wallace, on the other hand, had both visibility and control, and ultimately succeeded.

What's needed in the data center is similar: find the strategic points of control and leverage them to achieve a positive operational posture, one that addresses not only implementation and architectural requirements but business requirements as well. IT has to align itself as a means to align with the business.

THE STRATEGIC TRIFECTA

There inherently exist in the data center strategic points of control; that is, locations at which it is most beneficial to apply and enforce a broad variety of policies to achieve operational and business goals. Like terrain, these points of control can be force multipliers, improving the efficiency and effectiveness of fewer resources.
Like high ground, they afford IT the visibility necessary to redeploy resources dynamically. This strategic trifecta comprises business value, architecture, and implementation, and once identified, these strategic locations can be a powerful tool in realizing IT operational and business goals.

Strategic points of control are almost always naturally occurring aggregation points within an architecture: physical and topological locations through which traffic is forced, for one reason or another, to flow. They are locations within the data center at which all three strategic advantages can be achieved simultaneously. Applications and data cannot be controlled, nor policies enforced upon them to align with business goals, on a per-instance basis. Applications and storage resources today are constructs comprising multiple infrastructure and application services, and they cannot be managed effectively to meet business goals individually. Strategic points of control within the data center afford a unique opportunity to view, manage, and enforce policies upon application and storage services as a holistic unit.

You'll note the similarity here with the battlegrounds chosen by Leonidas and Wallace: Thermopylae and Stirling. Thermopylae was a naturally occurring location that narrowed the path through which the invading army had to travel. Mountains on one side, cliffs on the other: Xerxes had no choice but to send his army straight into the eager arms of the Spartans. Stirling is located within the folds of a river with a single, narrow bridge. Edward I had no choice but to send his men two by two across that bridge to form up on the chosen battleground, allowing Wallace and the Scots to control the flow and ultimately decide the moment of attack, when it was most likely that the Scots could prevail.

As a data center technique, the strategy remains much the same: apply policies regarding security, performance, and reliability in those places where traffic and resources naturally converge. Use the right equipment in the right locations, and the investment can multiply the efficiency of the entire data center, just as terrain and tools become force multipliers on the battlefield. The policies implemented at each strategic point of control enable better management of resources, better direction of traffic, and improved control over access to those resources. Each point essentially virtualizes resources, so that policies governing how those resources are accessed, distributed, and consumed can be enforced. They optimize the end-to-end delivery of resources across vastly disparate conditions and environments. Such points of control, especially when collaborative in nature, provide a holistic view of, and control over, top-level business concerns: reliability, availability, and performance.

Leveraging strategic points of control also creates a more agile operational posture, in which policies can be adjusted dynamically and rapidly to address a wide variety of data center concerns. All three foci are required; a narrow focus on individual performance, availability, and capacity (operational risks) does not afford the visibility needed to meet business goals. It is the performance of the application as a whole, not its individual components, that is of import to the business. It is the cost to deliver and secure the application as a whole that determines efficiency, not that of individual components.
These strategic points of control also offer the advantage of being contextually aware, which enables policies to be applied based on the resources, the network, or the clients. Policies might be applied to all tablets, or to all applications of a specific type, or they might be dynamic, based on current operational – or business – parameters. Strategic points of control enable resources to be managed more effectively and efficiently by policies instead of people. This has the effect of tipping the imbalance of burden, which currently lies primarily on the shoulders of people, toward technology. The goal of IT as a Service and a more dynamic data center is wholly supported by such a strategic trifecta, as it provides the means by which resources can be managed, provisioned, and secured without disruption. The virtualization of resources and their associated policies enables a more responsive IT organization by making it possible to manage resources in a very service-oriented fashion, applying and enforcing policies on an "application" rather than on individual servers, instances, or virtual images.

A strategic point of control in the data center is the equivalent of a modern Thermopylae. Like ancient but successful battles whose tactics and strategy have become standard templates for using resources efficiently by leveraging location and visibility, their modern equivalents in the data center can enable a CIO to align IT not only with the business, but with IT's own operational and architectural goals as well.

Related Blogs:
- What is a Strategic Point of Control Anyway?
- Cloud is the How not the What
- Cloud Control Does Not Always Mean ‘Do it yourself’
- The Strategy Not Taken: Broken Doesn’t Mean What You Think It Means
- Data Center Feng Shui: Process Equally Important as Preparation
- Some Services are More Equal than Others
- The Battle of Economy of Scale versus Control and Flexibility
DNSSEC – the forgotten security asset?

An interesting article from CIO Online last month explained how DNS had been used to identify over 700 instances of a managed service provider's customers being infected with malware. The MSP was able to detect the malware using DNS. As the article points out, a thirty-year-old technology was being used to defeat twenty-first-century computer problems. In short, DNS may be a viable means of identifying infections within networks more quickly, because the attackers rely on DNS just as much as security apps do.

DNS, however, still comes with its own unique security approach. The signature-checking procedures outlined in the Domain Name System Security Extensions (DNSSEC) specifications were deemed adequate for the protocols surrounding domain resolution. While the signatures provide authentication, the data is not encrypted, meaning that it is not confidential. The other problem with DNSSEC is that in the event of a Distributed Denial of Service (DDoS) DNS amplification attack on a DNS server, the processing of validation requests adds to processor usage and contributes to slowdown. DNSSEC does, however, provide protection against cache poisoning and other malicious activities, and it remains part of the network security arsenal.

At F5, our solution to the DNSSEC load problem was to integrate our DNSSEC support with our BIG-IP Global Traffic Manager. The traffic manager handles all of the overhead processing created during a DDoS DNS amplification attack, so the DNS server can be left to function with no performance limitation. On top of this, the F5 solution is fully compliant with international DNSSEC regulations imposed by governments, organisations and domain registrars. While DNSSEC may seem mature, even outdated, in its security specifications, the correct application of technology such as F5's BIG-IP Global Traffic Manager delivers peace of mind over security, performance, resource usage and centralised management of your DNS.
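To see "authenticated but not encrypted" in practice, the sketch below fetches a zone's DNSKEY RRset and verifies its RRSIG with the dnspython library. Everything still travels in cleartext; only the signature check is added. The zone and resolver address are arbitrary choices, and a real validator would also walk the chain of trust up to the root.

```python
# Sketch: fetch a zone's DNSKEY RRset plus its RRSIG and check the
# signature with dnspython (pip install dnspython; validation also
# needs the 'cryptography' package). Illustrative only.

import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

zone = dns.name.from_text("example.com.")
query = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
response = dns.query.tcp(query, "8.8.8.8")  # TCP: DNSKEY answers are large

dnskey = rrsig = None
for rrset in response.answer:
    if rrset.rdtype == dns.rdatatype.DNSKEY:
        dnskey = rrset
    elif rrset.rdtype == dns.rdatatype.RRSIG:
        rrsig = rrset

if dnskey is None or rrsig is None:
    raise SystemExit("no signed DNSKEY answer (zone may not be signed)")

try:
    # Verify the RRSIG over the DNSKEY RRset using the zone's own keys.
    dns.dnssec.validate(dnskey, rrsig, {zone: dnskey})
    print("DNSKEY signature verified: authenticated, but NOT encrypted")
except dns.dnssec.ValidationFailure:
    print("validation failed")
```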
Policy is key for protection in the cloud era

Today, companies host mission-critical systems such as email in the cloud; systems which contain customer details and company-confidential information, and without which company operations would grind to a halt. Although cloud providers were forced to reconsider their security and continuity arrangements after the large cloud outages and security breaches of last year, cloud users still face a number of challenges. Unless organisations work with a small, specialist provider, it is unlikely that they can guarantee where their data is stored, or know the data handling policies of the cloud provider in question. Organisations frequently forget that their in-house data policies simply will not be exported to the cloud along with their data. Authentication, authorisation and accounting (AAA) services are often cited as major concerns for companies using cloud services. Organisations need assurance of due process in data handling, or else a way to remove the problem entirely, so that they lose no sleep over cloud.

Aside from problems with location, one of the main problems with cloud is that it does not lend itself to static security policy. For example, one of the most popular uses of cloud is cloudbursting, where excess traffic is directed to cloud resources to avoid overwhelming in-house servers, to spread traffic more economically, or to spread the load when several tasks of high importance are being carried out at once. Firm policies about what kind of data can be moved to the cloud, at what capacity threshold, and what modifications need to be made to the data, all have to be applied in a very short space of time. All of this needs to be accomplished while keeping data secure in transit, and with minimal management, to avoid overloading IT managers at already busy times. Furthermore, organisations need to address AAA concerns, making sure that data is kept in the right hands at all times.

Organisations need to secure applications regardless of location, and to do this they need to be able to extend policy to the cloud, making sure that data stays safe wherever it is. Using application delivery control enables companies to control all inbound and outbound application traffic, allowing them to export AAA services to the cloud. They should also make sure that they have a guarantee of secure tunnelling (i.e. via VPNs), which will keep data secure in transit as well as confirming that only the right users have access to it. Some form of secure sign-on, such as two-factor authentication, can also make sure that the right users are correctly authorised.

In future, organisations may begin to juggle multiple cloud environments, balancing data between them for superior resilience, business continuity and pricing – often referred to as 'supercloud' – and this can be extremely complex. As company usage of cloud becomes more involved, managing and automating key processes will become more important, so that cloud is an asset rather than a millstone around the neck of IT departments.
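The cloudbursting policy described above boils down to a small decision rule. A minimal Python sketch, where the threshold, data classifications and pool names are all assumptions for illustration:

```python
# Sketch of a cloudbursting policy check: overflow traffic goes to the
# cloud pool only above a capacity threshold, and only for requests
# whose data classification permits leaving the datacenter.
# Threshold, labels and pool names are illustrative assumptions.

CLOUD_OK = {"public", "internal"}   # classifications allowed off-premises
BURST_THRESHOLD = 0.80              # burst when the local pool is 80% busy

def route(classification: str, local_utilization: float) -> str:
    if local_utilization < BURST_THRESHOLD:
        return "local-pool"
    if classification in CLOUD_OK:
        return "cloud-pool"         # burst, e.g. over a secured tunnel
    return "local-pool"             # confidential data stays in-house

print(route("public", 0.92))        # cloud-pool
print(route("confidential", 0.92))  # local-pool (queues locally instead)
```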
HP Discover and what F5 bring to the party

There are only a couple of weeks to go before HP Discover, taking place this year in Frankfurt on 4-6 December. HP is a big organisation with lots of end user and vendor touchpoints. The short video below, by F5's Alasdair Pattinson, lays out the main ways in which F5 and HP collaborate, namely in data centre consolidation projects, Bring Your Own Device initiatives, and smoothing and securing implementations of Microsoft Exchange.
You Say Tomato, I Say Network Service Bus

It's interesting to watch the evolution of IT over time. I have repeatedly been told, "You people! We were doing that with X back before you had a name for it!" And likely the speaker is telling the truth, as far as it goes. Seriously, while the mechanisms may be different, putting a ton of commodity servers behind a load balancer and tweaking for performance looks an awful lot like having LPARs that can shrink and grow. Put "dynamic cloud" into the conversation and the similarities become more pronounced. The biggest difference is how much you're paying for hardware and licensing.

Back in the day, Enterprise Service Buses (ESBs) were all the rage, able to handle communications between a variety of application sources and route things to the correct destination in the correct format, even providing guaranteed delivery if you needed it for transactional services. I trained in several of these tools – most notably IBM MQSeries (now called IBM WebSphere MQ, surprised?) and MSMQ – and was briefed on a ton more during my time at Network Computing. In the end, they're simply message delivery and routing mechanisms that can translate along the way. Oh sure, with MQSeries Integrator you could include all sorts of other things, like security callouts, but core functionality was restricted to message flow and delivery. While ESBs are still used today in highly mixed environments or highly complex application infrastructures, they're not deployed broadly in IT, largely because XML significantly reduced the need for the translation aspect, which was a primary use of them in the enterprise.

Today, technology is leading us to a parallel development that will likely turn out to be much more generically useful than ESBs. Since others have referred to it by several names, and "Network Service Bus" is the closest I've seen in terms of accuracy, I'll run with that term. This is routing, translation, and delivery across the network, from the consumer to the correct service. The service is running on a server somewhere, but that's increasingly irrelevant to the consumer application; that their request gets serviced is sufficient. Serviced in a timely and efficient manner is big too. Translation while servicing is seeing a temporary (though not short, in my estimation) bump while IPv4 is slowly supplanted by IPv6, but it has other uses – like encrypted to unencrypted, for example.

The network of the future will use a few key strategic points of control – like the one between consumers and web servers – to handle routing to a service that is (a) active, (b) responsive, and (c) appropriate to the request. While passing the request along, the strategic point of control will translate the incoming request into the format the service expects, and if necessary will validate the user in the context of the service being requested and the username, platform, and location the request is coming from. This offloads a lot from your apps and your servers. Encryption can be offloaded to the strategic point of control, freeing up a lot of CPU time: traffic runs unencrypted within your LAN while encryption is maintained on the public Internet.
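To illustrate that offload pattern, here is a bare-bones Python sketch of TLS termination at a proxy: the client's TLS session ends at the proxy, and bytes travel in plaintext to a backend on the LAN. The addresses and certificate file names are hypothetical, and all of the hardening a real device provides is omitted.

```python
# Minimal sketch of SSL/TLS offload: terminate TLS from the client,
# forward plaintext to a backend inside the LAN. Addresses and cert
# files are hypothetical; no production hardening here.

import socket
import ssl
import threading

BACKEND = ("10.0.0.21", 8080)  # hypothetical plain-HTTP server on the LAN

def pump(src, dst):
    # Copy bytes one way until either side closes.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # hypothetical files

with socket.create_server(("0.0.0.0", 8443)) as listener:
    while True:
        raw, _ = listener.accept()
        client = ctx.wrap_socket(raw, server_side=True)  # TLS ends here
        backend = socket.create_connection(BACKEND)      # plaintext from here
        threading.Thread(target=pump, args=(client, backend), daemon=True).start()
        threading.Thread(target=pump, args=(backend, client), daemon=True).start()
```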
IPv6 packets can be translated to IPv4 on the way in and back to IPv6 on the way out, so you don't have to switch everything in your datacenter over to IPv6 at once. Security checks can occur before a connection is allowed inside your LAN, and scalability gets a major upgrade because you now have a device in place that will route traffic according to the current back-end configuration. Adding and removing servers and upgrading apps all benefit from a strategic point of control that allows you to maintain a given public IP while changing the machines that service requests as needed.

And then we factor in cloud computing. If all of this functionality – or at least a significant chunk of it – were available in the cloud, regardless of cloud vendor, then you could ship overflow traffic to the cloud. There are a lot of issues to deal with, like security, but they're manageable if you can handle all of the other service requests as if the cloud servers were part of your everyday infrastructure. That's a datacenter of the future. Let's call it a tomato.

In the end, this makes your infrastructure more adaptable while giving you a point of control that you can harness to implement whatever monitoring or functionality you need. And if you have several of those points of control – one to globally load balance, one for storage, one in front of servers – then you are offering services that are highly adaptable to fluctuations in usage. Like having a tomato, right in the palm of your hand.

Completely irrelevant observation: the US Bureau of Labor Statistics (BLS) mentioned today that IT unemployment is at 3.3%. Now you have a bright spot in our economic doldrums.