f5 asm
ASM - Enforcement Readiness - Export from one ASM to another
We have an ASM in our Production environment where our security policies are in Learning mode, and there are Attack Signatures marked 'Ready to be Enforced'. We use our Prod environment to learn (real traffic hitting our VIPs), then take the learned attributes and build our policies on our QA ASM. We then test the policies in QA before rolling them back out into Production.

Question: in one case, I have 180 Attack Signatures 'Ready to be Enforced' in Prod. Is it possible to export or copy the 'Ready to be Enforced' Attack Signatures out of our Production ASM and import them into our QA ASM, such that once done, all the 'Ready to be Enforced' Attack Signatures that were in Production now show up on our QA ASM? Thank you.
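One way to approach this, sketched below, is to export the whole policy from the Production unit over iControl REST and import it on the QA unit, since the exported policy file carries the signature staging state along with everything else. Treat this strictly as a hedged sketch, not a confirmed procedure: the hostname, credentials, and policy name are made up, the endpoint paths and task/status field names should be verified against your TMOS version's iControl REST documentation, and whether the enforcement-readiness data itself survives the round trip is something to confirm on the QA box.

```python
# Hedged sketch only: export a full ASM policy from the Production BIG-IP via
# iControl REST so it can be imported on the QA unit. Hostname, credentials,
# and policy name are made up; endpoint paths, task fields, and status values
# should be verified against your TMOS version's iControl REST documentation.
import time
import requests

PROD = "https://prod-bigip.example.com"   # hypothetical management address
POLICY_NAME = "prod_learning_policy"      # hypothetical policy name

s = requests.Session()
s.auth = ("admin", "********")            # use a least-privilege account
s.verify = False                          # lab shortcut; validate certs in production

# 1. Find the policy's self link.
items = s.get(f"{PROD}/mgmt/tm/asm/policies").json()["items"]
policy_link = next(p["selfLink"] for p in items if p["name"] == POLICY_NAME)

# 2. Start an export task for the whole policy (the export carries staging state).
task = s.post(f"{PROD}/mgmt/tm/asm/tasks/export-policy",
              json={"filename": "prod_policy.xml",
                    "policyReference": {"link": policy_link}}).json()

# 3. Poll until the task finishes (status names are assumptions to verify).
task_url = f"{PROD}/mgmt/tm/asm/tasks/export-policy/{task['id']}"
while s.get(task_url).json().get("status") not in ("COMPLETED", "FAILURE"):
    time.sleep(2)

# 4. Download the export; the QA side would upload it via
#    /mgmt/tm/asm/file-transfer/uploads and run /mgmt/tm/asm/tasks/import-policy.
blob = s.get(f"{PROD}/mgmt/tm/asm/file-transfer/downloads/prod_policy.xml").content
with open("prod_policy.xml", "wb") as f:
    f.write(blob)
```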
In the Cloud, It's the Little Things That Get You. Here are nine of them.

Eight things you need to consider very carefully when moving apps to the cloud. Moving to a model that utilizes the cloud is a huge proposition. You can throw some applications out there without looking back – if they have no ties to the corporate datacenter and light security requirements, for example – but most applications require quite a bit of work to make them both mobile and stable. Just connections to the database raise all sorts of questions, and most enterprise-level applications require connections to DC databases. But these are all problems people are talking about. There are ways to resolve them, ugly though some may be. The problems that will get you are the ones no one is talking about. So of course, I'm happy to dive into the conversation with some things that would be keeping me awake were I still running a datacenter with a lot of interconnections and getting beat up with demands for cloudy applications.

1. The last year has proven that cloud services WILL go down; you can't plan like it won't, regardless of the hype. When they do, your databases must be 100% in synch, or business will be lost. 100%.
2. Your DNS infrastructure will need attention, possibly for the first time since you installed it. Serving up addresses from both local and cloud providers isn't so simple, particularly during downtimes.
3. Security – both network and app – will have to be centralized. You can implement separate security procedures for each deployment environment, but you are only as strong as your weakest link, and your staff will have to remember which policies apply where if you go that route.
4. Failure plans will have to be flexible. What if part of your app goes down? What if the database is down, but the web pages are fine – except for that "failed to connect to database" error? No matter what the hype says, the more places you deploy, the more likelihood that you'll have an outage. The IT manager's role is to minimize that increase.
5. After a failure, recovery plans will also need to be flexible. What if part of your app comes up before the rest? What if the database spins up, but is now out of synch with your backup or alternate database?
6. When (not if) a security breach occurs on a cloud-hosted server, how much responsibility does the cloud provider have to help you clean up? Sometimes it takes more than spinning down your server to clean up a mess, after all.
7. If you move mission-critical data to the cloud, how are you protecting it? Contrary to the wild claims of the clouderati, your data is in a location you do not have 100% visibility into, so you're going to have to take extra steps to protect it.
8. If you're opening connections back to the datacenter from the cloud, how are you protecting those connections? They're trusted server to trusted server, but "trusted" is now relative.

Of course there are solutions brewing for most of these problems. Here are the ones I am aware of. I guarantee that, since I do not "read all of the Internets" each day (Lori does), I'm missing some, but this can get you started.

1. Just include cloud in your DR plans: what will you do if service X disappears? Is the information on X available somewhere else? Can you move the app elsewhere and update DNS quickly enough? Global Server Load Balancing (GSLB) will help with this problem and others on the list – it will eliminate the DNS propagation lag at least. But beware: for many cloud vendors it is harder to do DR. Check what capabilities your provider supports.
2. There are tools available that just don't get their fair share of thunder, IMO – like Oracle GoldenGate – that replicate each SQL command to a remote database. These systems create a backup that exactly mirrors the original. As long as you don't get a database-modifying attack that looks valid to your security systems, these architectures and products are amazing.
3. People generally don't care where you host apps, as long as when they type in the URL or click on the link, it takes them to the correct location. Global DNS and GSLB will take care of this problem for you.
4. Get policy-based security that can be deployed anywhere, including the cloud, or less attractively (and sometimes impractically), code security into the app so the security moves with it.
5. Application availability will have to go through another round like it did when we went distributed and then SOA. Apps will have to be developed with an eye to "is critical service X up?" – where service X might well be in a completely different location from the app. If not, remedial steps will have to occur before the app can claim to be up. Or local load balancing can buffer you by making service X several different servers/virtuals. (A rough sketch of such a check follows at the end of this post.)
6. What goes down (hopefully) must come back up. The same safety steps implemented in #5 will cover #6 nicely, for the most part. Database consistency checks are the big exception; do those on recovery.
7. Negotiate this point if you can. Lots of cloud providers don't feel the need to negotiate anything, but asking the questions will give you more information. Perhaps take your business to someone who will guarantee full cooperation in fixing your problems.
8. If you actually move critical databases to the cloud, encrypt them. Yeah, I do know it's expensive in processing power, but they're outside the area you can 100% protect, so take the necessary step.
9. Secure tunnels are your friend. Really. Don't just open a hole in your firewall and let "trusted" servers in, because it is possible to masquerade as a trusted server. Create secure tunnels, and protect the keys.

That's it for now. The cloud has a lot of promise, but like everything else in mid hype cycle, you need to approach the soaring commentary with realistic expectations. Protect your data as if it is your personal charge, because it is. The cloud provider is not the one (or not the only one) who will be held accountable when things go awry. So use the cloud to keep doing what you do – making your organization hum with daily business – and avoid the pitfalls wherever possible.

In my next installment I'll be trying out the new footer Lori is using; looking forward to your feedback. And yes, I did put nine in the title to test the "put an odd-number list in, people love that" theory. I think y'all read my stuff because I'm hitting relatively close to the mark, but we'll see now, won't we?
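Here is the kind of check item 5 is talking about, as a minimal sketch only – the hostnames, ports, and the plain-HTTP health endpoint are assumptions for illustration, not a statement of how any particular product does it. The idea is that the app (or a sidecar next to it) refuses to claim it is up until its critical remote dependencies answer, so your GSLB or load balancer routes around it instead of serving "failed to connect to database" pages.

```python
# Minimal readiness-check sketch: the app only reports itself "up" when the
# critical services it depends on are reachable. Hostnames, ports, and the
# framework choice are illustrative assumptions, not a prescription.
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

# Dependencies that live somewhere else (datacenter DB, cloud cache, ...)
DEPENDENCIES = {
    "database": ("db.datacenter.example.com", 5432),
    "session-cache": ("cache.cloud.example.com", 6379),
}

def service_is_up(host, port, timeout=2.0):
    """Cheap TCP reachability probe; real checks should exercise the protocol."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

class ReadinessHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        down = [name for name, (host, port) in DEPENDENCIES.items()
                if not service_is_up(host, port)]
        if down:
            # Tell the load balancer / GSLB monitor we are NOT ready,
            # instead of serving broken pages to users.
            self.send_response(503)
            body = f"degraded: {', '.join(down)}".encode()
        else:
            self.send_response(200)
            body = b"ready"
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ReadinessHandler).serve_forever()
```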
Remember When Hand Carts Were State Of The Art? Me either.

Funny thing about the advancement of technology: in most of the modern world we enshrine it, spend massive amounts of money to find "the next big thing", and act as if change is not only inevitable, but rapid. The truth is that change is inevitable, but not necessarily rapid, and sometimes it's about necessity. Sometimes it is about productivity. Sometimes, it just plain isn't about either. Handcarts are still used for serious purposes in parts of the world, by people who are happy to have them and think a motorized vehicle would be a waste of resources. Think on that for a moment. What high-tech tool that was around 20 years ago are you still using? Let alone 200 years ago. The replacement of handcarts as a medium for transport not only wasn't instant, it's still going on 100 years after cars were mass produced.

Handcart in use – Mumbai Daily

We in high-tech are constantly in a state of flux from this technology to that solution to the other architecture. The question you have to ask yourself – and this is getting more important for enterprise IT in my opinion – is "does this do something good for the company?" It used to be that IT folks could try out all sorts of new doo-dads just to play with them and justify the cost based on the future potential benefit to the company. I'd love to say that this had a powerful positive effect, but frankly, it only rarely paid off. Why? Because we're geeks. We buy this stuff on our own dime if the company won't foot for it, and our eclectic tastes don't necessarily jibe with the needs of the organization.

These days, the change is pretty intense, and focuses on infrastructure and application deployment architectures. Where can you run this application, and what form will the application take? Virtualized? Dedicated hardware? Cloud? The list goes on. And all of these questions spur thoughts about security, storage, and the other bits of infrastructure required to support an application no matter where it is deployed. These are things that you can model in your basement, but can't really test out, simply because the architecture of an enterprise is far more complex than the architecture of even the geekiest home network. Lori and I have a pretty complex network in our basement, but it doesn't hold a candle to our employer's worldwide network supporting dev and sales offices on every continent, users in many languages, and a potpourri of access methods that must be protected and available.

Sometimes, change is simply a change of perspective. F5's new iApps, for example, put the ADC infrastructure bits together for the application: instead of managing application security within the module that handles application security (ASM), an iApp bundles security in with all of the other bits – like load balancing, SSL offload, etc. – that an application requires. This is pretty powerful; it speeds deployment and troubleshooting because everything is in one place, and it speeds adding another machine because you simply apply the same iApp template. That means you spin up another instance of the VM in question, tweak the settings, and apply the template already being used on existing instances, and you're up.

Sometimes, change is more radical. Deploying to the cloud is a good example of this, and cloud deployments suffer for it. Indeed, private and hybrid clouds are growing rapidly precisely because of the radical change that public cloud can introduce. Cloud storage was so radical that very few were willing to use it even as most thought it was a good idea.
Along came cloud storage gateways like our ARX Cloud Extender, or a variety of others, and suddenly the weakness was ameliorated… because the radical bit of cloud storage was simply that it didn't talk like storage traditionally has. With a gateway, it does. And with most gateways (check with your provider) you get compression and encryption, making the cloud storage more efficient and secure in the process. (A rough sketch of that compress-then-encrypt idea appears at the end of this post.)

But like the handcart, the idea that cloud, or virtualization, or consumerization must take hold overnight – and that you're behind the times if you weren't doing it yesterday – is misplaced. Figure out what's best for your organization, not just in terms of technology, but in terms of timelines also. Sure, some things, like support for the CEO's iPad, will take on a life of their own, but in general, you've got time to figure out what you need, when you need it, and how best to implement it.

As I've mentioned before, at the cutting edge of technology, when the hype cycle is way overblown, that's where you'll find the largest number of vendors that won't be around to support you in five years. If you can wait until the noise about a space quiets down, you'll be better served, because the level of competition will have eliminated the weaker companies and you'll be dealing with the technological equivalent of the Darwinian fittest. Sure, some of those companies will fail or get merged also, but the chances that your vendor of choice won't – or that their products will live on – are much better after the hype cycle. After all, even though engine-powered conveyances have largely replaced hand carts, have you heard of White Motor Company, Autocar Company, or Diamond T Company? All three made automobiles. They lived through boom and were swallowed in bust. Though in automobiles the cycle is much longer than in high-tech (Autocar started in the late 1800s and was purchased by White in the 1950s, for example, and White was purchased later by Audi), the same process occurs, so count on it. And no, I haven't developed a sudden interest in automobile history; all of these companies thrived making half-tracks in World War Two, and that's how I knew to look for them amongst the massive number of failed car companies.

Stay in touch with the new technologies out there, pay attention to how they can help you, but as I've said quite often, what's in the hype cycle isn't necessarily what is best for your organization.

1908 Autocar XV (Wikipedia.org)

Of course I think things like our VE product line and our new V.11 with both iApps and app mobility are just the thing for most organizations; even with those I will say "depending upon your needs". Because contrary to what most marketing and many analysts want to tell you, it really is about your organization and its needs.
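For the curious, here is roughly what "compression and encryption on the way to cloud storage" means in practice, as a minimal illustration only. The library choice (cryptography's Fernet), the key handling, and the sample data are assumptions for the sake of the sketch; an actual gateway such as ARX Cloud Extender does considerably more (chunking, deduplication, metadata handling), and I'm not describing its internals here.

```python
# Rough sketch of what a cloud storage gateway does on the way out the door:
# compress, then encrypt, so the object handed to the provider is both smaller
# and unreadable to them. Library choice and key handling are illustrative
# assumptions, not a description of any gateway's internals.
import gzip
from cryptography.fernet import Fernet

def prepare_for_cloud(plaintext: bytes, key: bytes) -> bytes:
    """Compress first (ciphertext doesn't compress), then encrypt."""
    return Fernet(key).encrypt(gzip.compress(plaintext))

def restore_from_cloud(blob: bytes, key: bytes) -> bytes:
    """Reverse the pipeline after download."""
    return gzip.decompress(Fernet(key).decrypt(blob))

if __name__ == "__main__":
    key = Fernet.generate_key()            # in practice, keys stay on-premises
    document = b"very repetitive file contents " * 1000
    blob = prepare_for_cloud(document, key)
    print(f"{len(document)} bytes shrank to {len(blob)} bytes on the wire")
    assert restore_from_cloud(blob, key) == document
```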
Why Developers Should Demand Web App Firewalls.

The Web Application Firewall debate has been raging for a very long time, and we keep hearing the same comments going back and forth. Many organizations have implemented them as a fast track to compliance – primarily compliance with PCI-DSS – but the developer community is still hesitant to embrace them as a solution to their problems. And that's because, like so many things out there, they are seen as an "either-or" proposition. Either they can relieve a developer of the need to write security code, or they can't. If they can't, then why have them?

I'm a developer by trade, and I get the sentiment. It's tough to say "spend all this money, and I'll keep spending my time"; it could almost seem to make no sense. But very little out there is really an either/or proposition. Many of the things that a Web Application Firewall can do for you – like DoS/DDoS attack resistance – are outside the realm of the developer. If you've taken the time to write DDoS protection code into your web applications, you might just be in the wrong job. Many other things that Web Application Firewalls (WAFs) can do are well within the bounds of the developer domain, but could save time if they were implemented in the centralized location of the WAF instead of over and over in each app. Not relieve you of the burden of writing secure code, but save you time by eliminating bits to focus on and reducing redundant development, freeing up CPU cycles in the process.

My Friendly Local Game Store (FLGS) – with doors.

And that's the key. Web App Firewalls don't have to do everything; they have to do something that makes them worth the time to install and utilize. While many orgs have installed them, I would argue they're not utilizing them effectively because the installation was a check-box on a compliance report, not something the organization wanted to make adequate use of. Other organizations don't have them. But there IS one driving reason to install a web app firewall: a significant number of attacks are stopped before they get to your machine. Let me say that again… The risk of a ne'er-do-well getting root and messing up your application and your server by taking advantage of some flaw in the OS or a library is zero for every attack that is stopped before the packets reach your server. And that's important. Because no matter how secure your code, or the code of your purchased package, or whatever, the overall system is only as strong as the weakest point the attackers can reach. Just like a soldier hiding behind a window is harder to shoot because most of his body is protected, an application resting behind a web application firewall is mostly protected, and thus harder for attackers to take control of. Yes, services and apps have to be open to the world in order for them to present value to your customer, but the fact that a business needs a front door to let customers in does not mean that all businesses should dispense with the front wall of their shop.

Another Branch of the FLGS – Without Doors!

So get the protection a Web Application Firewall offers, put it to the maximum uses that you trust (did I mention I was a coder? They can do more than I'd hand off to them ;-)), and spend the time you save on security development doing coding/projects that add business value – because security is only talked about when it's breached, while the next rev of an important customer portal will be talked about all the time. Hopefully with praise, but talked about, either way.
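To make the "stopped before the packets reach your server" point concrete, here is a deliberately tiny stand-in for what a WAF does at the front door: a toy WSGI middleware that drops requests matching a few crude attack patterns before the application behind it ever sees them. The patterns, port, and app are illustrative assumptions; a real WAF such as ASM uses maintained signature sets, learning, and protocol enforcement rather than three regexes.

```python
# Toy illustration of "stopped before it reaches your server": a tiny WSGI
# middleware that rejects requests matching a few crude injection patterns.
# A real WAF does vastly more; patterns and the app here are assumptions only.
import re
from urllib.parse import unquote_plus
from wsgiref.simple_server import make_server

SUSPICIOUS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # crude SQL injection signature
    re.compile(r"(?i)<script\b"),               # crude XSS signature
    re.compile(r"\.\./"),                       # path traversal
]

def waf_middleware(app):
    def guarded(environ, start_response):
        candidate = unquote_plus(environ.get("PATH_INFO", "") + "?" +
                                 environ.get("QUERY_STRING", ""))
        if any(p.search(candidate) for p in SUSPICIOUS):
            # The request never reaches the protected application.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked"]
        return app(environ, start_response)
    return guarded

def protected_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from the app behind the firewall"]

if __name__ == "__main__":
    make_server("0.0.0.0", 8000, waf_middleware(protected_app)).serve_forever()
```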
Or you could take the extra time and come to Green Bay to visit those FLGSs. They do rock, whether you're into Monopoly, RPGs, CCGs, CMGs, minis, or any of the other XXGs. And no, I don't have any financial interest in them, and no, they don't have a web presence, so you'll have to visit Green Bay to see 'em – but the top store is right by Packer Stadium, so you can get a double benefit for the trip. I will point out that the store with no front door? Yeah, it's counting on the protection of the mall's doors – you know, kind of like an application and a Web App Firewall. It still has a drop-down chain door, though – because layered security is good.
It Is Not What The Market Is Doing, But What You Are.

We spend an obsessive amount of time looking at the market and trying to lean toward accepted technologies. Seriously, when I was in IT management, there were an inordinate number of discussions about the state of market X or Y. While these conversations almost always revolved around what we were doing, and thus were put into context, sometimes an enterprise sits around waiting for everyone else to jump on board before joining in the flood. While sometimes this is commendable behavior, it is just as often self-defeating. If you have a project that could use technology X, then find the best implementation of said technology for your needs, and implement it. Using an alternative or inferior technology just because market adoption hasn't happened will bite you as often as it will save you.

Take PDAs, back in the bad old days when cell phones were either non-existent or just plain phones. Those organizations that used them reaped benefits from them; those that did not… did not. While you could talk forever about the herky-jerky relationship of IT with small personal devices like PDAs, the fact is that they helped management stay better organized and kept salespeople with a device they could manage while on the road going from appointment to appointment – at least for those who didn't wait to see what happened or raise a bunch of barriers and arguments that, retrospectively, appear almost ridiculous. Yeah, data might leak out on them. Of course, that was before USB sticks full of information – and in more than one case entire hard disks – walked away, proving that a PDA wasn't as unique in that respect as people wanted to claim.

There is a whole selection of technologies that seem to have fallen into that same funky bubble – perhaps because, like PDAs, the value proposition was just not quite right. When cell phones became your PDA also, nearly all restrictions on them were lifted in every industry, simply because the cell phone + PDA was a more complete solution. One tool to rule them all and all that.

Palm Pilot, image courtesy of Wikipedia

Like PDAs, there is benefit to be had from going "no, we need this, let's do it". Storage tiering was stuck in the valley of wait-and-see for many years, and finally seems to be climbing out of that valley simply because of the ongoing cost of storage combined with the parallel growth of storage. Still, there are many looking to save money on storage that aren't making the move – almost like there's some kind of natural resistance. It is rare to hear of an organization that introduced storage tiering and then dumped it to go back to independent NAS boxes/racks/whatever, so the inhibition seems to be strictly one of inexperience.

Likewise, cloud suffers from some reluctance that I personally attribute not only to valid security concerns, but to very poor handling of those concerns by industry. If you read me regularly, you know I was appalled when people started making wild claims like "the cloud is more secure than your DC", because that lack of touch with reality made people more standoffish, not less. But some of it is people not seeing a need in their organization, which is certainly valid if they've checked it out and come to that conclusion. Quite a bit of it, I think, is the same resistance that was applied to SaaS early on – if it's not in your physical network, is it in your control?
And that's a valid fear that often shuts down the discussion before it starts – particularly if you don't have an application that requires the cloud (lots of spikes in traffic, for example). Application Firewalls are an easier one in my book – having been a developer, I know that developers are going to be suspicious of canned security protecting their custom app. While I would contend that it isn't "canned security" in the case of an Application Firewall, I can certainly understand their concern, and it is a communications issue on the part of the Application Firewall vendor community that will have to be resolved if uptake is to spike. Regulatory issues are helping, but far better that an organization purchase a product because they believe it helps than because someone forced them to purchase it.

With HP's exit from the tablet market, this is another field that is in danger of falling into the valley of waiting. While it's conjecture, I'll contend that not every organization will be willing to go with iPads as a corporate roll-out for groups that can benefit from tablet PCs – like field sales staff – and RIM is in such a funk that organizations are unlikely to rush their money to them. The only major contender that seems to remain is Samsung with the Galaxy Tab (Android-based), but I bought one for Lori for her last birthday, and as delivered it is really a mini gaming platform, not a productivity tool. Since that is configurable within the bounds set in the Android environment, it might not be such a big deal, but someone will have to custom-install them for corporate use.

But the point is this. If you're spending too much on storage and don't have tiering implemented, contact a vendor that suits your needs and look into it. I of course recommend F5 ARX, but since I'm an F5 employee, expecting anything else would be silly. Along the same lines, find a project and send it to the cloud. It doesn't matter how big or small it is; the point is to build your expertise so you know when the cloud will be most useful to you. And cloud storage doesn't count – for whatever reason it is seeing just peachy uptake (see last Thursday's blog), and it uses a different skill set than cloud for application deployment. Application Firewalls can protect your applications in a variety of ways, and those smart organizations that have deployed them have done so as an additive protection to application development security efforts. If for some odd reason you're against layered protection, then think about this… they can stop most attacks before they ever get to your servers, meaning even fingerprinting becomes a more difficult chore. Of course I think F5 products rule the roost in this market – see my note above about ARX. As to tablet PCs, well, right now you have a few choices; if you can get a benefit from them now, determine what will work for you for the next couple of years and run with it. You can always do a total refresh after the market has matured those couple of years. Right now I see Apple, RIM, and Samsung as your best choices, with RIM being kind of shaky. Lori and I own PlayBooks and love them, but RIM has managed to make a debacle of itself right when they hit the market, and doesn't seem to be climbing out with any speed. Or you could snatch up a whole bunch of those really inexpensive webOS pads and save a lot of money until your next refresh :-).

But if you need it, don't wait for the market. The market is like Facebook… eventually consistent, but not guaranteed consistent at any given moment.
Think more about what's best for your organization and less about what everyone else is doing; you'll be happier that way. And yes, there's slightly more risk, but risk is not the only element in calculating what's best for the organization – it is one small input that can be largely mitigated by dealing with companies likely to be around for the next few years.
Force Multipliers and Strategic Points of Control Revisited

On occasion I have talked about military force multipliers. These are things like terrain and minefields that can make your force able to do its job much more effectively if utilized correctly. In fact, a study of military history is every bit as much a study of battlefields as it is a study of armies. He who chooses the best terrain generally wins, and he who utilizes tools like minefields effectively often does too. Rommel in the desert often used wadis to hide his dreaded 88mm guns – which at the time could rip through any tank the British fielded. For the last couple of years, we've all been inundated with the story of the 300 Spartans who held off an entire army. Of course it was more than just the 300 Spartans in that pass, but they were still massively outnumbered. Over and over again throughout history, it is the terrain and the technology that give a force the edge. Perhaps the first person to notice this trend, and certainly the first to write a detailed work on the topic, was von Clausewitz. His writing is some of the oldest military theory, and much of it is still relevant today, if you are interested in that type of writing.

For those of us in IT, it is much the same. He who chooses the best architecture and makes the most of available technology wins. In this case, as in a war, winning is temporary and must constantly be revisited, but that is indeed what our job is – keeping the systems in tip-top shape with the resources available. Do you put in the tool that is the absolute best at what it does but requires a zillion man-hours to maintain, or do you put in the tool that covers everything you need and takes almost no time to maintain? The answer to that question is not always as simple as it sounds like it should be. By way of example, which solution would you like your bank to put between your account and hackers? Probably a different one than the one you would like your bank to put in for employee timekeeping.

An 88 in the desert, compliments of WW2inColor

Unlike warfare, though, a lot of companies are in the business of making tools for our architecture needs, so we get plenty of options and most spaces have a happy medium. Instead of inserting all the bells and whistles, they inserted the bells and made them relatively easy to configure, or they merged products to make your life easier. When the terrain suits a commander's needs in wartime, the need for force multipliers such as barbed wire and minefields is eliminated, because an attacker can be channeled into the desired defenses by terrain features like cliffs and swamps. The same could be said of your network. There are a few places on the network that are Strategic Points of Control, where so much information (incidentally including attackers, though this is not, strictly speaking, a security blog) is funneled through that you can increase your visibility, level of control, and even implement new functionality. We here at F5 like to talk about three of them: between your users and the apps they access, between your systems and the WAN, and between consumers of file services and the providers of those services. These are places where you can gather an enormous amount of information and act upon that information without a lot of staff effort – force multipliers, so to speak.
When a user connects to your systems, the strategic point of control at the edge of your network can perform pre-application-access security checks, route them to a VPN, determine the best of a pool of servers to service their requests, encrypt the stream (on the front, the back, or both sides), or redirect them to a completely different datacenter or to an instance of the application they are requesting that actually resides in the cloud… the possibilities are endless. When a user accesses a file, the strategic point of control between them and the physical storage allows you to direct them to the file no matter where it might be stored, to optimize the file for the pattern of access that is normally present, and to apply security checks before the physical file system is ever touched – again, the list goes on and on. When an application like replication or remote email is accessed over the WAN, the strategic point of control between the app and the actual Internet allows you to encrypt, compress, dedupe, and otherwise optimize the data before putting it out over your bandwidth-limited, publicly exposed WAN connection.

The first strategic point of control listed above gives you control over incoming traffic and early detection of attack attempts. It also gives you force multiplication with load balancing, so your systems are unlikely to get overloaded unless something else is going on. Finally, you get the security of SSL termination or full-stream encryption. The second point of control gives you the ability to balance your storage needs by scripting movement of files between NAS devices or tiers without the user having to see a single change. This means you can do more with less storage, and support for cloud storage providers and cloud storage gateways extends your storage to nearly unlimited space – depending upon your appetite for monthly payments to cloud storage vendors. The third force-multiplies the dollars you are spending on your WAN connection by reducing the traffic going over it, while offloading a ton of work from your servers, because encryption happens on the way out the door, not on each VM.

Taking advantage of these strategic points of control – architectural force multipliers – offers you the opportunity to do more with less daily maintenance. For instance, the point between users and applications can be hooked up to your ADS or LDAP server and used to authenticate that a user attempting to access internal resources from… say… an iPad… is indeed an employee before they ever get to the application in question. That limits the attack vectors on software that may be highly attractive to attackers. There are plenty more examples of multiplying your impact without increasing staff size or even growing your architectural footprint beyond the initial investment in tools at the strategic point of control. For F5, we have LTM at the Application Delivery Network strategic point of control. Once that investment is made, a whole raft of options can be tacked on – APM, WOM, WAM, ASM, the list goes on again (tired of that phrase for this blog yet?). Since each resides on LTM, there is only one "bump in the wire", but a ton of functionality that can be brought to bear, including integration with some of the biggest names in applications – Microsoft, Oracle, IBM, etc. – adding business value like remote access for devices, while multiplying your IT force.
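To make the "is this really an employee?" pre-check a bit more concrete, here is a minimal sketch of that kind of gate done in front of an application, using the ldap3 library against a directory. This is not how APM implements it; the directory hostname, domain, and group DN are made-up assumptions, and the point is only to illustrate authenticating and authorizing at a point of control before a request ever reaches the application servers.

```python
# Rough sketch of a pre-application employee check at a proxy tier. NOT how
# APM does it internally; directory names and group DN are assumptions.
from ldap3 import Server, Connection, ALL

LDAP_SERVER = "ldaps://ads.example.com"                      # hypothetical
EMPLOYEE_GROUP = "CN=Employees,OU=Groups,DC=example,DC=com"  # hypothetical
BASE_DN = "DC=example,DC=com"

def is_employee(username: str, password: str) -> bool:
    """Bind as the user (proves the credential), then confirm group membership."""
    server = Server(LDAP_SERVER, get_info=ALL)
    conn = Connection(server, user=f"{username}@example.com", password=password)
    if not conn.bind():
        return False  # bad credentials: the app servers never see this request
    # Escape/validate username in real code to avoid LDAP filter injection.
    conn.search(BASE_DN,
                f"(&(sAMAccountName={username})(memberOf={EMPLOYEE_GROUP}))",
                attributes=["cn"])
    allowed = len(conn.entries) == 1
    conn.unbind()
    return allowed

# A front-end proxy would call is_employee() and return 401/403 itself on
# failure, so only authenticated employees ever reach the application tier.
```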
I recommend that you check it out if you haven't; there is definitely a lot to be gained, and it costs you nothing but a little bit of your precious time to look into it. No matter what you do, looking closely at these strategic points of control and making certain you are using them effectively to meet the needs of your organization is easy and important. The network is not just a way to hook users to machines anymore, so make certain that's not all you're using it for. Make the most of the terrain.

And yes, if you also read Lori's blog, we were indeed watching the same shows and talking about this concept, so it's no surprise our blogs are on similar wavelengths.

Related Blogs:
- What is a Strategic Point of Control Anyway?
- Is Your Application Infrastructure Architecture Based on the ...
- F5 Tech Field Day – Intro To F5 As A Strategic Point Of Control
- What CIOs Can Learn from the Spartans
- What We Learned from Anonymous: DDoS is now 3DoS
- What is Network-based Application Virtualization and Why Do You ...
- They're Called Black Boxes Not Invisible Boxes
- Service Virtualization Helps Localize Impact of Elastic Scalability
- F5 Friday: It is now safe to enable File Upload
The Security Question

We were sitting and chatting with a fellow geek last night, and he was describing a corporate network he is familiar with. The description was like a tale from the old show "The Twilight Zone". If it was a security vulnerability, it was present. If it was a standard and accepted security procedure, it was not present. The story got scarier by the minute, and was largely explained when the punch line was "they've had 200% admin turnover in the last few years." Actually, I don't know if it was 200% – I suspect it was higher as a percentage – but I'm purposely obfuscating the numbers because it's creepy to talk about how many people they'd lost even though you don't know who "they" are. Even if your turnover is high, you just can't do things like "no DMZ, inside is outside all over the place, dual-homed". You really can't. And that really is a quote. "A cheap little (fill in F5 competitor here) SMB firewall that doesn't seem to work with all the rules in place" was another really scary statement.

We here at F5 can help you implement standardized application security, VPN security, secure remote tunnels, URL obfuscation on the fly, and a wealth of other things… but we can't help if you don't have a documented procedure to keep your staff on point, even in times of employee churn. In fact, even if you've got high turnover due to rates of pay, benefits, or whatever, might I humbly suggest that you give the security admins golden handcuffs? Really, if you're going to have an online presence, there are a lot of critical jobs, but the threat from the Internet is institutionalized and large, so the most critical (again, assuming you have public-facing apps on-site) is, in my not-so-humble opinion, the security staff. They can double as a whole lot of other staff in a pinch because they have to know enough to be dangerous about both network and applications, and having a perfectly running app or a finely tuned network does you no good if you are hosting a botnet.

Image compliments of IMDb

Of course I could argue the opposite side of that: most of us have worked in one or more places where the security staff wasn't a staff, it was the developers, systems admins, and network admins, each doing their part. But the complexity and ferocity of attackers have steadily increased, and I think those days are increasingly behind us. Particularly the larger your data center infrastructure, the more important it is to have someone (or someones) dedicated to watching the security aspect and doing impact analysis when things do go wrong. Like any other job, there's only so far that security can go as a part-time job.

Don't go through life like the topic of a Twilight Zone story, wondering what is around the next corner and what surprises are in store for you. Build a solid security team, get quality security products, and have a plan and documented standards so that turnover doesn't create a total mish-mash of security policies that no one can maintain. This is going to get harder, not easier. The word from the street is a resounding "we are trying or moving to the cloud, and we are scared to death of the security implications". That means security for your organization's bit of the cloud is coming your way. When I say "get quality security products", include "ones built with the cloud in mind" in that equation.
Technical Options. Opportunity and Confusion

One of the things that I love about technology is the fact that every time there is a problem, five solutions crop up to solve it. One of the things I hate about technology is the fact that every time there is a problem, five solutions crop up to solve it… and there are marketing geeks and pundits willing to tell you which one to choose before you even know that you have the problem.

I was out in Anaheim last week with F5's rockstar salesforce, telling them about the future of IT. Or trying to – you'll have to ask them if I imparted any worthwhile information, since I haven't seen the evaluations of my presentations yet. One thing that struck me from the ensuing discussions, though, is that there are people in IT who know their stuff but are still confused about which solutions are best for long-distance problems. The sales team told me repeatedly that their customers are sometimes uncertain of their needs when talking about access control and acceleration. They of course got the F5-biased, product-laden answers; I'll skip that for you all here and just mention that "F5 has products in each of these spaces – talk to your sales folks." Though I've included the F5 product list in this article's tags if you want an idea of what to talk with sales people about.

Remote office communications are often slowed by the need for a WAN connection to the home datacenter. They also have more precise security requirements than your average Internet connection – you need to know that those accessing your applications from the remote office actually have the rights to do so, since most often remote office users have access to your core systems. So you need an SSL VPN and/or application-level authentication, along with something to make those connections speedy. Normally this would be Application Acceleration, but you might possibly also require WAN Optimization if there is a lot of repetitive data being thrown across the line. If you're not using an SSL VPN, then you need some form of secure tunnel over the line between remote office and datacenter – after all, locking down both ends does you no good if you're unencrypted in the middle.

I didn't get a picture of any of my sessions, so you'll have to settle for this PowerPoint image

Datacenter-to-datacenter communications are less user intensive, and thus less browser intensive, so the benefit of Application Acceleration is less and the benefit of WAN Optimization is commensurately greater. You still need secure connections, but perhaps not an SSL VPN – it all depends upon how the secondary datacenter's systems are managed. If they're managed from the primary datacenter, then you probably want to have an SSL VPN just to put something between the ne'er-do-wells and your systems. Otherwise, secure, encrypted tunnels to transfer data will do the trick. Of course there are a lot of considerations here, and you know your systems better than anyone else, so consider how many remote logins the remote datacenter has; that will give you an idea of whether you need an SSL VPN.

For users hitting your website, the requirements are closer to a remote office, but not quite so stringent. You'll still want an application firewall, and you'll want to speed things up in a manner that won't impact browsers negatively – faster is only useful if the page remains unchanged from your implementation. So Application Acceleration and a web application firewall should do the trick.
My experience with application acceleration is that you want a tool that has a lot of knobs and dials, because no two websites are the same. You'll want to exclude some content from acceleration, tweak the settings on other content, and so on (a toy illustration of this kind of per-content policy follows at the end of this post). And with all of these solutions you'll want frequent updates (particularly to firewalls) and a world-class service organization, because the products sit right in your line of production and you don't want to waste a ton of time figuring out what's going wrong or waiting for replacement parts.

We're not the only vendor on the planet that offers you solutions in these spaces, so check out the market. Of course I think ours are the best – if I didn't, I'd be off working where I DID think they were the best. But every organization is different; find a vendor (or some vendors) that suits your organization's needs the best. And check to see how they support cloud, because it is coming to a datacenter near you.
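As promised, here is a toy illustration of the "knobs and dials" idea: deciding per response whether to compress and how long to let browsers cache it, based on content type and path. The rules, values, and sample paths are made-up examples for illustration only, not recommendations and not how any acceleration product expresses its policy.

```python
# Illustrative per-content acceleration policy: compress and cache text assets,
# leave media and dynamic API responses alone. Rules and values are made up.
import gzip

ACCEL_RULES = [
    # (predicate(content_type, path), compress?, cache_seconds)
    (lambda ct, path: path.startswith("/api/"),            False, 0),      # dynamic: leave alone
    (lambda ct, path: ct.startswith(("image/", "video/")), False, 86400),  # already-compressed media
    (lambda ct, path: ct.startswith("text/") or ct == "application/javascript",
                                                            True,  3600),  # text assets: compress + cache
]

def accelerate(body: bytes, content_type: str, path: str):
    """Return (body, extra_headers) after applying the first matching rule."""
    for matches, compress, cache_seconds in ACCEL_RULES:
        if matches(content_type, path):
            headers = {"Cache-Control": f"max-age={cache_seconds}" if cache_seconds else "no-store"}
            if compress:
                body = gzip.compress(body)
                headers["Content-Encoding"] = "gzip"
            return body, headers
    return body, {}

if __name__ == "__main__":
    css = b"body { color: #333; } " * 500
    body, headers = accelerate(css, "text/css", "/static/site.css")
    print(len(css), "bytes ->", len(body), "bytes,", headers)
```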
Many Stops is Good for Vacation, Not for WAN Opt

Anyone who has children and travels by car will tell you that there is no substitute for the mandatory array of bathroom breaks that must be taken by those children. One of the many reasons I prefer to travel at night when driving long distances is that children who are asleep are not asking to pull into the next rest stop for yet another restroom break. And I was one of those children. My father once told me I had the smallest bladder on the planet… right before my mother made him stop at a gas station for me. Another favorite is the "tourist trap" stop, where someone in the car with authority decides to stop, even though everyone over the age of 12 knows that the stop will be wasted on sites like "The World's only three-horned steer!" or "The ultimate hedge maze!" These may be things I've seen. May be.

When driving, these stops slow your drive to your destination and, if someone was waiting for you, ultimately make you late. But since you are on a discrete vacation, this is not terribly bad: you get less time on-site, but in the end you return home and get back into your old routine. This is completely not true with your data.

Related Articles and Blogs:
- Latency Definitions on Wikipedia
- Latency and IP
- Verizon Business WW Network Latency Chart
- How to Test Network and Internet Latency in MS-Windows (very basic)
- Researchers Crack Network Latency Nut With New Algorithm