F5 Edge Gateway
Window Coverings and Security
Note: While talking about this post with Lori during a break, it occurred to me that you might be thinking I meant “MS Windows”. Not this time, but that gives me another blog idea… And I’ll sneak in the windows –> Windows simile somewhere, no doubt.

Did you ever ponder the history of simple things like windows? Really? They evolved from open spaces to highly complex triple-paned, UV-resistant, crank-operated monstrosities. And yet they serve basically the same purpose today that they did when they were just openings in a wall. Early windows were for ventilation and were only really practical in warm locales. Then shutters came along, which solved the warm/cold problem and kept rain off the bare wood or dirt floors, but weren’t very airtight. So to resolve that problem, a variety of materials from greased paper to animal hides were used to cover the holes while letting light in. This progression was not chronologically linear; it happened in fits and starts, with some parts of the world and social classes having glass windows long before the majority of people could afford them. When melted sand turned out to be relatively see-through, though, the end was inevitable. Glass was placed into windows so the weather stayed mostly out while the sun came in. The ability to open windows helped to “air out” a residence or business on nice warm days, and closing them avoided excessive heat loss on cold days. At some point, screens came along that kept bugs and leaves out when the windows were open. Then artificial glass and double-paned windows came along, and now there are triple-paned windows that you can buy with blinds built into the frame, that you can open fully, flip down, and clean the outside of without getting a ladder and taking a huge chunk of your day. Where are windows headed next? I don’t know.

This development of seemingly unrelated things – screens and artificial glass and crankable windows – came about because people were trying to improve their environment. And that, when it comes down to it, is why we see advancement in any number of fields. In IT security, we have Web Application Firewalls to keep application-targeting attacks out, SSL to keep connections secure, firewalls to keep generic attacks out, and anti-virus to catch anything that makes it through. And that’s kind of like the development of windows, screens, awnings, and curtains… all layers built up through experience to tackle the problem of letting the good (sunshine) in while keeping the bad (weather, dust, cold) out. Curtains even provide an adjustable filter for sunlight to come through. Open them to get more light in, close them to get less… because there is a case where too much of a good thing can be bad. Particularly if your seat at the dining room table is facing the window and the window is facing directly east or west.

We’re at a point in the evolution of corporate security where we need to deploy these various technologies together to do much the same with our public network that windows do with the outside: filter out the bad in its various forms and allow the good in. Even have the ability to crank down on the good so we can avoid getting too much of a good thing. Utilizing an access solution to allow your employees access to the systems they require from anywhere or any device enables the business to do their job, while protecting against any old hacker hopping into your systems – it’s like a screen that allows the fresh air in, but filters out the pests.
Utilizing a solution that can protect publicly facing applications from cross-site scripting and SQL injection attacks is also high on the list of requirements – or should be. Even if you have policies to force your developers into checking for such attacks in all of their code, you still have purchased apps that might need exposing, or a developer might put in an emergency fix to a bug that wasn’t adequately security tested. It’s just a good idea to have this functionality in the network. That doesn’t even touch upon certification and audit reasons for running one, and those are perhaps the biggest drivers. Since I mentioned compliance: a tool that offers reporting is like the sun shining in the window and making things too warm – you know when you need to shut the curtains, or tighten your security policy, as the case may be.

XML firewalls are handy when you’re using XML as a communications method and want to make certain that a hacker didn’t mock up anything from an SQL injection attack hidden in XML to an “XML bomb” type attack. Combined with access solutions and web application firewalls, they’re another piece of our overall window(s) covering. If you’re a company whose web presence is of utmost importance, or one where a sizeable or important part of your business is conducted through your Internet connection, then DoS/DDoS protection is just plain and simply a good idea. Indeed, if your site isn’t available, it doesn’t matter why, so DDoS protection should be on the mandatory checklist. SSL encryption is a fact of life in the modern world, and another one of those pieces of the overall window(s) covering that allows you to communicate with your valid users but shut out the invalid or unwanted ones. If you have employees accessing internal systems, or customers making purchases on your website, SSL encryption is pretty much mandatory. If you don’t have either of those use cases, there are still plenty of good reasons to keep a secure connection with your users, and it is worth considering if you have access to the technology and the infrastructure to handle it.

Of course, it is even cooler if you can do all of the above and more on a single high-performance platform designed for application delivery and security. Indeed, a single infrastructure point that could handle these various tasks would be very akin to a window with all of the bells and whistles. It would keep out the bad, let in the good, and through the use of policies (think of them as curtains) allow you to filter the good so that you are not letting too much in. That platform would be F5 BIG-IP LTM, ASM, and APM – maybe with some EDGE Gateways thrown in there if you have remote offices. All in one place, all on a single bit of purpose-built, high-performance Application Delivery Network hardware. It is indeed worth considering.

In this day and age, the external environment of the Internet is hostile; make certain you have all of the bits of security/window infrastructure necessary to keep your organization from being the next corporation to have to send data breach notifications out. Not all press is good press, and that’s one we’d all like to avoid. Review your policies, review your infrastructure, make sure you’re getting the most from your security architecture, and go home at the end of the day knowing you’re protecting corporate assets AND enabling business users. Because in the end, that’s all part of IT’s job.
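On the developer-side checks mentioned above, here is a minimal Python sketch (standard library sqlite3, with hypothetical table and column names) of the difference between code that invites SQL injection and code that does not. It is illustrative only – a web application firewall in front of the application is there to backstop exactly this kind of mistake when it slips through.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-supplied input is concatenated straight into SQL.
    # A username like "x' OR '1'='1" changes the meaning of the query.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data,
    # so injection attempts are matched literally instead of executed.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    evil = "x' OR '1'='1"
    print(find_user_unsafe(conn, evil))  # returns every row -- the injection works
    print(find_user_safe(conn, evil))    # returns nothing -- the input is just data
```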
Just remember to go back and look it over again next year if you are one of the many companies that don’t have dedicated security staff watching this stuff. It’s an ugly Internet out there – you and your organization be careful…

No Really. Broadband.
In nature, things seek a balance that is sustainable. In the case of rivers, if there is too much pressure from water flowing, they either flood or open streams to let off the pressure. Both are technically examples of erosion, but we’re not here to discuss that particular natural process; we’re here to consider the case of a stream off a river when there is something changing the natural balance. Since I grew up around a couple of man-made lakes – some dug, some created when the mighty AuSable River was dammed – I’ll use man-made lakes as my examples, but there are plenty of more natural examples – such as earthquakes – that create the same type of phenomenon.

Now that I’ve prattled a bit, we’ll get down to the science. A river will sometimes create off-shoots that run to relieve pressure. When these off-shoots stay and have running water, they’re streams or creeks. Take the river in the depiction below: the river flows right to left, and the stream is not a tributary – it is not dumping water into the river, it is a pressure relief stream taking water out. These form in natural depressions when, over time, the flow of a river is more than erosion can adjust for. They’re not at all a problem, and indeed distribute water away from the source river and into what could be a booming forest or prime agricultural land. But when some event – such as man dredging a man-made lake – creates a vacuum at the end of the stream, then the dynamic changes. Take, for example, the following depiction. When the bulbous lake at the top is first dug, it is empty. The stream will have the natural resistance of its banks removed, and will start pulling a LOT more water out of the river. This can have the effect of widening the stream in areas with loose-packed soil, or of causing it to flow really fast in less erosion-friendly environments like stone or clay. Either way, there is a lot more flowing through that stream. Make the lake big enough, and you can divert the river – at least for a time, and depending upon geography, maybe for good. This happens because water follows the path of least resistance, and if the pull from that gaping hole you dug is strong enough, you will quickly cause the banks of the stream to erode and take the entire river’s contents into your hole.

And that is pretty much what public cloud adoption promises to do to your Internet connection. At 50,000 feet, your network environment today looks like this: notice how your Internet connection is comparable to the stream in the first picture, where it’s only taking a tiny fraction of the traffic that your LAN is utilizing? Well, adding in public cloud is very much like digging a lake. It creates more volume running through your Internet connection. If you can’t grow the width of your connection (due to monthly overhead implications), then you’re going to have to make it go much faster. This is going to be a concern, since most applications of cloud – from storage to apps – are going to require two-way communication with your datacenter. Whether it be for validating users or accessing archived files, there’s going to be more traffic going through your WAN connection and your firewall.

Am I saying “don’t use public cloud”? Absolutely not. It is a tool like any other; if you are not already piloting a project out there, I suggest you do so, just so you know what it adds to your toolbox and what new issues it creates.
But one thing is certain: the more you’re going “out there” for apps and data, the more you’ll need to improve the performance of your Internet connections. Mandatory plug: F5 sells products like WOM, EDGE Gateway, and WAM to help you improve the throughput of your WAN connection, and they would be my first stop in researching how to handle increased volumes generated by cloud usage… but if you are a “Vendor X” shop, look at their WAN Optimization and Web Acceleration solutions. Don’t wait until this becomes an actual problem rather than a potential one – when you set up a project team to do a production project out in the public cloud, along with security and appdev, make sure to include a WAN optimization specialist, so you can make certain your Internet connection is not the roadblock that sinks the project.

This is also the point where I direct your attention to that big firewall in the above diagram. Involve your security staff early in any cloud project. Most of the security folks I have worked with are really smart cookies, but they can’t guarantee the throughput of the firewall if they don’t know you’re about to open up the floodgates on them. Give them time to consider more than just how to authenticate cloud application users.

I know I’ve touched on this topic before, but I wanted it graphically drawn out, so you got to see my weak MS-Paint skills in action, and hopefully I gave you a more obvious view of why this is so important.

As Network Speeds Increase, Focus Shifts
Someone said something interesting to me the other day, and they’re right: “At 10 Gig WAN connections with compression turned on, you’re not likely to fill the pipe; the key is to make certain you’re not the bottleneck.” (The other day is relative – I’ve been sitting on this post for a while.)

I saw this happen when 1 Gig LANs came about. Applications at the time were hard pressed to actually use up a gigabit of bandwidth, so the focus became how slow the server and application were, whether the backplane on the switch was big enough to handle all that was plugged into it, etc. After this had gone on for a while, server hardware became so fast that we chucked application performance under the bus in most enterprises. And then those applications were running on the WAN, where we didn’t have really fast connections, and we started looking at optimizing those connections in lieu of optimizing the entire application.

But there is only so much that an application developer can do to speed network communications. Most of the work of network communications is out of their hands, and all they control is the amount of data they send over the pipe. Even then, if persistence is being maintained, how much data they send may be dictated by the needs of the application. And if you are one of those organizations that has situations where databases are communicating over your WAN connection, that is completely outside the control of application developers. So the speed bottleneck became the WAN.

For every problem in high tech there is a purchasable solution, though, and several companies (including F5) offer solutions for both WAN Acceleration and Application Acceleration. The cool thing about solutions like BIG-IP WebAccelerator, EDGE Gateway, and WOM is that they speed application performance (WebAccelerator for web-based applications, WOM for more back-end applications or remote offices) while reducing the amount of data being sent over the wire – without requiring work on the part of developers. As I’ve said before: if developers can focus on solving the business problems at hand and not the technical issues that sit in the background, they are more productive.

Now that WAN connections are growing again, you would think we would be poised to shift the focus back to some other piece of the huge performance puzzle, but this stuff doesn’t happen in a vacuum, and there are other pressures growing on your WAN connection that keep the focus squarely on how much data it can pass. Those pressures are multi-core, virtualization, and cloud. Multi-core increases the CPU cycles available to applications; to keep up, server vendors have been putting more NICs in every given server, increasing potential traffic on both the LAN and the WAN. With virtualization we have a ton more applications running on the network, and the comparative ease with which they can be brought online implies this trend will continue. Cloud not only does the same thing, but puts the instances on a remote network that requires trips back to your datacenter for integration and database access (yes, there are exceptions; I would argue not many). These trends mean that the size of your pipe out to the world is not only important, but – because it is a monthly expense – must be maximized. By putting in both WAN Optimization and Web Application Acceleration, you stand a chance of keeping your pipe from growing to the size of the Alaska pipeline, and that means savings for you on a monthly basis.
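Since the point is how much less data crosses the wire, here is a minimal sketch using nothing but Python’s standard zlib on a deliberately repetitive payload. It is purely illustrative – real WAN optimization gear combines compression with deduplication, protocol optimization, and TCP tuning, and actual ratios depend entirely on your data.

```python
import zlib

# A deliberately repetitive payload, standing in for the XML, logs, or database
# extracts that tend to cross the WAN over and over.
record = b"<order><sku>ABC-123</sku><qty>1</qty><status>shipped</status></order>\n"
payload = record * 5000

compressed = zlib.compress(payload)

print(f"original:   {len(payload):>9,} bytes")
print(f"compressed: {len(compressed):>9,} bytes")
print(f"ratio:      {len(payload) / len(compressed):.1f}:1")
```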
You’ll also see that improved performance that is so elusive. Never mind that as soon as one bottleneck is cleared another will crop up; that comes with the territory. By clearing this one you’ll have improved performance until you hit the next plateau, and you can then focus on settling it, secure in the knowledge that the WAN is not the bottleneck. And with most technologies – certainly with those offered by F5 – you’ll have the graphs and data to show that the WAN link isn’t the bottleneck. Meanwhile, your developers will be busy solving business problems, and all of those cores won’t go to waste.

Photo of caribou walking alongside the pipeline, taken July 1998 by Stan Shebs

How Developers Will Impact Cloud Expenses
We developers used to be obsessed with optimizations. Like a child with an Erector Set and a whole lot of spare parts, we always wanted to “make it better”. In our case, better was faster and using less memory/CPU resources. Where development came from – a few kilobytes of memory, a much slower CPU, and non-optimizing compilers – this all made sense. But the rest of IT, and indeed the business, didn’t want to see us build our Erector Set higher or make our code more complex but more efficient; machines were speeding up at a relatively constant rate and the need was no longer there. Flash forward to today, and we have multiple cores running at hundreds of times the speed of the 286 and 386 families, memory that would have been called “infinite” or “unbelievable” in those days, compilers that optimize, the web server and networking layers in front of most apps, and everything from the bus to the hard disk running faster. You would think that the need to optimize was 100% behind us, right?

Dell Buys Ocarina Networks. Dedupe For All?
Storage-at-rest de-duplication has been a growing point of interest for most IT staffs over the last year or so, simply because de-duplication allows you to purchase less hardware over time – and if that hardware is a big old storage array sucking a ton of power and costing a not-insignificant amount to install and maintain, well, it’s appealing. Most of the recent buzz has been about primary storage de-duplication, but that is merely a case of where the market is. Backup de-duplication has existed for a good long while, and secondary storage de-duplication is not new. Only recently have people decided that at-rest de-dupe was stable enough to give it a go on their primary storage – where all the most important and/or active information is kept. I don’t think I’d call it a “movement” yet, but it does seem that the market’s resistance to anything that obfuscates data storage is eroding at a rapid rate, due to the cost of the hardware (and attendant maintenance) required to keep up with storage growth.

Related Articles and Blogs:
Dell-Ocarina deal will alter landscape of primary storage deduplication
Data dedupe technology helps curb virtual server sprawl
Expanding Role of Data Deduplication
The Reality of Primary Storage Deduplication

Stop Repeating Yourself. Deduping WAN-Opt Style
Ever hang out with a person who just wants to make their point, and no matter what the conversation is about, says the same thing over and over in slightly different ways? Ever want to tell them they were doing their favorite cause/point/whatever a huge disservice by acting like a repetitive fool? That’s what your data is doing when you send it across the WAN.

Ever seen the data in a database file? Or in your corporate marketing documents? R E P E T I T I V E. And under a normal backup or replication scenario – or a remote office scenario – you are sending the same sequence of bytes over and over and over. Machines may be quad-word these days, but your pipe is still measured in bits. That means even most of your large integers have 32 bits of redundant zeroes. Let’s not talk about all the places your corporate logo is in files, or how many times the word “the” appears in your documents.

It is worth noting for those of you just delving into this topic that WAN deduplication shares some features and even technologies with storage deduplication, but because the WAN has to handle an essentially unlimited stream of data running through it, and it does not have to store that data and keep differentials or anything moving forward, it is a very different beast than disk-based deduplication. WAN deduplication is more along the lines of “fire and forget” (though forget is the wrong word, since it keeps duplicate info for future reference) than storage, which is “fire and remember exactly what we did”.

Thankfully, your data doesn’t have feelings, so we can offer a technological solution to its repetitive babbling. There are a growing number of products out there that tell your data “Hey! Say it once and move on!” These products either are, or implement, in-flight data deduplication. These devices require a system on each end – one to dedupe, one to rehydrate – and there are a variety of options the developer can choose, along with a few that you can choose, to make the deduplication of higher or lower quality. Interestingly, some of these options are perfect for one customer’s data set and not at all high-return for another’s. So I thought we’d talk through them generically, giving you an idea of what to ask your vendor when you consider deduplication as part of your WAN Optimization strategy.
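To make the dedupe/rehydrate pairing concrete, here is a minimal Python sketch of the idea – not any vendor’s algorithm. The sender replaces chunks it has already sent with a short hash reference, and the receiver keeps the same table to rebuild the stream. It uses naive fixed-size chunks purely for brevity; real products use content-defined chunking (among many other tricks) so repeats are found even when they are not neatly aligned.

```python
import hashlib

CHUNK = 256  # toy fixed-size chunks; real gear uses content-defined boundaries

def dedupe_stream(data):
    """Sender side: emit ('raw', chunk) the first time a chunk is seen, ('ref', digest) after."""
    seen = set()
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            yield ("ref", digest)      # a 32-byte reference instead of the repeated bytes
        else:
            seen.add(digest)
            yield ("raw", chunk)

def rehydrate(messages):
    """Receiver side: rebuild the original stream from raw chunks and references."""
    store, out = {}, []
    for kind, payload in messages:
        if kind == "raw":
            store[hashlib.sha256(payload).digest()] = payload
            out.append(payload)
        else:
            out.append(store[payload])
    return b"".join(out)

if __name__ == "__main__":
    record = b"2010-08-12 10:41:07 order shipped sku=ABC-123 qty=1\n".ljust(CHUNK, b".")
    data = record * 500 + b"one chunk that only appears once"
    messages = list(dedupe_stream(data))
    on_the_wire = sum(len(payload) for _, payload in messages)
    assert rehydrate(messages) == data
    print(f"original: {len(data):,} bytes   on the wire: {on_the_wire:,} bytes")
```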
Related Articles and Blogs:
WAN Optimization Continues to Evolve
Best Practices for Deploying WAN Optimization with Data Replication
Like a Matrushka, WAN Optimization is Nested

Cloud Storage Gateways. Short term win, but long term…?

In the rush to cloud, there are many tools and technologies out there that are brand new. I’ve covered a few – nowhere near a complete list – but it’s interesting to see what is going on out there from a broad-spectrum view. I have talked a bit about Cloud Storage Gateways here, and I’m slowly becoming a fan of this technology for those who are considering storing in the cloud tier. There are a couple of good reasons to consider these products, and I was thinking about those reasons and their standing validity, so I thought I’d share with you where I stand on them at this time, and what I see happening that might impact their value proposition.

The two vendors I have taken some time to research while preparing this blog for you are Nasuni and Panzura. No doubt there are plenty of others, but I’m writing you a blog here, not researching a major IT initiative, so I researched two of them to have some points of comparison, and leave the in-depth vendor selection research to you and your staff. These two vendors present similar base technology and very different additional feature sets. Both rely heavily upon local caching in the controller box, both work with multiple cloud vendors, and both claim to manage compression.

Nasuni delivers as a Virtual Appliance; it includes encryption on your network before transmitting to the cloud, automated cloud provisioning, and caching with timed updates to the cloud that can perform a forced update if the cache gets full. It presents the cloud storage you’ve provisioned as a NAS on your end.

Panzura delivers a hardware appliance that also presents the cloud as a NAS, works with multiple cloud vendors, handles encryption on-device, and claims global dedupe. I say claims, because “global” has a meaning that is “all”, and in their case “all” means “all the storage we know about”, not “all the storage you know”. I would prefer a different term, but I get what they mean – like everything else, they can’t de-dupe what they don’t control. They too present the cloud storage you’ve provisioned as a NAS on your end, but claim to accelerate CIFS and NFS also. Panzura is also trying to make a big splash about speeding access to MS-Sharepoint, but honestly, as a TMM for F5 – a company that makes two astounding products that speed access to Sharepoint and nearly everything else on the Internet (LTM and WOM) – I’m not impressed by Sharepoint acceleration. In fact, our Sharepoint Application Ready Solution is here, and our list of Application Ready Solutions is here. Those are just complete architectures we support directly, and don’t touch on what you can do with the products through Virtuals, iRules, profiles, and the host of other dials and knobs. I could go on and on about this topic, but that’s not the point of this blog, so suffice it to say there are some excellent application acceleration and WAN Optimization products out there, so this point solution alone should not be a buying criterion.

There are some compelling reasons to purchase one of these products if you are considering cloud storage as a possible solution. Let’s take a look at them.

Present cloud storage as a NAS – This is a huge benefit right now, but over time the importance will hopefully decrease as standards for cloud storage access emerge. Even if there is no actual standard that everyone agrees to, it will behoove smaller players to emulate the larger players that are allowing access to their storage in a manner that is similar to other storage technologies.
Encryption – As far as I can see, this will always be a big driver. They’re taking care of encryption for you, so you can sleep at night as they ship your files to the public cloud. If you’re considering them for non-public cloud, this point may still be huge if your pipe to the storage is over the public Internet.

Local Caching – With current broadband bandwidths, this will be a large driver for the foreseeable future. You need your storage to be responsive, and local caching increases responsiveness; depending upon implementation, cache size, and how many writes you are doing, this could be a huge improvement.

De-duplication – I wish I had more time to dig into what these vendors mean by dedupe. Replacing duplicate files with a symlink is simplest and most resembles existing file systems, but it is also significantly less effective than partial-file de-dupe. Let’s face it, most organizations have a lot more duplication lying around in files named Filename.Draft1.doc through Filename.DraftX.doc than they do in completely duplicate files. Check with the vendors if you’re considering this technology to find out what you can hope to gain from their de-dupe. This is important for the simple reason that in the cloud, you pay for what you use. That makes de-duplication more important than it has historically been.

The largest caution sign I can see is vendor viability. This is a new space, and we have plenty of history with early-entry players in a new space. Some will fold, some will get bought up by companies in adjacent spaces, some will be successful… at something other than Cloud Storage Gateways, and some will still be around in five or ten years. Since these products compress, encrypt, and de-dupe your data, and both of them manage your relationship with the cloud vendor, losing them is a huge risk. I would advise some due diligence before signing on with one – new companies in new market spaces are not always a risky proposition, but you’ll have to explore the possibilities to make sure your company is protected. After all, if they’re as good as they seem, you’ll soon have more data running through them than you’ll have free space in your data center, making eliminating them difficult at best.

I haven’t done the research to say which product I prefer, and my gut reaction may well be wrong, so I’ll leave it to you to check into them if the topic interests you. They would certainly fit well with an ARX, as I mentioned in that other blog post. Here’s a sample architecture that would make “the Cloud Tier” just another piece of your virtual storage directory under ARX, complete with the automated tiering and replication capabilities that ARX owners thrive on. This sample architecture shows your storage going to a remote data center over EDGE Gateway, to the cloud over Nasuni, and to NAS boxes, all run through an ARX to make the client (which could be a server or a user – remember, this is the NAS client) see a super-simplified, unified directory view of the entire thing. Note that this is theoretical; to my knowledge no testing has occurred between Nasuni and ARX, and usually (though certainly not always) the storage traffic sent over EDGE Gateway will be from a local NAS to a remote one, but there is no reason I can think of for this not to work as expected – as long as the Cloud Gateway really presents itself as a NAS. That gives you several paths to replicate your data, and still presents client machines with a clean, single-directory NAS that participates in ADS if required.
In this case, Tier One could be NAS Vendor 1, Tier Two NAS Vendor 2, your replication targets securely connected over EDGE Gateway, and Tier Three (things you want to save but no longer need to replicate, for example) the cloud as presented by the Cloud Gateway. The Cloud Gateway would arbitrate between actual file systems and whatever idiotic interface the cloud provider decided to present and tell you to deal with, while the ARX presents all of these different sources as a single-directory-tree NAS to the clients, handling tiering between them, access control, etc. And yes, if you’re not an F5 shop, you could indeed accomplish pieces of this architecture with other solutions. Of course, I’m biased, but I’m pretty certain the result would not be nearly as efficient or cool, or let you sleep as well at night.

Storage is complicated, but this architecture cleans it up a bit. And that’s got to be good for you. All things considered, the only issue that is truly concerning is failure of a startup cloud gateway vendor. If another vendor takes one over, they’ll either support the product or provide a migration path; if the startup is successful at something else, you’ll have plenty of time to move off of their storage gateway product. So only outright failure is a major concern.

Related Articles and Blogs:
Panzura Launches ANS, Cloud Storage Enabled Alternative to NAS
Nasuni Cloud Storage Gateway
InfoSmack Podcasts #52: Nasuni (Podcast)
F5’s BIG-IP Edge Gateway Solution Takes New Approach to Unifying, Optimizing Data Center Access
Tiering is Like Tables or Storing in the Cloud Tier

IT Managers: Good Ideas Need Guidance
It is Memorial Day here in the US, where we remember those who served our country in the military – particularly those who gave their lives in military service. So I thought I’d tell you a cautionary tale of a good idea gone horribly wrong…

Related Articles and Blogs:
Why IT Managers Need to Take Control of Public Cloud Computing
Outsourcing IT: A Debate on NetworkWorld and How Cloud Fits
Never Outsource Control
Like Garth, We Fear Change
Emergent Cloud Computing Business Models (actually a good blog overall)

Our data is so deduped that no two bits are alike!
Related Articles and Blogs:
Dedupe Ratios Do Matter (NWC)
Ask Dr Dedupe: NetApp Deduplication Crosses the Exabyte Mark (NetApp)
Dipesh on Dedupe: Deduplication Boost or Bust? (CommVault)
Deduplication Ratios and their Impact on DR Cost Savings (About Restore)
Make the Right Call (Online Storage Optimization) – okay, that one’s a joke
BIG-IP WAN Optimization Module (f5 – PDF)
Like a Matrushka, WAN Optimization is Nested (F5 DevCentral)