DevCentral Top5 02/15/2012
Welcome to a special "yes, I know it's Wednesday, but I won't be here Friday" edition of the Top5. There has already been some great content in the last week or so, which makes it easy to do an edition mid-week, but that's not unusual. Given the amount of awesome content that can generally be found roaming the wilds of DevCentral, it isn't uncommon to have enough to fill up the Top5 by Wednesday. This week I am taking advantage of that fact. Though I have no doubt there will be still more goodness to come this week, you'll have to manage for yourselves...so dig deep and see what's out there! In the meantime, here are a few great pieces with which to get started:

iRules Concepts: Tcl, The How and Why
http://bit.ly/z9j18P
One of the questions that we get asked from time to time is, "Why Tcl?" Those people are referring, of course, to the interpreter we chose as the underlying infrastructure for iRules. I've answered this question several times, and frankly it touches on many solid, deep-dive concepts about iRules that are worth discussing: how they work at their core, TMM interaction, byte code compilation and more. So...that's what I did. This article looks to shed some light on iRules history, anatomy, our choices in regard to their underpinnings, and why we do what we do the way we do it. What it lacks in code samples and graphs, it makes up for in sheer word count (if, you know, that's your thing), but hopefully others find it useful content...I certainly did.

Google reCaptcha Verification With Sideband Connections
http://bit.ly/A4PAma
One of the many awesome Tech Tips that George has written recently...this one eluded the Top5 in previous weeks because there was just too much good stuff to share. Having read through it again this week, though, I decided it needs to make the hit list. It shows off one of the key iRules features in v11, sideband connections, and how to do something very handy with them. Real-world applications of bleeding-edge iRules features in a consumable, organized, easy-to-follow format...yep, that's kind of my thing. So here it is, better late than never. Take a read and see what else George has been up to; it's definitely worth the time.

F5 ARX WAN Optimization with WOM
http://bit.ly/w9bhsg
Pushing out an example from the field is a treat for me, and this week is no exception. Michael Fabiano, one of the FSEs here at F5, put together a very solid article on ARX WAN optimization with WOM. If you've been curious about possible solutions for multi-data center storage, this is the article for you. There are many things from an F5 perspective that can be done to streamline and optimize the general multi-location storage deployment, and those benefits are broken out here in an easy to follow (and implement) format. Whether it's ARX, WOM or both that you're looking to deploy or investigate, this picture is a good one, especially given the ways they work together. Michael does a good job of making this approachable and interesting, so take a look and learn something.

F5 Friday: What's Inside an F5?
http://bit.ly/yAwUi0
Lori came through last week with a solid answer to a question that seems to take many forms, with this look at what actually goes on inside F5 devices. We have come a long, long way from the old 4.x and earlier days. As she points out, things have changed all the way up and down the stack, from hardware to software, along with many massive conceptual leaps forward, allowing us to deliver a whole new level of power.
Many people don't fully understand what these products we talk about all the time actually offer at a base level. Lori does a good job here of giving some insight into that without going so deep that she loses the passengers on the trip. If you've ever wondered about TMOS, vCMP, or any of the other magic that happens internally...take a look.

New iOS Edge Client
http://bit.ly/wE68Lv
Last but not least, Pete delivered a friendly reminder today that there is a new iOS Edge Client available for download in the App Store. If you, like me, are one of the many folks making use of the Edge Client from an iOS device, this new version adds some worthwhile features. I love seeing the effort being put into making our products easier to use and more accessible, not just for the administrators but for the end users as well. This new release won't change the lives of the people running the systems, but it makes things just that much easier for those of us using the products as end users (yes, I'm an end user too), and that is valuable. I just updated my device and figured I'd pass on the heads-up as a nice way to round out the Top5 for this week.

That's it for this week. As always, feel free to drop me some feedback or suggestions.
#Colin

F5 Friday: Big Data? Big Risk…
#bigdata #infosec Storing sensitive data in the cloud is made more palatable by applying a little security before the data leaves the building…

When corporate hardware, usually a laptop, is stolen, one of the first questions information security professionals ask is whether or not the data on the drive was encrypted. While encryption of data is certainly not a panacea, it's a major deterrent to those who would engage in the practice of stealing data for dollars. Many organizations are aware of this and use encryption judiciously when data is at rest in the data center storage network. But as the Corollary to Hoff's Law states, even "if your security practices don't suck in the physical realm, you'll be concerned by the inability to continue that practice when you move to Cloud." It's not that you can't encrypt data being moved to cloud storage services; it's that encryption isn't necessarily part of the processes or APIs used to move it. That makes such a policy much more difficult to enforce, and for some organizations, unless they are guaranteed the data will be secured at rest, they aren't going to give the okay. A recent Ponemon study speaks to just this issue:

According to the report entitled "Data Security in the Cloud Survey of U.S. IT Operations, IT Security and Compliance Practitioners", only one third of IT security practitioners believe cloud infrastructure (IaaS) environments are as secure as on premise datacenters, while half of compliance officers think IaaS is as secure.
-- Ponemon Institute Survey on Cloud Data Security Exposes Gulf between IT Security and Compliance Officers

INTEGRATION and REPLICATION

In order to make cloud a more palatable option it is necessary to ensure that data can be stored securely off-premise. A tried and true method is to encrypt the data before it leaves the building. And yet the same Ponemon study found that less than one-third of respondents' organizations do just that. A possible explanation for organizations' failure to encrypt data being transferred to the cloud is a lack of process and integration with the ways in which the data is transferred. Storing data in "the cloud" is generally accomplished via an API, and rarely do these APIs include a flag for "hey, encrypt my data." There are technical reasons why this is the case; encryption – at least encryption worth the effort and compute consumed – often makes use of certificates and keys. Those keys should be unique to the organization. Using a general cloud storage service encryption API would require either sharing that key (bad idea) or using a common provider key (yet another bad idea), neither of which is an acceptable solution.

The answer is, of course, to encrypt the data before transfer to the cloud storage service. The cloud storage service, after all, doesn't care what the data is – it just cares that it has to store it for you. This brings us back to the problem of process and integration at the infrastructure layer. What organizations need to leverage cloud storage services is the means to automatically encrypt data as it's headed for the cloud. What organizations need is for that cloud storage service to be integrated with their own, data center based storage in a way that makes it possible to leverage cloud storage automatically, encrypting the data when it's bound for the cloud.
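The encrypt-before-transfer idea itself is simple enough to sketch. Here is a minimal illustration using the third-party Python cryptography package (my choice for illustration; the post doesn't prescribe a library, and this is not how ARX CE implements it). The file name, object name, and upload_to_cloud() stub are hypothetical placeholders.

```python
# Minimal encrypt-before-upload sketch. Assumes: pip install cryptography.
# The upload_to_cloud() stub and all names/paths are hypothetical placeholders.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_object(plaintext: bytes, key: bytes) -> bytes:
    """AES-256-GCM encrypt a single object; prepend the nonce so it can be decrypted later."""
    nonce = os.urandom(12)                      # unique per object
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext

def upload_to_cloud(name: str, blob: bytes) -> None:
    # Placeholder for whatever cloud storage API you actually use (S3, Swift, etc.).
    print(f"would upload {len(blob)} encrypted bytes as {name!r}")

if __name__ == "__main__":
    # Key stays with the organization: load it from your own key management,
    # never share it with (or accept one from) the storage provider.
    key = AESGCM.generate_key(bit_length=256)
    data = open("quarterly-report.xlsx", "rb").read()
    upload_to_cloud("quarterly-report.xlsx.enc", encrypt_object(data, key))
```

Of course, doing this by hand for every transfer is exactly the process and integration burden described above, which is why it so rarely happens.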
Organizations need a common, overarching storage solution that can seamlessly integrate cloud storage into operational processes and automatically provide a layer of security through encryption of the data when that data might be stored off-site, in a cloud storage service.

F5 ARX and ARX Cloud Extender (CE) are that solution. In addition to its core aggregation and intelligent tiering capabilities, adding ARX CE to the architecture allows for the seamless, secure extension of storage to the cloud.

When ARX CE is preparing to send data to public cloud destinations, the data is encrypted using AES-256 bit encryption for each object. Further, all transfers from the ARX CE-enabled Windows file server to public cloud storage occur over SSL (HTTPS), which provides network layer encryption.
-- Securing Data in the Cloud with ARX CE

The Ponemon study revealed that "less than half of IT practitioners (35%) and compliance officers (42%) believe their organizations have adequate technologies to secure their IaaS environments." So not only do organizations believe the cloud is less secure, they also believe they don't have the right tools to secure it and thus take advantage of it.

F5 ARX and ARX CE address the operational risk associated with storage in the cloud – by integrating cloud storage services into operational processes they alleviate the manual burden imposed on IT to schedule transfers and prioritize files across tiers. With the ability to automatically apply encryption to data and use a secure transport channel to cloud storage services, they add a layer of security to data stored in the cloud that would otherwise not exist, giving IT the confidence required to take advantage of lower-cost storage in the cloud and realize its benefits.

F5 ARX Cloud Extender Resources
Securing Data in the Cloud with ARX CE (How To)
ARX Tiered Storage: Best Practices
Getting Up And Running With F5 ARX Virtual Edition
F5 Storage Solutions
F5 ARX 1500 and 2500
F5's New ARX Platforms Help Organizations Reap the Benefits of File Virtualization
Network World – F5 Rolls Out New File Virtualization Appliances
Analyzing Performance Metrics for File Virtualization
Strategies for a Seamless and Secure Transition to Enterprise Cloud Storage
Building a Cloud-Ready File Storage Infrastructure
SSDs, Velocity and the Rate of Change
F5 Friday: If Data is King then Storage Virtualization is the Castellan
F5 Friday: F5 ARX Cloud Extender Opens Cloud Storage
F5 Friday: ARX VE Offers New Opportunities
Disk May Be Cheap but Storage is Not
All F5 Friday Posts on DevCentral
Tiering is Like Tables, or Storing in the Cloud Tier

Remember When Hand Carts Were State Of The Art? Me either.
Funny thing about the advancement of technology: in most of the modern world we enshrine it, spend massive amounts of money to find "the next big thing", and act as if change is not only inevitable, but rapid. The truth is that change is inevitable, but not necessarily rapid, and sometimes it's about necessity. Sometimes it is about productivity. Sometimes, it just plain isn't about either. Handcarts are still used for serious purposes in parts of the world, by people who are happy to have them and think a motorized vehicle would be a waste of resources. Think on that for a moment. What high-tech tool that was around 20 years ago are you still using? Let alone 200 years ago. The replacement of handcarts as a medium for transport not only wasn't instant, it's still going on 100 years after cars were first mass produced.

Handcart in use – Mumbai Daily

We in high-tech are constantly in a state of flux from this technology to that solution to the other architecture. The question you have to ask yourself – and this is getting more important for enterprise IT, in my opinion – is "does this do something good for the company?" It used to be that IT folks could try out all sorts of new doo-dads just to play with them and justify the cost based on the future potential benefit to the company. I'd love to say that this had a powerful positive effect, but frankly, it only rarely paid off. Why? Because we're geeks. We buy this stuff on our own dime if the company won't foot the bill, and our eclectic tastes don't necessarily jibe with the needs of the organization.

These days, the change is pretty intense, and it focuses on infrastructure and application deployment architectures. Where can you run this application, and what form will the application take? Virtualized? Dedicated hardware? Cloud? The list goes on. And all of these questions spur thoughts about security, storage, and the other bits of infrastructure required to support an application no matter where it is deployed. These are things that you can model in your basement, but can't really test out, simply because the architecture of an enterprise is far more complex than the architecture of even the geekiest home network. Lori and I have a pretty complex network in our basement, but it doesn't hold a candle to our employer's worldwide network supporting dev and sales offices on every continent, users in many languages, and a potpourri of access methods that must be protected and available.

Sometimes, change is simply a change of perspective. F5's new iApps, for example, put the ADC infrastructure bits together for the application. Instead of managing application security within the module that handles application security (ASM), an iApp bundles security in with all of the other bits – like load balancing, SSL offload, etc. – that an application requires. This is pretty powerful: it speeds deployment and troubleshooting because everything is in one place, and it speeds adding another machine because you simply apply the same iApp Template. That means you spin up another instance of the VM in question, tweak the settings, apply the template already being used on existing instances, and you're up.

Sometimes, change is more radical. Deploying to the cloud is a good example of this, and cloud deployments suffer for it. Indeed, private and hybrid clouds are growing rapidly precisely because of the radical change that public cloud can introduce. Cloud storage was so radical that very few were willing to use it even as most thought it was a good idea.
Along came cloud storage gateways like our ARX Cloud Extender or a variety of others, and suddenly the weakness was ameliorated… because the radical bit of cloud storage was simply that it didn't talk like storage traditionally has. With a gateway it does. And with most gateways (check with your provider) you get compression and encryption, making the cloud storage more efficient and secure in the process.

But like the handcart, the idea that cloud, or virtualization, or consumerization must take hold overnight – and that you're behind the times if you weren't doing it yesterday – is misplaced. Figure out what's best for your organization, not just in terms of technology, but in terms of timelines also. Sure, some things, like support for the CEO's iPad, will take on a life of their own, but in general you've got time to figure out what you need, when you need it, and how best to implement it.

As I've mentioned before, at the cutting edge of technology, when the hype cycle is way overblown, that's where you'll find the largest number of vendors that won't be around to support you in five years. If you can wait until the noise about a space quiets down, you'll be better served, because the level of competition will have eliminated the weaker companies and you'll be dealing with the technological equivalent of the Darwinian most fit. Sure, some of those companies will fail or get merged also, but the chances that your vendor of choice won't – or that their products will live on – are much better after the hype cycle. After all, even though engine-powered conveyances have largely replaced hand carts, have you heard of White Motor Company, Autocar Company, or Diamond T Company? All three made automobiles. They lived through boom and were swallowed in bust. Though in automobiles the cycle is much longer than in high-tech (Autocar started in the late 1800s and was purchased by White in the 1950s, for example, and White was later purchased by Audi), the same process occurs, so count on it. And no, I haven't developed a sudden interest in automobile history; all of these companies thrived making half-tracks in World War Two, which is how I knew to look for them amongst the massive number of failed car companies.

Stay in touch with the new technologies out there, and pay attention to how they can help you, but as I've said quite often, what's in the hype cycle isn't necessarily what is best for your organization.

1908 Autocar XV (Wikipedia.org)

Of course I think things like our VE product line and our new V.11 with both iApps and app mobility are just the thing for most organizations, but even with those I will say "depending upon your needs". Because contrary to what most marketing and many analysts want to tell you, it really is about your organization and its needs.

SSDs, Velocity and the Rate of Change.
The rate of change in a mathematical equation can vary immensely based upon the equation and its inputs. Certainly the rate of change for f(x) = x^2 paints a far different picture than the rate of change for f(x) = 2x, for example. The old adage "the only constant is change" is absolutely true in high tech. The definition of "high" in tech changes every time something becomes mainstream. You're working with tools and systems that even ten years ago were hardly imaginable. You're carrying a phone that Alexander Graham Bell would not recognize – or know how to use. You have tablets with power that not so long ago was found only in mainframes. But that change did not occur overnight. Apologies to iPhone fans, but all the bits Apple put together to produce the iPhone had existed before; Apple merely had the foresight to see how they could be put together in a way customers would love. The changes happen over time, and we're in the midst of them. Sometimes that's difficult to remember; sometimes that's really easy to remember, as our brand-new system or piece of architecture gives us headaches. Depends upon the day.

Image generated at Cool Math

So what is coming of age right now? Well, SSDs for one. They're being deployed in the numbers that were expected long ago, largely because prices have come down far enough to make them affordable. We offer an SSD option for some of our systems these days, and since the stability of our products is paramount to our customers' interests, we certainly aren't out there on the cutting edge with this development. They're stable enough for mission-critical use, and the uptick in sales reflects that fact. If you have a high-performance application that relies upon speedy database access, you might look into them.

There are a lot of other valid places to deploy SSDs – tier one, for example – but a database is an easy win. If access times are impacting application performance, it is relatively easy to drop in an SSD drive and point the DB (the cache or the whole DB) at it, speeding the performance of every application that relies on that DBMS. That's an equation that is pretty simple to figure out, even if the precise numbers are elusive: faster disk access = faster database response times = faster applications. That is the same type of equation that led us to offer SSDs for some of our products. They sit in the network between data and the applications that need the data. Faster is better, assuming reliability, which, after years of tweaking and incremental development, SSDs offer.

Another place to consider SSDs is in your virtual environment. If you have twenty VMs on a server, and two of them have high disk access requirements, putting SSDs into place will lighten the load on the overall system simply by reducing the blocking time spent waiting for disk responses. While there are some starting to call for SSDs everywhere, remember that there were also some who said cloud computing meant no one should ever build out a datacenter again. The price of HDs has gone down with the price of SSDs pushing them from the top, so there is still a significant cost differential, and frankly, a lot of applications just don't need the level of performance that SSDs offer.

The final place I'll offer up for SSDs is if you are implementing storage tiering such as that available through our ARX product. If you have high-performance NAS needs, placing an SSD array as tier one behind a tiering device can significantly speed access to the files most frequently used.
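If you want to put a number on the "faster disk access" term of that equation before committing, a quick latency comparison is usually enough to make the case. Below is a rough, illustrative sketch (my addition, not part of the original post) that times random reads against two files, one on an HDD-backed path and one on an SSD-backed path; the paths are hypothetical placeholders, and results vary wildly with hardware and caching.

```python
# Rough random-read latency comparison; the paths are hypothetical placeholders.
# OS page caching will skew results, so use test files much larger than RAM.
import os, random, time

def avg_read_latency(path: str, reads: int = 1000, block: int = 4096) -> float:
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    start = time.perf_counter()
    for _ in range(reads):
        os.lseek(fd, random.randrange(0, size - block), os.SEEK_SET)
        os.read(fd, block)
    os.close(fd)
    return (time.perf_counter() - start) / reads

for label, path in [("HDD", "/mnt/hdd/testfile.bin"), ("SSD", "/mnt/ssd/testfile.bin")]:
    print(f"{label}: {avg_read_latency(path) * 1e6:.0f} µs per 4 KB random read")
```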
And that acceleration is global to the organization. All clients and apps that access the data receive the performance boost, making it another high-gain solution.

Will we eventually end up in a market where old-school HDDs are a thing of the past and we're all using SSDs for everything? I honestly can't say. We have plenty of examples in high tech where, as demand went down, the older technology started to cost more, because margins plus volume equals profit. Tube monitors versus LCDs, a variety of memory types, and even big old HDDs – the 5.25 inch ones. But the key is whether SSDs can fulfill all the roles of HDDs, and whether you and I believe they can. That has yet to be seen, IMO. The arc of price reduction for both HDDs and SSDs plays in there also – if quality HDDs remain cheaper, they'll remain heavily used. If they don't, that market will get eaten by SSDs just because, all other things being roughly equal, speed wins.

It's an interesting time. I'm trying to come up with a plausible use for this puppy just so I can buy one and play with it. Suggestions are welcome; our websites don't have enough volume to warrant it, and this monster for laptop backups would be extreme – though it would shorten my personal backup window ;-).

OCZ Technology 1 TB SSD.

Related:
The Golden Age of Data Mobility?
What Do You Really Need?
Use the Force Luke. (Zzzaap)
Don't Confuse A Rubber Stamp With Validation
On Cloud, Integration and Performance
Data Center Feng Shui: Architecting for Predictable Performance
F5 Friday: Performance, Throughput and DPS
F5 Friday: Performance Analytics–More Than Eye-Candy Reports
Audio White Paper - High-Performance DNS Services in BIG-IP ...
Analyzing Performance Metrics for File Virtualization

In The End, You Have to Clean.
Lori and I have a large technical reference library, both in print and electronic. Part of the reason it is large is that we are electronics geeks: we seriously want to know what there is to know about computers, networks, systems, and development tools. Part of the reason is that we don't often enough sit down and decide to pare the collection down to the books that no longer have a valid reason for sitting on our (many) bookshelves of technical reference. The collection runs the gamut from the outdated to the state of the art, from the old stand-bys to the obscure, and we've been at it for 20 years… so many of them just don't belong any more. One time we went through and cleaned up. The few books we got rid of were not only out of date (mainframe Pascal data structures was one of them), but weren't very good when they were new. And we need to do it again. From where I sit at my desk, I can see an OSF DCE reference, the Turbo Assembler documentation, a Perl 5 reference, a MicroC/OS-II reference, and Mastering Web Server Security. All of which are just not relevant anymore. There's more, but I'll save you the pain; you get the point. The thing is, I'm more likely to take a ton of my valuable time and sort through these books – recycling those that no longer make sense unless they have sentimental value (Lori and I wrote an object-oriented programming book back in 1996; that's not going to recycling) – than you are to go through your file system and clean the junk out of it.

Two of ten…

A funny thing happens in highly complex areas of human endeavor: people start avoiding ugly truths by thinking they're someone else's problem. In my case (and Lori's), I worry about recycling a book that she has a future use for. Someone else's problem syndrome (or an SEP field, if you read Douglas Adams) has been the source of tremendous folly throughout mankind's history, and storage at enterprises is a prime example of just such folly. Now don't get me wrong: I've been around the block, been responsible for an ever-growing pool of storage, and know that IT management has to worry that the second they start deleting unused files they'll end up in the hotseat because someone thought they needed the picture of the sign in front of the building circa 1995… But if IT (who owns the storage space) isn't doing it, and business unit leaders (who own the files on the storage) aren't doing it… well, you're going to have a nice big stack of storage building up over the next couple of years. Just like the last couple.

I could – and will – tell you that you can use our ARX product to help you solve the problem, particularly with ARX Cloud Extender and a trusted cloud provider, by shuffling data out to the cloud. But in the longer term, you've got to clean up the bookshelf, so to speak. ARX is very good at many things, but not at making those extra files disappear. You're going to pay for more disk, or you're going to pay a cloud provider, until you delete them.

I haven't been in IT management for a while, but if I were right now, I'd get the storage guys to build me a pie chart showing who owns how much data, then gather a couple of outrageous examples of wasted space (a PowerPoint that is more than five years old is good – better than the football pool for marketing from ten years ago, because PowerPoint uses a ton more disk space), and then talk with business leaders about the savings they can bring the company by cleaning up. While you can't make it their priority, you can give them the information they need.
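That first report takes surprisingly little tooling. Here is a minimal sketch (my own, not part of the original post) that walks a share, totals bytes per owner, and flags what hasn't been touched in a year. The mount point is a hypothetical placeholder, and the pwd-based owner lookup assumes a Unix-style file server.

```python
# Minimal "who owns the bytes, and how stale are they" report for a NAS mount.
# /mnt/shared is a hypothetical path; pwd-based owner lookup assumes a Unix host.
import os, pwd, time
from collections import defaultdict

ROOT = "/mnt/shared"
YEAR = 365 * 24 * 3600
now = time.time()

total = defaultdict(int)   # bytes per owner
stale = defaultdict(int)   # bytes per owner not modified in over a year

for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        try:
            st = os.stat(os.path.join(dirpath, name))
        except OSError:
            continue  # broken link, permissions problem, or file vanished mid-walk
        owner = pwd.getpwuid(st.st_uid).pw_name
        total[owner] += st.st_size
        if now - st.st_mtime > YEAR:
            stale[owner] += st.st_size

for owner in sorted(total, key=total.get, reverse=True):
    gb = total[owner] / 1e9
    pct_stale = 100 * stale[owner] / total[owner] if total[owner] else 0
    print(f"{owner:15s} {gb:8.1f} GB total, {pct_stale:4.0f}% untouched in a year")
```

Group the owners by department and you have the pie chart, and the "40% untouched in a year" talking point, without buying anything.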
If marketing is responsible for 30% of the disk usage on your NAS boxes (or unstructured storage in general, though this exercise is a bit more complex with mixed SAN/NAS numbers), and you can show that 40% of the files owned by marketing haven't been touched in a year… that's compelling at the C-level. Twelve percent of your disk is sitting there, from just one department, with easy-to-identify unused files on it. Some CIOs I've known have laid down the smackdown – "delete X percent by Y date or we will remove this list of files" is actually from a CIO's memo – but that's just bad PR in my opinion. Convincing business leaders that they're costing the company money – what's 12% of your NAS investment, for example, plus 12% of the time of the storage staff dedicated to NAS? – is a much better plan, because you're not the bad guy; you're the person trying to save money while not negatively impacting their jobs.

So yeah, install ARX, because it has a ton of other benefits, but also go to the bookshelf, dust off that copy of the Fedora 2 Admin Guide, and finally put it to rest. That's what I'll be doing this weekend, I know that.

When The Walls Come Tumbling Down.
When horrid disasters strike and both people and corporations are put on notice that they suddenly have a lot more important things to do, will you be ready? It is a testament to man's optimism that, with very few exceptions, we really aren't – not at the personal level, not at the corporate level. I've worked a lot of places, and none of them had a complete, ready-to-rock DR plan. The insurance company I worked at was the closest – they had an entire duplicate datacenter sitting dark in a location very remote from HQ, awaiting need. Every few years they would refresh it to make certain that the standby DC had the correct equipment to take over, but they counted on relocating staff from what would be a ravaged area in the event of a catastrophe, and they were going to restore thousands of systems from backups before the remote DC could start running. At the time it was a good plan. Today it sounds quaint. And it wasn't that long ago.

There are also a lot of you who have yet to launch a cloud initiative of any kind. This is not from lack of interest, but more because you have important things to do that are taking up your time. Most organizations are dragging their feet replacing people, and few – according to a recent survey, very few – are looking to add headcount (proud plug that F5 is – check out our careers page if you're looking). It's tough to run off and try new things when you can barely keep up with the day-to-day workloads. Some organizations are lucky enough to have R&D time set aside. I've worked at a couple of those too, and honestly, they're better about making use of technology than those who do not have such policies. Though we could debate whether they're better because they take the time, or take the time because they're better.

And the combination of these two items brings us to a possible pilot project. You want to be able to keep your organization online, or be able to bring it back online quickly, in the event of an emergency. Technology is making it easier and easier to complete this arrangement without investing in an entire datacenter and constantly refreshing the hardware to keep recovery times short. Global DNS in various forms is available to redirect users from the disabled datacenter to a datacenter that is still capable of handling the load; if you don't have multiple datacenters, it can redirect elsewhere – like to virtual servers running in the cloud. ADCs are starting to be able to work similarly whether they are cloud deployed or DC deployed. That leaves keeping a copy of your necessary data and applications in the cloud, and cloud storage with a cloud storage gateway – such as the Cloud Extender functionality in our ARX product – allows this to be done with a minimum of muss and fuss. These technologies, used together, yield a DR architecture that looks something like this:

Notice that the cloud extender isn't listed here, because it is useful for getting the data copied, but would most likely reside in your damaged datacenter. Assuming that the cloud provider is one like our partner Rackspace, who does both cloud VMs and cloud storage, this architecture is completely viable. You'll still have to work some things out, like guaranteeing that security in the cloud is acceptable, but we're talking about an emergency DR architecture here, not a long-running solution, so app-level security and functionality to block malicious attacks at the ADC layer will cover most of what you need. AND it's a cloud project.
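The global DNS piece of that picture boils down to one decision: if the primary datacenter stops answering health checks, hand out the standby's address instead. The sketch below is an illustration of that idea only – it is not how GTM or any particular GSLB product is configured, and the addresses and hostname are hypothetical placeholders.

```python
# Toy GSLB-style decision: answer with the primary DC's address while it passes a
# TCP health check, otherwise fail over to the cloud standby. Addresses are hypothetical.
import socket, time

PRIMARY = ("203.0.113.10", 443)   # primary datacenter VIP
STANDBY = ("198.51.100.20", 443)  # cloud-hosted standby

def healthy(addr, timeout=2.0) -> bool:
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def address_to_serve() -> str:
    # A real GSLB would require several consecutive failures, check application
    # content rather than just the TCP handshake, and rely on low DNS TTLs.
    return PRIMARY[0] if healthy(PRIMARY) else STANDBY[0]

if __name__ == "__main__":
    while True:
        print("www.example.com ->", address_to_serve())
        time.sleep(30)
```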
The cost is far, far lower than a full-blown DR project, and you'll be prepared in case you need it. This buys you time to absorb the fact that your datacenter has been wiped out. I've lived through it; there is so much that must be done immediately – finding a new location, dealing with insurance, digging up purchase documentation, recovering what can be recovered… Having a plan like this one in place is worth your while. Seriously. It's a strangely emotional time, and having a plan is a huge help in keeping people focused.

Simply put, disasters come, often without warning – mine was a flood caused by a broken pipe. We found out when our monitoring equipment fried from being soaked and sent out a raft of bogus messages. The monitoring equipment was six feet above the floor at the time. You can't plan for everything, but to steal and twist a famous phrase, "he who plans for nothing protects nothing."

Forget Performance IN the Cloud, What About Performance TO the Cloud?
In an N-tiered architecture, the network connection between tiers becomes a truly important part of the overall application performance equation. This is a fact we have known for a couple of decades now: if your network performance is degraded for some reason (from mis-wiring to hardware misconfiguration to over-utilization), your application performance will, by definition, suffer. I ran a test once while writing for Network Computing where the last person to use the lab had programmed the ports I was using to shunt bandwidth over threshold X onto a different VLAN. It took weeks, and the help of one of the vendors under test – Juniper – to figure out exactly what was going wrong. Only after they observed that throughput was topping out and packets were being dropped were we able to track down the one changed configuration setting. The funny thing is that this particular change would not have impacted the vast majority of testing that we did in the lab; I was just unlucky enough to pick the wrong ports for a test that purposefully overloaded the network. Until the Juniper crew said "no, that's not right, we've done this test before", I was blaming the products for the failures under high load… a symptom of the fact that when network performance degrades, systems appear to degrade. Thankfully, we caught the problem and I did not go to print with misinformation. But it does highlight the need for network performance to be top-notch if your application performance is to be top-notch.

And we're starting to head into uncharted territory where this fact is concerned. Enterprises don't generally throw a whole bunch of data over the WAN, and if they do, they have specially provisioned ways to do so – because they're going to a remote datacenter or partner network that is known, and data volumes can be calculated for. But as we approach cloud computing we need to be more aware of the network performance aspects than we would be either in the datacenter or transferring between two datacenters. The reasons for this are manifold.

First, you don't have control of the connection to the cloud provider. You own one end, and with the right tools (skipping the plug for F5 WOM here; look it up if you have a need) you can "control" both ends of the connection, but you don't really have control of what is in between. The volume of traffic, the size of the pipes, etc. are all dictated by outside forces. For some DC-to-DC connections this is true also, but unlike cloud, DC-to-DC is point-to-point. If you are dealing with one of the major cloud vendors, you can't even be certain what country your app resides in at the moment, let alone the route to that application. Some of this concern is automatically mitigated: if the platform provider is large enough to have datacenters in multiple countries, they have pipes bigger than New York sewers connected to those datacenters. Some of it can be mitigated by those same "right tools" that help at the ends of the connection, because they can create tunnels or point-to-point connections to handle communications, and they offer bandwidth reduction capabilities to make certain you are sending smaller amounts of data, reducing the impact of the network by requiring fewer round trips across it.

In cloud storage, you have a bigger issue, because the whole point is to send massive amounts of data across the WAN.
While the right products will reduce the footprint of that data with compression (skipping the plug for F5 ARX; look it up if you have a need), you are still sending a lot, or you wouldn't have to go to a cloud platform – you'd just store it locally. So the question becomes how to make certain that performance to the cloud storage vendor is optimal when you don't own both ends of the connection.

That's a tricky question, because it is not just a question for today, it is a question forever. You see, the more you store in a cloud storage provider's space, the less likely you are to want to change providers. But the more business a cloud storage provider receives, the more companies are cramming huge volumes of data in and out of their DC. Which could cause performance problems for you… unless you have SLAs that are rock-solid and that you're willing to enforce.

The long and the short of this post is some advice: get tools to reduce the amount you're sending over the wire, and make certain you have an SLA that will cover you if your usage (or the usage of the other people sharing the provider) jumps. And yeah, if your vendor charges by the megabit or megabyte, check out some products that might reduce your throughput. I might have mentioned a couple here, but there are more out there.
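Even without a purpose-built WAN optimization product, it's easy to get a feel for how much a compression pass shrinks what actually crosses the wire. A rough sketch (my addition; the file path is a hypothetical placeholder, and real gains depend entirely on how compressible your data is):

```python
# How much less would cross the WAN if this file were compressed before upload?
# The path is a hypothetical placeholder; already-compressed data (video, zip) won't shrink much.
import gzip

path = "/data/exports/transactions.csv"
raw = open(path, "rb").read()
packed = gzip.compress(raw, compresslevel=6)

print(f"original:   {len(raw):>12,} bytes")
print(f"compressed: {len(packed):>12,} bytes "
      f"({100 * (1 - len(packed) / len(raw)):.0f}% less to send)")
```

Run that against a representative sample of what you plan to push to cloud storage and you'll know whether per-megabyte pricing is going to hurt, and how much a dedup/compression layer is worth to you.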
Toll Booths and Dams. And Strategic Points of Control

An interesting thing about toll booths: they provide a point at which all sorts of things can happen. When you are stopped to pay a toll, it smooths the flow of traffic by letting a finite number of vehicles through per minute, reducing congestion by naturally spacing things out. Dams are much the same, holding water back on a river and letting it flow through at a rate determined by the operators of the dam. The really interesting bit is the other things that these two points introduce. When necessary, toll booths have been used to find and stop suspected criminals. They have also been used as advertising and information transmission points. None of the above are things toll booths were created for – they were created to collect tolls. And yet, by nature of where they sit in the highway system, they can be utilized for much more. The same is true of a dam. Dams today almost always generate electricity. Often they function as bridges over the very water they're controlling. They control the migration of fish and operate as a check on predatory invasive species. Again, none of these things is the primary reason dams were originally invented, but the nature of their location allows them to be utilized effectively in all of these roles.

Toll booths – Wikipedia

We've talked a bit about strategic points of control. They're much like toll booths and dams in the sense that their location makes them key to controlling a whole lot of traffic on your LAN. In the case of F5's defined strategic points of control, they all tie in to the history of F5's product lineup, much like a toll booth was originally there to collect tolls. F5 BIG-IP LTM sits at the network strategic point of control. Initially LTM was a load balancer, but by virtue of its location and the needs of customers it has grown into one of the most comprehensive Application Delivery Controllers on the market – everything from security to uptime monitoring is facilitated by LTM. F5 ARX is much the same: being the file-based storage strategic point of control allows such things as directing some requests to cloud storage and others to storage by vendor A, while still others go to vendor B, and the remainder go to a Linux or Windows machine with a ton of free disk space on it. The WAN strategic point of control is where you can improve performance over the WAN via WOM, but it is also a place where you can extend LTM functionality to remote locations, including the cloud.

Budgets for most organizations are not growing, due to the state of the economy. Whether you're government, public, private, or small business, you've been doing more with less for so long that doing more with the same would be a nice change. If you're lucky, you'll see growth in IT budgeting due to the increasing needs of security and the growth of application footprints. Some others will see essentially flat budgets, and many – including most government IT orgs – will see shrinking budgets. While that is generally bad news, it does give you the opportunity to look around and figure out how to make more effective use of existing technology. Yes, I have said that before, because you're living that reality, so it is worth repeating. Since I work for F5, here are a few examples, something I've not done before. From the network strategic point of control, we can help you with DNSSEC, AAA, application security, encryption, performance on several levels (from TCP optimizations to compression), HA, and even WAN optimization issues if needed.
From the storage strategic point of control, we can help you harness cloud storage, implement tiering, and balance load across existing infrastructure to help stave off expensive new storage purchases. Backups and replication can be massively improved (both in terms of time and data transferred) from this location also. We're not the only vendor that can help you out without having to build a whole new infrastructure. It might be worthwhile to have a vendor day, where you invite vendors in to give presentations about how they can help – larger companies and the federal government do this regularly, and you can do the same in a scaled-down manner. What salesperson is going to tell you "no, we don't want to come tell you how we can help and sell you more stuff"? Really?

Another option is, as I've said in the past, to make sure you know not just the functionality you are using, but the full capabilities of the IT gear, software, and services that you already have in-house. Chances are there are cost savings in using existing functionality of an existing product, with time being your only expense. That's not free, but it's about as close as IT gets.

Hoover Dam from the air – Wikipedia

So far we in IT have been lucky; the global recession hasn't hit our industry as hard as it has hit most, but it has constricted our ability to spend big, so little things like those above can make a huge difference. Since I am on a computer or PlayBook for the better part of 16 hours a day, hitting websites maintained by people like you, I can happily say that you all rock. A highly complex, difficult-to-manage set of variables rarely produces a stable ecosystem like we have. No matter how good the technology, in the end it is people who did that, and who keep it that way. You all rock. And you never know, but you might just find the AllSpark hidden in the basement ;-).

You Say Tomato, I Say Network Service Bus
It's interesting to watch the evolution of IT over time. I have repeatedly been told, "you people, we were doing that with X back before you had a name for it!" And likely the speaker is telling the truth, as far as it goes. Seriously, while the mechanisms may be different, putting a ton of commodity servers behind a load balancer and tweaking for performance looks an awful lot like having LPARs that can shrink and grow. Put "dynamic cloud" into the conversation and the similarities become more pronounced. The biggest difference is how much you're paying for hardware and licensing.

Back in the day, Enterprise Service Buses (ESBs) were all the rage, able to handle communications between a variety of application sources and route things to the correct destination in the correct format, even providing guaranteed delivery if you needed it for transactional services. I trained in several of these tools, most notably IBM MQSeries (now called IBM WebSphere MQ, surprised?) and MSMQ. I was briefed on a ton more during my time at Network Computing. In the end, they're simply message delivery and routing mechanisms that can translate along the way. Oh sure, with MQSeries Integrator you could include all sorts of other things like security callouts, but core functionality was restricted to message flow and delivery. While ESBs are still used today in highly mixed environments or highly complex application infrastructures, they're not deployed broadly in IT, largely because XML significantly reduced the need for the translation aspect, which was a primary use of them in the enterprise.

Today, technology is leading us to a parallel development that will likely turn out to be much more generically useful than ESBs. Since others have referred to it by several names, and the Network Service Bus is the closest I've seen in terms of accuracy, I'll run with that term. This is routing, translation, and delivery across the network, from the consumer to the correct service. The service is running on a server somewhere, but that's increasingly irrelevant to the consumer application; that the request gets serviced is sufficient. Serviced in a timely and efficient manner is big too. Translation while servicing is seeing a temporary (though not short, in my estimation) bump while IPv4 is slowly supplanted by IPv6, but it has other uses – like encrypted to unencrypted, for example.

The network of the future will use a few key strategic points of control – like the one between consumers and web servers – to handle routing to a service that is (a) active, (b) responsive, and (c) appropriate to the request. In the interim, while passing the request along, the strategic point of control will translate the incoming request into a format that the service expects, and if necessary will validate the user in the context of the service being requested and the username/platform/location the request is coming from. This offloads a lot from your apps and your servers. Encryption can be offloaded to the strategic point of control, freeing up a lot of CPU time, running unencrypted within your LAN while maintaining encryption on the public Internet.
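To give a sense of what that offload looks like in miniature, here is a bare-bones sketch of a TLS-terminating forwarder: it accepts encrypted connections on the Internet side and relays plaintext to a backend inside the LAN. This is purely illustrative (it is not how an ADC such as LTM is built or configured), and the certificate files, ports, and backend address are hypothetical placeholders.

```python
# Bare-bones TLS termination sketch: decrypt at the edge, speak plaintext to the backend.
# Cert/key files, ports, and the backend address are hypothetical placeholders.
import socket, ssl, threading

BACKEND = ("10.0.0.5", 8080)  # plaintext app server inside the LAN

def pipe(src, dst):
    # Copy bytes one direction until the source closes, then half-close the destination.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle(client_tls):
    backend = socket.create_connection(BACKEND)
    t = threading.Thread(target=pipe, args=(backend, client_tls), daemon=True)
    t.start()
    pipe(client_tls, backend)   # client -> backend (decrypted by the ssl layer)
    t.join()
    client_tls.close()
    backend.close()

def main():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # hypothetical cert/key
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 443))
    listener.listen(16)
    while True:
        raw, _addr = listener.accept()
        try:
            tls = ctx.wrap_socket(raw, server_side=True)  # the expensive crypto happens here, not on the app server
        except ssl.SSLError:
            raw.close()
            continue
        threading.Thread(target=handle, args=(tls,), daemon=True).start()

if __name__ == "__main__":
    main()
```

The point of the sketch is the division of labor: the handshake and bulk crypto burn CPU at the edge device, while the application servers behind it see plain TCP.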
IPv6 packets can be translated to IPv4 on the way in and back to IPv6 on the way out, so you don't have to switch everything in your datacenter over to IPv6 at once. Security checks can occur before the connection is allowed inside your LAN, and scalability gets a major upgrade because you now have a device in place that will route traffic according to the current back-end configuration. Adding and removing servers, upgrading apps – all benefit from the strategic point of control that allows you to maintain a given public IP while changing the machines that service requests as needed.

And then we factor in cloud computing. If all of this functionality – or at least a significant chunk of it – were available in the cloud, regardless of cloud vendor, then you could ship overflow traffic to the cloud. There are a lot of issues to deal with, like security, but they're manageable if you can handle all of the other service requests as if the cloud servers were part of your everyday infrastructure. That's a datacenter of the future. Let's call it a tomato. And in the end it makes your infrastructure more adaptable while giving you a point of control that you can harness to implement whatever monitoring or functionality you need. And if you have several of those points of control – one to globally load balance, one for storage, one in front of servers… then you are offering services that are highly adaptable to fluctuations in usage. Like having a tomato, right in the palm of your hands.

Completely irrelevant observation: the US Bureau of Labor Statistics (BLS) mentioned today that IT unemployment is at 3.3%. Now you have a bright spot in our economic doldrums.

Sometimes, If IT Isn't Broken, It Still Needs Fixing.
In our first house, we had a set of stairs that were horrible. They were unfinished, narrow, and steep. Lori went down them once with a vacuum cleaner; they were just not what we wanted in the house. They came out into the kitchen, so you were looking at these half-finished steps while sitting at the kitchen table. We covered them so they at least weren't showing bare treads, and then we… got used to them. Yes, that is what I said. We adapted. They were covered, making them minimally acceptable, and they served their purpose, so we endured them. Then we had the house remodeled – nearly all of it. And the first thing the general contractor did was rip out those stairs and put in a sweeping staircase that turned and came into the living room. The difference was astonishing. We had agreed to him moving the stairs, but hadn't put much more thought into it beyond his argument that it would save space upstairs and down, and that they would no longer come out in the kitchen.

This acceptance of something "good enough" is what happens in business units when you deliver an application that doesn't perfectly suit their needs. They push for changes, and then settle into a restless truce. "That's the way it is" becomes the watchword. But do not get confused: they are not happy with it. There is a difference between acceptance and enjoyment.

Stairs in question, before on left, after on right.

Another issue that we discovered while making changes to that house was "the incredible shrinking door". The enclosed porch on the back of the house was sitting on railroad ties from about a century ago, and they were starting into accelerated degradation. The part of the porch not attached to the house was shrinking yearly. Twice I sawed off the bottom of the door to the porch so that it would open and close. It really didn't bother us overly much, because it happened over the course of years, and we adapted to the changes as they occurred. When we finally had that porch ripped off to put an actual addition on the house, we realized how painful dealing with the porch and its outer door had been.

This too is what happens in business units when, over time, the usability of a given application slowly degrades or the system slowly becomes out of date. Users adapt, making it do what they want, because like our door the changes occur day to day, not in one big catastrophic heap.

So it is worth your time to occasionally look over your application portfolio and consider the new technologies you've brought in since each application was implemented. Decide if there are ways you can improve the experience without a ton of overhead. Your users may not even realize you're causing them pain anymore, which means you may be able to offer them help they don't know they're looking for. Consider: would a given application perform better if placed behind an ADC? Would putting a Web Application Firewall in front of an application make it more secure, simply because the vendor is updating the Web App Firewall to adapt to new threats while your developers only update the application on occasion? Would shortening the backup window with storage tiering such as F5's ARX offers improve application performance by reducing network traffic during backups and/or replication? Would changes in development libraries benefit existing applications? Granted, that one can be a bit more involved and has more potential for going wrong, but it is possible that the benefits are worth the investment and risk – that's what the evaluation is for.
Would turning on WAN optimization between datacenters increase available bandwidth and thus improve the performance of all applications utilizing that connection? Would offloading encryption to an ADC decrease CPU utilization and thus improve the performance of a wide swath of applications in the DC – particularly VM-based applications that are already sharing a CPU and could gain substantially from offloading encryption?

These are the things that, in the day-to-day crush of serving the business units and making certain the organization's systems are online, we don't generally think of, but some of them are simple to implement and offer a huge return – both in terms of application stability and performance and in terms of inter-department relations. Business units love to hear "we made that better" when they didn't badger you to do so, and if the time investment is small they won't ask why you weren't doing what they did badger you to do.

Always take a fresh look. Your DC is not a green field, but it is also not curing cement. Consider all the ways that something benefiting application X can benefit other applications, and what the costs of doing so will be. It is a powerful way to stay dynamic without rip-and-replace upgrades. If you're an IT architect, this is just part of your job; if you're not, it's simply good practice.

Related Blogs:
If I Were in IT Management Today…
IT Management is Not Called Change Management for a Reason
Challenges of SOA Management Nothing New
Cloud Changes Everything
IPv6 Does Not Mean The End of IPv4
It Is Not What The Market Is Doing, But What You Are.