arx
Netapp c-mode cifs on ARX
A Netapp cifs share is assigned to ARX and exported. The share is visible in the ARX namespace, but all access is denied. Any ideas on how to remedy this would be appreciated. I would like to be able to add c-mode shares to ARX for "multi-volume-share" migrations off old 7-mode systems. The POC using the Netapp simulator is not going well.

Environment:
Netapp 8.2 cluster-mode simulator
ARX code level V6.03.000.14792
SVM cifs joined to the Active Directory domain

The cifs share policy on the Netapp SVM is wide open to "everyone", and the share is accessible normally from any Windows system through the SVM namespace. There is an AD service account for the ARX. I can add a 7-mode share to the test volume on ARX; when I add a c-mode share to the same volume, it breaks the 7-mode share via ARX as well. I've tried it so many different ways, and I've also tried disabling SMB2 on the c-dot share with no change.
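A quick way to narrow down whether this is an ARX-side or a filer-side problem is to exercise both paths with the same service account from a Linux test host. A minimal sketch, assuming the Samba smbclient command-line tool is installed; the hostnames, share name, and credentials are placeholders for your own:

```python
#!/usr/bin/env python3
"""Compare SMB access to a share directly on the SVM vs. through the ARX.

Hypothetical hostnames, share names, and credentials: substitute your own.
Requires the Samba 'smbclient' command-line tool on the test host.
"""
import subprocess

TARGETS = {
    "direct-to-SVM": r"//svm1.example.com/testshare",
    "through-ARX":   r"//arx-vip.example.com/testshare",
}
CREDS = ["-U", r"EXAMPLE\arx-svc%ServicePassword"]  # the AD service account

def try_listing(label, unc):
    # A simple 'ls' at the share root is enough to distinguish auth failures
    # (NT_STATUS_ACCESS_DENIED / LOGON_FAILURE) from connectivity problems.
    result = subprocess.run(
        ["smbclient", unc] + CREDS + ["-c", "ls"],
        capture_output=True, text=True, timeout=30,
    )
    status = "OK" if result.returncode == 0 else "FAILED"
    print(f"{label:>15}: {status}")
    if result.returncode != 0:
        # smbclient prints NT_STATUS_* codes on stdout/stderr
        print((result.stdout + result.stderr).strip())

if __name__ == "__main__":
    for label, unc in TARGETS.items():
        try_listing(label, unc)
```

If the direct path succeeds and the ARX path fails with an access-denied status using the same account, the problem is likely in how the ARX authenticates to the c-mode SVM (SMB dialect or signing negotiation are common suspects) rather than in the share ACL itself.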
Does ARX need to be in the TMOS Operations Guide outline?

In my initial meetings with the subject matter experts on the Ops Guide project, I'm getting feedback that the ARX section in the outline (Monitoring --> 6. Networking --> h. Module-Specific Monitoring --> 7. ARX) is not required. Does anyone object to removing ARX from the guide?
DevCentral Top5 02/15/2012

Welcome to a special "yes, I know it's Wednesday, but I won't be here Friday" edition of the Top5. There has already been some great content in the last week or so, which makes it easy to do an edition mid-week, but that's not unusual. Given the amount of awesome content that can generally be found roaming the wilds of DevCentral, it isn't uncommon to have enough to fill up the Top5 by Wednesday. This week I am taking advantage of that fact. Though I have no doubt there will be still more goodness to come this week, you'll have to manage for yourselves...so dig deep and see what's out there! In the meantime, here are a few great pieces with which to get started:

iRules Concepts: Tcl, The How and Why
http://bit.ly/z9j18P
One of the questions that we get asked from time to time is, "Why Tcl?" Those people are referring to the interpreter we chose as the underlying infrastructure for iRules, of course. I've answered this question several times, and frankly it opens up many solid, deep-dive style concepts about iRules: how they work at their core, TMM interaction, byte code compilation and more, all of which are worth discussing. So...that's what I did. This article looks to shed some light on iRules history, anatomy, our choices in regards to their underpinnings, and why we do what we do how we do it. What it lacks in code samples and graphs, it makes up for in sheer word count (if, you know, that's your thing), but hopefully others find it useful content...I certainly did.

Google reCaptcha Verification With Sideband Connections
http://bit.ly/A4PAma
One of the many awesome Tech Tips that George has written recently...this one eluded the Top5 in previous weeks because there was just too much good stuff to share. Having read through it again this week, though, I decided it needs to make the hit list. This shows off one of the key features in iRules for v11, sideband connections, and how to do something very handy with them. Real-world applications of bleeding-edge iRules features in a consumable, organized, easy-to-follow format ... yep, that's kind of my thing. So here it is, better late than never. Take a read and see what else George has been up to, it's definitely worth the time.

F5 ARX WAN Optimization with WOM
http://bit.ly/w9bhsg
Pushing out an example from the field is a treat for me, and this week is no exception. Michael Fabiano, one of the FSEs here at F5, put together a very solid article on ARX WAN Opt with WOM. If you've been curious about possible solutions for multi-data center storage, this is the article for you. There are many things from an F5 perspective that can be done to streamline and optimize the general multi-location storage deployment, and those benefits are broken out here in an easy to follow (and implement) format. Whether it's ARX, WOM or both that you're looking to deploy or investigate, this picture is a good one, especially given the ways they work together. Michael does a good job of making this approachable and interesting, so take a look and learn something.

F5 Friday: What's Inside an F5?
http://bit.ly/yAwUi0
Lori came through last week with a solid answer to a question that seems to take many forms with this look at what actually goes on inside F5 devices. We have come a long, long way from the old 4.x and before days. As she points out, things have changed all the way up and down the stack, from hardware to software, along with many massive leaps forward conceptually, allowing us to deliver a whole new level of power.
Many people don't fully understand what the products we talk about all the time actually offer at a base level. Lori does a good job here of giving some insight into that without going so deep that she loses the passengers on the trip. If you've ever wondered about TMOS, vCMP, or any of the other magic that happens internally...take a look.

New iOS Edge Client
http://bit.ly/wE68Lv
Last but not least, Pete delivered a friendly reminder today that there is a new iOS Edge Client available for download in the App Store. If you, like me, are one of the many folks making use of the Edge Client from an iOS device, this new version adds some worthwhile features. I love seeing the effort being put into making our products easier to use and more accessible, not just for the administrators but for the end users as well. This new release won't change the lives of the people running the systems, but it makes it just that much easier for those of us using the products as an end user (yes, I'm an end user too), and that is valuable. I just updated my device, and figured I'd pass on the heads up as a nice way to round out the Top5 for this week.

That's it for this week; as always, feel free to drop me some feedback or suggestions.

#Colin
F5 Friday: Big Data? Big Risk…

#bigdata #infosec Storing sensitive data in the cloud is made more palatable by applying a little security before the data leaves the building…

When corporate hardware, usually laptops, is stolen, one of the first questions asked by information security professionals is whether or not the data on the drive was encrypted. While encryption of data is certainly not a panacea, it's a major deterrent to those who would engage in the practice of stealing data for dollars. Many organizations are aware of this and use encryption judiciously when data is at rest in the data center storage network. But as the Corollary to Hoff's Law states, even "if your security practices don't suck in the physical realm, you'll be concerned by the inability to continue that practice when you move to Cloud." It's not that you can't encrypt data being moved to cloud storage services, it's that doing so isn't necessarily a part of the processes or APIs used to do so. This makes it much more difficult to enforce such a policy, and for some organizations, unless they are guaranteed data will be secured at rest they aren't going to give the okay.

A recent Ponemon study speaks to just this issue:

According to the report entitled "Data Security in the Cloud Survey of U.S. IT Operations, IT Security and Compliance Practitioners", only one third of IT security practitioners believe cloud infrastructure (IaaS) environments are as secure as on-premise datacenters, while half of compliance officers think IaaS is as secure.
-- Ponemon Institute Survey on Cloud Data Security Exposes Gulf between IT Security and Compliance Officers

INTEGRATION and REPLICATION

In order to make cloud a more palatable option it is necessary to ensure that data can be stored securely off-premise. A tried and true method is to encrypt the data before it leaves the building. And yet the same Ponemon study found that less than one-third of respondents' organizations do just that. A possible explanation for organizations' failure to encrypt data being transferred to the cloud is a lack of process and integration with the ways in which the data is transferred. Storing data in "the cloud" is generally accomplished via an API, and rarely do these APIs include a flag for "hey, encrypt my data." There are technical reasons why this is the case; encryption – at least encryption worth the effort and compute consumed – often makes use of certificates and keys. Those keys should be unique to the organization. Using a general cloud storage service encryption API would require either sharing of that key (bad idea) or the use of a common provider key (yet another bad idea), neither of which is an acceptable solution.

The answer is, of course, to encrypt the data before transfer to the cloud storage service. The cloud storage service, after all, doesn't care what the data is – it just cares that it has to store it for you. This brings us back to the problem of process and integration at the infrastructure layer. What organizations need to leverage cloud storage services is the means to automatically encrypt data as it's headed for the cloud. What organizations need is for that cloud storage service to be integrated with their own, data center-based storage in a way that makes it possible to leverage cloud storage automatically, encrypting the data when it's bound for the cloud.
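The principle of "encrypt per object, with a key the organization controls, before anything crosses the wire" can be sketched in a few lines. This is not how ARX CE does it internally; it is a minimal illustration using the Python cryptography package, with a placeholder key file, a hypothetical source file, and the actual upload left as a stub for whichever provider SDK you use:

```python
"""Encrypt a file client-side before handing it to any cloud storage API.

Illustrative only: key handling is simplified to a local file, and
upload_to_cloud() is a stub standing in for your provider's SDK call.
"""
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY_PATH = "object_key.bin"   # 256-bit key that never leaves the organization

def load_or_create_key(path=KEY_PATH):
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    key = AESGCM.generate_key(bit_length=256)   # AES-256
    with open(path, "wb") as f:
        f.write(key)
    return key

def encrypt_object(plaintext: bytes, key: bytes) -> bytes:
    aes = AESGCM(key)
    nonce = os.urandom(12)                      # unique per object
    # Prepend the nonce so the stored object is self-contained for decryption.
    return nonce + aes.encrypt(nonce, plaintext, associated_data=None)

def upload_to_cloud(name: str, blob: bytes):
    # Stub: replace with your provider's object storage PUT call.
    print(f"would upload {name}: {len(blob)} encrypted bytes")

if __name__ == "__main__":
    key = load_or_create_key()
    with open("quarterly_report.xlsx", "rb") as f:   # hypothetical file
        ciphertext = encrypt_object(f.read(), key)
    upload_to_cloud("quarterly_report.xlsx.enc", ciphertext)
```

The point of the post, though, is that this responsibility shouldn't fall to every application team individually; pushing it into the storage tier means the encryption happens automatically, with keys that stay in-house.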
Organizations need a common, overarching storage solution that can seamlessly integrate cloud storage into operational processes and automatically provide a layer of security through encryption of the data when that data might be stored off-site, in a cloud storage service. F5 ARX and ARX Cloud Extender (CE) are that solution. In addition to its core aggregation and intelligent tiering capabilities, adding ARX CE to the architecture will allow for the seamless extension of storage to the cloud securely.

When ARX CE is preparing to send data to public cloud destinations, the data is encrypted using AES 256-bit encryption for each object. Further, all transfers from the ARX CE-enabled Windows file server to public cloud storage occur over SSL (HTTPS), which provides network-layer encryption.
-- Securing Data in the Cloud with ARX CE

The Ponemon study revealed that "less than half of IT practitioners (35%) and compliance officers (42%) believe their organizations have adequate technologies to secure their IaaS environments." So not only do organizations believe the cloud is less secure, they also believe they don't have the right tools to secure it and thus take advantage of it. F5 ARX and ARX CE address the operational risk associated with storage in the cloud – by integrating cloud storage services into operational processes they alleviate the manual burden imposed on IT to schedule transfers and prioritize files across tiers. With the ability to automatically apply encryption to data and use a secure transport channel to cloud storage services, they add a layer of security to data stored in the cloud that would otherwise not exist, giving IT the confidence required to take advantage of lower cost storage in the cloud and realize its benefits.

F5 ARX Cloud Extender Resources
Securing Data in the Cloud with ARX CE (How To)
ARX Tiered Storage: Best Practices
Getting Up And Running With F5 ARX Virtual Edition
F5 Storage Solutions
F5 ARX 1500 and 2500
F5's New ARX Platforms Help Organizations Reap the Benefits of File Virtualization
Network World – F5 Rolls Out New File Virtualization Appliances
Analyzing Performance Metrics for File Virtualization
Strategies for a Seamless and Secure Transition to Enterprise Cloud Storage
Building a Cloud-Ready File Storage Infrastructure
SSDs, Velocity and the Rate of Change
F5 Friday: If Data is King then Storage Virtualization is the Castellan
F5 Friday: F5 ARX Cloud Extender Opens Cloud Storage
F5 Friday: ARX VE Offers New Opportunities
Disk May Be Cheap but Storage is Not
All F5 Friday Posts on DevCentral
Tiering is Like Tables, or Storing in the Cloud Tier
Remember When Hand Carts Were State Of The Art? Me either.

Funny thing about the advancement of technology: in most of the modern world we enshrine it, spend massive amounts of money to find "the next big thing", and act as if change is not only inevitable, but rapid. The truth is that change is inevitable, but not necessarily rapid, and sometimes it's about necessity. Sometimes it is about productivity. Sometimes, it just plain isn't about either. Handcarts are still used for serious purposes in parts of the world, by people who are happy to have them and think a motorized vehicle would be a waste of resources. Think on that for a moment. What high-tech tool that was around 20 years ago are you still using? Let alone 200 years ago. The replacement of handcarts as a medium for transport not only wasn't instant, it's still going on 100 years after cars were mass produced.

Handcart in use – Mumbai Daily

We in high-tech are constantly in a state of flux from this technology to that solution to the other architecture. The question you have to ask yourself – and this is getting more important for enterprise IT in my opinion – is "does this do something good for the company?" It used to be that IT folks could try out all sorts of new doo-dads just to play with them and justify the cost based on the future potential benefit to the company. I'd love to say that this had a powerful positive effect, but frankly, it only rarely paid off. Why? Because we're geeks. We buy this stuff on our own dime if the company won't foot for it, and our eclectic tastes don't necessarily jive with the needs of the organization.

These days, the change is pretty intense, and it focuses on infrastructure and application deployment architectures. Where can you run this application, and what form will the application take? Virtualized? Dedicated hardware? Cloud? The list goes on. And all of these questions spur thoughts about security, storage, and the other bits of infrastructure required to support an application no matter where it is deployed. These are things that you can model in your basement, but can't really test out, simply because the architecture of an enterprise is far more complex than the architecture of even the geekiest home network. Lori and I have a pretty complex network in our basement, but it doesn't hold a candle to our employers' worldwide network supporting dev and sales offices on every continent, users in many languages, and a potpourri of access methods that must be protected and available.

Sometimes, change is simply a change of perspective. F5's new iApps, for example, put the ADC infrastructure bits together for the application: instead of managing application security within the module that handles application security (ASM), it bundles security in with all of the other bits – like load balancing, SSL offload, etc. – that an application requires. This is pretty powerful; it speeds deployment and troubleshooting because everything is in one place, and it speeds adding another machine because you simply apply the same iApp template. That means you spin up another instance of the VM in question, tweak the settings, and apply the template already being used on existing instances, and you're up.

Sometimes, change is more radical. Deploying to the cloud is a good example of this, and cloud deployments suffer for it. Indeed, private and hybrid clouds are growing rapidly precisely because of the radical change that public cloud can introduce. Cloud storage was so radical that very few were willing to use it even as most thought it was a good idea.
Along came cloud storage gateways like our ARX Cloud Extender or a variety of others, and suddenly the weakness was ameliorated, because the radical bit of cloud storage was simply that it didn't talk like storage traditionally has. With a gateway it does. And with most gateways (check with your provider) you get compression and encryption, making the cloud storage more efficient and secure in the process.

But like the handcart, the idea that cloud, or virtualization, or consumerization must take hold overnight – and that you're behind the times if you weren't doing it yesterday – is misplaced. Figure out what's best for your organization, not just in terms of technology, but in terms of timelines also. Sure, some things, like support for the CEO's iPad, will take on a life of their own, but in general you've got time to figure out what you need, when you need it, and how best to implement it.

As I've mentioned before, at the cutting edge of technology, when the hype cycle is way overblown, that's where you'll find the largest number of vendors that won't be around to support you in five years. If you can wait until the noise about a space quiets down, you'll be better served, because the level of competition will have eliminated the weaker companies and you'll be dealing with the technological equivalent of the Darwinian fittest. Sure, some of those companies will fail or get merged also, but the chances that your vendor of choice won't, or that their products will live on, are much better after the hype cycle. After all, even though engine-powered conveyances have largely replaced hand carts, have you heard of White Motor Company, Autocar Company, or Diamond T Company? All three made automobiles. They lived through boom and were swallowed in bust. Though in automobiles the cycle is much longer than in high-tech (Autocar started in the late 1800s and was purchased by White in the 1950s, for example, who was purchased later by Audi), the same process occurs, so count on it. And no, I haven't developed a sudden interest in automobile history; all of these companies thrived making half-tracks in World War Two, and that's how I knew to look for them amongst the massive number of failed car companies.

Stay in touch with the new technologies out there, pay attention to how they can help you, but as I've said quite often, what's in the hype cycle isn't necessarily what is best for your organization.

1908 Autocar XV (Wikipedia.org)

Of course I think things like our VE product line and our new v.11 with both iApps and app mobility are just the thing for most organizations; even with those I will say "depending upon your needs". Because contrary to what most marketing and many analysts want to tell you, it really is about your organization and its needs.
SSDs, Velocity and the Rate of Change.

The rate of change in a mathematical equation can vary immensely based upon the equation and the inputs to the equation. Certainly the rate of change for f(x) = x^2 paints a far different picture than the rate of change for f(x) = 2x, for example. The old adage "the only constant is change" is absolutely true in high tech. The definition of "high" in tech changes every time something becomes mainstream. You're working with tools and systems that even ten years ago were hardly imaginable. You're carrying a phone that Alexander Graham Bell would not recognize – or know how to use. You have tablets with power that not so long ago was only held by mainframes. But that change did not occur overnight. Apologies to iPhone fans, but all the bits Apple put together to produce the iPhone had existed before; Apple merely had the foresight to see how they could be put together in a way customers would love. The changes happen over time, and we're in the midst of them; sometimes that's difficult to remember. Sometimes that's really easy to remember, as our brand-new system or piece of architecture gives us headaches. Depends upon the day.

Image generated at Cool Math

So what is coming of age right now? Well, SSDs for one. They're being deployed in the numbers that were expected long ago, largely because prices have come down far enough to make them affordable. We offer an SSD option for some of our systems these days, and since the stability of our products is paramount to our customers' interests, we certainly aren't out there on the cutting edge with this development. They're stable enough for mission-critical use, and the uptick in sales reflects that fact.

If you have a high-performance application that relies upon speedy database access, you might look into them. There are a lot of other valid places to deploy SSDs – tier one, for example – but a database is an easy win. If access times are impacting application performance, it is relatively easy to drop in an SSD drive and point the DB (the cache or the whole DB) at it, speeding performance of every application that relies on that DBMS. That's an equation that is pretty simple to figure out, even if the precise numbers are elusive. Faster disk access = faster database response times = faster applications. That is the same type of equation that led us to offer SSDs for some of our products. They sit in the network between data and the applications that need the data. Faster is better, assuming reliability, which after years of tweaking and incremental development, SSDs offer.

Another place to consider SSDs is in your virtual environment. If you have twenty VMs on a server, and two of them have high disk access requirements, putting SSDs into place will lighten the load on the overall system simply by reducing the time spent blocked waiting for disk responses. While some are starting to call for SSDs everywhere, remember that there were also some who said cloud computing meant no one should ever build out a datacenter again. The price of HDs has gone down with the price of SSDs pushing them from the top, so there is still a significant cost differential, and frankly, a lot of applications just don't need the level of performance that SSDs offer.

The final place I'll offer up for SSDs is if you are implementing storage tiering such as that available through our ARX product. If you have high-performance NAS needs, placing an SSD array as tier one behind a tiering device can significantly speed access to the files most frequently used.
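The "faster disk = faster application" equation is easy to sanity-check before committing to hardware. A rough sketch of a random-read micro-benchmark, assuming a test file on an SSD-backed mount and one on an HDD-backed mount; the paths are placeholders, and a real evaluation would also control for OS caching and replay your actual I/O pattern:

```python
"""Very rough random-read latency comparison between two storage paths.

The mount points are hypothetical, and os.pread is POSIX-only; a production
evaluation should drop the page cache between runs and use a realistic trace.
"""
import os
import random
import time

BLOCK = 4096        # 4 KB reads, roughly a database page
READS = 2000

def random_read_latency(path: str) -> float:
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(READS):
            offset = random.randrange(0, max(1, size - BLOCK))
            os.pread(fd, BLOCK, offset)
        return (time.perf_counter() - start) / READS
    finally:
        os.close(fd)

if __name__ == "__main__":
    for label, path in [("SSD tier", "/mnt/ssd/testfile.dat"),
                        ("HDD tier", "/mnt/hdd/testfile.dat")]:
        avg = random_read_latency(path)
        print(f"{label}: {avg * 1e6:.0f} µs average per 4 KB random read")
```

The same comparison applies to the tiering case: if the hot files land on an SSD-backed tier one, every client that touches them sees the lower latency.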
And that acceleration is global to the organization. All clients/apps that access the data receive the performance boost, making it another high-gain solution.

Will we eventually end up in a market where old-school HDDs are a thing of the past and we're all using SSDs for everything? I honestly can't say. We have plenty of examples in high-tech where, as demand went down, the older technology started to cost more, because margins plus volume equals profit. Tube monitors versus LCDs, a variety of memory types, and even big old HDDs – the 5.25-inch ones. But the key is whether SSDs can fulfill all the roles of HDDs, and whether you and I believe they can. That has yet to be seen, IMO. The arc of price reduction for both HDDs and SSDs plays in there also – if quality HDDs remain cheaper, they'll remain heavily used. If they don't, that market will get eaten by SSDs, just because, all other things being roughly equal, speed wins.

It's an interesting time. I'm trying to come up with a plausible use for this puppy just so I can buy one and play with it. Suggestions are welcome; our websites don't have enough volume to warrant it, and this monster for laptop backups would be extreme – though it would shorten my personal backup window ;-).

OCZ Technology 1 TB SSD

Related posts:
The Golden Age of Data Mobility?
What Do You Really Need?
Use the Force Luke. (Zzzaap)
Don't Confuse A Rubber Stamp With Validation
On Cloud, Integration and Performance
Data Center Feng Shui: Architecting for Predictable Performance
F5 Friday: Performance, Throughput and DPS
F5 Friday: Performance Analytics–More Than Eye-Candy Reports
Audio White Paper - High-Performance DNS Services in BIG-IP ...
Analyzing Performance Metrics for File Virtualization
In The End, You Have to Clean.

Lori and I have a large technical reference library, both in print and electronic. Part of the reason it is large is that we are electronics geeks. We seriously want to know what there is to know about computers, networks, systems, and development tools. Part of the reason is that we don't often enough sit down and decide to pare the collection down by those books that no longer have a valid reason for sitting on our (many) bookshelves of technical reference. The collection runs the gamut from the outdated to the state of the art, from the old stand-bys to the obscure, and we've been at it for 20 years… So many of them just don't belong any more. One time we went through and cleaned up. The few books we got rid of were not only out of date (mainframe Pascal data structures was one of them), but weren't very good when they were new. And we need to do it again. From where I sit at my desk, I can see an OSF DCE reference, the Turbo Assembler documentation, a Perl 5 reference, a MicroC/OS-II reference, and Mastering Web Server Security. All of which are just not relevant anymore. There's more, but I'll save you the pain; you get the point. The thing is, I'm more likely to take a ton of my valuable time and sort through these books, recycling those that no longer make sense unless they have sentimental value – Lori and I wrote an object-oriented programming book back in 1996, and that's not going to recycling – than you are to go through your file system and clean the junk out of it.

Two of ten…

A funny thing happens in highly complex areas of human endeavor: people start avoiding ugly truths by thinking they're someone else's problem. In my case (and Lori's), I worry about recycling a book that she has a future use for. Someone else's problem syndrome (or an SEP field, if you read Douglas Adams) has been the source of tremendous folly throughout mankind's history, and storage at enterprises is a prime example of just such folly. Now don't get me wrong, I've been around the block, responsible for an ever-growing pool of storage, and I know that IT management has to worry that the second they start deleting unused files they're going to end up in the hotseat because someone thought they needed the picture of the sign in front of the building circa 1995… But if IT (who owns the storage space) isn't doing it, and business unit leaders (who own the files on the storage) aren't doing it… Well, you're going to have a nice big stack of storage building up over the next couple of years. Just like the last couple.

I could – and will – tell you that you can use our ARX product to help you solve the problem, particularly with ARX Cloud Extender and a trusted cloud provider, by shuffling data out to the cloud. But in the longer term, you've got to clean up the bookshelf, so to speak. ARX is very good at many things, but not at making those extra files disappear. You're going to pay for more disk, or you're going to pay a cloud provider, until you delete them.

I haven't been in IT management for a while, but if I were right now, I'd get the storage guys to build me a pie chart showing who owns how much data, then gather a couple of outrageous examples of wasted space (a PowerPoint that is more than five years old is good – better than the football pool for marketing from ten years ago, because PowerPoint uses a ton more disk space), and then talk with business leaders about the savings they can bring the company by cleaning up. While you can't make it their priority, you can give them the information they need.
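That pie chart doesn't require anything exotic; most of it can come from a walk of the file tree. A rough sketch that sums bytes per top-level directory (a stand-in for departmental shares) and flags how much hasn't been accessed in a year; the root path is a placeholder, and on filers or mounts that don't maintain access times you'd use modification time instead:

```python
"""Summarize storage usage and stale data per top-level share directory.

The root path is hypothetical; atime may be unreliable (noatime mounts,
some filers), in which case swap st_atime for st_mtime.
"""
import os
import time
from collections import defaultdict

ROOT = "/mnt/corp-nas"              # placeholder for your NAS mount
YEAR = 365 * 24 * 3600
now = time.time()

total = defaultdict(int)            # bytes per top-level directory
stale = defaultdict(int)            # bytes not accessed in over a year

for dirpath, _, filenames in os.walk(ROOT):
    # The first path component under ROOT stands in for the owning department.
    rel = os.path.relpath(dirpath, ROOT)
    owner = rel.split(os.sep)[0] if rel != "." else "(root)"
    for name in filenames:
        try:
            st = os.stat(os.path.join(dirpath, name))
        except OSError:
            continue                # skip files that vanish or deny access
        total[owner] += st.st_size
        if now - st.st_atime > YEAR:
            stale[owner] += st.st_size

for owner in sorted(total, key=total.get, reverse=True):
    gb = total[owner] / 1e9
    pct_stale = 100 * stale[owner] / total[owner] if total[owner] else 0
    print(f"{owner:20s} {gb:8.1f} GB total, {pct_stale:4.0f}% untouched for a year")
```

Numbers like these are what make the conversation with a business unit leader concrete rather than accusatory.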
If marketing is responsible for 30% of the disk usage on NAS boxes (or, I suppose, unstructured storage in general, though this exercise is more complex with mixed SAN/NAS numbers – not terribly more complex), and you can show that 40% of the files owned by marketing haven't been touched in a year… that's compelling at the C-level. Twelve percent of your disk is sitting there, just from one department, with easy-to-identify unused files on it. Some CIOs I've known have laid the smackdown – "delete X percent by Y date or we will remove this list of files" is actually from a CIO's memo – but that's just bad PR in my opinion. Convincing business leaders that they're costing the company money – what's 12% of your NAS investment, for example, plus 12% of the time of the storage staff dedicated to NAS – is a much better plan, because you're not the bad guy; you're the person trying to save money while not negatively impacting their jobs.

So yeah, install ARX, because it has a ton of other benefits, but go to the bookshelf, dust off that copy of the Fedora 2 Admin Guide, and finally put it to rest. That's what I'll be doing this weekend, I know that.
When The Walls Come Tumbling Down.

When horrid disasters strike and both people and corporations are put on notice that they suddenly have a lot more important things to do, will you be ready? It is a testament to man's optimism that, with very few exceptions, we really aren't – not at the personal level, not at the corporate level. I've worked a lot of places, and none of them had a complete, ready-to-rock DR plan. The insurance company I worked at was the closest – they had an entire duplicate datacenter sitting dark in a location very remote from HQ, awaiting need. Every few years they would refresh it to make certain that the standby DC had the correct equipment to take over, but they counted on relocating staff from what would be a ravaged area in the event of a catastrophe, and were going to restore thousands of systems from backups before the remote DC could start running. At the time it was a good plan. Today it sounds quaint. And it wasn't that long ago.

There are also a lot of you who have yet to launch a cloud initiative of any kind. This is not from lack of interest, but more because you have important things to do that are taking up your time. Most organizations are dragging their feet replacing people, and few – according to a recent survey, very few – are looking to add headcount (proud plug that F5 is – check out our careers page if you're looking). It's tough to run off and try new things when you can barely keep up with the day-to-day workloads. Some organizations are lucky enough to have R&D time set aside. I've worked at a couple of those too, and honestly, they're better about making use of technology than those who do not have such policies. Though we could debate if they're better because they take the time, or take the time because they're better.

And the combination of these two items brings us to a possible pilot project. You want to be able to keep your organization online, or be able to bring it back online quickly, in the event of an emergency. Technology is making it easier and easier to complete this arrangement without investing in an entire datacenter and constantly refreshing the hardware to have quick recovery times. Global DNS in various forms is available to redirect users from the disabled datacenter to a datacenter that is still capable of handling the load; if you don't have multiple datacenters, it can redirect elsewhere – like to virtual servers running in the cloud. ADCs are starting to be able to work similarly whether they are cloud-deployed or DC-deployed. That leaves keeping a copy of your necessary data and applications in the cloud, and cloud storage with a cloud storage gateway – such as the Cloud Extender functionality in our ARX product – allows that to be done with a minimum of muss and fuss. These technologies, used together, yield a DR architecture that looks something like this:

(DR architecture diagram: global DNS in front of the primary datacenter and a cloud provider hosting standby ADC, application, and storage instances)

Notice that the cloud extender isn't listed here, because it is useful for getting the data copied, but would most likely reside in your damaged datacenter. Assuming that the cloud provider was one like our partner Rackspace, who does both cloud VMs and cloud storage, this architecture is completely viable. You'll still have to work some things out, like guaranteeing that security in the cloud is acceptable, but we're talking about an emergency DR architecture here, not a long-running solution, so app-level security and functionality to block malicious attacks at the ADC layer will cover most of what you need. AND it's a cloud project.
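The decision the global DNS tier makes in that picture is conceptually simple: answer queries with the site that still responds to a health check. This is not how a global DNS product is actually configured; it is just a sketch of the logic, with placeholder monitor URLs and addresses, to make the moving parts concrete:

```python
"""Conceptual sketch of health-check-driven DNS failover for DR.

Endpoints and addresses are placeholders; a real global DNS / GSLB product
adds persistence, weighting, and far more robust monitoring than this.
"""
import urllib.request

# Ordered by preference: primary DC first, cloud DR site second.
SITES = [
    {"name": "primary-dc", "monitor": "https://dc1.example.com/health", "answer": "192.0.2.10"},
    {"name": "cloud-dr",   "monitor": "https://dr.cloud.example.com/health", "answer": "203.0.113.10"},
]

def is_healthy(url: str, timeout: float = 3.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def dns_answer() -> str:
    """Return the address the DNS tier should hand out right now."""
    for site in SITES:
        if is_healthy(site["monitor"]):
            return site["answer"]
    # Nothing answered: fall back to the DR address and escalate to humans.
    return SITES[-1]["answer"]

if __name__ == "__main__":
    print("answering queries with", dns_answer())
```

The value of running this kind of pilot is less about the mechanics and more about having already decided, before the flood, which site answers when the primary goes dark.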
The cost is far, far lower than a full-blown DR project, and you'll be prepared in case you need it. This buys you time to ingest the fact that your datacenter has been wiped out. I've lived through it; there is so much that must be done immediately – finding a new location, dealing with insurance, digging up purchase documentation, recovering what can be recovered… Having a plan like this one in place is worth your while. Seriously. It's a strangely emotional time, and having a plan is a huge help in keeping people focused.

Simply put, disasters come, often without warning – mine was a flood caused by a broken pipe. We found out when our monitoring equipment fried from being soaked and sent out a raft of bogus messages. The monitoring equipment was six feet above the floor at the time. You can't plan for everything, but to steal and twist a famous phrase, "he who plans for nothing protects nothing."
Forget Performance IN the Cloud, What About Performance TO the Cloud?

In an N-tiered architecture, the network connection between tiers becomes a truly important part of the overall application performance equation. This is a fact we have known for a couple of decades now. If your network performance is down for some reason (from mis-wiring to hardware misconfiguration to over-utilization), your application performance will, by definition, suffer. I ran a test once while writing for Network Computing where the last person to use the lab had programmed the ports I was using to shunt bandwidth over threshold X onto a different VLAN. It took weeks, and the help of one of the vendors under test – Juniper – to figure out exactly what was going wrong. Only after they observed that it was topping out and then dropping packets were we able to track down the one changed configuration setting. The funny thing is that this particular change would not have impacted the vast majority of testing that we did in the lab; I was just unlucky enough to pick the wrong ports for a test that purposefully overloaded the network. Until the Juniper crew said "no, that's not right, we've done this test before", I was blaming the products for the failures under high load – a symptom of the fact that when network performance degrades, systems appear to degrade. Thankfully, we caught the problem and I did not go to print with misinformation. But it does highlight the need for network performance to be top-notch if your application performance is to be top-notch.

And we're starting to head into uncharted territory where this fact is concerned. The problem is that enterprises don't generally throw a whole bunch of data over the WAN, and if they do, they have specially provisioned ways to do so – because they're going to a remote datacenter or partner network that is known and whose data volumes can be calculated for. But as we approach cloud computing we need to be more aware of the network performance aspects than we would be either in the datacenter or transferring between two datacenters. The reasons for this are manifold.

First, you don't have control of the connection to the cloud provider. You own one end, and with the right tools (skipping the plug for F5 WOM here, look it up if you have a need) you can "control" both ends of the connection, but you don't really have control of what is in between. The volume of traffic, the size of the pipes, etc. are all dictated by outside forces. For some DC-to-DC connections this is true also, but unlike cloud, DC-to-DC is point-to-point. If you are dealing with one of the major cloud vendors, you can't even be certain what country your app resides in at the moment, let alone the route to that application. Some of this concern is automatically mitigated: if the platform provider is large enough to have datacenters in multiple countries, they have pipes bigger than New York sewers connected to those datacenters. Some of it can be mitigated by those same "right tools" that help at the ends of the connection, because they can create tunnels or point-to-point connections to handle communications, and offer bandwidth reduction capabilities to make certain you are only sending smaller amounts of data, reducing the impact of the network by requiring fewer round trips across it.

In cloud storage, you have a bigger issue, because the whole point is to send massive amounts of data across the WAN.
While the right products will reduce the footprint of that data with compression (skipping the plug for F5 ARX, look it up if you have a need), you are still sending a lot – or you wouldn't have to go to a cloud platform, you'd just store it locally. So the question becomes: how do you make certain that performance to the cloud storage vendor is optimal when you don't own both ends of the connection? That's a tricky question, because it is not just a question for today, it is a question forever. You see, the more you store in a cloud storage provider's space, the less likely you are to want to change providers. But the more business a cloud storage provider receives, the more companies are cramming huge volumes of data in and out of their DC, which could cause performance problems for you… unless you have SLAs that are rock-solid and that you're willing to enforce.

The long and the short of this post is some advice. Get tools to reduce the amount you're sending over the wire, and make certain you have an SLA that will cover you if your usage (or the usage of the other people sharing the provider) jumps. And yeah, if your vendor charges by the megabit or megabyte, check out some products that might reduce the volume you send. I might have mentioned a couple here, but there are more out there.
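How much "reducing the amount you're sending over the wire" buys you is easy to estimate for your own data before you buy anything. A back-of-the-envelope sketch: standard zlib compression stands in for whatever a WAN optimization or gateway product actually does, and the sample file and uplink speed are assumptions to adjust for your site.

```python
"""Estimate the WAN footprint and transfer time of a dataset, raw vs. compressed.

zlib is only a stand-in for product-level dedupe/compression; the file path
and the 100 Mbps uplink figure are assumptions, not measurements.
"""
import zlib

SAMPLE = "backup_set.tar"        # placeholder: a representative chunk of your data
UPLINK_BPS = 100_000_000         # assumed 100 Mbps link to the provider

def footprint(path: str, chunk_size: int = 1 << 20):
    raw = compressed = 0
    comp = zlib.compressobj(level=6)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            raw += len(chunk)
            compressed += len(comp.compress(chunk))
    compressed += len(comp.flush())
    return raw, compressed

if __name__ == "__main__":
    raw, comp = footprint(SAMPLE)
    for label, size in (("raw", raw), ("compressed", comp)):
        seconds = size * 8 / UPLINK_BPS
        print(f"{label:>10}: {size / 1e6:8.1f} MB, ~{seconds / 60:.1f} min on the assumed link")
    if raw:
        print(f"reduction: {100 * (1 - comp / raw):.0f}%")
```

Compressibility varies wildly by data type, which is exactly why your own measurements plus an enforceable SLA beat any datasheet number.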