Automation Is Not Your Enemy.
Sun Tzu wrote that you cannot win if you do not know your enemy and yourself. He was talking about knowing your army and its capabilities, but the rule applies to nearly every endeavor, and certainly every competitive one. Knowing your own strengths and weaknesses (in our case, the strengths and weaknesses of IT staff and architecture) is imperative if you are to meet the challenges that your IT department faces every day. It is not enough to know that you must do X; you must know how X fits (or doesn't!) into your architecture, and how easily your staff will be able to absorb the knowledge necessary to implement X.

Take RSS feeds, for example. RSS is largely automated. But if you receive a requirement to implement RSS in the corporate intranet or web portal, the first question is "can the system handle it?" If the answer is no, the next question is "can staff today implement it?" If the answer is no, the next question is "do we buy something to do this for us, or train staff to implement a solution?" Remember, this is all hypothetical. Unless you had very specific needs, I would not recommend training staff to write an RSS parser. At best I'd say get a library and train them to use calls to it. Which indicates a corollary to this point of Sun Tzu's: know the terrain (in this case the RSS ecosystem) in which you will meet your enemies.

By extension, knowing the terrain implies "have some R&D time in normal workloads". I've said that before, but it's worth saying over and over. Sure, some employees might waste that R&D time. Some won't. Ask Google. It doesn't have to be some huge percentage; just don't ask your staff to be up-to-date on things they don't have time to go research.

But I digress. As virtualization and cloud grow in importance, so too does the ability to automate some functionality.
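To make the "use a library, don't write a parser" point concrete, here is a minimal sketch of consuming an RSS 2.0 feed. It uses Python's standard `xml.etree` as a stand-in for a dedicated feed library, and the feed content and URLs are made up for illustration:

```python
import xml.etree.ElementTree as ET

# A small inline RSS 2.0 document standing in for a real corporate feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Intranet News</title>
    <item><title>Q3 Results</title><link>http://intranet/q3</link></item>
    <item><title>New Hires</title><link>http://intranet/hires</link></item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (channel title, list of (item title, link)) from RSS 2.0 text."""
    channel = ET.fromstring(xml_text).find("channel")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return channel.findtext("title"), items

title, items = parse_feed(SAMPLE_FEED)
```

The point stands: the few lines above are all a staffer needs to learn when a library does the heavy lifting, versus weeks spent writing and hardening a parser from scratch.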
As end user computing starts to utilize a growing breadth of devices, automation becomes even more imperative. Seriously, on my team alone we have Android, Blackberry, and Apple tablets, plus Apple and Blackberry phones, and we're all hitting websites originally designed for Windows. The ability to serve all of these devices intelligently is facilitated by the ability to detect them and route them to the correct location, and to monitor usage and shift infrastructure resources to where they're most needed.

Some IT staff reasonably worry that automation is going to negatively impact their job prospects. Network admins in particular have seen many jobs other than theirs shipped off-shore or automated out of existence, and don't want to end up the same way. But there are two types of automation advancement: the type that eliminates or minimizes the need for people, as factory automation often does to keep expenses down, and the type that frees people up to handle greater volumes or more complex tasks, as virtualization did.

Virtualization reduced the time to bring up a new server to near zero. That eliminated approximately zero systems admin jobs. The reason is that there was pent-up demand for more servers, and once IT wasn't holding requests up with cost and timing bottlenecks, demand exploded. Admins also had more responsibilities: now there were the host systems and dozens of resident VMs. The same will be true of increasing network automation. Yes, some of the tasks regularly done by network admins will get automated out of existence, but in return, managing the system that automates those tasks will fall upon the shoulders of the very administrators who now have more time. And the complexity of networks in the age of cloud and virtualization is headed up, meaning the specialized knowledge required to keep these networks not just working but performing well will end up with the network admins.
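The detect-and-route idea above can be sketched in a few lines. This is a simplified illustration of User-Agent-based pool selection, not how any particular ADC implements it; the pool names and token list are hypothetical:

```python
# Hypothetical server pool names; the token list is illustrative, not exhaustive.
MOBILE_TOKENS = ("android", "blackberry", "iphone", "ipad", "mobile")

def pick_pool(user_agent):
    """Route a request to a server pool based on its User-Agent header."""
    ua = (user_agent or "").lower()
    if any(token in ua for token in MOBILE_TOKENS):
        return "mobile-pool"    # servers with device-appropriate content
    return "desktop-pool"       # the original Windows-era site

pool = pick_pool("Mozilla/5.0 (iPad; CPU OS 5_0 like Mac OS X)")
```

In practice an ADC makes this decision centrally for every request, which is exactly why shifting resources between pools as usage changes is an automation problem rather than a per-application one.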
Make network automation an opportunity, not a risk. An opportunity to better serve customers, an opportunity to learn new things, an opportunity to take on greater responsibility. And to make things happen that need to happen at 2am, without the dreaded on-call phone call. We at F5 have been calling our network automation efforts "ABLE infrastructure", and that's really what it boils down to: make the network ABLE to do what network admins have been doing, so they can take the next step, integrating WAN and cloud as if they were on the LAN, and dealing with the ever-growing number of VMs requesting IP addresses. And some R&D. After all, once automation is in place, another "must have" project will come along. They always do, and for most of us in IT, that's a good thing.

Useful Cloud Advice, Part Two. Applications
This is the second part of a series about things you need to consider, and where cloud usage makes sense given the current state of cloud evolution. The first one, Cloud Storage, can be found here. The point of the series is to help you figure out what you can do now, and what you have to consider when moving to the cloud. This will hopefully help you weigh your options when pressure from the business or management to "do something" mounts. Once again, our definition of cloud is Infrastructure as a Service (IaaS), "VM containers", not SOA or other variants of cloud. For our purposes, we'll also assume public cloud. The reasoning here is simple: if you're implementing internal cloud, you're likely already very virtualized, and you don't have the external vendor issues, so you don't really need this advice, though some of it will still apply to you, so read on anyway.

Related Articles and Blogs:
- Maybe Ubuntu Enterprise Cloud Makes Cloud Computing Too Easy
- Cloud Balancing, Cloud Bursting, and Intercloud
- Bursting the Cloud
- The Impossibility of CAP and Cloud
- Amazon Makes the Cloud Sticky
- Cloud, Standards, and Pants
- The Inevitable Eventual Consistency of Cloud Computing
- Infrastructure 2.0 + Cloud + IT as a Service = An Architectural ...
- Cloud Computing Makes Servers Obsolete
- Cloud Computing's Other Achilles' Heel: Software Licensing

In Times Of Change, IT Can Lead, Follow, Or Get Out of the Way.
Information Technology, geeks like you and me, has been responsible for an amazing transformation of business over the last thirty or forty years. The systems that have been put into place since computers became standard fare for businesses have allowed the business to scale out in almost every direction. Greater production, more customers, better marketing and sales follow-through, even insanely targeted marketing for those of you selling to consumers. There is not a piece of the business that would be better off without us.

With that change came great responsibility, though. Inability to access systems and/or data brings the organization to a screeching halt. So we spend a lot of time putting in redundant systems. For all of its power as an Advanced Application Delivery Controller, many of F5's customers rely on BIG-IP LTM to keep their systems online even if a server fails, because it's good at that (among other things), and they need redundancy to keep the business running.

When computerization first came about, and later when Palm and Blackberry were introducing the first personal devices, people, not always IT people, advocated change, and those changes impacted every facet of the business and provide you and me with steady work. The people advocating were vocal, persistent, and knew that there would be long-term benefit from the systems, or even short-term benefit in dealing with ever-increasing workloads. Many of them were rewarded with work maintaining and improving the systems they had advocated for, and all of them were leaders. As we crest the wave of virtualization and start to seriously consider cloud computing on a massive scale, be it cloud storage, cloud applications, or SOA applications that have been cloud-washed, it is time to seriously consider IT's role in this process once again.
Those leaders of the past pushed at business management until they got the systems they thought the organization needed, and another group of people will do the same this time. So, as I've said before, you need to facilitate this activity. Don't make them go outside the IT organization, because history says that any application or system allowed to grow outside the IT organization will inevitably fall upon the shoulders of IT to manage. Take that bull by the horns, and frame the conversation in the manner that makes the most sense for your business, your management, and your existing infrastructure.

Companies like F5 can help you move to the cloud with products like ARX Cloud Extender, which makes cloud storage look like local NAS, and BIG-IP LTM VE, which lets cloud apps partake of load balancing and other ADC functionality, but all the help in the world doesn't do you any good if you don't have a plan. Look at the cloud options available (they're certainly telling you about themselves right now, so that should be easy), then look at your organization's acceptance of risk and the policies of cloud service providers in regard to that risk, and come up with ideas on how to utilize the cloud.

One thing about a new market that includes a cool buzzword like cloud: if you aren't proposing where it fits, someone else in your organization is. And that person is never going to be as qualified as IT to determine which applications and data belong outside the firewall. Never. I've said "make a plan" before, but many organizations don't seem to be listening, so I'm saying it again. Whether cloud is an enabling technology for your organization or a disruptive one for IT is completely in your hands. Be the leader of the past; it's exciting stuff if managed properly, and like many new technologies, scary stuff if not managed in the context of the rest of your architecture.
So build a checklist, pick some apps and even files that could sit in the cloud without a level of risk greater than your organization is willing to accept, and take the list to business leaders. Tell them that cloud is helping to enable IT to better serve them, and ask if they'd like to participate in bringing cloud to the enterprise. It doesn't have to be big stuff, just enough to make them feel like you're leading the effort, and enough to make you feel like you're checking cloud out without going all in. After a few pilots, you'll find you have one more set of tools to solve business problems. And that is almost never a bad thing. Even if you decide cloud usage isn't for your organization, you chose what was put out there, not a random business person who sees the possibilities but doesn't know the steps required and the issues to confront.

Related Blogs:
- Risk is not a Synonym for "Lack of Security"
- Cloud Changes Cost of Attacks
- Cloud Computing: Location is important, but not the way you think
- Cloud Storage Gateways, stairway to (thin provisioning) heaven?
- If Security in the Cloud Were Handled Like Car Accidents
- Operational Risk Comprises More Than Just Security
- Quarantine First to Mitigate Risk of VM App Stores
- CloudFucius Tunes into Radio KCloud
- Risk Averse or Cutting Edge? Both at Once.

Load Balancing For Developers: Security and TCP Optimizations
It has been a while since I wrote a Load Balancing for Developers installment, and since they're pretty popular and there's still a lot about Application Delivery Controllers (ADCs) that is taken for granted in the networking industry but relatively unknown in the development world, I thought I'd throw one out about making your security more resilient with ADCs. For those who are just joining this series, here's the full list of posts I've tagged as Load Balancing for Developers, though only the ones whose title starts with "Load Balancing for Developers" or "Advanced Load Balancing for Developers" were actually written from this perspective, utilizing our fictional web application Zap'N'Go! as an example. This post, like most of them, doesn't require that you read the other entries in the series, but if you're interested in the topic, they are all written from the developer's perspective, and only bring in the networking/ops portions where it makes sense.

So your organization has a truly successful web application called Zap'N'Go! that has taken the Internet by storm. Your hits are in the thousands an hour, and orders are rolling in. All was going well until your server couldn't keep up, and you went to a load balanced scenario so that multiple servers could share the load. The problem is that with the money you've generated off of Zap'N'Go!, you've bought a competitor and started several new web applications, set up a forum or portal for your customers to communicate with you and each other directly, and are using the old datacenter from the company you purchased as a redundant datacenter in case the worst should happen. And all of that means that you are suffering server (and VM) sprawl. The CPU cycles being eaten up by your applications are truly astounding, and you're looking into ways to drive them down.
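For readers newer to the topic, the core of the "load balanced scenario" mentioned above is simple. A minimal sketch of the round-robin idea, the most basic distribution algorithm (server names are made up; real ADCs add health checks, persistence, and weighted algorithms on top of this):

```python
import itertools

class RoundRobinPool:
    """Minimal round-robin pool: each request goes to the next server in turn."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)

# Hypothetical pool of three Zap'N'Go! web servers.
pool = RoundRobinPool(["app1:80", "app2:80", "app3:80"])
picks = [pool.next_server() for _ in range(4)]  # wraps back to app1
```

Everything in the rest of this post builds on the assumption that something like this is already sitting in front of your servers.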
Virtualization helped you to be more agile in responding to the requests of the business, but it also brings a lot of management overhead in making certain servers aren't overloaded with too high a virtual density. One of the cool bits about an ADC is that it does a lot more than load balance, and much of that can be utilized to improve application performance without re-architecting the entire system. While there are a lot of ways that an ADC can improve application performance, we'll look at a couple of easy ones here, and leave some of the more difficult or involved ones for another time. That keeps me in writing topics, and makes certain that I can give each one the attention it deserves in the space available.

The biggest and most obvious improvement in an ADC is of course load balancing. This blog assumes you already have an ADC in place and that load balancing was your primary reason for purchasing it. While I don't have market numbers in front of me, it is my experience that this is true of the vast majority of ADC customers. If you have overburdened web applications and have not looked into load balancing, before you go rewriting your entire system, take a look at the rest of this series. There really are options out there to help.

After that win, I think the biggest place, in a virtualized environment, that developers can reap benefits from an ADC is one that developers wouldn't normally think of. That's the reason for this series, so I suppose that would be a good thing. Nearly every application out there hits a point where SSL is enabled. That point may be simply the act of accessing it, or it may be when users go to the "shopping cart" section of the web site, but they all use SSL to protect sensitive user data being passed over the Internet. As a developer, you don't have to care too much about this fact. Pay attention to the protocol if you're writing at that level, and to the ports if you have reason to, but beyond that you don't have to care.
Networking takes care of all of that for you. But what if you could put in a request to your networking group that would greatly improve performance without changing a thing in your code, and from a security perspective wouldn't change much? Most companies would see it as not changing anything, while a few will want to talk about it first. What if you could make this change over lunch and users wouldn't know the difference?

Here's the background. SSL encryption is expensive in terms of CPU cycles. No doubt you know that; most developers have to face this issue head-on at some point. It takes a lot of power to do encryption, and while commodity hardware is now fast enough that it isn't a problem on a stand-alone server, in a VM environment the number of applications requesting SSL encryption on the same physical hardware is many times what it once was. That creates a burden that, at this time at least, often drags on the hardware. It's not the fault of any one application or a rogue programmer; it is the summation of the burdens placed by each application requiring SSL encryption.

One solution to this problem is to try to manage VM deployment such that encryption is only required by a couple of applications per physical server, but this is not a very appealing long-term solution as loads shift and priorities change. From a developer's point of view, do you trust the systems/network teams to guarantee your application is not sharing hardware with a zillion applications that all require SSL encryption? Over time, this is not going to be their number one priority, and when performance troubles crop up, the first place that everyone looks in an in-house developed app is at the development team. We could argue whether that's the right starting point or not, but it certainly is where we start. Another, more generic solution is to take advantage of a non-development feature of your ADC. This feature is SSL termination.
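As a rough illustration of what termination looks like (using an nginx reverse proxy as a generic stand-in rather than a BIG-IP, with hypothetical hostnames, certificate paths, and backend addresses), the pattern is: TLS on the front, plain HTTP behind:

```nginx
# Terminate TLS at the proxy; traffic to the backend pool travels in the clear.
server {
    listen 443 ssl;
    server_name app.example.com;            # hypothetical hostname

    ssl_certificate     /etc/ssl/app.crt;   # hypothetical cert/key paths
    ssl_certificate_key /etc/ssl/app.key;

    location / {
        proxy_pass http://app_backends;     # plain HTTP from proxy to servers
    }
}

upstream app_backends {
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}
```

The application servers now listen on plain HTTP and never touch a cipher; all the handshake and encryption cost is concentrated in one device built (or at least designated) for it.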
Since the ADC sits between your application and the Internet, you can tell your ADC to handle encryption for your application, and then not worry about it again. If your network team sets this up for all of your applications, then you have no worries that SSL is burning up your CPU cycles behind your back.

Is there a negative? A minor one that most organizations (as noted above) just won't see as an issue: from the ADC to your application, communications will happen in the clear. If your application is internal, this really isn't a big deal at all. If you suspect a bad guy on your internal network, you have much more to worry about than whether communications between two boxes are in the clear. If your application is in the cloud, this concern is more realistic, but in that case SSL termination is limited in usefulness anyway, because you can't know whether the other apps on the same hardware are utilizing it.

So you simply flick a switch on your ADC to turn on SSL termination, and then turn it off on your applications, and you have what the ADC industry calls "SSL offload". If your ADC is purpose-built hardware (like our BIG-IP), then there is encryption hardware in the box, and you don't have to worry about the impact to the ADC of overloading it with SSL requests; it's built to handle the load. If your ADC is software or a VM (like our BIG-IP LTM VE), then you'll have to do a bit of testing to see what the tolerance level for SSL load is on the hardware you deployed it on, but you can ask the network staff to worry about all of that, once you've started the conversation.

Is this the only security-based performance boost you can get? No, but it is the easy one. Everything on the Internet remains encrypted, but your application is not burdening the server's CPU with encryption requests each time communications in or out occur. The other easy one is TCP optimizations. This one requires less talk because it is completely out of the realm of the developer.
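To give a flavor of the kind of knob involved (this is an application-side example, not what an ADC does internally; ADCs apply comparable tuning centrally and at much larger scale), here is one classic per-connection TCP option, disabling Nagle's algorithm:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm (TCP_NODELAY), one
# example of a per-connection TCP knob that trades batching overhead
# against latency. ADC-side TCP optimizations work at this same layer,
# but are applied for every connection without touching application code.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay_enabled = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```

The point of doing this in the ADC rather than in code is exactly the one above: nobody has to hunt down and change every socket in every application.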
Simply put, TCP is a well-designed protocol that sometimes gets bogged down communicating, and has a lot of overhead in those situations. Turning on TCP optimizations in your ADC can reduce that overhead, more or less, depending upon what is on the other end of the communications network, and improve perceived performance, which honestly is one of the most important measures of web application availability. By making the application seem to load faster, you've improved your customer experience, and nothing about your development has to change. TCP optimizations are not new, and thus the ones that are turned on when you activate the option on most ADCs are stable and won't disrupt most applications. Of course you should run a short test cycle with them enabled, just to be certain, but I would be surprised if you saw any issues. They're not unheard of, but they are very rare.

That's enough for now, I think. I don't want these to get so long that you wander off to develop some more. Keep doing what you do. And strive to keep your users from becoming the angry user in the picture: slow apps anger users.

Mounting Offline VM Drives
Most of the files I use in my virtual desktop environment are centrally located in a share I make accessible to the host and all the guests for ease of transfer between them. However, there is one guest I keep fairly isolated for security reasons. This is great, but when I need a file, it has previously required me to start that guest, wait, log in, move the files I need to the share, then shut down. It's frequent enough to be annoying. I'd leave it up, but I prefer to keep my BIG-IP LTM VE and a couple of Linux guests running, and there are only so many resources. Anyway, the annoyance hit the tipping point today, and I found out that in my VMware Workstation there is a tool to mount the guest drives in the host OS.